---
address: |
$^1$SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associées\
University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes 4 rue Alfred Kastler, F-44072 Nantes, Cedex 03, France
author:
- 'Aman D. Sood$^1$'
- 'Ch. Hartnack$^1$'
- Jörg Aichelin$^1$
title: '$K^{+}$ and $K^{-}$ potentials in hadronic matter can be observed'
---
Introduction {#introduction .unnumbered}
============
One key question in the analysis of sub-threshold kaon production is how to obtain information on the properties of strange mesons in dense nuclear matter [@Aichelin:1986ss]. The principal problem in extracting precise information on these properties is, however, that almost all observables depend not only on the $K^-$ potential but simultaneously on several other input quantities which are only vaguely known, e.g. the lifetime of the $\Delta$ and the in-medium modification of the cross sections. The situation would be much better if experiment provided an observable which depends on the $K$ potentials only and which is not spoiled by other poorly known or unknown quantities. Here we aim to show that the ratio of the $K^+$ and $K^-$ momentum spectra at small momenta in light systems can be such an observable.
0.5cm ![\[fig2\] Logarithmic ratio of the $p_{T}$ spectra of $K^+$ and $K^-$ for different strengths of the potential. The various lines are explained in the figure.[]{data-label="fig2"}](fig_1.eps "fig:"){width="6cm"}
In order to study this observable, and in order to make sure that it does not depend on other input quantities, we have separated the $K^-$ into two classes (by tracing each $K^-$ back to its corresponding antistrange partner $K^+$):\
(a) $K^-$ coming directly from reactions like $BB \to BB K^+ K^-$, called the direct contribution and abbreviated in the figures as (Dir);\
(b) $K^-$ coming from $\pi Y$ or $BY\to K^-$, abbreviated in the figures by Y.\
For the present study we use the IQMD model, the details of which are described in ref. [@hartphysrep].
Results and Discussion {#results-and-discussion .unnumbered}
======================
In fig. \[fig2\] we display the logarithm of the ratio of the $p_{T}$ spectra of $K^+$ and $K^-$. The top, middle, and bottom panels show this ratio for all $K^-$, for the directly produced $K^-$, and for those produced in secondary collisions, respectively. The different lines correspond to different strengths of the potential, which we vary by multiplying the K potential by a constant factor x. The total yields depend on the choice of x. The ratio is nearly constant without the KN potential (x=0, dotted magenta line). When we switch on the potential, the slope of the ratio changes very sharply in the low-momentum region and decreases with increasing strength of the potential, whereas it remains nearly constant in the high-momentum region. Comparing the top and middle panels, we see that the influence of the $K^-$ which come from secondary collisions on the spectral form at small $p_t$ is not essential. This means that this ratio is almost exclusively sensitive to the potential and does not depend on the poorly known or unknown cross sections.
0.5cm ![\[fig3\] Same as fig. \[fig2\], but only the $K^-$N potential is varied for a fixed $K^+$N potential.[]{data-label="fig3"}](fig_2.eps "fig:"){width="6cm"}
-0.3cm
0.5cm ![\[fig4\] Density at which the finally observed kaons are produced. The various panels are explained in the text.[]{data-label="fig4"}](fig_3.eps "fig:"){width="6cm"}
-0.3cm
Fig. \[fig3\] also presents the ratio of the $K^+$ and $K^-$ spectra, but this time the $K^+$N potential is taken as given by the theoretical predictions (x=1), whereas for the $K^-$ we vary the potential, assuming that the $K^+$ potential can be determined by other means. This time we have chosen a linear scale. We observe, as expected, that the dependence of the slope on the $K^-$ potential becomes weaker as compared to a variation of both potentials, but it still varies by a factor of two and is hence a measurable quantity. This ratio depends on the $K^-$N potential only and therefore presents the possibility to measure directly the strength of the $K^-$N potential. It is therefore the desired ’smoking gun’ signal to determine experimentally the $K^-$ potentials in matter at finite densities. It is interesting to see at which density the kaons which are finally seen in the detector are produced. This is displayed in fig. \[fig4\]. On the left (right) hand side we display, as a function of $p_T$, the average density at which the $K^+$ ($K^-$) that are finally seen in the detectors are produced. The top panel shows the density for all events in which a $K^+$ and a $K^-$ are produced, the middle panel that for those events in which the $K^+$ and the $K^-$ are produced simultaneously, and the bottom panel that for those events in which the $K^-$ is produced in a secondary collision. Independent of the potential, the kaons are produced at densities around normal nuclear matter density. The density for the directly produced kaons is slightly lower than that of the other events, because the higher the density, the higher is also the probability that the $K^-$ is reabsorbed in a $\Lambda$.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work has been supported by a grant from Indo-French Centre for the Promotion of Advanced Research (IFCPAR) under project no 4104-1.
[50]{} J. Aichelin and C. M. Ko, Phys. Rev. Lett. [**55**]{}, 2661 (1985); S. W. Huang et al., Prog. Part. Nucl. Phys. [**30**]{}, 105 (1993); ibid. Phys. Lett. B [**298**]{}, 41 (1993). C. Hartnack et al., Phys. Rep., to be published \[arXiv:nucl-th/1106.2083\].
---
author:
- 'Bryan A. Plummer, Kevin J. Shih, Yichen Li, Ke Xu, Svetlana Lazebnik, Stan Sclaroff, Kate Saenko'
bibliography:
- 'egbib.bib'
title: 'Revisiting Image-Language Networks for Open-ended Phrase Detection'
---
---
abstract: 'We strengthen our previous results [@lmzoll] regarding the moduli spaces of Zoll metrics and Zoll projective structures on $S^2$. In particular, we describe a concrete, open condition which suffices to guarantee that a totally real embedding ${\mathbb R\mathbb P}^2\hookrightarrow {\mathbb C\mathbb P}_2$ arises from a unique Zoll projective structure on the $2$-sphere. Our methods ultimately reflect the special role such structures play in the initial value problem for the $3$-dimensional Lorentzian Einstein-Weyl equations.'
author:
- 'Claude LeBrun[^1] and L.J. Mason[^2]'
date: 'February 11, 2010'
title: 'Zoll Metrics, Branched Covers, and Holomorphic Disks'
---
A [*Zoll metric*]{} on a smooth manifold $M$ is a Riemannian metric $g$ whose geodesics are all simple closed curves of equal length. This terminology commemorates the fundamental contribution of Otto Zoll [@zoll], who exhibited an infinite-dimensional family of such metrics on $M=S^2$. It is easy to prove [@beszoll] that a manifold admitting Zoll metrics is compact and has finite fundamental group, so the only two-dimensional candidates for $M$ are $S^2$ and ${\mathbb R\mathbb P}^2$; conversely, the standard metrics on both of these surfaces are obviously Zoll. However, Green’s proof [@grezoll] of the Blaschke conjecture shows that, after rescaling, every Zoll metric on ${\mathbb R\mathbb P}^2$ is actually a pull-back of the standard one via some diffeomorphism. By contrast, Zoll’s examples show that the situation for the $2$-sphere is fundamentally different. Indeed, in the decade following Zoll’s work, Funk [@funk] gave a formal-power-series argument indicating that, modulo isometries and rescalings, the general Zoll perturbation of the standard metric on $S^2$ depends on one [odd]{} function $f: S^2 \to {\mathbb R}$. However, a rigorous proof of Funk’s conjectural picture was only found half a century later, when Victor Guillemin [@guillzoll] brought the power of Nash-Moser implicit function theorems to bear on the problem.
More recently, twistor techniques have given us new insights into global aspects of the problem. Indeed, the present authors have elsewhere shown [@lmzoll] that Zoll surfaces can in principle be completely understood in terms of families of holomorphic disks in ${\mathbb C\mathbb P}_2$. These same techniques are also naturally adapted to the study of more general [*Zoll projective structures*]{}. Recall that a projective structure is by definition an equivalence class $[\nabla ]$ of affine connections $\nabla$ on a manifold $M$, where two connections are declared to be equivalent iff they have the same geodesics, considered as [unparameterized]{} curves. A projective structure is said to be Zoll iff its geodesics (again, as unparameterized curves) are all embedded circles. It can then be shown [@grogro; @lmzoll] that a Riemannian metric $g$ on a compact surface $M$ is Zoll iff the equivalence class $[\nabla ]$ of its Levi-Civita connection is a Zoll projective structure. A compact surface $M$ can admit a Zoll projective structure $[\nabla ]$ iff it is diffeomorphic to $S^2$ or ${\mathbb R\mathbb P}^2$; and, as in the Riemannian case, any Zoll projective structure on ${\mathbb R\mathbb P}^2$ is actually the standard one, pulled back via some self-diffeomorphism of ${\mathbb R\mathbb P}^2$. Our proof of this last assertion [@lmzoll] hinged on the fact that the complex structure of ${\mathbb C\mathbb P}_2$ is unique [@yau] up to biholomorphism.
We now summarize our previous results [@lmzoll] regarding the case of $M=S^2$. Given a smooth Zoll projective structure $[\nabla ]$ on $M$, its space of unoriented geodesics $N\approx {\mathbb R\mathbb P}^2$ has a natural embedding in ${\mathbb C\mathbb P}_2$ as a totally real submanifold, in a manner which is completely determined up to a projective linear transformation; for example, the usual projective structure induced by the standard “round” metric corresponds to a “real linear” embedding ${\mathbb R\mathbb P}^2\hookrightarrow {\mathbb C\mathbb P}_2$. Each point $x\in M$ determines an embedded holomorphic disk $\Delta_x\subset{\mathbb C\mathbb P}_2$ with $\partial \Delta_x\subset N$, and the relative homology class $[\Delta_x]$ of any such disk generates $H_2 ({\mathbb C\mathbb P}_2, N; {\mathbb{Z}})\approx {\mathbb{Z}}$. These disks meet $N$ only along their boundaries, and their interiors foliate ${\mathbb C\mathbb P}_2- N$. The family of disks $\Delta_x$ moreover sweeps out an entire connected component in the moduli space of holomorphic disks $(D^2, \partial D^2) \to ({\mathbb C\mathbb P}_2, N)$. If the family of disks $\{ \Delta_x~|~x\in M\}$ is known, the projective structure $[\nabla ]$ can then be completely reconstructed; namely, given a point $z\in N$, the set $${\mathfrak C}_z = \{ x\in M~|~ z\in \partial \Delta_x \}$$ is a geodesic of $[\nabla ]$, and every geodesic arises in this way.
The construction proceeds by first creating an abstract complex surface, and then showing that it must be biholomorphic to ${\mathbb C\mathbb P}_2$. In the process, the bundle of orientation-compatible almost-complex structures over $M=S^2$ is identified with the complement ${\mathbb C\mathbb P}_2 - N$ of the relevant totally real submanifold $N$. If there is an orientation-compatible complex structure ${\zap J}$ on $M$ which is parallel with respect to some torsion-free connection $\triangledown \in [\nabla ]$, then the graph of ${\zap J}$ becomes a holomorphic curve ${\mathcal Q}\subset {\mathbb C\mathbb P}_2 -N$. For homological reasons, this curve must be a non-singular conic, and so may be put in the standard form $$\label{conic}
z_1^2 + z_2^2 + z_3^2 =0$$ by making a suitable choice of homogeneous coordinates on ${\mathbb C\mathbb P}_2$. Notice that this happens precisely when there is a conformal structure $[g]$ on $M$ for which $\triangledown$ is a compatible Weyl connection. If there is actually a Zoll metric $g$ with Levi-Civita connection $\triangledown \in [\nabla ]$, then the totally real submanifold $N\subset {\mathbb C\mathbb P}_2$ is moreover [*Lagrangian*]{} with respect to the sign-ambiguous symplectic form $\Omega = \Im m ~\Upsilon$ on ${\mathbb C\mathbb P}_2 - {\mathcal Q}$, where $$\label{oops}
\Upsilon = \pm ~
\frac{z_1 ~dz_2\wedge dz_3 + z_2 ~dz_3 \wedge dz_1 + z_3~ dz_1\wedge dz_2}{
{\sqrt{(z_1^2 + z_2^2 + z_3^2)^3}}
} ~ .$$
In the converse direction, one would like to assert that the totally real submanifold $N\subset {\mathbb C\mathbb P}_2$ can be chosen essentially arbitrarily, and that each such choice uniquely determines a Zoll projective structure $[\nabla ]$ on $M=S^2$. But while our previous results in this direction may have been conceptually suggestive, they were technically crude in important respects. Indeed, using an elementary inverse-function theorem argument, we merely showed in [@lmzoll] that every $N\subset {\mathbb C\mathbb P}_2$ which is close to the standard “real linear” ${\mathbb R\mathbb P}^2$ in the $C^{2k+5}$ topology actually arises from a $C^k$ Zoll projective structure $[\nabla ]$, and that this projective structure is unique among those that are close to the standard “round” projective structure. By contrast, the rest of the story was quite clean; the choice of a reference conic ${\mathcal Q}\subset {\mathbb C\mathbb P}_2$ disjoint from such an $N$ then gives rise to a conformal structure $[g]$ on $M=S^2$ for which the Zoll projective structure $[\nabla ]$ is represented by a unique $[g]$-compatible Weyl connection $\triangledown \in [\nabla ]$, and this Weyl connection is the Levi-Civita connection of a Zoll metric $g\in [g]$ iff $N$ is Lagrangian with respect to the sign-ambiguous symplectic form $\Omega$. Still, it must be admitted that our previous results remain esthetically unsatisfactory in two essential ways: we neither provided an effective condition on $N\subset {\mathbb C\mathbb P}_2$ sufficient for the existence of an associated family of holomorphic disks, nor proved the uniqueness of this family when it does exist.
The present article will address these issues by proving global existence and uniqueness results for holomorphic disks; see Theorems \[snap\], \[
---
abstract: 'Let $G$ be a transitive permutation group on a finite set of size at least $2$. By a well known theorem of Fein, Kantor and Schacher, $G$ contains a derangement of prime power order. In this paper, we study the finite primitive permutation groups with the extremal property that the order of every derangement is an $r$-power, for some fixed prime $r$. First we show that these groups are either almost simple or affine, and we determine all the almost simple groups with this property. We also prove that an affine group $G$ has this property if and only if every two-point stabilizer is an $r$-group. Here the structure of $G$ has been extensively studied in work of Guralnick and Wiegand on the multiplicative structure of Galois field extensions, and in later work of Fleischmann, Lempken and Tiep on $r''$-semiregular pairs.'
address:
- 'T.C. Burness, School of Mathematics, University of Bristol, Bristol BS8 1TW, UK'
- 'H.P. Tong-Viet, Department of Mathematical Sciences, Kent State University, Kent, Ohio 44242, USA'
author:
- 'Timothy C. Burness'
- 'Hung P. Tong-Viet'
title: Primitive permutation groups and derangements of prime power order
---
Introduction {#s:intro}
============
Let $G$ be a transitive permutation group on a finite set $\Omega$ of size at least $2$. An element $x \in G$ is a *derangement* if it acts fixed-point-freely on $\Omega$. An easy application of the orbit-counting lemma shows that $G$ contains derangements (this is originally a classical theorem of Jordan [@Jordan]), and we will write $\Delta(G)$ for the set of derangements in $G$. Note that if $H$ is a point stabilizer, then $x$ is a derangement if and only if $x^G \cap H$ is empty, where $x^G$ denotes the conjugacy class of $x$ in $G$, so we have $$\label{e:delta}
\Delta(G) = G \setminus \bigcup_{g \in G}H^g.$$ The existence of derangements in transitive permutation groups has interesting applications in number theory and topology (see Serre’s article [@Serre], for example).
Various extensions of Jordan’s theorem on the existence of derangements have been studied in recent years. For example, if $\delta(G) = |\Delta(G)|/|G|$ denotes the proportion of derangements in $G$, then a theorem of Cameron and Cohen [@CC] states that $\delta(G) {\geqslant}|\Omega|^{-1}$, with equality if and only if $G$ is sharply $2$-transitive. More recently, Fulman and Guralnick have established the existence of an absolute constant ${\epsilon}>0$ such that $\delta(G)>{\epsilon}$ for any simple transitive group $G$ (see [@FG1; @FG2; @FG3; @FG4]). This latter result confirms a conjecture of Boston et al. [@Boston] and Shalev.
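These counts are easy to verify by brute force for small groups. The following is a minimal sketch of our own (the helper names are ours, not notation from the paper), using $S_3$ in its natural action on $3$ points, which is sharply $2$-transitive and therefore attains the Cameron-Cohen bound $\delta(G) = |\Omega|^{-1}$:

```python
def close_under_composition(gens, n):
    """Generate a permutation group from its generators.
    A permutation is stored as a tuple g with g[i] = image of point i."""
    group = {tuple(range(n))} | set(gens)
    frontier = set(group)
    while frontier:
        new = {tuple(g[h[i]] for i in range(n))
               for g in group for h in frontier} - group
        group |= new
        frontier = new
    return group

def derangements(group, n):
    """Delta(G): the elements acting fixed-point-freely on {0, ..., n-1}."""
    return {g for g in group if all(g[i] != i for i in range(n))}

# S_3 acting naturally on 3 points: the only derangements are the two
# 3-cycles, so delta(G) = 2/6 = 1/|Omega|.
G = close_under_composition({(1, 2, 0), (1, 0, 2)}, 3)
D = derangements(G, 3)
```

The same two functions can be pointed at any small transitive group to test Jordan's theorem directly.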
The study of derangements with special properties has been another major theme in recent years. By a theorem of Fein et al. [@FKS], $\Delta(G)$ contains an element of prime power order (their proof requires the classification of finite simple groups), and this result has important number-theoretic applications. For instance, it implies that the relative Brauer group of any finite extension of global fields is infinite. In most cases, $\Delta(G)$ contains an element of prime order, but there are some exceptions, such as the $3$-transitive action of the smallest Mathieu group ${\rm M}_{11}$ on $12$ points. The transitive permutation groups with this property are called *elusive* groups, and they have been investigated by many authors; see [@CGJKKMN; @Giudici; @GiuKel], for example.
In this paper, we are interested in the permutation groups with the special property that *every* derangement is an $r$-element (that is, has order a power of $r$) for some fixed prime $r$. One of our main motivations stems from a theorem of Isaacs et al. [@IKLM], which describes the finite transitive groups in which every derangement is an involution; by [@IKLM Theorem A], such a group is either an elementary abelian $2$-group, or a Frobenius group with kernel an elementary abelian $2$-group. In [@BDS], this result is used to classify the finite groups whose irreducible characters vanish only on involutions. It is natural to consider the analogous problem for odd primes, and more generally for prime powers. As noted in [@IKLM], it is easy to see that such a generalization will involve a wider range of examples. For instance, if $p$ is an odd prime then every derangement in the affine group ${\rm ASL}_{2}(p) = {\rm SL}_{2}(p){:}p^2$ (of degree $p^2$) has order $p$ (if $p=2$, the derangements have order $2$ or $4$).
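A smaller cousin of the affine example above can be checked by direct enumeration. The sketch below is our own illustration (the paper's example is ${\rm ASL}_{2}(p)$ of degree $p^2$; we use the one-dimensional affine group ${\rm AGL}_{1}(5)$ of degree $5$ to keep the enumeration tiny) and confirms that every derangement is an $r$-element, here with $r=5$:

```python
def affine_group(p):
    """AGL(1, p): the maps x -> a*x + b (mod p) with a != 0,
    acting on the points {0, ..., p-1}."""
    return {tuple((a * x + b) % p for x in range(p))
            for a in range(1, p) for b in range(p)}

def element_order(g):
    """Order of a permutation given as a tuple."""
    n, e = len(g), tuple(range(len(g)))
    h, k = g, 1
    while h != e:
        h = tuple(g[h[i]] for i in range(n))
        k += 1
    return k

# For a != 1 the map a*x + b fixes x = b/(1 - a), so the derangements are
# exactly the nontrivial translations, each of order p.
p = 5
G = affine_group(p)
derangement_orders = {element_order(g) for g in G
                      if all(g[i] != i for i in range(p))}
```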
Our first result is a reduction theorem.
\[t:main1\] Let $G$ be a finite primitive permutation group such that every derangement in $G$ is an $r$-element for some fixed prime $r$. Then $G$ is either almost simple or affine.
Our next result, Theorem \[t:main2\] below, describes all the almost simple primitive groups that arise in Theorem \[t:main1\]. Notice that in Table \[tab:main\], we write ${\rm P}_{1}$ for a maximal parabolic subgroup of ${\rm L}_{2}(q)$ or ${\rm L}_{3}(q)$, which can be defined as the stabilizer of a $1$-dimensional subspace of the natural module (similarly, ${\rm P}_{2}$ is the stabilizer of a $2$-dimensional subspace). In addition, we define $${\mathcal{E}}(G) = \{|x| \,:\, x \in \Delta(G)\}.$$
\[t:main2\] Let $G$ be a finite almost simple primitive permutation group with point stabilizer $H$. Then every derangement in $G$ is an $r$-element for some fixed prime $r$ if and only if $(G,H,r)$ is one of the cases in Table \[tab:main\]. In particular, every derangement has order $r$ if and only if $|\mathcal{E}(G)|=1$.
\[r:isom\]
$$\begin{array}{lllll} \hline
G & H & r & {\mathcal{E}}(G) & \mbox{Conditions} \\ \hline
{\rm L}_{3}(q) & {\rm P}_1, {\rm P}_{2} & r & r & q^2+q+1 = (3,q-1)r \\
& & r & r, r^2 & q^2+q+1 = 3r^2 \\
{\rm \Gamma L}_2(q) & {{\mathbf {N}}}_{G}({\rm D}_{2(q+1)}) & r & r & \mbox{$r=q-1$ Mersenne prime} \\
{\rm \Gamma L}_{2}(8) & {{\mathbf {N}}}_{G}({\rm P}_1), {{\mathbf {N}}}_{G}({\rm D}_{14}) & 3 & 3,9 & \\
{\rm PGL}_{2}(q) & {{\mathbf {N}}}_{G}({\rm P}_{1}) & 2 & 2^i, \, 1 {\leqslant}i {\leqslant}e+1 & \mbox{$q=2^{e+1}-1$ Mersenne prime} \\
{\rm L}_2(q) & {\rm P}_1 & r & r^i, \, 1 {\leqslant}i {\leqslant}e & q=2r^e-1 \\
& {\rm P}_1, {\rm D}_{2(q-1)} & r & r & \mbox{$r=q+1$ Fermat prime} \\
& {\rm D}_{2(q+1)} & r & r & \mbox{$r=q-1$ Mersenne prime} \\
{\rm L}_{2}(8) & {\rm P}_{1}, {\rm D}_{14} & 3 & 3,9 & \\
{\rm M}_{11} & {\rm L}_{2}(11) & 2 & 4,8 & \\
\hline
\end{array}$$
Now let us turn our attention to the affine groups that arise in Theorem \[t:main1\]. In order to state Theorem \[t:main3\] below, we need to introduce some additional terminology. Let ${\mathbb{F}}$ be a field and let $V$ be a finite dimensional vector space over ${\mathbb{F}}$. Let $H {\leqslant}{\rm GL}(V)$ be a finite group and let $r$ be a prime. Recall that $x \in H$ is an *$r'$-element* if the order of $x$ is indivisible by $r$. Following Fleischmann et al. [@FLT], the pair $(H,V)$ is said to be *$
---
abstract: |
Analyzing the qualitative behavior of biochemical reactions using their associated network structure has proven useful in diverse branches of biology. As an extension of our previous work, we introduce a graph-based framework to calculate steady state solutions of biochemical reaction networks with synthesis and degradation. Our approach is based on a labeled directed graph $G$ and the associated system of linear non-homogeneous differential equations with first order degradation and zeroth order synthesis. We also present a theorem which provides necessary and sufficient conditions for the dynamics to engender a unique stable steady state.
Although the dynamics are linear, one can apply this framework to nonlinear systems by encoding the nonlinearity into the edge labels. We answer an open question from our previous work concerning the non-positivity of the elements of the inverse of a perturbed Laplacian matrix. Moreover, we provide a graph-theoretical framework for the computation of the inverse of such a matrix. This completes our previous framework and makes it purely graph-theoretical. Lastly, we demonstrate the utility of this framework by applying it to a mathematical model of insulin secretion through ion channels and glucose metabolism in pancreatic $\beta$-cells.
author:
- 'I. Mirzaev[^1]'
- 'D. M. Bortz$^{*}$[^2]'
bibliography:
- 'mathbioCU.bib'
title: Analytical Equilibrium Solutions of Biochemical Systems with Synthesis and Degradation
---
Introduction
============
In recent years, many researchers have devoted their efforts to developing a systems-level understanding of biochemical reaction networks. In particular, the study of these chemical reaction networks (CRNs) using their associated graph structure has attracted considerable attention. The work led by Craciun and Feinberg on multistationarity [@Craciun2005; @Craciun2010; @Craciun2006; @CraciunFeinberg2006] and the work led by Mincheva and Roussel on stable oscillations [@Mincheva2011a; @Mincheva2007a; @Mincheva2007] are two particularly influential approaches. For a good overview of the various graph theoretic developments, we direct the interested reader to the review provided in Domijan and Kirkilionis [@Domijan2008].
In this work, we focus on applications of graph theory, mainly the Matrix-Tree Theorem (MTT), for deriving equilibrium solutions (ES) for CRNs that fit within a Laplacian dynamics framework. The MTT-based framework was first applied in a biological context by King and Altman [@King1956] to derive steady state rate equations in enzyme kinetics. This framework was then simplified and summarized into rules (known as *Chou’s graphical rules* [@Lin2013]) by Chou and coworkers [@Chou1989; @Chou1990; @Chou1993]. Chou [@Chou1989] has also extended the framework for non-steady state enzyme-catalyzed systems.
The main disadvantage of Chou’s graphical rules is that they are only applicable if the underlying digraph structure is *strongly connected*, i.e., every vertex is reachable from every other vertex. This issue was resolved, and the framework extended to general directed graphs (*digraphs*), by Mirzaev and Gunawardena in 2013 [@MirzaevGunawardena2013bmb]; their framework is applicable to the specific class of linear ordinary differential equations (ODEs) known as *Laplacian dynamics*. Systems described by Laplacian dynamics are created from a weakly connected digraph, $G$, with $n$ vertices, with labelled, directed edges, and without self-loops. Note that by *weakly connected* we mean that the graph cannot be expressed as the union of two disjoint digraphs. If there is an edge from vertex $j$ to vertex $i$, we label it with $e_{ij}>0$, and set $e_{ij}=0$ if there is no such edge. [^3]
The Laplacian matrix (hereafter, a *Laplacian* $\mathcal{L}$) of a given digraph $G$ is then defined as
$$\left(\mathcal{L}(G)\right)_{ij}=\begin{cases}
e_{ij} & \text{if }i\ne j\\
-\sum_{m\ne j}e_{mj} & \text{if }i=j\,.
\end{cases}\label{eq:Laplacian Matrix}$$
The corresponding *Laplacian* *dynamics* are then defined as $$\frac{d\mathbf{x}}{dt}=\mathcal{L}(G)\cdot\mathbf{x}$$ where $\mathbf{x}=\left(x_{1},\cdots,x_{n}\right)^{T}$ is the column vector of species concentrations at the vertices $1,\cdots,n$. In a biochemical context one may think of the vertices as different species and of the edges as rates of transformation from one species to another. However, we note that this framework is symbolic in nature, in the sense that the mathematical description of the computed steady states is done without specifying the rate constants, i.e., the edge weights $e_{ij}$. In other words, the only information about an individual $e_{ij}$ relevant to our approach is whether or not it is zero.
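As a concrete numerical illustration of the definition above (a minimal sketch of our own; the edge-dictionary convention and function name are assumptions, not notation from the paper):

```python
import numpy as np

def laplacian(edges, n):
    """Laplacian of the definition above: an edge j -> i with label
    e_ij > 0 contributes e_ij to entry (i, j) and -e_ij to the diagonal
    entry (j, j), so every column of L sums to zero."""
    L = np.zeros((n, n))
    for (i, j), e_ij in edges.items():
        L[i, j] += e_ij
        L[j, j] -= e_ij
    return L

# A 3-cycle of transformations 0 -> 1 -> 2 -> 0 with distinct labels.
L = laplacian({(1, 0): 2.0, (2, 1): 1.0, (0, 2): 0.5}, 3)
# Zero column sums give conservation of total concentration:
# d/dt (x_1 + ... + x_n) = 1^T L x = 0.
```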
Laplacian matrices were first introduced by Kirchhoff in 1847 in his article on electrical networks [@Kirchhoff1847]. Ever since then, Laplacians have been studied and applied in various fields. For an example of the application of Laplacians in spectral theory, we refer the interested reader to Bronski and Deville [@Bronski2014], who study the class of *signed graph Laplacians* (symmetric matrices, a special case of the Laplacian defined above).
In this article we extend the framework initially developed in [@MirzaevGunawardena2013bmb] to investigate the behavior of Laplacian dynamics when zeroth order synthesis and first order degradation are added to the system. Specifically, we will examine the following dynamics,
$$\frac{d\mathbf{x}}{dt}=\mathcal{L}(G)\cdot\mathbf{x}-D\cdot\mathbf{x}+\mathbf{s}\label{eq:New dynamics}$$
where the degradation matrix $D$ is a diagonal matrix with $\left(D\right)_{ii}=d_{i}\geq0$ and the synthesis vector $\mathbf{s}$ is a column vector with $\left(\mathbf{s}\right)_{i}=s_{i}\geq0$. Hereafter, we refer to these new dynamics as synthesis and degradation dynamics (or simply SD dynamics). In the biological networks literature this type of dynamics is often referred to as an *inconsistent* network [@Marashi2014].
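Setting $d\mathbf{x}/dt=0$ in the SD dynamics reduces the ES computation to the linear system $(D-\mathcal{L}(G))\,\mathbf{x}=\mathbf{s}$. A minimal numerical sketch (our own illustration, assuming $D-\mathcal{L}(G)$ is invertible; the function name is ours):

```python
import numpy as np

def sd_steady_state(L, d, s):
    """Equilibrium of dx/dt = L x - D x + s, assuming D - L is invertible."""
    A = np.diag(d) - L
    return np.linalg.solve(A, np.asarray(s, dtype=float))

# Two species with one transformation edge 0 -> 1 (label e_10 = 1),
# first order degradation on both species, synthesis into species 0 only.
L = np.array([[-1.0, 0.0],
              [ 1.0, 0.0]])
x = sd_steady_state(L, d=[0.5, 1.0], s=[1.0, 0.0])
# The residual L x - D x + s vanishes at the returned equilibrium.
```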
For these dynamics, several questions naturally arise. Under what conditions does this system have a non-negative, stable ES? Moreover, how can we relate the ES to the underlying digraph structure of $G$, as we did for Laplacian dynamics without synthesis and degradation? Our goal is to answer these questions on a theoretical level as well as to apply the results to real-world CRN examples.
The outline of this work is as follows. We first briefly review the main results of [@Gunawardena2012; @MirzaevGunawardena2013bmb] and present some additional notation (to be used in subsequent sections). In Section \[sec:Theoretical-Development\] we describe our main theoretical results, and in Section \[sec: negativity of inverse\] we fully discuss the proof of an important result from Section \[sec:Theoretical-Development\].
In Section \[sec:Biochemical-Network-Application\], we illustrate an application of these results to exocytosis cascade of insulin granules and glucose metabolism in pancreatic $\beta$-cells. Lastly, in Section \[sec:Conclusions\], we conclude with a discussion of the implications of these results as well as plans for future work.[^4]
\[sec:Preliminary-results\]Preliminaries
========================================
In this section we briefly summarize the important results of Mirzaev and Gunawardena [@MirzaevGunawardena2013bmb] and refer the interested reader to that article for proofs and more extensive discussion and interpretation. For the sake of clarity, we will preserve the original notation while we include some additional definitions that can be found in many introductory graph theory books.
Given a digraph $G$, we denote the set of vertices of $G$ by $\mathcal{V}(G)$, and we write $i\Longrightarrow j$ to denote the existence of a path from vertex $i$ to vertex $j$. If $i\Longrightarrow j$ and $j\Longrightarrow i$, vertex $i$ is said to be *strongly connected* to vertex $j$, denoted $i\Longleftrightarrow j$. A digraph $G$ is *strongly connected* if for each ordered pair $i,j$ of vertices in $G$ we have $i\Longleftrightarrow j$. The *strongly connected components* (SCCs) of a digraph are its largest strongly connected subgraphs. Let $C[i]$ denote the SCC containing $i$, so that $i\in\mathcal{V}(C[i])$. Given two SCCs $C[i]$ and $C[j]$, if $i\Longrightarrow j$ we write $C[i]\preceq C[j]$ and say that $C[i]$ *precedes* $C[j]$. This *precedes* relation is both reflexive and transitive. Moreover, the relation is also antisymmetric, as $C[i]\preceq C[j]$ and $C[j]\preceq C[i]$ imply
---
author:
- 'Andrew Cotter[^1]'
- 'Heinrich Jiang[^2]'
- 'Karthik Sridharan[^3]'
bibliography:
- 'main.bib'
title: 'Two-Player Games for Efficient Non-Convex Constrained Optimization'
---
[^1]: acotter@google.com
[^2]: heinrichj@google.com
[^3]: sridharan@cs.cornell.edu
---
abstract: 'In this paper we analyse the double vector meson production in photon – hadron ($\gamma h$) interactions in $pp/pA/AA$ collisions and present predictions for the $\rho\rho$, $J/\Psi J/\Psi$ and $\rho J/\Psi$ production considering the double scattering mechanism. We estimate the total cross sections and rapidity distributions at LHC energies and compare our results with the predictions for the double vector meson production in $\gamma \gamma$ interactions at hadronic colliders. We present predictions for the different rapidity ranges probed by the ALICE, ATLAS, CMS and LHCb Collaborations. Our results demonstrate that the $\rho\rho$ and $J/\Psi J/\Psi$ production in $PbPb$ collisions is dominated by the double scattering mechanism, while the two - photon mechanism dominates in $pp$ collisions. Moreover, our results indicate that the analysis of the $\rho J/\Psi$ production at LHC can be useful to constrain the double scattering mechanism.'
author:
- 'V.P. Gonçalves $^{1,2}$, B.D. Moreira$^{3}$ and F.S. Navarra$^3$'
title: 'Double vector meson production in photon - hadron interactions at hadronic colliders'
---
LU TP 16-XX\
April 2016
Recent theoretical and experimental studies have demonstrated that hadronic colliders can also be considered photon – hadron and photon – photon colliders [@upc], which allow us to study photon – induced interactions in a new kinematical range and to probe e.g. the nuclear gluon distribution [@gluon; @gluon2; @gluon3; @Guzey; @vicwerluiz], the dynamics of the strong interactions [@vicmag_mesons1; @outros_vicmag_mesons; @vicmag_update; @motyka_watt; @Lappi; @griep; @bruno1; @bruno2], the Odderon [@vicodd1; @vicodd2], the mechanism of quarkonium production [@Schafer; @mairon1; @mairon2; @cisek; @bruno1; @bruno2] and the photon flux of the proton [@vicgus1; @vicgus2]. In particular, the installation of forward detectors at the LHC [@ctpps; @marek] should allow us to separate more easily the exclusive processes, where the incident hadrons remain intact, allowing a detailed study of more complex final states such as, e.g., the exclusive production of two vector mesons. Recent results from the LHCb Collaboration for the exclusive double $J/\Psi$ production [@lhcb_dif] have demonstrated that the experimental analysis of this process is feasible, motivating the improvement of the theoretical description of this process [@kmr_duplo; @vic_cris_dif; @antonirhorho; @antonipsipsi; @bruno_doublegama]. In particular, in Ref. [@bruno_doublegama] we have revisited the double vector meson production in $\gamma \gamma$ interactions, proposed originally in Refs. [@vicmagvv1; @vicmagvv2; @vicmagvv3], taking into account recent improvements in the description of the $\gamma \gamma \rightarrow VV$ ($V = \rho, J/\Psi$) cross section at low [@antonirhorho; @antonipsipsi] and high [@brunodouble] energies. A typical diagram for this process is represented in Fig. \[dia1\]. The results presented in Ref. [@bruno_doublegama] have demonstrated that the analysis of this process is feasible in hadronic collisions, mainly in $pp$ collisions, and that its study may be useful to constrain the QCD dynamics at high energies, as proposed originally in Ref. [@vicmagvv1]. However, double vector mesons can also be produced in photon – hadron ($\gamma h$) interactions if a double scattering occurs in the same event, as represented in Fig. \[dia2\]. The treatment of this double scattering mechanism (DSM) for $\gamma h$ interactions in heavy ion collisions was proposed originally in Ref. [@klein], and the double $\rho$ production was recently discussed in detail in Ref. [@mariola]. These results demonstrated that the contribution of the double scattering mechanism is important at high energies, which motivates a more detailed analysis of this process. In this paper we extend these previous studies to the double $J/\Psi$ and $\rho J/\Psi$ production in $AA$ collisions and present, for the first time, predictions for double vector meson production in $pp$ and $pA$ collisions. Additionally, we compare our results for double vector meson production in $\gamma h$ interactions with those obtained in Ref. [@bruno_doublegama] for $\gamma \gamma$ interactions. As we will demonstrate below, the $\rho\rho$ and $J/\Psi J/\Psi$ production in $PbPb$ collisions is dominated by the double scattering mechanism, while the two - photon mechanism dominates in $pp$ collisions. Moreover, our results indicate that the analysis of the $\rho J/\Psi$ production at LHC can be useful to constrain the double scattering mechanism.
Let us start our analysis by presenting a brief review of the main concepts and formulas used to describe single and double vector meson production in $\gamma h$ interactions at hadronic colliders. The basic idea of photon-induced processes is that an ultrarelativistic charged hadron (proton or nucleus) gives rise to strong electromagnetic fields, such that a photon stemming from the electromagnetic field of one of the two colliding hadrons can interact with a photon of the other hadron (photon - photon process) or can interact directly with the other hadron (photon - hadron process) [@upc; @epa]. In these processes the total cross section can be factorized in terms of the equivalent flux of photons of the hadron projectile and the photon-photon or photon-target production cross section. In this paper our main focus will be the diffractive production of vector mesons in photon – hadron interactions in hadronic collisions. The differential cross section for the production of a single vector meson $V$ at rapidity $y$ and at fixed impact parameter $b$ of the hadronic collision can be expressed as follows: $$\begin{aligned}
\frac{d\sigma \,\left[h_1 + h_2 \rightarrow h_1 \otimes V \otimes h_2\right]}{d^2b dy} = \left[\omega N_{h_1}(\omega,b)\,\sigma_{\gamma h_2 \rightarrow V \otimes h_2}\left(\omega \right)\right]_{\omega_L} + \left[\omega N_{h_2}(\omega,b)\,\sigma_{\gamma h_1 \rightarrow V \otimes h_1}\left(\omega \right)\right]_{\omega_R}\,
\label{dsigdy}\end{aligned}$$ where the rapidity ($y$) of the vector meson in the final state is determined by the photon energy $\omega$ in the collider frame and by the mass $M_{V}$ of the vector meson \[$y\propto \ln \, ( \omega/M_{V})$\]. Moreover, $\sigma_{\gamma h_i \rightarrow V \otimes h_i}$ is the total cross section for diffractive vector meson photoproduction, with the symbol $\otimes$ representing the presence of a rapidity gap in the final state and $\omega_L \, (\propto e^{-y})$ and $\omega_R \, (\propto e^{y})$ denoting photons from the $h_1$ and $h_2$ hadrons, respectively. Eq. (\[dsigdy\]) takes into account that both incident hadrons can act as a source of photons which interact with the other hadron. The equivalent photon spectrum $N(\omega,b)$ of a relativistic hadron for photons of energy $\omega$ at distance ${\mathbf b}$ from the hadron trajectory, defined in the plane transverse to the trajectory, can be expressed in terms of the charge form factor $F$ as follows $$\begin{aligned}
N(\omega,b) = \frac{Z^{2}\alpha_{em}}{\pi^2}\frac{1}{b^{2}\omega}
\cdot \left[
\int u^{2} J_{1}(u)
F\left(
\sqrt{\frac{\left( \frac{b\omega}{\gamma_L}\right)^{2} + u^{2}}{b^{2}}}
\right )
\frac{1}{\left(\frac{b\omega}{\gamma_L}\right)^{2} + u^{2}} \mbox{d}u
\right]^{2} \,\,,
\label{fluxo}\end{aligned}$$ where $\gamma_L$ is the Lorentz factor. Double vector meson production can occur if two $\gamma h$ interactions are present in the same event, as represented in Fig. \[dia2\]. To treat this double - scattering mechanism we follow the approach of Refs. [@klein; @mariola], in which the double differential cross section for the production of a vector meson $V_1$ at rapidity $y_1$ and a second vector meson $V_2$ at rapidity $y_2$ is given by $$\begin{aligned}
\frac{d^2\sigma_{h_1 h_2 \rightarrow h_1 V_1 V_2 h
---
abstract: 'We propose a boundary value correction approach for cases when curved boundaries are approximated by straight lines (planes) and Lagrange multipliers are used to enforce Dirichlet boundary conditions. The approach allows for optimal order convergence for polynomial orders up to 3. We show the relation to the Taylor series expansion approach used by Bramble, Dupont and Thomée [@BrDuTh72] in the context of Nitsche’s method and, in the case of *inf–sup* stable multiplier methods, prove a priori error estimates with explicit dependence on the meshsize and the distance between the exact and approximate boundary.'
author:
- Erik Burman
- Peter Hansbo
- 'Mats G. Larson'
date:
-
-
title: Dirichlet Boundary Value Correction using Lagrange Multipliers
---
Introduction
============
In this contribution we develop a modified Lagrange multiplier method based on the idea of boundary value correction, originally proposed for standard finite element methods on an approximate domain in [@BrDuTh72] and further developed in [@Du74]. More recently, boundary value correction has been developed for cut and immersed finite element methods [@BuHaLa18; @BuHaLa18b; @BBCL18; @MaSc18; @MaSc18b]. Using the closest point mapping to the exact boundary, or an approximation thereof, the boundary condition on the exact boundary may be weakly enforced using multipliers on the boundary of the approximate domain. Of particular practical importance in this context is the fact that we may use a piecewise linear approximation of the boundary, which is very convenient from a computational point of view since the geometric computations are simple in this case and a piecewise linear distance function may be used to construct the discrete domain.
We prove optimal order a priori error estimates, in the energy and $L^2$ norms, in terms of the error in the boundary approximation and the meshsize. The proof utilizes the a priori error estimates derived in [@BuHaLa18] for the cut boundary value corrected Nitsche method together with a bound, which shows that the solution to the boundary value corrected Lagrange method is close to the corresponding Nitsche solution for which optimal bounds are available. We obtain optimal order convergence for polynomial approximation up to order 3 of the solution.
Note that without boundary correction one typically requires $O(h^{p+1})$ accuracy in the $L^\infty$ norm for the approximation of the domain, which leads to significantly more involved computations on the cut elements for higher order elements, see [@JoLa13]. We present numerical results illustrating our theoretical findings.
The outline of the paper is as follows: In Section 2 we formulate the model problem and our method, in Section 3 we present our theoretical analysis, in Section 4 we discuss the choice of finite element spaces in cut finite element methods, in Section 5 we present the numerical results, and finally in Section 6 we include some concluding remarks.
Model problem and method
========================
The domain
----------
Let $\Omega$ be a domain in $\mathbb{R}^d$ with smooth boundary $\partial \Omega$ and exterior unit normal ${\boldsymbol n}$. We let $\varrho$ be the signed distance function, negative on the inside and positive on the outside, to $\partial \Omega$ and we let $U_\delta(\partial \Omega)$ be the tubular neighborhood $\{{\boldsymbol x}\in {\mathbb{R}}^d : |\varrho({\boldsymbol x})| < \delta\}$ of $\partial \Omega$. Then there is a constant $\delta_0>0$ such that the closest point mapping ${\boldsymbol p}({\boldsymbol x}):U_{\delta_0}(\partial \Omega)
\rightarrow \partial \Omega$ is well defined and we have the identity ${\boldsymbol p}({\boldsymbol x}) = {\boldsymbol x}- \varrho({\boldsymbol x}){\boldsymbol n}({\boldsymbol p}({\boldsymbol x}))$. We assume that $\delta_0$ is chosen small enough that ${\boldsymbol p}({\boldsymbol x})$ is well defined. See [@GilTru01], Section 14.6 for further details on distance functions.
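As a concrete illustration (not taken from the paper), for the unit circle in ${\mathbb{R}}^2$ the signed distance, the closest point mapping, and the outward normal all have closed forms, so the identity ${\boldsymbol p}({\boldsymbol x}) = {\boldsymbol x}- \varrho({\boldsymbol x}){\boldsymbol n}({\boldsymbol p}({\boldsymbol x}))$ can be verified numerically:

```python
import numpy as np

def rho(x):
    """Signed distance to the unit circle: negative inside, positive outside."""
    return np.linalg.norm(x) - 1.0

def closest_point(x):
    """Closest point mapping p(x); well defined for x away from the origin."""
    return x / np.linalg.norm(x)

def normal(p):
    """Outward unit normal at a boundary point p (equals p on the unit circle)."""
    return p

x = np.array([1.3, 0.4])                       # a point in a tubular neighborhood
p = closest_point(x)
assert np.allclose(p, x - rho(x) * normal(p))  # the identity from the text
```

Here any annulus $1-\delta < |{\boldsymbol x}| < 1+\delta$ with $\delta < 1$ plays the role of the tubular neighborhood $U_{\delta_0}(\partial\Omega)$.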
The model problem
-----------------
We consider the problem: find $u:\Omega \rightarrow {\mathbb{R}}$ such that $$\begin{aligned}
{2}\label{eq:poissoninterior_strong}
-\Delta u &= f \qquad
&& \text{in $\Omega$}
\\ \label{eq:poissonbc_strong}
u &= g \qquad && \text{on $\partial\Omega$}\end{aligned}$$ where $f\in H^{-1}(\Omega)$ and $g\in H^{1/2}(\partial \Omega)$ are given data. It follows from the Lax-Milgram Lemma that there exists a unique solution to this problem and we also have the elliptic regularity estimate $$\label{eq:ellipticregularity}
\|u\|_{H^{s+2}(\Omega)} \lesssim \|f\|_{H^s(\Omega)}, \qquad
s \geq -1.$$ Here and below we use the notation $\lesssim$ to denote less than or equal up to a constant.
Using a Lagrange multiplier to enforce the boundary condition we can write the weak form of – as: find $(u,\lambda) \in H^1(\Omega) \times H^{-1/2}(\partial\Omega)$ such that $$\begin{aligned}
{2}\label{eq:poissoninterior}
\int_{\Omega}\nabla u \cdot\nabla v \,\text{d}\Omega +\int_{\partial\Omega}\lambda\, v\, \text{d}s &= \int_{\Omega}f v\, \text{d}\Omega\qquad \forall v\in H^1(\Omega)
\\ \label{eq:poissonbc}
\int_{\partial\Omega}u\, \mu\, \text{d}s &= \int_{\partial\Omega}g\, \mu\, \text{d}s\qquad \forall \mu\in H^{-1/2}(\partial\Omega)\end{aligned}$$
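As an illustrative sketch (not the paper's boundary value corrected method), discretizing the weak form above leads to a saddle-point system $\bigl(\begin{smallmatrix} A & B^T \\ B & 0 \end{smallmatrix}\bigr)\bigl(\begin{smallmatrix} u \\ \lambda \end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix} f \\ g \end{smallmatrix}\bigr)$. The Python code below assembles a finite-difference analogue for the 1D model problem $-u'' = \pi^2\sin(\pi x)$ on $(0,1)$ with $u(0)=u(1)=0$, where the multiplier space reduces to two point values:

```python
import numpy as np

n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Stiffness-like matrix for -u'' (second-order finite differences, scaled by 1/h)
A = (np.diag(2.0 * np.ones(n + 1)) - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / h
f = np.pi**2 * np.sin(np.pi * x) * h   # load for f = pi^2 sin(pi x); exact u = sin(pi x)

# Constraint matrix B evaluates u at the two boundary points (the multiplier space)
B = np.zeros((2, n + 1))
B[0, 0] = 1.0
B[1, -1] = 1.0
g = np.array([0.0, 0.0])               # Dirichlet data

# Saddle-point system [[A, B^T], [B, 0]] [u; lam] = [f; g]
K = np.block([[A, B.T], [B, np.zeros((2, 2))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, lam = sol[:n + 1], sol[n + 1:]

assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2  # interior rows give the usual scheme
```

On the subspace satisfying $Bu = 0$ the matrix $A$ is positive definite, so the saddle-point system is nonsingular; the multipliers play the role of the boundary flux unknowns.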
The mesh and the discrete domain
--------------------------------
Let ${\mathcal{K}}_{h}, h \in (0,h_0]$, be a family of quasiuniform partitions, with mesh parameter $h$, of $\Omega$ into shape regular triangles or tetrahedra $K$. The partitions induce discrete polygonal approximations $\Omega_h = \cup_{K \in {\mathcal{K}}_h}K$, $h \in (0,h_0]$, of $\Omega$. We assume neither $\Omega_h \subset \Omega$ nor $\Omega \subset
\Omega_h$; instead, the accuracy with which $\Omega_h$ approximates $\Omega$ will be crucial. To each $
\Omega_h$ is associated a discrete unit normal ${\boldsymbol n}_h$ and a discrete signed distance $\varrho_h:\partial \Omega_h \rightarrow \mathbb{R}$, such that if ${\boldsymbol p}_h({\boldsymbol x},\varsigma):={\boldsymbol x}+ \varsigma {\boldsymbol n}_h({\boldsymbol x})$ then ${\boldsymbol p}_h({\boldsymbol x},\varrho_h({\boldsymbol x})) \in \partial \Omega$ for all ${\boldsymbol x}\in \partial \Omega_h$. We will also assume that ${\boldsymbol p}_h({\boldsymbol x},\varsigma)
\in U_{\delta_0}(\Omega):=U_{\delta_0}(\partial\Omega)\cup\Omega$ for all ${\boldsymbol x}\in \partial \Omega_h$ and all $\varsigma$ between $0$ and $\varrho_h({\boldsymbol x})$. For conciseness we will drop the second argument of ${\boldsymbol p}_h$ below whenever it takes the value $\varrho_h({\boldsymbol x})$, and thus we have the map $\partial \Omega_h \ni {\boldsymbol x}\mapsto {\boldsymbol p}_h({\boldsymbol x}) \in \partial \Omega$. We assume that the following assumptions are satisfied $$\label{eq:geomassum-a}
\delta_h := \| \varrho_h \|_{L^\infty(\partial \Omega_h)} = o(h),
\qquad h \in (0,h_0]$$ and $$\label{eq:geomassum-c}
\| {\boldsymbol n}_h - {\boldsymbol n}\circ {\boldsymbol p}_h \|_{L^\infty(\partial \Omega_h)} = o(1),
\qquad h \in (0,h_0]$$ where $o(\cdot)$ denotes the little-$o$ notation. We also assume that $h_0$ is small enough to guarantee that $$\label{eq:geomassum-b}
\partial \Omega_h \subset U_{\delta_0}(\partial \Omega), \qquad h\in(0,h_0]$$ and that there exists $M>0$ such that for any ${\boldsymbol y}\in U_{\delta_0}(\partial
\Omega)$ the equation, find ${\boldsymbol x}\in \partial \Omega_h$ and $
|\varsigma| \leq \delta_h$ such that $$\label{eq:assump_olap}
{{\boldsymbol p}}_h({\boldsymbol x},\varsigma) = {\boldsymbol y}$$ has a solution set $\mathcal{P}_h$ with $$\label{eq:card_hyp}
\mbox{card}(\mathcal{P}_h) \leq M$$ uniformly in $h$. The rationale of this assumption is to ensure that the image of ${\boldsymbol p}_h$ can not degenerate for vanishing $h$; for more information cf. [@BuHaLa18].
We note that it follows
---
author:
- Paritosh Garg
- Sagar Kale
- Lars Rohwedder
- Ola Svensson
bibliography:
- 'ref.bib'
title: 'Robust Algorithms under Adversarial Injections[^1]'
---
[^1]: Research supported in part by the Swiss National Science Foundation project 200021-184656 “Randomness in Problem Instances and Randomized Algorithms.”
---
abstract: 'Dipolar interactions are ubiquitous in nature and rule the behavior of a broad range of systems spanning from energy transfer in biological systems to quantum magnetism. Here, we study magnetization-conserving dipolar induced spin-exchange dynamics in dense arrays of fermionic erbium atoms confined in a deep three-dimensional lattice. Harnessing the special atomic properties of erbium, we demonstrate control over the spin dynamics by tuning the dipole orientation and changing the initial spin state within the large hyperfine manifold of 20 spin states. Furthermore, we demonstrate the capability to quickly turn on and off the dipolar exchange dynamics via optical control. The experimental observations are in excellent quantitative agreement with numerical calculations based on discrete phase-space methods, which capture entanglement and beyond-mean-field effects. Our experiment sets the stage for future explorations of rich magnetic behaviors in long-range interacting dipoles, including exotic phases of matter and applications for quantum information processing.'
author:
- 'A.Patscheider'
- 'B.Zhu'
- 'L.Chomaz'
- 'D.Petter'
- 'S.Baier'
- 'A-M.Rey'
- 'F.Ferlaino'
- 'M.J.Mark'
bibliography:
- 'Spindynamics.bib'
date: April 2019
title: Controlling Dipolar Exchange Interactions in a Dense 3D Array of Large Spin Fermions
---
Spin lattice models of localized magnetic moments (spins), which interact with one another via exchange interactions, are paradigmatic examples of strongly correlated many-body quantum systems. Their implementation in clean, isolated, and fully controllable lattice confined ultra-cold atoms opens a path for a new generation of synthetic quantum magnets, featuring highly entangled states, especially when driven out-of-equilibrium, with broad applications ranging from precision sensing and navigation, to quantum simulation and quantum information processing [@Bloch2008; @Gross2017]. However, the extremely small energy scales associated with the nearest-neighbor spin interactions in lattice-confined atoms with dominant contact interactions have made the observation of quantum magnetic behaviors extremely challenging [@Bloch2008r; @Greif2013]. In contrast, even under frozen motional conditions, dipolar gases, featuring long-range and anisotropic interactions, offer the opportunity to bring ultra-cold systems several steps ahead in the ambitious attempt to model and understand quantum magnetism. While great progress in studying quantum magnetism has been achieved using arrays of Rydberg atoms [@Zeiher2017; @Bernien2017; @Barredo2018; @Guardado2018], trapped ions [@Neyenhuise2017; @Blatt2012; @Britton2012], polar molecules [@Yan2013ood; @Hazzard2014], and spin-$3$ bosonic chromium atoms [@dePaz2013; @dePaz2016; @Lepoutre2018], most of the studies so far have been limited to mesoscopic spin-$1/2$ arrays of at most a few hundred particles or to macroscopic but dilute ($<0.1$ filling fractions) samples of molecules in lattices.
In this work, we report the first investigations of non-equilibrium quantum magnetism in a dense array of fermionic magnetic atoms confined in a deep three-dimensional optical lattice. Our platform realizes a quantum simulator of the long-range XXZ Heisenberg model. The simulator builds on the special atomic properties of ${}^{167}$Er, whose ground state bears large angular momentum quantum numbers, with $I=7/2$ for the nuclear spin and $J=6$ for the electronic angular momentum, resulting in an $F=19/2$ hyperfine manifold, as depicted in Fig.\[fig:1\]A. This complexity enables new control knobs for quantum simulations. First, it is responsible for the large magnetic moment of Er. Second, it gives access to a fully controllable landscape of $20$ internal levels, all coupled by strong magnetic dipolar interactions, up to $49$ times larger than the ones felt by $F=1/2$ alkali atoms in the same lattice potential [@Stamper-Kurn2013]. Finally, it allows fast optical control of the energy splitting between the internal states, which can be tuned on and off resonance using the large tensorial light shift [@Becher2017pol], which adds to the usual quadratic Zeeman shift.
Using all these control knobs, we explore the dipolar exchange dynamics and benchmark our simulator with an advanced theoretical model, which takes entanglement and beyond mean-field effects into account [@suppmat]. In particular, we initialize the system into a desired spin state and activate the spin dynamics, while the motional degree of freedom mainly remains frozen. Here, we study the spreading of the spin population in the different magnetic sublevels as a function of both the specific initial spin state and the dipole orientation. We demonstrate that the spin dynamics at short evolution time follows a scaling that is invariant under internal state initialization (choice of macroscopically populated initial Zeeman level) and that is set by the effective strength of the dipolar coupling. On the contrary, at longer times, the many-body dynamics is affected by the accessible spin space and the long-range character of dipolar interactions beyond nearest neighbors. Finally, we show temporal control of the exchange dynamics using off resonant laser light.
The XXZ Heisenberg model that rules the magnetization-conserving spin dynamics of our system can be conveniently written using spin-$19/2$ dimensionless angular momentum operators $\hat{\mathbf{F}}_i=\{\hat{F}^x_i,\hat{F}^y_i,\hat{F}^z_i\}$, acting on site $i$ and satisfying the commutation relation $[\hat F_i^x,\hat F_i^y]=i\hat F_i^z$. We use the eigenbasis of $\hat{F}^z$ denoted as $|m_F\rangle$ with $0\leq |m_F|\leq F$ [@Auerbach1994iea; @Dutta2015; @suppmat]:
$$\begin{aligned}
\hat{H} &=& \frac{1}{2}\sum_{i,j\neq i}V_{i,j} \left(\hat{F}_ {i}^z\hat{F}_{j}^z-\frac{1}{4}(\hat{F}^+_i\hat{F}^-_{j}+\hat{F}^-_{i}\hat{F}^+_{j}) \right) \nonumber \\ &&+ \sum_{i} \delta_{i}(\hat{F}^z_{i} )^2\end{aligned}$$
The coupling constants $V_{i, j}\,=\,V_{dd}d_y^3\frac{1-3\cos^2(\theta_{i,j})}{r_{ij}^3}$, describe the direct dipole-dipole interactions (DDI), which have long-range character and thus couple beyond nearest neighbors. The dipolar coupling strength between two dipoles located at $\vec{r}_{i}$ and $\vec{r}_{j}$ depends on their relative distance $r_{ij}=|\vec{r}_{i}-\vec{r}_{j}|$ and on their orientation, described by the angle $\theta_{i,j}$ between the dipolar axis, set by the external magnetic field, and the interparticle axis; see Fig.\[fig:1\]B. Here, $V_{dd}\,\approx\,\frac{\mu_0g_F^2\mu_B^2}{4\pi d_y^3}$ denotes the dipolar coupling strength, with $g_F\approx 0.735$ for ${}^{167}$Er, $\mu_0$ the magnetic permeability of vacuum, $\mu_B$ the Bohr magneton, and $d_y$ the shortest lattice constant. The $\hat{F}_ {i}^z\hat{F}_{j}^z$ terms in the Hamiltonian account for the diagonal part of the interactions while the $\hat{F}^+_i\hat{F}^-_{j}+\hat{F}^-_{i}\hat{F}^+_{j}$ terms describe dipolar exchange processes. The second sum denotes the single particle quadratic term $\delta_{i}(\hat{F}^z_{i})^2$ with $\delta_{i}=\delta_{i}^Z+\delta_{i}^T$, accounting for the quadratic Zeeman effect $\propto \delta_{i}^Z$ and tensorial light shifts $\propto \delta_{i}^T$. These two contributions can be independently controlled in our experiment.
The quadratic Zeeman shift allows us to selectively prepare all atoms in one target state of the spin manifold [@suppmat]. The tensorial light shift can compete or cooperate with the quadratic Zeeman shift and can be used as an additional control knob to activate/deactivate the exchange processes. Note that, for all measurements, a large linear Zeeman shift is always present, but since it does not influence the spin-conserving dynamics, it is omitted in Eq.1.
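To make the coupling geometry concrete, the following Python sketch (illustrative only; it is not the analysis code of the experiment) tabulates the dimensionless couplings $V_{i,j}/V_{dd} = d_y^3\,(1-3\cos^2\theta_{i,j})/r_{ij}^3$ on a small 2D plane of the lattice for dipoles oriented along the $y$ axis, so that a nearest neighbor along the dipole axis carries the factor $1-3\cos^2 0 = -2$ while a neighbor perpendicular to it carries a positive factor:

```python
import numpy as np

dx, dy = 272e-9, 266e-9          # in-plane lattice constants from the text (m)
nx, ny = 4, 4                    # small plane, for illustration only
sites = [np.array([i * dx, j * dy]) for i in range(nx) for j in range(ny)]
dip_axis = np.array([0.0, 1.0])  # dipole orientation, here along y

def coupling(r1, r2):
    """Dimensionless V_ij / V_dd = d_y^3 (1 - 3 cos^2 theta_ij) / r_ij^3."""
    r = r2 - r1
    dist = np.linalg.norm(r)
    cos_t = np.dot(r, dip_axis) / dist   # angle between dipole and interparticle axis
    return dy**3 * (1.0 - 3.0 * cos_t**2) / dist**3

m = len(sites)
V = np.array([[0.0 if i == j else coupling(sites[i], sites[j]) for j in range(m)]
              for i in range(m)])

assert np.isclose(V[0][1], -2.0)   # neighbor one step along the dipole axis
assert V[0][ny] > 0                # neighbor perpendicular to the dipole axis
```

The long-range character of the DDI is visible in the matrix: entries beyond nearest neighbors fall off as $1/r^3$ rather than vanishing.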
In the experiment, we first load a spin-polarized quantum degenerate Fermi gas of $\approx 10^4$ Er atoms into a deep 3D optical lattice, following the scheme of Ref. [@Baier2017sif]. The cuboid lattice geometry with lattice constants $(d_x,d_y,d_z) = (272,266,544)\,$nm results in weakly coupled 2D planes, with typical tunneling rates of $\sim10\,$Hz inside the planes and $\sim\,$mHz between them [@suppmat]. The external magnetic field orientation, setting the quantization axis
---
abstract: |
There is increasing evidence that supermassive black holes in active galactic nuclei (AGN) are scaled-up versions of Galactic black holes. We show that the amplitude of high-frequency X-ray variability in the hard spectral state is inversely proportional to the black hole mass over eight orders of magnitude. We have analyzed all available hard-state data from [*RXTE*]{} of seven Galactic black holes. Their power density spectra change dramatically from observation to observation, except for the high-frequency ($\ga$ 10 Hz) tail, which seems to have a universal shape, roughly represented by a power law of index -2. The amplitude of the tail, $C_M$ (extrapolated to 1 Hz), remains approximately constant for a given source, regardless of the luminosity, unlike the break or QPO frequencies, which are usually strongly correlated with luminosity. Comparison with a moderate-luminosity sample of AGN shows that the amplitude of the tail is a simple function of black hole mass, $C_M
= C/M$, where $C \approx 1.25$ M$_\odot$ Hz$^{-1}$. This makes $C_M$ a robust estimator of the black hole mass which is easy to apply to low- to moderate-luminosity supermassive black holes. The high-frequency tail with its universal shape is an invariant feature of a black hole and, possibly, an imprint of the last stable orbit.
author:
- |
Marek Gierli[ń]{}ski$^{1,2}\thanks{E-mail:Marek.Gierlinski@durham.ac.uk}$, Marek Niko[ł]{}ajuk$^{3}$ and Bo[ż]{}ena Czerny$^{4}$\
$^1$Department of Physics, University of Durham, South Road, Durham DH1 3LE, UK\
$^2$Astronomical Observatory, Jagiellonian University, Orla 171, 30-244 Krak[ó]{}w, Poland\
$^3$Department of Physics, University of Bia[ł]{}ystok, Lipowa 41, 15-424 Bia[ł]{}ystok, Poland\
$^4$Copernicus Astronomical Centre, Bartycka 18, 00-716 Warszawa, Poland\
date: Submitted to MNRAS
title: 'High-frequency X-ray variability as a mass estimator of stellar and supermassive black holes'
---
\[firstpage\]
X-rays: binaries – galaxies: active – accretion, accretion discs
Introduction {#sec:introduction}
============
Astrophysical black holes are very simple objects, completely characterized by their mass and spin. Hence, the gravitational potential around a black hole simply scales with its mass. An important question in high-energy astrophysics is whether the accretion flow properties scale with the black hole mass in a simple manner, or, more specifically, whether active galactic nuclei (AGN) are scaled-up versions of Galactic black hole binaries (BHB).
One of the ways of tackling this problem is to study X-ray variability, which is observed in accreting black holes of all masses. Recent advances in mass estimates of AGN central black holes led to the discovery of a dependence of the observed variability properties on mass. Long X-ray monitoring campaigns have allowed the construction of power density spectra (PDS) of accreting supermassive black holes, which turn out to have a roughly (broken) power-law shape. The variability amplitude (the excess variance; e.g. Lu & Yu 2001; Markowitz & Edelson 2001) and the frequency of the break (e.g. M$^c$Hardy et al. 2004, 2006) can depend on the black hole mass.
In order to use the X-ray variability for mass measurement we need a property which scales only with the black hole mass and does not change with accretion rate. The break frequency does not satisfy this condition, as it changes significantly with the accretion rate in X-ray binaries (e.g. Done & Gierli[ń]{}ski 2005). M$^c$Hardy et al. (2006) showed that a more general relation holds between the break frequency, $\nu_b$, and the black hole mass, $M$: $\nu_b = A
L_{\rm bol}^B / M^C$, where $A$, $B$ and $C$ are constants. This relation includes a significant dependence on the source bolometric luminosity, $L_{\rm bol}$.
It was already suggested by Hayashida et al. (1998) that measuring the normalization of the high-frequency tail of the power spectrum, well above the high-frequency break, is an interesting possibility for black hole mass measurement. Equivalently, one can use the excess variance, $\sigma^2_{\rm NXS}$, measured for short data sets. This general line was followed by Czerny et al. (2001), Papadakis (2004), Niko[ł]{}ajuk, Papadakis & Czerny (2004, hereafter N04) and Niko[ł]{}ajuk et al. (2006, hereafter N06). However, the method was not reliably checked against the dependence on the source accretion rate.
In this paper we put the idea of a $\sigma^2_{\rm NXS} \propto M^{-1}$ correlation to the test. We use an extensive set of BHB observations to see if and when $\sigma^2_{\rm NXS}$ is constant for a given mass and whether it anticorrelates with the black hole mass.
High-frequency power {#sec:power}
====================
Power density spectra of many AGN can be approximated by a broken power law, with power $P_\nu \propto \nu^{-1}$ below and $P_\nu
\propto \nu^{-2}$ above the break frequency, $\nu_b$ (e.g. Markowitz et al. 2003b), where $P_\nu$ is the power spectral density normalized to the mean and squared. A second break at lower frequencies, below which the power is roughly $P_\nu \propto
\nu^{0}$, has also been observed (e.g. Pounds et al. 2001; Markowitz, Edelson & Vaughan 2003a). At the zeroth order of approximation this is consistent with the PDS observed in stellar-mass BHB in the hard spectral state. Fig. \[fig:pds\] shows a sample of PDS from Galactic BHB in the hard state (details of the data reduction are described in Section \[sec:data\]). Plainly, these spectra are much more complex than a doubly-broken power law, with multiple broad and narrow noise components, usually well described by a series of Lorentzians (e.g. Pottschmidt et al. 2003). However, despite this complexity, the entire spectral shape roughly resembles a (doubly) broken power law.
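A schematic parametrization of this shape (an illustration, not a fit to the data) with the three segments $\nu^{0}$, $\nu^{-1}$ and $\nu^{-2}$ matched continuously at the two break frequencies can be written as:

```python
import numpy as np

def pds(nu, nu_low, nu_b, p_flat):
    """Doubly-broken power law: nu^0 below nu_low, nu^-1 up to nu_b, nu^-2 above."""
    nu = np.asarray(nu, dtype=float)
    below = p_flat * np.ones_like(nu)
    middle = p_flat * nu_low / nu            # matches `below` at nu = nu_low
    above = p_flat * nu_low * nu_b / nu**2   # matches `middle` at nu = nu_b
    return np.where(nu < nu_low, below, np.where(nu < nu_b, middle, above))

# continuity at both break frequencies
assert np.isclose(pds(0.1, 0.1, 1.0, 2.0), 2.0)
assert np.isclose(pds(1.0, 0.1, 1.0, 2.0), 0.2)
```

In $\nu P_\nu$ units the middle segment is the 'flat top' referred to below, and the $\nu^{-2}$ tail is the high-frequency part whose normalization the paper studies.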
N04 assumed that the break frequency is inversely proportional to the black hole mass, while the $P_\nu \propto \nu^{-1}$ part of the PDS below the break (the ‘flat top’ in $\nu P_\nu$ diagrams) has constant normalization, independent of the black hole mass. Yet inspection of several BHB power spectra clearly shows that neither of these is constant for a given source. The break frequency is known to change with accretion rate (e.g. Done & Gierli[ń]{}ski 2005). The ‘flat top’ normalization can change as well, as one can see in GX 339–4 spectra in Fig. \[fig:pds\].
There is, however, one feature of these power spectra that remains remarkably invariant: the high-frequency spectral shape, above $\nu_b$. For a given source it can be roughly described by a single power law with constant index of 1.5–2.0, and constant normalization for various observations differing in luminosity by more than one order of magnitude. In this paper we test the idea that the high-frequency part of the PDS remains fairly constant for a given source and scales with the black hole mass. This is a simple refinement of the idea proposed by N04 and later developed by N06.
Here we do not make any assumptions about how the characteristic frequencies (e.g. the break frequency) depend on black hole mass. Instead, we assume that the PDS [*above*]{} the break frequency (the high-frequency tail) has a universal spectral shape (roughly $\propto \nu^{-2}$) with a normalization depending on the black hole mass. This can be written as $$P_\nu = C_M
(\nu/\nu_0)^{-2},\label{eq:tail}$$ where $\nu_0$ is an arbitrary frequency which we chose to be $\nu_0$ = 1 Hz. Thus, $C_M$ (in units of Hz$^{-1}$) is the normalization of the (extrapolated) high-frequency tail at 1 Hz.
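Integrating Eq. (\[eq:tail\]) between two frequencies above the break gives the excess variance $\sigma^2 = C_M \nu_0^2 (1/\nu_1 - 1/\nu_2)$, so a measured variance yields $C_M$ and, through the relation $C_M = C/M$ with $C \approx 1.25$ M$_\odot$ Hz$^{-1}$ quoted in the abstract, a mass estimate. A minimal Python sketch (the function name and the numbers in the check are illustrative):

```python
C = 1.25    # M_sun Hz^-1, the constant quoted in the abstract
NU0 = 1.0   # Hz, the reference frequency adopted in the text

def mass_from_excess_variance(sigma2, nu1, nu2):
    """Black hole mass (solar masses) from the excess variance in [nu1, nu2] (Hz)."""
    c_m = sigma2 / (NU0**2 * (1.0 / nu1 - 1.0 / nu2))  # tail normalization at 1 Hz
    return C / c_m

# Consistency check: a 10 M_sun black hole has C_M = 0.125 Hz^-1, so the
# variance between 10 and 50 Hz is 0.125 * (1/10 - 1/50) = 0.01.
assert abs(mass_from_excess_variance(0.01, 10.0, 50.0) - 10.0) < 1e-9
```

Both frequencies must lie above the break for the $\nu^{-2}$ form to apply, which is the condition stated in the text.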
The assumption that $C_M$ is a unique function of the black hole mass would directly correspond to the original assumption of N04 about the constancy of $P(\nu_b)\nu_b$ if $\nu_b$ were constant for a given black hole mass. Due to limited statistics it is often difficult to study details of the high-frequency shape of the PDS. Therefore, we simplify the situation by calculating the amplitude, or the excess variance, of variability in a given frequency band significantly above the break. This can be done directly from a light curve or by integrating the PDS. The excess variance calculated between frequencies $\nu_1$ and $\nu_2$ (both greater than
---
author:
- Liming Jiang
- Changxu Zhang
- Mingyang Huang
- Chunxiao Liu
- |
\
Jianping Shi
- 'Chen Change Loy$^{\textrm{\Letter}}$\'
bibliography:
- 'sections\_arxiv/egbib.bib'
title: 'TSIT: A Simple and Versatile Framework for Image-to-Image Translation\'
---
---
abstract: 'Building on the work of Nogin [@Nogin], we prove that the braid group $B_4$ acts transitively on full exceptional collections of vector bundles on Fano threefolds with $b_2=1$ and $b_3=0$. Equivalently, this group acts transitively on the set of simple helices (considered up to a shift in the derived category) on such a Fano threefold. We also prove that on threefolds with $b_2=1$ and very ample anticanonical class, every exceptional coherent sheaf is locally free.'
address: 'Department of Mathematics, University of Oregon, Eugene, OR 97405'
author:
- 'A. Polishchuk'
title: Simple helices on Fano threefolds
---
[^1]
Background and the main results
===============================
We refer to the paper [@GK] for the review of the theory of exceptional bundles and exceptional collections (see also section 3.1 of [@Bridge] for a short account of the basic definitions and some results).
Let $X$ be a (smooth) Fano threefold over ${{\Bbb C}}$ with $b_2=1$ and $b_3=0$. By the classification of Fano threefolds (see [@IP]), it is known that $X$ is either ${{\Bbb P}}^3$, or the $3$-dimensional quadric, or $V_5$, or $V_{22}$ (in the latter case there are moduli for Fano threefolds of this type). It is known that these Fano threefolds can be characterized by the condition ${\operatorname{rk}}K_0(X)=4$. Furthermore, in all of these cases the derived category $D^b(X)$ of coherent sheaves on $X$ admits a [*full exceptional collection of vector bundles*]{} $(E_1,E_2,E_3,E_4)$. By definition, this means that ${\operatorname{Ext}}^n(E_i,E_j)=0$ for $i>j$ and $n\ge 0$, ${\operatorname{Ext}}^n(E_i,E_i)=0$ for $n>0$, ${\operatorname{End}}(E_i)={{\Bbb C}}$, and the collection $(E_1,\ldots,E_4)$ generates $D^b(X)$. The constructions of full exceptional collections in the above four cases are due to Beilinson [@Be], Kapranov [@Ka], Orlov [@Orlov], and Kuznetsov [@Ku], respectively.
There is a natural action of the braid group $B_n$ on the set of full exceptional collections of objects in $D^b(X)$, where $X$ is a smooth projective variety with ${\operatorname{rk}}K_0(X)=n$, given by [*left and right mutations*]{}. Bondal proved that in the case when ${\operatorname{rk}}K_0(X)=\dim X+1$, the property of a collection to consist of pure sheaves (as opposed to complexes) is preserved under mutations (see [@Bondal]). Furthermore, Positselski showed in [@Posic] that in this case all full exceptional collections of sheaves actually consist of vector bundles. Thus, in the case when $n={\operatorname{rk}}K_0(X)=\dim X+1$ there is an action of the braid group $B_n$ on the set of full exceptional collections of vector bundles on $X$. In this paper we will prove transitivity of this action for the case of Fano threefolds of the above type, by reducing it to the similar transitivity result on the level of $K_0(X)$ established by Nogin [@Nogin].
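The braid action just described can be made concrete at the level of $K_0$. The following Python sketch (our illustration, not part of the paper) encodes the Euler form on $K_0({{\Bbb P}}^3)$ in the Beilinson basis $[\mathcal{O}],\ldots,[\mathcal{O}(3)]$, applies the standard mutation rule $[L_E F]=\chi(E,F)[E]-[F]$, and checks that mutations preserve semiorthogonality and satisfy the braid relation on classes:

```python
# Euler form on K0(P^3) in the basis [O], [O(1)], [O(2)], [O(3)]:
# chi(O(a), O(b)) = chi(P^3, O(b-a)) = (d+1)(d+2)(d+3)/6 with d = b-a,
# a polynomial formula valid for all integers d by Serre duality.
def chi_tw(d):
    return (d + 1) * (d + 2) * (d + 3) // 6

G = [[chi_tw(b - a) for b in range(4)] for a in range(4)]

def chi(v, w):
    return sum(v[i] * G[i][j] * w[j] for i in range(4) for j in range(4))

def basis(i):
    return tuple(1 if k == i else 0 for k in range(4))

def sub(v, w, c):  # the class c*v - w
    return tuple(c * vi - wi for vi, wi in zip(v, w))

def sigma(coll, i):
    """Left mutation at position i: (..., E_i, E_{i+1}, ...) ->
    (..., L_{E_i} E_{i+1}, E_i, ...), with [L_E F] = chi(E,F)[E] - [F]."""
    coll = list(coll)
    E, F = coll[i], coll[i + 1]
    coll[i], coll[i + 1] = sub(E, F, chi(E, F)), E
    return tuple(coll)

beilinson = tuple(basis(i) for i in range(4))

# semiorthogonality of the Beilinson basis: chi(E_i, E_j) = 0 for i > j
assert all(chi(beilinson[i], beilinson[j]) == 0
           for i in range(4) for j in range(i))

# mutation preserves semiorthogonality and exceptionality of classes
m = sigma(beilinson, 1)
assert all(chi(m[i], m[j]) == 0 for i in range(4) for j in range(i))
assert chi(m[1], m[1]) == 1

# braid relation sigma_0 sigma_1 sigma_0 = sigma_1 sigma_0 sigma_1
lhs = sigma(sigma(sigma(beilinson, 0), 1), 0)
rhs = sigma(sigma(sigma(beilinson, 1), 0), 1)
assert lhs == rhs
```

The braid relation can be verified by a direct bilinear-form computation using only the semiorthogonality of the original triple, which is what the last assertion confirms numerically.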
\[main-thm\] Let $X$ be a Fano threefold with $b_2=1$ and $b_3=0$. Then the action of the braid group on the set of complete exceptional collections of bundles on $X$ is transitive.
Note that this result does not establish the conjecture on transitivity of the braid group action on the set of all full exceptional collections (up to a shift of each object) proposed in [@BP], since we do not know whether every full exceptional collection consists of shifts of sheaves only. Neither does it lead to the classification of exceptional bundles on $X$ since we do not know whether one can include every exceptional bundle in a full exceptional collection.
One can restate Theorem \[main-thm\] using the notion of a [*simple helix*]{}[^2]. By definition, a [*simple helix of period $n$*]{} in $D^b(X)$ for a smooth projective variety $X$ is a collection of objects $(E_i)$ numbered by $i\in{{\Bbb Z}}$, such that for every $m\in{{\Bbb Z}}$ the sequence $(E_{m+1},\ldots,E_{m+n})$ is full and exceptional, and ${\operatorname{Hom}}^p(E_i,E_j)=0$ for $p\neq 0$ and $i\le j$. Simple helices can exist only if $X$ is a Fano variety with ${\operatorname{rk}}K_0(X)=\dim X+1$ (see [@BP]) and necessarily consist of shifts of vector bundles (see [@Posic]). One can show that in this case $E_{i-n}\simeq E_i(K)$, where $K$ is the canonical class on $X$ (see [@Bondal]). Conversely, starting with any full exceptional collection of vector bundles $(E_1,\ldots,E_n)$, one gets a simple helix by considering $$\label{helix-eq}
(\ldots,E_1,\ldots,E_n,E_1(-K),\ldots, E_n(-K),E_1(-2K),\ldots).$$ Similarly to the case of exceptional collections one defines an action of the braid group on the set of simple helices. Our theorem can be restated as follows: [*the action of the braid group on simple helices in $D^b(X)$, where $X$ is a Fano threefold, is transitive.*]{}
The difficult part of the proof of Theorem \[main-thm\] was done by Nogin in [@Nogin] where he proved the transitivity of the action of the braid group on semiorthogonal bases in $K_0(X)$ in the above situation. In the case when $X$ is not of type $V_{22}$, this easily implies our result, as was observed by A. Bondal. Indeed, if $X$ is either ${{\Bbb P}}^3$, or a quadric, or $V_5$ then there exists a full exceptional collection of vector bundles on $X$, two of which are line bundles. Studying exceptional objects in the triangulated subcategory generated by the remaining two bundles, one finds that such an object is determined by its class in $K_0$ up to a shift, which concludes the proof in this case. So, in our proof of Theorem \[main-thm\] the reader may assume (but does not have to) that $X$ is of type $V_{22}$.
Our argument is based on the following result, perhaps of independent interest.
\[exc-K0-thm\] Let $X$ be a Fano threefold with very ample anticanonical class and $b_2=1$. Let $E_1$ and $E_2$ be exceptional bundles on $X$ with the same class in $K_0(X)$. Assume that ${\operatorname{Ext}}^1(E_1,E_1(-K))=0$. Then $E_1\simeq E_2$.
The proof of this theorem will be given in the next section. It is based on the trick of considering restrictions to a generic anticanonical K3 surface in $X$, which was exploited by S. Zube in [@Zube] to prove the stability of an exceptional bundle on ${{\Bbb P}}^3$. Using the same trick we will prove that in the situation of Theorem \[exc-K0-thm\] every exceptional sheaf on $X$ is locally free and stable (see Theorem \[stab-thm\] below). Now let us show how Theorem \[exc-K0-thm\] implies our main result.
[*Proof of Theorem \[main-thm\].*]{} Given a pair of complete exceptional collections of bundles on $X$, we can mutate one of them to obtain the situation when the two collections will give identical classes in $K_0$ (by the transitivity of the braid group action on the set of semiorthogonal bases in $K_0$ proved by Nogin [@Nogin]). It remains to note that every exceptional collection $(E_1,\ldots, E_n)$ of vector bundles on $X$ extends to a simple helix as in \[helix-eq\]. Hence, ${\operatorname{Ext}}^1(E_i,E_i(-K))=0$ for $i=1,\ldots,n$, and we can apply Theorem \[exc-K0-thm\].
It would be nice to get rid of the assumption on the vanishing of ${\operatorname{Ext}}^1$ in Theorem \[exc-K0-thm\]. So far, we were able to do this only in the case of rank $2$ bundles assuming that the index of $X$ is $\ge 2$.
\[rk-2-thm\] Let
---
address: |
Department of Physics and Astronomy\
Michigan State University\
East Lansing, MI 48824 USA\
E-mail: shri@pa.msu.edu, yuan@pa.msu.edu
author:
- 'Shrihari Gopalakrishna and C.–P. Yuan'
title: |
B-physics Signature of a\
Supersymmetric U(2) Flavor Model[^1]
---
Introduction
============
The Standard Model (SM) of high energy physics suffers from the gauge hierarchy problem and the flavor problem. Supersymmetry (SUSY) eliminates the gauge hierarchy problem, and a (horizontal) flavor symmetry in generation space could explain the flavor problem. A SUSY theory with a flavor symmetry might relate the quark/lepton flavor structure with that of the scalar quark/lepton sector. Such a theory would imply certain predictions for flavor changing neutral current (FCNC) processes that we wish to investigate in this work, along with the constraints from experimental FCNC data.
We do not assume an alignment of the quark/lepton flavor structure with that of the scalar quark/lepton sector, leading to a non-minimal flavor violation (NMFV) scenario. We consider a spontaneously broken U(2) flavor symmetry [@Pomarol:1995xc; @Barbieri:1995uv] in the framework of “effective supersymmetry” [@Cohen:1996vb], in which the first two generation scalars are relatively heavy (a few TeV mass), thereby satisfying neutron electric dipole moment constraints, etc., while still allowing large CP violating phases in the scalar sector. We analyze the implications of such a framework for B-physics observables. We will present details in a forthcoming paper [@shricp].
Consider that the first and second generation superfields ($\psi_a$, $a=1,2$) transform as a U(2) doublet while the third generation superfield ($\psi$) is a singlet [@Barbieri:1995uv]. The most general U(2) symmetric superpotential can be written as $$W = \alpha_1 \psi H \psi + \frac{\alpha_2}{M}\, \psi H \phi^a \psi_a + \frac{\alpha_3}{M}\, \psi_a \phi^{ab} H \psi_b + \frac{\alpha_4}{M}\, \psi_a S^{ab} H \psi_b + \frac{\alpha_5}{M^2}\, \psi_a \phi^a \phi^b H \psi_b + \mu H_u H_d \ ,$$ where $M$ is the cutoff scale below which such an effective description is valid, the $\alpha_i$ are O(1) constants, $\phi^a$ is a U(2) doublet, and $\phi^{ab}$ and $S^{ab}$ are second rank antisymmetric and symmetric U(2) tensors respectively. If U(2) is broken spontaneously by the Vacuum Expectation Values (VEV) $$\left<\phi^a\right> = \begin{pmatrix} 0\\ V \end{pmatrix}; \quad \left<\phi^{ab}\right> = v\,\epsilon^{ab}; \quad \left<S^{12}\right> = 0,\ \left<S^{22}\right> = V \ ,$$ with $\frac{V}{M} \equiv \epsilon \sim 0.02$ and $\frac{v}{M} \equiv \epsilon ' \sim 0.004$, and if U(2) is broken below the SUSY breaking scale, the SUSY breaking masses would also have a structure dictated by U(2). The resulting quark and scalar down-type masses are $$\begin{aligned}
{\cal M}_d = v_d \begin{pmatrix} O & -\lambda_1\epsilon ' & O \\ \lambda_1\epsilon ' & \lambda_2\epsilon & \lambda_4\epsilon \\ O & \lambda_4'\epsilon & \lambda_3 \end{pmatrix} \ &,& \quad
{\cal M}^2_{RL} = v_d \begin{pmatrix}O & -A_1 \epsilon ' & O \\ A_1 \epsilon ' & A_2\epsilon & A_4 \epsilon \\ O & A_4'\epsilon & A_3 \end{pmatrix} \ , \\
{\cal M}^2_{LL} = \begin{pmatrix}m_1^2 & 0 & 0 \\
0 & m_1^2+\epsilon^2 m_2^2 & \epsilon m_4^{2*} \\
0 & \epsilon m_4^2 & m_3^2 \end{pmatrix}_{LL}&,& \quad
{\cal M}^2_{RR} = \begin{pmatrix} m_1^2 & 0 & 0 \\
0 & m_1^2+\epsilon^2 m_2^2 & \epsilon m_4^{2*} \\
0 & \epsilon m_4^2 & m_3^2\end{pmatrix}_{RR}, \nonumber
\label{MSQ.EQ}\end{aligned}$$ where $v_d = \left< h_d \right>$ is the VEV of the Higgs field, the $\lambda_i$’s are O(1) coefficients, and, $m_i$ and $A_i$ (complex in general) are determined by the SUSY breaking mechanism. It has been shown [@Barbieri:1995uv] that such a pattern of the quark mass matrix explains the quark masses and CKM elements.
For our study, we consider the following values for the various SUSY parameters: ${m_{\tilde b_R,\tilde t_R}}=100$GeV, the other squark masses given by $m_0=1000$GeV, $A=1000$GeV, $\tan{\beta}=5$, $|\mu|=150$GeV, $M_2=250$GeV, $M_{\tilde g}=250$GeV and $m_{H^\pm}=250$GeV. ($m_0$ and $A$ denote generic SUSY breaking mass scales.)
Here, we consider processes that go through the $b\rightarrow s$ quark level transition, and in our framework the dominant SUSY contributions are due to $\delta_{32,23}^{RL,RR,LL} \equiv \frac{({\cal M}^2_{RL,RR,LL})_{32,23}}{m_0^2}$. For the chosen values of the parameters, we find $|\delta_{32,23}^{RL}| \sim \frac{v_d A\epsilon}{\tilde{m}_0^2}=6.8\times 10^{-4}$, and, $|\delta_{32}^{LL,RR}| \sim \epsilon \frac{m_4^2}{m_0^2} = 0.02$.
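The quoted numbers can be cross-checked directly (our sketch; it assumes the convention $v_d = (174~\mathrm{GeV})\cos\beta$, which reproduces the quoted $6.8\times 10^{-4}$, and $m_4 = m_0$ for the $LL/RR$ estimate):

```python
import math

tan_beta = 5.0
v_d = 174.0 * math.cos(math.atan(tan_beta))  # GeV; assumes v = 174 GeV
A, eps, m0 = 1000.0, 0.02, 1000.0            # GeV, -, GeV

# |delta^{RL}_{32,23}| ~ v_d * A * eps / m0^2
delta_RL = v_d * A * eps / m0**2
assert abs(delta_RL - 6.8e-4) < 1e-5

# |delta^{LL,RR}_{32}| ~ eps * m4^2 / m0^2 = 0.02 requires m4 ~ m0
delta_LL = eps * (m0**2 / m0**2)
assert abs(delta_LL - 0.02) < 1e-12
```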
B-physics probes
================
Given such an effective SUSY theory we estimate the sizes of various B-physics observables that we expect are modified from their SM predictions. In addition to the SM contribution, we include the charged Higgs, chargino and gluino contributions. We analyze several $\Delta B=1$ FCNC processes, among them $b\to s\gamma$ and $B\to \phi K_S$, and the $\Delta B=2$ processes $B_s$–$\bar{B}_s$ mixing and the dilepton asymmetry in $B_s$. We find regions in U(2) SUSY parameter space that are consistent with current experimental data and obtain expectations for measurements that are forthcoming. To illustrate the effects, we present here expectations for the CP asymmetries in $b\to s\gamma$ and $B\to \phi K_S$, and for $B_s$–$\bar{B}_s$ mixing. A more exhaustive analysis will be presented elsewhere [@shricp].
The CP asymmetry in $b\to s\gamma$ is given by $$A_{CP}^{b\to s\gamma} \equiv \frac{\Gamma(\bar{B}\to X_{s}\gamma)-\Gamma(B\to X_{\bar{s}}\gamma)}{\Gamma(\bar{B}\to X_{s}\gamma)+\Gamma(B\to X_{\bar{s}}\gamma)} \ , \label{ACPBSG.EQ}$$ and the expectation in the U(2) SUSY theory is shown in Fig. (\[BSGACPLR.FIG\]). We see that significant CP asymmetry is possible in the scenario we are considering, while satisfying experimental constraints.
The CP asymmetry in $B\to \phi K_S$ is defined by $$A_{CP}^{\phi K_S}(t) \equiv \frac{\Gamma(\bar{B}^{0}(t)\to\phi K_S)-\Gamma(B^{0}(t)\to\phi K_S)}{\Gamma(\bar{B}^{0}(t)\to\phi K_S)+\Gamma(B^{0}(t)\to\phi K_S)} = -C_{\phi K}\cos(\Delta m_{B_d}\,t) + S_{\phi K}\sin(\Delta m_{B_d}\,t) \ ,$$ and Fig. (\[BPHIKCPBS.FIG\] left) shows the CP asymmetry in $B\to \phi K_S$ for a scan over $\delta_{32}^{RL}$ while satisfying all experimental constraints.
The mixing parameter $\Delta m_{B_s}$ depends quite sensitively on $\delta_{32}^{RR}$ and can be significantly altered from the SM prediction as shown in Fig. (\[BPHIKCPBS.FIG\] right).
In conclusion, we note that similar results hold for flavor models that have the same order of magnitude for the 23 element in the squark mass matrix. In such cases, the prospects look exciting for discovering SUSY in B meson processes at current and upcoming colliders.
[0]{}
A. Pomarol and D. Tommasini, Nucl. Phys. B [**466**]{}, 3 (1996). R. Barbieri, G. R. Dvali and L. J. Hall, Phys. Lett. B [**377**]{}, 76 (1996); R. Barbieri, L. J. Hall and A. Romanino, Phys. Lett. B [**401**]{}, 47 (1997). A. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Lett. B [**388**]{}, 588 (1996). S. Gopalakrishna and C.–P. Yuan, In preparation.
[^1]: Talk presented at [*SUSY 2003: Supersymmetry in the Desert*]{}, held at the University of Arizona, Tucson, AZ, June 5-10, 2003. To appear in the Proceedings.
---
abstract: 'Let $K$ be a number field, and let $C$ be a hyperelliptic curve over $K$ with Jacobian $J$. Suppose that $C$ is defined by an equation of the form $y^{2} = f(x)(x - \lambda)$ for some irreducible monic polynomial $f \in \mathcal{O}_{K}[x]$ of discriminant $\Delta$ and some element $\lambda \in \mathcal{O}_{K}$. Our first main result says that if there is a prime $\mathfrak{p}$ of $K$ dividing $(f(\lambda))$ but not $(2\Delta)$, then the image of the natural $2$-adic Galois representation is open in ${\mathrm{GSp}}(T_{2}(J))$ and contains a certain congruence subgroup of ${\mathrm{Sp}}(T_{2}(J))$ depending on the maximal power of $\mathfrak{p}$ dividing $(f(\lambda))$. We also present and prove a variant of this result that applies when $C$ is defined by an equation of the form $y^{2} = f(x)(x - \lambda)(x - \lambda'')$ for distinct elements $\lambda, \lambda'' \in K$. We then show that the hypothesis in the former statement holds for almost all $\lambda \in \mathcal{O}_{K}$ and prove a quantitative form of a uniform boundedness result of Cadoret and Tamagawa.'
author:
- Jeffrey Yelton
bibliography:
- 'bibfile.bib'
title: 'Boundedness results for $2$-adic Galois images associated to hyperelliptic Jacobians'
---
Introduction {#S1}
============
Let $K$ be a number field with absolute Galois group $G_{K}$, and let $C$ be a hyperelliptic curve defined over $K$; i.e. $C$ is a smooth projective curve defined by an equation of the form $y^{2} = f(x)$ for some squarefree polynomial $f$ of degree $d \geq 3$. (Note that in the case of $d = 3$, $C$ is an elliptic curve.) It is well known that the genus of $C$ is given by $g = \lfloor (d - 1) / 2 \rfloor$. We denote the Jacobian variety of $C$ by $J$; it is an abelian variety of dimension $g$. For each prime $\ell$, we let $T_{\ell}(J)$ denote the $\ell$-adic Tate module of $J$, which is a free ${\mathbb{Z}}_{\ell}$-module of rank $2g$. We write $\rho_{\ell} : G_{K} \to {\mathrm{Aut}}(T_{\ell}(J))$ for the natural $\ell$-adic Galois action on this Tate module. The Tate module $T_{\ell}(J)$ is endowed with the Weil pairing defined with respect to the canonical principal polarization on $J$, which we write as $e_{\ell} : T_{\ell}(J) \times T_{\ell}(J) \to {\mathbb{Z}}_{\ell}$; it is a ${\mathbb{Z}}_{\ell}$-bilinear skew-symmetric pairing. Let ${\mathrm{Sp}}(T_{\ell}(J))$ denote the group of symplectic automorphisms of $T_{\ell}(J)$ with respect to the pairing $e_{\ell}$, and let $${\mathrm{GSp}}(T_{\ell}(J)) := \{\sigma \in \mathrm{Aut}_{{\mathbb{Z}}_{\ell}}(T_{\ell}(J))\ |\ e_{\ell}(P^{\sigma}, Q^{\sigma}) = e_{\ell}(P, Q)^{\chi_{\ell}(\sigma)}\ \forall P, Q \in T_{\ell}(J)\}$$ denote the group of symplectic similitudes, where $\displaystyle \chi_{\ell} : G_{K} \to {\mathbb{Z}}_{\ell}^{\times}$ is the $\ell$-adic cyclotomic character.
It is well known that the image $G_{\ell}$ of $\rho_{\ell}$ is always a closed subgroup of ${\mathrm{GSp}}(T_{\ell}(J))$ and that in fact there is some hyperelliptic Jacobian $J$ of a given dimension $g$ such that the inclusion $G_{\ell} \subseteq {\mathrm{GSp}}(T_{\ell}(J))$ has finite index (or equivalently, that $G_{\ell}$ is an open subgroup of the $\ell$-adic Lie group ${\mathrm{GSp}}(T_{\ell}(J))$); see for instance [@yelton2015images Theorem 1.1]. Note that the subgroup $G_{\ell} \cap {\mathrm{Sp}}(T_{\ell}(J)) \subset G_{\ell}$ coincides with the image of the Galois subgroup which fixes the extension $K(\mu_{\ell}) / K$ obtained by adjoining all $\ell$-power roots of unity to $K$. Since $K$ is a number field, the extension $K(\mu_{\ell}) / K$ is infinite; it follows that $G_{\ell} \not\subset {\mathrm{Sp}}(T_{\ell}(J))$ and that $G_{\ell}$ has finite index in ${\mathrm{GSp}}(T_{\ell}(J))$ if and only if $G_{\ell} \cap {\mathrm{Sp}}(T_{\ell}(J))$ has finite index in ${\mathrm{Sp}}(T_{\ell}(J))$.
There have been many results stating that $G_{\ell}$ has finite index in ${\mathrm{GSp}}(T_{\ell}(J))$ under various hypotheses for the polynomial defining the hyperelliptic curve. For instance, Y. Zarhin has proven this for large enough genus in the case of hyperelliptic curves defined by equations of the form $y^{2} = f(x)$ or $y^{2} = f(x)(x - \lambda)$ with $\lambda \in K$, where the Galois group of $f$ is the full symmetric or alternating group ([@zarhin2002very Theorem 2.5] and [@zarhin2010families Theorem 8.3]; see also [@zarhin2013two Theorem 1.3] for a variant of this where the curve is defined using two parameters). A. Cadoret and A. Tamagawa have also proven ([@cadoret2012uniform Theorems 1.1 and 5.1]) that for any family of hyperelliptic Jacobians over a smooth, geometrically connected, separated curve over $K$, this openness condition will be satisfied for the $\ell$-adic Galois action associated to all but finitely many fibers, and that in fact the indices of the $\ell$-adic Galois images corresponding to these fibers are uniformly bounded. However, there have been very few results which give explicit bounds for the index of $G_{\ell}$ in ${\mathrm{GSp}}(T_{\ell}(J))$ in such cases.
Our aim in this paper is to give some similar results on the openness of the $2$-adic Galois images in the group of symplectic similitudes associated to Jacobians of hyperelliptic curves whose defining polynomials satisfy certain hypotheses, and to provide formulas giving explicit bounds for the indices of the $2$-adic Galois images in these cases. (Unfortunately, our method currently cannot tell us anything about the $\ell$-adic Galois images for odd primes $\ell$. However, we are hopeful that it can be strengthened to show the openness of the $\ell$-adic Galois images as well under the same or similar hypotheses as is implied by the Mumford-Tate conjecture, and to show that the $\ell$-adic Galois images contain the full symplectic group for almost all $\ell$.)
We state our main results below. In these statements as well as in the rest of the paper, we use the following notation. For any integer $N \geq 1$, we denote the level-$N$ congruence subgroup of ${\mathrm{Sp}}(T_{2}(J))$ by $\Gamma(N) := \{\sigma \in {\mathrm{Sp}}(T_{2}(J)) \ | \ \sigma \equiv 1 \ (\mathrm{mod} \ N)\}$. We denote the ring of integers of a number field $K$ by $\mathcal{O}_{K}$. Finally, we write $v_{2} : {\mathbb{Q}}^{\times} \to {\mathbb{Z}}$ for the (normalized) $2$-adic valuation on ${\mathbb{Q}}$.
\[thm main1\]
Let $K$ be a number field, and let $f \in \mathcal{O}_{K}[x]$ be an irreducible monic polynomial of degree $d \geq 2$ with discriminant $\Delta$. Let $J$ be the Jacobian of the hyperelliptic curve with defining equation $y^{2} = f(x)(x - \lambda)$ for some $\lambda \in \mathcal{O}_{K}$, and define the $2$-adic Galois image $G_{2}$ as above. Then if there is a prime $\mathfrak{p}$ of $\mathcal{O}_{K}$ which divides $(f(\lambda))$ but not $(2\Delta)$, the Lie subgroup $G_{2} \subset {\mathrm{GSp}}(T_{2}(J))$ is open. In fact, we have $G_{2} \cap {\mathrm{Sp}}(T_{2}(J)) \supsetneq \Gamma(2^{2v_{2}(m) + 2})$, where $m \geq 1$ is the greatest integer such that $\mathfrak{p}^{m} \mid (f(\lambda))$. If in addition $d = 3$, then $G_{2} \cap {\mathrm{Sp}}(T_{2}(J)) \supsetneq \Gamma(2^{v_{2}(m) + 1})$.
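The symplectic and congruence conditions appearing above can be tested mechanically. The following Python sketch (ours, for $g=2$ so that $2g=4$, with the standard symplectic form standing in for $e_{2}$) checks whether an integer matrix lies in ${\mathrm{Sp}}$, extracts its similitude factor, and tests membership in the congruence subgroup $\Gamma(N)$:

```python
import numpy as np

g = 2  # so the Tate module has rank 2g = 4
# standard symplectic form J0 = [[0, I], [-I, 0]]
J0 = np.block([[np.zeros((g, g), int), np.eye(g, dtype=int)],
               [-np.eye(g, dtype=int), np.zeros((g, g), int)]])

def similitude(M):
    """Return c with M^T J0 M = c*J0 (the similitude factor),
    or None if M is not in GSp."""
    P = M.T @ J0 @ M
    c = P[0, g]
    return c if np.array_equal(P, c * J0) else None

def in_gamma(M, N):
    """Level-N congruence subgroup: M in Sp and M = 1 (mod N)."""
    return similitude(M) == 1 and \
        np.all((M - np.eye(2 * g, dtype=int)) % N == 0)

# a symplectic transvection-like element 1 + 4*E_{1,3}
M = np.eye(4, dtype=int)
M[0, 2] = 4
assert similitude(M) == 1          # M lies in Sp
assert in_gamma(M, 4)              # M = 1 mod 4
assert not in_gamma(M, 8)          # but not mod 8
```

This mirrors the definition $\Gamma(N) = \{\sigma \in {\mathrm{Sp}} \ |\ \sigma \equiv 1 \ (\mathrm{mod}\ N)\}$; the example matrix lies in $\Gamma(4) \setminus \Gamma(8)$.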
---
abstract: 'We classify all possible limits of families of translates of a fixed, arbitrary complex plane curve. We do this by giving a set-theoretic description of the projective normal cone (PNC) of the base scheme of a natural rational map, determined by the curve, from the ${{\mathbb{P}}}^8$ of $3\times 3$ matrices to the ${{\mathbb{P}}}^N$ of plane curves of degree $d$. In a sequel to this paper we determine the multiplicities of the components of the PNC. The knowledge of the PNC as a cycle is essential in our computation of the degree of the ${\text{\rm PGL}}(3)$-orbit closure of an arbitrary plane curve, performed in [@MR2001h:14068].'
address:
- 'Dept. of Mathematics, Florida State University, Tallahassee FL 32306, U.S.A.'
- 'Inst. för Matematik, Kungliga Tekniska Högskolan, S-100 44 Stockholm, Sweden'
author:
- 'Paolo Aluffi, Carel Faber'
bibliography:
- 'ghizzIbib.bib'
title: 'Limits of PGL(3)-translates of plane curves, I'
---
Introduction {#intro}
============
In this paper we determine the possible [*limits*]{} of a fixed, arbitrary complex plane curve ${{\mathscr C}}$, obtained by applying to it a family of translations $\alpha(t)$ centered at a singular transformation of the plane. In other words, we describe the curves in the boundary of the ${\text{\rm PGL}}(3)$-orbit closure of a given curve ${{\mathscr C}}$.
Our main motivation for this work comes from enumerative geometry. In [@MR2001h:14068] we have determined the [*degree*]{} of the ${\text{\rm PGL}}(3)$-orbit closure of an arbitrary (possibly singular, reducible, non-reduced) plane curve; this includes as special cases the determination of several characteristic numbers of families of plane curves, the degrees of certain maps to moduli spaces of plane curves, and isotrivial versions of the Gromov-Witten invariants of the plane. A description of the limits of a curve, and in fact a more refined type of information is an essential ingredient of our approach. This information is obtained in this paper and in its sequel [@ghizzII]; the results were announced and used in [@MR2001h:14068].
The set-up is as follows. Consider the natural action of ${\text{\rm PGL}}(3)$ on the projective space of plane curves of a fixed degree. The orbit closure of a curve ${{\mathscr C}}$ is dominated by the closure ${{{{\widetilde{{{\mathbb{P}}}}}}}}^8$ of the graph of the rational map $c$ from the ${{\mathbb{P}}}^8$ of $3\times3$ matrices to the ${{\mathbb{P}}}^N$ of plane curves of degree $d$, associating to $\varphi\in {\text{\rm PGL}}(3)$ the translate of ${{\mathscr C}}$ by $\varphi$. The boundary of the orbit consists of limits of ${{\mathscr C}}$ and plays an important role in the study of the orbit closure.
Our computation of the degree of the orbit closure of ${{\mathscr C}}$ hinges on the study of ${{{{\widetilde{{{\mathbb{P}}}}}}}}^8$, and especially of the scheme-theoretic inverse image in ${{{{\widetilde{{{\mathbb{P}}}}}}}}^8$ of the base scheme ${{\mathscr S}}$ of $c$. Viewing ${{{{\widetilde{{{\mathbb{P}}}}}}}}^8$ as the blow-up of ${{\mathbb{P}}}^8$ along ${{\mathscr S}}$, this inverse image is the exceptional divisor, and may be identified with the projective normal cone (PNC) of ${{\mathscr S}}$ in ${{\mathbb{P}}}^8$. A description of the PNC leads to a description of the limits of ${{\mathscr C}}$: the image of the PNC in ${{\mathbb{P}}}^N$ is contained in the set of limits, and the complement, if nonempty, consists of easily identified ‘stars’ (that is, unions of concurrent lines).
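As a toy illustration (ours, not an example from the paper) of how such limits are computed in practice: apply a germ $\alpha(t)$ centered at a singular matrix to a fixed curve, clear the lowest power of $t$ from the transformed equation, and set $t=0$.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
F = y**2 - x*z  # a smooth conic C

# a germ alpha(t) in PGL(3) whose center alpha(0) = diag(1, 0, 0) is singular
alpha = sp.Matrix([[1, 0, 0],
                   [0, t, 0],
                   [0, 0, t]])
v = alpha * sp.Matrix([x, y, z])
G = sp.expand(F.subs({x: v[0], y: v[1], z: v[2]}, simultaneous=True))

# clear the lowest power of t, then set t = 0 to read off the limit curve
m = min(mon[0] for mon in sp.Poly(G, t).monoms())
limit = sp.expand(sp.cancel(G / t**m)).subs(t, 0)
assert m == 1
assert limit == -x*z
```

Here the limit of the translates of the conic is the degenerate conic $xz=0$, a pair of lines: the smooth orbit degenerates at the boundary, exactly the phenomenon the PNC is designed to organize.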
This paper is devoted to a set-theoretic description of the PNC for an arbitrary curve. This suffices for the determination of the limits, but does not suffice for the enumerative applications in [@MR2001h:14068]; these applications require the full knowledge of the PNC [*as a cycle,*]{} that is, the determination of the multiplicities of its different components. We obtain this additional information in [@ghizzII].
The final result of our analysis (including multiplicities) was announced in §2 of [@MR2001h:14068]. The proofs of the facts stated there are given in the present article and its sequel. The main theorem of this paper (Theorem \[mainmain\], in §\[proof\]) gives a precise set-theoretic description of the PNC, relying upon five types of families and limits identified in §\[germlist\]. In this introduction we confine ourselves to formulating a weaker version, focusing on the determination of limits. In [@ghizzII] (Theorem 2.1), we compute the multiplicities of the corresponding five types of components of the PNC.
The limits of a curve ${{\mathscr C}}$ are necessarily curves with [*small linear orbit,*]{} that is, curves with infinite stabilizer. Such curves are classified in §1 of [@MR2002d:14084]; we reproduce the list of curves obtained in [@MR2002d:14084] in an appendix at the end of this paper (§\[appendix\]). For another classification, from a somewhat different viewpoint, we refer to [@MR1698902]. For these curves, the limits can be determined using the results in [@MR2002d:14083] (see also §\[boundary\]). The following statement reduces the computation of the limits of an arbitrary curve ${{\mathscr C}}$ to the case of curves with small orbit.
\[main\] Let ${{\mathscr X}}$ be a limit of a plane curve ${{\mathscr C}}$ of degree $d$, obtained by applying to it a ${{\mathbb{C}}}((t))$-valued point of ${\text{\rm PGL}}(3)$ with singular center. Then ${{\mathscr X}}$ is in the orbit closure of a star (reproducing projectively the $d$-tuple cut out on ${{\mathscr C}}$ by a line meeting it properly), or of curves with small orbit determined by the following features of ${{\mathscr C}}$:
I.   The linear components of the support ${{{{\mathscr C}}'}}$ of ${{\mathscr C}}$;
II.  The nonlinear components of ${{{{\mathscr C}}'}}$;
III. The points at which the tangent cone of ${{\mathscr C}}$ is supported on at least $3$ lines;
IV.  The Newton polygons of ${{\mathscr C}}$ at the singularities and inflection points of ${{{{\mathscr C}}'}}$;
V.   The Puiseux expansions of formal branches of ${{\mathscr C}}$ at the singularities of ${{{{\mathscr C}}'}}$.
The limits corresponding to these features may be described as follows. In cases I and III they are unions of a star and a general line, that we call ‘fans’; in case II, they are supported on the union of a nonsingular conic and a tangent line; in case IV, they are supported on the union of the coordinate triangle and several curves from a pencil $y^c=\rho\, x^{c-b} z^b$, with $b<c$ coprime positive integers; and in case V they are supported on unions of quadritangent conics and the distinguished tangent line. The following picture illustrates the limits in cases IV and V:
*(Figure: the limits in cases IV and V.)*
A more precise description of the limits is given in §\[germlist\], referring to the classification of these curves obtained in §1 of [@MR2002d:14084] and reproduced in §\[appendix\] of this paper.
The proof of Theorem \[main\] (or rather of its more precise form given in Theorem \[mainmain\]) is by an explicit reduction process, and goes along the following lines. The stars mentioned in the statement are obtained by families of translations $\alpha(t)$ (‘germs’) centered at an element $\alpha(0)\not\in{{\mathscr S}}$. To analyze germs centered at points of ${{\mathscr S}}$, we introduce a notion of equivalence of germs (Definition \[equivgermsnew\]), such that equivalent germs lead to the same limit. We then prove that every germ centered at a point of ${{\mathscr S}}$ is essentially equivalent to one with matrix representation $$\begin{pmatrix}
1 & 0 & 0\\
q(t) & t^b & 0\\
r(t) & s(t)t^b & t^c
\end{pmatrix}$$ with $0\le b\le c$ and $q$, $r$, and $s$ polynomials. Here, coordinates are chosen so that the point $p=(1:0:0)$ belongs to ${{\mathscr C}}$. Studying the limits obtained by applying such germs to ${{\mathscr C}}$, we identify five specific types of families (the [*marker germs*]{} listed in §\[germlist\]), reflecting the features of ${{\mathscr C}}$ at $p$ listed in Theorem \[main\], and with the stated kind of limit. We prove that unless the germ is of one of these types, the corresponding limit is already accounted for (for example, it is in the
---
abstract: |
The [*Reeb space*]{} of a function or a map on a manifold is defined as the space of all the connected components of inverse images.
A Reeb space represents the manifold compactly. Such objects are fundamental and useful tools in the geometric theory of Morse functions and of more general maps that are not too degenerate: in other words, in a branch of the global singularity theory. The author has been interested in the following problem, first established and explicitly solved by Sharko: can we construct an explicit good function inducing a given graph as its Reeb space ([*Reeb graph*]{})? Such problems have been explicitly solved by several researchers, and the author has since set and solved problems of new types.
A [*pseudo quotient map*]{} on a differentiable manifold is a surjective continuous map onto a lower dimensional polyhedron. More precisely, a [*pseudo quotient map*]{} is defined as a map locally regarded as the natural quotient map onto the Reeb space defined from a differentiable map of a given class. They were first defined by Kobayashi and Saeki in 1996 as useful objects in the theory of global singularity related to generic maps into the plane and later the author has used these objects in new explicit situations starting from redefining.
In this paper, we consider classes of continuous or differentiable maps on manifolds, introduce the [*Reeb graphs*]{} of the maps, and redefine pseudo quotient maps onto graphs in a slightly new way. Last, we attack and solve a new problem on construction of the type discussed above.
address: 'Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku Fukuoka 819-0395, Japan'
author:
- Naoki Kitazawa
title: Maps on manifolds onto graphs locally regarded as a quotient map onto a Reeb space and construction problem
---
Introduction {#sec:1}
============
Reeb spaces and graphs and differentiable functions realizing given graphs as Reeb graphs
-----------------------------------------------------------------------------------------
The [*Reeb space*]{} of a continuous map of a suitable class on a topological space is the space of all the connected components of inverse images. For a differentiable function, consider the set of points in the Reeb space representing connected components of inverse images which contain [*singular points*]{}: a [*singular point*]{} of a smooth map is a point at which the rank of the differential drops. For Morse functions, functions with finitely many singular points on closed manifolds and functions of several suitable classes, the spaces are graphs whose vertex sets are the sets defined before. They are called the [*Reeb graphs*]{} of the maps. They seem to have been first defined in [@reeb].
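The definition can be made concrete numerically (our toy illustration, not from the paper): the Reeb graph book-keeps the connected components of level sets. For the height function on the circle, every regular level set has exactly two connected components, so the Reeb graph is a cycle with two vertices (at the minimum and the maximum):

```python
import numpy as np
from scipy import ndimage

# rasterize the unit circle x^2 + y^2 = 1 and count connected components
# of level sets of the height function f(x, y) = y
ax = np.linspace(-1.5, 1.5, 301)
X, Y = np.meshgrid(ax, ax, indexing='ij')
circle = np.abs(X**2 + Y**2 - 1.0) < 0.06

def n_components(c, tol=0.02):
    level = circle & (np.abs(Y - c) < tol)
    _, n = ndimage.label(level, structure=np.ones((3, 3)))
    return n

# the circle itself is one connected component...
assert ndimage.label(circle, structure=np.ones((3, 3)))[1] == 1
# ...but every regular level set of f has two components, so the Reeb
# graph is two edges joining the two vertices at c = -1 and c = +1
assert n_components(0.0) == 2
assert n_components(0.9) == 2
```

The same level-set counting applied to, say, the height function on a torus detects where the number of components jumps, which is exactly where the Reeb graph has vertices.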
Reeb graphs and spaces are fundamental and important in the algebraic and differential topological theory of Morse functions and their generalizations, or in other words, in the theory of global singularity.
We introduce several terminologies and a problem on construction of good functions inducing Reeb graphs isomorphic to given graphs.
The [*singular set*]{} of a differentiable map is defined as the set of all the singular points. A [*singular value*]{} is a point in the target manifold such that the inverse image contains a singular point and a [*regular value*]{} is a point in the target manifold which is not a singular value. The [*singular value set*]{} is the image of the singular set.
\[prob:1\] Can we construct a differentiable function with good geometric properties inducing a given graph as the Reeb graph? We do not fix a manifold on which we construct a desired function.
A problem of this type was first considered and explicitly solved by Sharko ([@sharko]). [@batistacostamezasarmiento], [@martinezalfaromezasarmientooliveira], [@masumotosaeki] and [@michalak] are important studies related to this. Later the author has set and solved explicit cases in [@kitazawa4] and [@kitazawa5].
We comment on the studies of the author. Differently from the other studies, conditions on inverse images of regular values are posed, and the manifolds appearing there need not be spheres, for example.
Pseudo quotient maps
--------------------
A [*pseudo quotient map*]{} on a differentiable manifold is a surjective continuous map onto a lower dimensional polyhedron, defined as a map locally regarded as the natural quotient map onto the Reeb space of a differentiable map of a suitable class. They were first defined by Kobayashi and Saeki in 1996 ([@kobayashisaeki]) as useful objects in the theory of global singularity related to generic maps of dimensions larger than $2$ into the plane. Later the author, after redefining them, has used these objects in new explicit situations, in [@kitazawa2] and [@kitazawa3] for example.
The content of the present paper
--------------------------------
In this paper, we perform the following.
- We consider classes of continuous or differentiable maps on differentiable manifolds and introduce the [*Reeb graphs*]{} of the maps of these classes. This is a refinement of the definition of a Reeb graph which has not appeared before.
- We redefine pseudo quotient maps on differentiable manifolds for a class of maps introduced before.
- We set a construction problem and give an answer (Theorem \[thm:1\]) together with the several terminologies needed. Both the problem and the result are of a new type.
The author is a member of the project Grant-in-Aid for Scientific Research (S)
(17H06128 Principal Investigator: Osamu Saeki) “Innovative research of geometric topology and singularities of differentiable mappings”
(https://kaken.nii.ac.jp/en/grant/KAKENHI-PROJECT-17H06128/ ) and is supported by this project.
Classes of continuous or differentiable maps on differentiable manifolds and the Reeb graphs of the maps of these classes
=========================================================================================================================
Let $(r,s)$ be a pair of non-negative integers satisfying $r>s$ or a pair such that $r=\infty$ and that $s$ is a non-negative integer.
Let $X$ be a $C^r$ manifold of dimension $m>1$ and $Y$ be a $C^r$ manifold of dimension $1$. Let $A \subset X$ be a measure zero set. A map $f:(X,A) \rightarrow Y$ between these $C^r$ manifolds is said to be a [*$(C^r,C^s)$*]{} map if $f$ is of class $C^r$ at any point in $X-A$ and of class $C^s$ at any point in $A$. $A$ is called the [*measure zero set*]{} of $f$.
Consider the Reeb space $W_f$ of the map $f$. Let $V$ be the set of all points of $W_f$ whose inverse images contain singular points or points in $A$. If we can regard $W_f$ as a graph whose vertex set is $V$, then we call this graph the [*Reeb graph*]{} of $f$.
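As elementary illustrations (standard examples, not taken from the present paper), the Reeb graphs of two familiar height functions can be described explicitly:

```latex
\begin{itemize}
\item For the height function $f(x,y,z)=z$ on the unit sphere
  $S^2\subset\mathbb{R}^3$, the singular points are the two poles and the
  inverse image of each regular value is a single circle; the Reeb graph
  is a single edge joining two vertices of degree $1$.
\item For the height function on the torus $T^2$ embedded vertically in
  $\mathbb{R}^3$, there are four singular points (a minimum, two saddles
  and a maximum); the Reeb graph consists of two degree-$3$ vertices
  (coming from the saddles) joined by a pair of parallel edges, with one
  degree-$1$ edge attached to each of them (coming from the minimum and
  the maximum).
\end{itemize}
```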
A pseudo quotient map of a class of maps
========================================
Let $(r,s)$ be a pair of non-negative integers satisfying $r>s$ or a pair such that $r=\infty$ and that $s$ is a non-negative integer. Two $C^r$ maps $c_1:X_1 \rightarrow Y_1$ and $c_2:X_2 \rightarrow Y_2$ are said to be [*$C^s$ equivalent*]{} if a pair $({\phi}_X,{\phi}_Y)$ of $C^s$ diffeomorphisms satisfying ${\phi}_Y \circ c_1=c_2 \circ {\phi}_X$ exists.
Let $X_1$ and $X_2$ be differentiable manifolds of dimension $m>1$ and $Y$ be a graph. Let $r$ be a positive integer or $\infty$. Two continuous maps $c_1:X_1 \rightarrow Y$ and $c_2:X_2 \rightarrow Y$ are said to be [*$C^r$-PL equivalent*]{} if there exists a $C^r$ diffeomorphism $\phi:X_1 \rightarrow X_2$ satisfying $c_1=c_2 \circ \phi$.
Let $m>2$ be a positive integer. Let $\mathcal{C}$ be a class of $(C^r,C^s)$ maps from $m$-dimensional differentiable manifolds into $1$-dimensional ones whose Reeb spaces are regarded as Reeb graphs. A continuous map $q$ on an $m$-dimensional differentiable manifold onto a graph is said to be a [*pseudo quotient map*]{} of the class $\mathcal{C}$ if the following hold.
1. For each point $p$ in the interior of an edge, there exists a small closed interval $C_p$ containing $p$ such that $q {\mid}_{q^{-1}(C_p)}:q^{-1}(C_p) \rightarrow C_p$ is $C^r$-PL equivalent to a $C^r$ trivial bundle appearing locally around a closed interval in the interior of an edge in the Reeb graph of a map of the class $\mathcal{C}$.
2. For each vertex $p$, there exists a small regular neighborhood $C_p$ containing $p$ such that $q {\mid}_{q^{-1}(C_p)}:q^{-1}(C_p) \rightarrow C_p$ is $C
---
abstract: 'The possibility that like-charges can attract each other under the mediation of mobile counterions is by now well documented experimentally, numerically, and analytically. Yet, obtaining exact results is in general impossible, or restricted to some limiting cases. We work out here in detail a one dimensional model that retains the essence of the phenomena present in higher dimensional systems. The partition function is obtained explicitly, from which a wealth of relevant quantities follow, such as the effective force between the charges or the counterion profile in their vicinity. Isobaric and canonical ensembles are distinguished. The case of two equal charges screened by an arbitrary number $N$ of counterions is first studied, before the more general asymmetric situation is addressed. It is shown that the parity of $N$ plays a key role in the long range physics.'
author:
- Gabriel Téllez
- Emmanuel Trizac
title: 'Screening like-charges in one-dimensional Coulomb systems: Exact results'
---
Introduction
============
Coulombic effects are often paramount in soft matter systems, where the large dielectric constant of the solvent (say water) invites ionizable groups at the surface of macromolecules to dissociate [@KeHP01; @Levin02; @Messina09]. While a realistic treatment requires considering three dimensional systems, interesting progress has been achieved for lower dimensional problems where the key mechanisms can be studied in greater analytical detail [@Janco81; @Forrester98; @Samaj03]. In particular, a one dimensional model was introduced in the 1960s by Lenard and Prager independently, for which a complete thermodynamic solution was provided [@Len61; @Pra61; @EL62]. This model has been further studied in Ref. [@DHNP09], but it turns out that some interesting features have been overlooked in relation to the like-charge attraction phenomenon [@Levin02; @Varenna]. This striking non-mean-field effect, relevant for strongly coupled charged matter [@Netz01; @Varenna], is the thread of our study.
The paper is organized as follows. The model is first defined in section \[sec:equal-charges\]. It mimics the screening of charged colloids. The Coulomb potential in one dimension between two charges $q$ and $q'$ located along a line with coordinates $\widetilde{x}$ and $\widetilde{x}'$ is $$v(\widetilde{x},\widetilde{x}')=-qq'|\widetilde{x}-\widetilde{x}'|
\,.$$ Therefore, the electric field created by one particle is of constant magnitude. This fact simplifies the study of the equilibrium statistical mechanics of such systems, and allows one to obtain some of its properties by simple arguments. Furthermore, it also allows for an explicit computation of the partition function [@Len61; @Pra61]. The system under scrutiny can be envisioned as a collection of parallel charged plates, able to move along a perpendicular axis. The salient properties of this system can be obtained by simple arguments which we present in section \[sec:equal-charges\], followed afterwards by a more technical analysis where the explicit calculation of the partition function is performed, first in the isobaric and then in the canonical ensemble. After having presented the symmetric case, section \[sec:different-charges\] will generalize the investigation to the situations where the two screened charges are different. Noteworthy is that parity-of-the-particle-number considerations will play an important role in the remainder.
Screening of two equal charges by counterions only {#sec:equal-charges}
==================================================
Consider two charges $q$ along a line, located at $\widetilde{x}=0$ and $\widetilde{x}=\widetilde{L}$. Between the charges there are $N$ counterions of charge $e=-2q/N$. Consider the equilibrium thermal properties of this system at a temperature $T$, and as usual define $\beta=1/(k_B T)$ with $k_B$ the Boltzmann constant. This simple model mimics the screening and effective interaction between two charged colloids in a counterion solution, without added salt. In one dimension, $\beta e^2$ has dimensions of inverse length, therefore it is convenient to use rescaled units in which all distances are measured in units of $1/(\beta e^2)$: $x=\beta e^2 \widetilde{x}$. It is also convenient to work with a dimensionless pressure $P=\widetilde{P}/e^2$ where $\widetilde{P}$ is the pressure (equal to the force, in one dimensional systems).
The potential energy (dimensionless, measured in units of $k_B T$) of the system is $$\label{eq:pot}
U=-\sum_{1\leq i < j \leq N} |x_i-x_j| + \left(\frac{N}{2}\right)^2 L.$$ Before presenting the technical analysis, we start with simple considerations.
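The potential energy above is straightforward to evaluate numerically; a minimal sketch (the function name and test positions are ours, for illustration only):

```python
import numpy as np

def potential_energy(x, L, N):
    """Dimensionless potential energy of Eq. (pot):
    U = -sum_{1<=i<j<=N} |x_i - x_j| + (N/2)^2 * L,
    for N counterions at rescaled positions x in [0, L]."""
    x = np.asarray(x, dtype=float)
    pair_term = sum(abs(x[i] - x[j])
                    for i in range(N) for j in range(i + 1, N))
    return -pair_term + (N / 2.0) ** 2 * L

# Example: N = 2 counterions at x = 0.25 and 0.75 with L = 1:
# U = -0.5 + 1.0 = 0.5
U = potential_energy([0.25, 0.75], 1.0, 2)
```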
Possibility of attraction between like-charges {#sec:like-charge-attraction}
----------------------------------------------
### A heuristic argument
The possibility of attraction between the two $+q$ charges at $0$ and $L$ is related to the parity of $N$. If $N$ is odd, $N=2p+1$, then $p$ counterions will form a double layer around each charge $q$. This forms two compound objects of charge $q(1-2p/N)=q/N$ each, located around $0$ and $L$. In addition, there will be one counterion between these two objects, which is essentially free, since the electric fields created by the charges located on each side around $0$ and $L$ cancel each other. For $L$ large enough, consider figure \[fig:1Dmodel-odd-attract\]. The right side of the system, composed of one charge $q$ and $p$ counterions, has charge $q/N$. The left side, which for the sake of the argument contains the free counterion plus the compound charge, exhibits a total charge $-q/N$. Thus the force exerted by the left side on the right side is $\widetilde{P}\to -q^2/N^2=-e^2/4$, an attractive force, and one expects that $P \to -1/4$ for $L\to\infty$.
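The bookkeeping of this heuristic argument can be checked with exact rational arithmetic (a sketch; the value $N=5$ is an arbitrary odd example and the variable names are ours):

```python
from fractions import Fraction

N = 5                         # odd number of counterions, N = 2p + 1
p = (N - 1) // 2
q = Fraction(1)               # fixed boundary charge (arbitrary units)
e = -2 * q / N                # counterion charge

# compound object: one charge q dressed by p counterions
compound = q + p * e
assert compound == q / N

# large-L force between sides of net charge +q/N and -q/N
force = -compound**2          # = -q^2/N^2
assert force == -e**2 / 4     # i.e. the dimensionless P -> -1/4
```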
![An odd number of mobile counterions screening two like-charges. The $N$ mobile ions (counter-ions) have charge $-2q/N$ and the confining objects have charge $q$, so that the whole system is electro-neutral. Here, $N=2p+1$ is odd, so that a single ion (referred to as the misfit since the net electric force acting on it vanishes) “floats” in between the two screened boundaries which attract, each, $p$ ions in their vicinity (see also Fig. \[fig:1Dmodel-odd\]). This single free counterion provides the binding mechanism responsible for long range attraction. In the canonical treatment, $L$ is held fixed, while in the isobaric situation, it is a fluctuating quantity. \[fig:1Dmodel-odd-attract\]](1Dmodel-odd-attract){width="70.00000%"}
On the other hand, if $N$ is even, there will not be a free counterion between the layers, which will be completely neutral, thus one expects that $P\to 0^{+}$ when $L\to\infty$, as shown in figure \[fig:1Dmodel-even\].
![An even number of counterions screening two like-charges ($N=2p$). At large distance, the two double-layers (made up of an ion $q$ and $p$ counter-ions) decouple since they are neutral. No misfit ion is present to mediate attraction, and the pressure is repulsive at all distances. \[fig:1Dmodel-even\]](1Dmodel-even){width="70.00000%"}
### Beyond heuristics
The previous intuition, predicting a large distance attraction for odd $N$, can be substantiated by a simple calculation. Use will be made here of the contact theorem [@HBL79; @contact2; @contact3; @DHNP09; @MaTT15], an exact relation between the force exerted on the charge $q$ and the ionic density at contact (stemming from the mobile charges $-2q/N$). Such a relation is particularly useful for discussing the like-charge attraction phenomenon [@Netz01; @SaTr11; @rque9]. The argument yielding the contact density is twofold and goes as follows.
![Upon regrouping the $p+1$ leftmost counterions in Fig. \[fig:1Dmodel-odd-attract\], one obtains an ion with charge $-q-q/N$. This newly defined system has the same large distance pressure as that of Fig. \[fig:1Dmodel-odd-attract\]. \[fig:argument\_trick\]](1Dmodel-odd-effective-ion){width="70.00000%"}
First, we argue that at large $L$, the $p$ counterions that are closest to each boundary remain in their vicinity, while the middle free counterion (the misfit in Figs. \[fig:1Dmodel-odd-attract\] and \[fig:1Dmodel-odd\]), which does not feel any electric field by symmetry, tends to be unbound and no longer contributes to the pressure (discarding $1/L$
---
abstract: |
We demonstrate a five-bit nuclear-magnetic-resonance quantum computer that distinguishes among various functions on four bits, making use of quantum parallelism. Its construction draws on the recognition of the sufficiency of linear coupling along a chain of nuclear spins, the synthesis of a suitably coupled molecule, and the use of a multi-channel
spectrometer.
address:
- |
Institut für Organische Chemie, J. W. Goethe-Universität, Marie-Curie-Str. 11,\
D-60439 Frankfurt, Germany
- |
Biological Chemistry and Molecular Pharmacology, Harvard Medical School,\
240 Longwood Avenue, Boston, MA 02115, USA
- |
Gordon McKay Laboratory, Division of Engineering and Applied Sciences,\
Harvard University, Cambridge, MA 02138, USA
- 'Bruker Analytik GmbH, Silberstreifen, D-76287 Rheinstetten, Germany'
- |
Institut für Organische Chemie und Biochemie, Technische Universität München,\
Lichtenbergstr. 4, D-85748 Garching, Germany
author:
- 'R. Marx'
- 'A. F. Fahmy'
- 'John M. Myers'
- 'W. Bermel'
- 'S. J. Glaser'
title: |
Realization of a 5-Bit NMR Quantum Computer\
Using a New Molecular Architecture
---
Introduction
============
While quantum computers of two bits have been implemented [@bitsa], as have nuclear-magnetic-resonance (NMR) quantum computers of three bits [@bitsc], extending the number of bits has not proved easy. We report the implementation of an NMR quantum computer having five bits, involving the use of a linear coupling pattern [@Brueschweiler], synthesis of a molecule having five usable spin-active nuclei with predominantly linear spin-spin coupling, and the development of radio-frequency (r.f.) pulse sequences to act as quantum logic gates for the molecule synthesized. Techniques to suppress unwanted couplings between nuclear spins are described, as are techniques to avoid perturbing some nuclear spins while manipulating others. Results are presented of a test of the five-bit computer on a problem of Deutsch and Jozsa to distinguish one class of mathematical function from another [@deutsch].
Definition of an $n$-bit NMR computer
==========================================
An $n$-bit quantum computer is called on to do three things: 1) accept an instruction to prepare a starting state and prepare that state; 2) accept instructions for and implement quantum gates (from which more general unitary transformations of the state can be composed); and 3) measure the state and yield an outcome. The connection to computation with classical computers depends on the recognition, due to Bennett [@bennett2], that all classical computations can be made reversible. Any terminating reversible computation is a permutation of the inputs, which is unitary, and thus belongs to the class of transformations performable on a quantum computer. (For issues of possibly nonterminating programs, see [@myers].)
In theory, a variant of the quantum computer is the expectation-value quantum computer (EVQC), which in place of an outcome of a measurement yields the expectation value [@gradientPPS1; @gradientPPS2]. NMR quantum computing was born of the recognition that an EVQC can be approximated by use of an NMR spectrometer containing a liquid sample, the molecules of which have $n$ atoms with a nuclear spin of 1/2 (and possibly other atoms, either spinless or having spins not used) [@gradientPPS1; @gradientPPS2; @logicalPPS]. Because tumbling of the molecules decouples each molecule from all the others, the sample can be
described by a density matrix for the nuclear spins of the atoms of a single molecule [@Ernst], with only the spin-degrees of freedom, corresponding to the desired Hilbert space of dimension $2^n$. NMR spectrometers sense only the traceless part of the density matrix, so in place of matter in a pure state, an NMR computer can use a liquid sample described by a density matrix proportional to a sum of a pure state and any multiple of the unit matrix. Such a density matrix, called a [*pseudopure*]{} state [@gradientPPS1], plays a role in the 5-bit quantum computer.
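Because the spectrometer senses only the traceless part of the density matrix, adding any multiple of the unit matrix leaves the expectation value of a traceless observable unchanged up to the pure-state weight. A small numerical sketch (two spins instead of five for brevity; the weight `eps` is arbitrary and ours):

```python
import numpy as np

dim = 4                                   # two spin-1/2 nuclei
psi = np.zeros(dim); psi[1] = 1.0         # pure state |01>
rho_pure = np.outer(psi, psi)

# pseudopure state: pure part plus a multiple of the identity
eps = 0.25
rho_pp = eps * rho_pure + (1 - eps) * np.eye(dim) / dim

# a traceless observable: I_z on the second spin
Iz = np.diag([0.5, -0.5])
O = np.kron(np.eye(2), Iz)
assert abs(np.trace(O)) < 1e-12

# the identity part contributes nothing to <O>
exp_pure = np.trace(rho_pure @ O).real    # = -0.5 for |01>
exp_pp = np.trace(rho_pp @ O).real        # = eps * exp_pure
assert abs(exp_pp - eps * exp_pure) < 1e-12
```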
Acting as an $n$-bit EVQC, a suitable NMR spectrometer allows the preparation of a pseudopure starting state, the programming and execution of r.f. pulse sequences that implement quantum gates, and the
determination of expectation values visible in NMR spectra. To perform the unitary operations required of a quantum computer, a sufficient set of quantum gates consists of all single-spin operations and all controlled-not gates that act on one nuclear spin under the control of another nuclear spin. Single-spin gates are implemented by selective r.f. pulses. Controlled-not gates between nuclei having spin-spin coupling will be described, along with techniques to avoid unwanted influences on other spins. A key feature of the present design of the NMR quantum computer is the reliance on a chain of linear coupling and the use of swap gates to implement a controlled-not in which a spin $j$ controls spin $k$, where $j$ and $k$ have no direct spin-spin coupling [@Brueschweiler]. This allows use in NMR quantum computers of a molecule having a simpler coupling pattern, and eases the problem of unwanted influences on spins.
Design of Test
==============
The proof of the pudding is in the eating: the 5-bit NMR computer to be described was tested on the Deutsch-Jozsa problem for functions of 4 bits[@deutsch], in the form described in [@cleve], modified for efficiency with NMR as described by Jones and Mosca [@bitsb]. (A recent simplification [@collins], unused here, would permit working with functions of 5 bits.) The problem is to decide whether a function program selected from a set of possible programs computes one kind of function or another. Specifically, the problem is to distinguish programs for balanced functions from programs for constant functions, where the functions are from $\{0,1\}^4$ to $\{0,1\}$. (A function is constant if its value is independent of its argument, and is called balanced if the value for half the arguments is $1$ while the value is $0$ for the other half.) The test actually made was to distinguish between programs for one constant and one balanced function, defined as follows: $$f_{0}(\vec{x}) \stackrel{\rm def}{=} 0$$ and $$f_{b}(\vec{x}) \stackrel{\rm def}{=} x_{1}\oplus x_{2}\oplus
x_{3}\oplus x_{4}\label{eq:fb}$$ for all $\vec{x}$, where $\vec{x} \stackrel{\rm def}{=}
(x_{1},x_{2},x_{3},x_{4})$, and “$\oplus$” is addition modulo 2. Also, several controlled-not (CNOT) gates were tested, along with a variety of 1-bit operators. The balanced function chosen, $f_{b}$, has the nice property of being implementable also in classical reversible gates with no work bits.
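Classically, deciding the function class requires evaluating the function on more than half of the $2^4$ inputs in the worst case; a quick sketch confirming that $f_0$ is constant and $f_b$ is balanced (function names ours, following the text):

```python
from itertools import product

def f_0(x):
    """Constant function: 0 for every 4-bit input."""
    return 0

def f_b(x):
    """Balanced function: XOR of the four input bits, Eq. (fb)."""
    x1, x2, x3, x4 = x
    return x1 ^ x2 ^ x3 ^ x4

inputs = list(product((0, 1), repeat=4))
assert all(f_0(x) == 0 for x in inputs)                  # constant
assert sum(f_b(x) for x in inputs) == len(inputs) // 2   # 1 on 8 of 16 inputs
```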
Used to solve this problem, a quantum computer is a resource used both to specify the function under test and to determine what it is. In order to separate these two uses, one can view the quantum computer as used alternately by a [*specifier*]{} of the function and a [*decision maker*]{}, two players of a game in which: (A) the decision maker prepares the starting state; (B) the specifier runs the function program;[^1] and (C) the decision maker makes a measurement independent of the function program, and interprets the result to decide the function class.
On an NMR quantum computer, (A) the decision maker starts a play by using r.f. pulses and magnetic-field gradients (independent of the function to be specified) to put the liquid sample in the pseudopure state having a density matrix with a traceless part proportional to $$\rho_{i} \stackrel{\rm def}{=}16 |00001\rangle\langle
00001|-\frac{1}{2} {\bf 1} = 16
I_1^{\alpha}I_2^{\alpha}I_3^{\alpha}I_4^{\alpha}I_5^{\beta}-
\frac{1}{2}{\bf 1}\label{eq:rhoi}$$ in terms of the polarization operators $I_k^{\alpha} = (\frac{1}{2}{\bf 1}+I_{kz})$ and $I_k^{\beta} = (\frac{1}{2}{\bf 1}-I_{kz})$ usual to NMR [@Ernst; @Brueschweiler]. Then the decision maker applies a unitary transform $U_{90}$ by use of a hard $90^{\circ}$ $y$-pulse which for this particular state has the same effect as the Hadamard transform on each spin [@bitsb]. $$\begin{aligned}
\rho_{i} \stackrel{U_{90}}{\rightarrow} \rho_0
&=& U_{90} \rho_i U_{90}^{\dag}\nonumber\\
& = &16 \left(\frac{1}{2}{\bf
1}+I_{1x}\right)\left(\frac{1}{2}{\bf 1}
+I_{2x}\right)\left(\frac{1}{2}{\bf
---
abstract: 'We study the two-dimensional kinetic Ising model below its equilibrium critical temperature, subject to a square-wave oscillating external field. We focus on the multi-droplet regime where the metastable phase decays through nucleation and growth of [*many*]{} droplets of the stable phase. At a critical frequency, the system undergoes a genuine non-equilibrium phase transition, in which the symmetry-broken phase corresponds to an asymmetric stationary limit cycle for the time-dependent magnetization. We investigate the universal aspects of this dynamic phase transition at various temperatures and field amplitudes via large-scale Monte Carlo simulations, employing finite-size scaling techniques adopted from equilibrium critical phenomena. The critical exponents, the fixed-point value of the fourth-order cumulant, and the critical order-parameter distribution all are consistent with the universality class of the two-dimensional [*equilibrium*]{} Ising model. We also study the cross-over from the multi-droplet to the strong-field regime, where the transition disappears.'
address: |
$^1$School of Computational Science and Information Technology, Florida State University, Tallahassee, Florida 32306-4120\
$^2$Center for Materials Research and Technology and Department of Physics, Florida State University, Tallahassee, Florida 32306-4350
author:
- 'G. Korniss,$^1$[^1] C. J. White,$^{1,2}$ P. A. Rikvold,$^{1,2}$ and M. A. Novotny$^1$'
title: 'Dynamic Phase Transition, Universality, and Finite-size Scaling in the Two-dimensional Kinetic Ising Model in an Oscillating Field'
---
Introduction
============
Metastability and hysteresis are widespread phenomena in nature. Ferromagnets are common systems that exhibit these behaviors [@EWIN1881; @WARB1881; @EWIN1882; @STEI1892; @AHARONI], but there are also numerous other examples ranging from ferroelectrics [@BEALE; @RAO91] to electrochemical adsorbate layers [@SMEL; @MITCHELL00] to liquid crystals [@CHENG96]. A simple model for many of these real systems is the kinetic Ising model (in either the spin or the lattice-gas representation). For example, it has been shown to be appropriate for describing magnetization dynamics in highly anisotropic single-domain nanoparticles and uniaxial thin films [@HE; @JIANG; @SUEN; @group2].
The system response to a single reversal of the “external field” is fairly well understood [@switch]. In sufficiently large systems below the equilibrium critical temperature, $T_c$, the order parameter changes its value through the nucleation and growth of [*many*]{} droplets, inside which it has the equilibrium value consistent with the value of the applied field, as shown in Fig. \[decay\_conf\]. This is the multi-droplet regime of phase transformation [@switch; @phase_transformation]. The well-known Avrami’s law [@Avrami] describes this process of homogeneous nucleation followed by growth quite accurately up to the time when the growing droplets coalesce and the stable phase becomes the majority phase [@RAMOS99]. The intrinsic time scale of the system is given by the metastable lifetime, $\langle\tau\rangle$, which is defined as the average first-passage time to zero magnetization. It is a measure of the time it takes for the system to escape from the metastable region of the free-energy landscape. In this paper we will use the magnetic language in which the order parameter is the magnetization, $m$, and its conjugate field is the external magnetic field, $H$. Analogous interpretations, e.g., using the terms polarization and electric field for ferroelectric systems [@BEALE; @RAO91], and coverage and chemical potential for adsorption problems [@SMEL; @MITCHELL00], are straightforward.
It is natural next to ask, “what is the response to an oscillating external field?” The hysteretic behavior in ferromagnets has attracted significant experimental interest, mainly focused on the characteristic behavior of the hysteresis loop and its area. Its dependence on the field amplitude and frequency has been intensively studied and its scaling behavior (power law versus logarithmic) is still under investigation, both experimentally [@HE; @JIANG; @SUEN] and theoretically [@JUNG90; @RAO90; @TOME90; @Mendes91; @Zimmer93; @LO90; @SIDES98a; @SIDES98b; @SIDES99]. For the kinetic Ising ferromagnet in two dimensions it has been recently shown [@SIDES98a; @SIDES98b; @SIDES99] that the true behavior is in fact a crossover, approaching a logarithmic frequency dependence only for extremely low frequencies.
An important aspect of hysteresis in bistable systems, which is the focus of the present paper, is the dynamic competition between the two time scales in the system: the half-period of the external field $t_{1/2}$ (proportional to the inverse of the driving frequency) and the metastable lifetime $\langle\tau\rangle$. For low frequencies, a complete decay of the metastable phase almost always occurs in each half-period, just like it does after a single field reversal. Consequently, the time-dependent magnetization reaches a limit cycle which is symmetric about zero \[Fig. \[m\_series\](a)\]. For high frequencies, however, the system does not have enough time to switch during one half-period, and the symmetry of the hysteresis loop is broken. The magnetization then reaches an asymmetric limit cycle \[Fig. \[m\_series\](b)\]. Avrami’s law [@Avrami; @RAMOS99] is a good approximation when the majority of the droplets do not overlap. Thus, it can be employed to estimate the time-dependent magnetization and the dynamic order parameter (period-averaged magnetization) in the low-frequency (see the Appendix) and in the high-frequency [@SIDES99] limits. However, it cannot describe the “critical regime” where $t_{1/2}$ becomes comparable to $\langle\tau\rangle$, and which is dominated by coalescing droplets.
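The competition between $t_{1/2}$ and $\langle\tau\rangle$ can be illustrated with a toy simulation (a minimal sketch only: it uses Metropolis single-spin-flip dynamics for simplicity rather than the dynamics of the production runs, and the lattice size, temperature, field amplitude, and half-period below are illustrative, far smaller than in the large-scale simulations reported here):

```python
import numpy as np

def sweep(s, T, H, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, field H)."""
    L = s.shape[0]
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
           + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * (nn + H)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]

def dynamic_order_parameter(L=16, T=1.8, H0=0.3, t_half=30,
                            periods=4, seed=2):
    """Period-averaged magnetization Q under a square-wave field."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)     # start in the +1 phase
    Q = []
    for _ in range(periods):
        m_sum = 0.0
        for H in (-H0, +H0):           # square wave: one sign per half-period
            for _ in range(t_half):
                sweep(s, T, H, rng)
                m_sum += s.mean()
        Q.append(m_sum / (2 * t_half))
    return float(np.mean(Q))
```

Here $T=1.8$ lies below the equilibrium critical temperature $T_c\approx2.269$; scanning `t_half` with the other parameters fixed moves the system between the asymmetric ($|Q|>0$, short half-period) and symmetric ($Q\approx0$, long half-period) limit cycles.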
This symmetry breaking between the symmetric and asymmetric limit cycles has been the subject of intensive research over the last decade. It was first observed during numerical integration of a mean-field equation of motion for the magnetization of a ferromagnet in an oscillating field [@TOME90; @Mendes91]. Since then, it has been observed and studied in numerous Monte Carlo (MC) simulations of kinetic Ising systems [@LO90; @SIDES99; @SIDES98; @ACHA95; @ACHA97C; @ACHA97D; @ACHA98; @BUEN00], as well as in further mean-field studies [@Zimmer93; @ACHA95; @ACHA97D; @ACHA98; @BUEN98]. It may also have been experimentally observed in ultrathin films of Co on Cu(001) [@JIANG]. The results of these studies suggest that this symmetry breaking corresponds to a genuine continuous non-equilibrium phase transition. For recent reviews see Refs. [@ACHA94; @CHAK99]. Associated with the transition is a divergent time scale (critical slowing down) [@ACHA97D] and, for spatially extended systems, a divergent correlation length [@SIDES99; @SIDES98]. Estimates for the critical exponents and the universality class of the transition have recently become available [@SIDES99; @SIDES98; @UGA99] after the successful application of finite-size scaling techniques borrowed from equilibrium critical phenomena [@FISH72; @BIND92; @BIND90; @LANDAU76].
The purpose of the present paper is to extend preliminary results [@UGA99] and to provide more accurate estimates of the exponents for two-dimensional kinetic Ising systems in a square-wave oscillating field. The use of the square-wave field tests the universality of the dynamic phase transition (DPT) [@SIDES99; @SIDES98], and it also significantly increases computational speed, compared to the more commonly used sinusoidal field. We further explore the universal aspects of the transition by varying the temperature and field amplitude within the multi-droplet regime, and we study the cross-over to the strong-field regime where the transition disappears. In obtaining our results, we rely on dynamic MC simulations. Computational methods are always helpful, especially when theoretical ideas are largely missing. There are cases, however, when even the use of standard equilibrium techniques, such as finite-size scaling requires some insight and building analogies between equilibrium and non-equilibrium systems [@SIDES99; @SIDES98]. This is the case for our present study. No effective “Hamiltonian” was known before the completion of this work for the dynamic order-parameter (in the coarse-grained sense), from which the long-distance behavior of the model could be derived. This is a typical difficulty when dealing with systems far from equilibrium [@DDS; @MARRO_DICKMAN]. Recently, however, a coarse-grained Hamiltonian has been derived [@Fuji] for the dynamic order-parameter, supporting our results for the DPT. Similar to the previous work for sinusoidally oscillating fields [@SIDES99; @SIDES98], we perform large-scale simulations and finite-size scaling to investigate the universal properties of the DPT.
The remainder of the paper is organized as follows. In Sec. II we
---
abstract: |
During the final growth phase of giant planets, accretion is thought to be controlled by a surrounding circumplanetary disk. Current astrophysical accretion disk models rely on hydromagnetic turbulence or gravitoturbulence as the source of effective viscosity within the disk. However, the magnetically-coupled accreting region in these models is so limited that the disk may not support inflow at all radii, or at the required rate.
Here, we examine the conditions needed for self-consistent accretion, in which the disk is susceptible to accretion driven by magnetic fields or gravitational instability. We model the disk as a Shakura-Sunyaev $\alpha$ disk and calculate the level of ionisation, the strength of coupling between the field and the disk using Ohmic, Hall, and ambipolar diffusivities for both an MRI and a vertical field, and the strength of gravitational instability.
We find that the standard constant-$\alpha$ disk is only coupled to the field by thermal ionisation within $30\,R_J$, with strong magnetic diffusivity prohibiting accretion through the bulk of the midplane. In light of the failure of the constant-$\alpha$ disk to produce accretion consistent with its viscosity, we drop the assumption of constant $\alpha$ and present an alternate model in which $\alpha$ varies radially according to the level of magnetic turbulence or gravitoturbulence. We find that a vertical field may drive accretion across the entire disk, whereas MRI can drive accretion out to $\sim200\,R_J$, beyond which Toomre’s $Q=1$ and gravitoturbulence dominates. The disks are relatively hot ($T\gtrsim800\,$K), and consequently massive ($M_{\text{disk}}\sim0.5\,M_J$).
author:
- |
Sarah L. Keith $^{1,2}$[^1] and Mark Wardle$^{1}$\
$^{1}$Department of Physics & Astronomy and MQ Research Centre in Astronomy, Astrophysics & Astrophotonics, Macquarie University,\
NSW 2109, Australia\
$^{2}$Jodrell Bank Centre for Astrophysics, The University of Manchester, Alan Turing Building, Manchester, M13 9PL, United Kingdom
bibliography:
- 'references.bib'
date: 'Accepted Year Month date Day. Received Year Month Day; in original form Year Month Day'
title: Accretion in giant planet circumplanetary disks
---
\[firstpage\]
accretion discs – magnetic fields – MHD – planets and satellites: formation
Introduction
============
Gas giant planets form within a protoplanetary disk surrounding a young star [@1985prpl.conf..981L]. Those orbiting within $\sim100\,$au of the star form through the aggregation of a $\sim15M_{\text{Earth}}$ solid core and subsequent gas capture from the surrounding disk [@1996Icar..124...62P; @2009ApJ...695L..53B]. During the initial slow accretion phase the protoplanet envelope is thermally supported and distended. However, once the envelope mass reaches the core mass gas accretion accelerates rapidly and, unable to maintain thermal equilibrium, the envelope collapses [@1996Icar..124...62P; @2009Icar..199..338L]. This ‘run-away’ gas accretion ends once the planet is massive enough that it accretes faster than gas can be replenished into its vicinity. Infalling gas has too much angular momentum to fall directly onto the contracted planet, and so an accretion disk, the circumplanetary disk, forms around the planet [@1982Icar...52...14L; @2009MNRAS.397..657A].
In contrast to the icy conditions implied by satellite systems around Solar System giant planets, circumplanetary disks are likely initially hot and convective [@1989oeps.book..723C]. Most of the protoplanet’s mass is delivered during run-away accretion, and so the circumplanetary disk must support a high inflow rate during this phase. Forming Jupiter within the giant planet formation time-scale inferred from the life-time of protoplanetary disks ($\sim3\times 10^6$ years) suggests an inflow rate of $\dot{M}\sim10^{-6}M_J/$year. Models of the accretion phase of a circumplanetary disk include self-luminous disks, Shakura-Sunyaev $\alpha$ disks (@2002AJ....124.3404C [-@2002AJ....124.3404C], [-@2006Natur.441..834C]; [@2005AA...439.1205A; @2013arXiv1306.2276T]), time-dependent disks with MRI-gravitational instability limit cycles [@2011ApJ...740L...6M; @2012ApJ...749L..37L], and hydrodynamical simulations ([@1999ApJ...526.1001L], [@2002AA...385..647D], [@2003ApJ...599..548D]). The evolution of the disk associated with the contraction of the proto-planetary envelope and changes in the mode of accretion from the protoplanetary disk have also been addressed [@2010AJ....140.1168W].
The angular momentum transport mechanism is key in determining the disk structure and evolution; however, little work has been done to model the disk self-consistently with the accretion mechanism. The $\alpha$-model invokes a source of viscosity (typically hydromagnetic turbulence is suggested); however, there is no guarantee that the resulting disk complies with the conditions required for viscosity, hydromagnetic or otherwise. An exception is the time-dependent gravo-magneto outbursting cycles modelled by @2012ApJ...749L..37L; however, numerical simulations suggest disks rapidly evolve away from a gravitationally unstable state.
There are a variety of candidates for the accretion mechanism, including magnetic forces, gravitational instability, thermally-driven hydrodynamical instabilities, torque from spiral waves generated by satellitesimals \[see @TurnerPPVI (in preparation) for a review\], and stellar forcing. Magnetic fields and gravitational instability are generally considered the most promising mechanisms within the protoplanetary disk. Magnetically-driven accretion may result from hydromagnetic turbulence produced by the magnetorotational instability (MRI; @1991ApJ...376..214B [@1995ApJ...440..742H]), centrifugally driven disk winds associated with large-scale vertical fields [@1982MNRAS.199..883B; @1993ApJ...410..218W], magnetic braking [@2004ApJ...616..266M], or large-scale toroidal fields [@2000prpl.conf..589S]. MRI turbulence has been modelled extensively (e.g., [@1996ApJ...457..355G; @2004ApJ...605..321S; @2007ApJ...659..729T; @2012MNRAS.420.2419F; @2012MNRAS.422.2737W; @2013ApJ...763...99P]) and simulations of MRI transport in protoplanetary disks indicate $\alpha\sim10^{-3}$, where $\alpha$ is the Shakura-Sunyaev viscosity parameter. Gravitational instability occurs in massive disks and may cause fragmentation or gravitoturbulence [@1964ApJ...139.1217T; @2001ApJ...553..174G].
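For reference (standard thin-disk theory, not specific to this paper), the Shakura-Sunyaev prescription relates the viscosity to the sound speed $c_s$ and scale height $H$, and a steady thin disk then ties the inflow rate to the surface density $\Sigma$:

```latex
% Shakura-Sunyaev viscosity and the steady-state accretion rate
% (far from the inner boundary):
\nu = \alpha\, c_s H , \qquad
\dot{M} \simeq 3\pi \nu \Sigma = 3\pi \alpha\, c_s H\, \Sigma .
```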
Certain conditions are required for these mechanisms to be effective. For example, magnetic processes can only act in sufficiently ionised ‘active’ regions, where the evolution of the magnetic field is coupled to the motion of the disk. If the ionisation fraction is too low, magnetic diffusivity decouples their motion. In protoplanetary disks, magnetic coupling is strong enough to permit MRI accretion in two regions: (i) layers above the midplane where cosmic rays, and stellar X-rays and UV photons penetrate, and (ii) close to the star where the disk is hot and thermally ionised [@1996ApJ...457..355G]. Gravitational instability requires strong self-gravity such that Toomre’s stability parameter $Q\lesssim 1$, and quasi-steady gravitoturbulent accretion further requires a cooling time-scale in excess of $\sim30$ orbital time-scales ([@2012MNRAS.427.2022M], [@2012MNRAS.421.3286P]).
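For concreteness, Toomre’s parameter for a Keplerian gas disk is (standard definition, with $c_s$ the sound speed, $\kappa \approx \Omega$ the epicyclic frequency, and $\Sigma$ the surface density):

```latex
% Toomre's stability parameter; gravitational instability sets in for Q <~ 1:
Q \;=\; \frac{c_s\,\kappa}{\pi G \Sigma}
  \;\simeq\; \frac{c_s\,\Omega}{\pi G \Sigma} .
```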
Existing steady-state model circumplanetary disks are not massive enough for gravitational instability, and so testing for self-consistent accretion has focussed on identifying regions which are susceptible to the MRI. @2011ApJ...743...53F determined the thickness of the magnetically-uncoupled Ohmic midplane ‘dead zone’ of an $\alpha$ disk for ionisation by cosmic rays. They find that the dead zone extends up to at least $2.5$ scale heights (for plasma $\beta=10^4$), with the presence of grains extending this region to even greater heights. These results agree with the recent paper by @2013arXiv1306.2276T, which includes ionisation from X-rays, radioactive decay, turbulent mixing, and thermal ionisation as well as cosmic rays, and accounts for ambipolar and Ohmic diffusion. They find that $\alpha$ disks are magnetically coupled in surface layers above $\sim3$ scale heights unless the disk is dusty and is shielded from X-rays. They also consider magnetic coupling in the Jovian analogue of the Minimum Mass Solar Nebula, the Minimum Mass Jovian Nebula (MMJN; [@2003Icar..163..198M]), finding that dust must be removed for magnetically coupled surface layers. They find that thermal ionisation in actively supplied disks may permit coupling within the inner $4\,R_J$ of the midplane, although they suggest a larger thermally ionised region ($r\lesssim65\,R_J$). Either way,
---
abstract: 'A prefix grammar is a context-free grammar whose nonterminals generate prefix-free languages. A prefix grammar $G$ is an ordinal grammar if the language $L(G)$ is well-ordered with respect to the lexicographic ordering. It is known that from a finite system of parametric fixed point equations over ordinals one can construct an ordinal grammar $G$ such that the lexicographic order of $G$ is isomorphic with the least solution of the system, if this solution is well-ordered. In this paper we show that given an ordinal grammar, one can compute (the Cantor normal form of) the order type of the lexicographic order of its language, yielding that least solutions of fixed point equation systems defining algebraic ordinals are effectively computable (and thus, their isomorphism problem is also decidable).'
address: 'University of Szeged, Hungary'
author:
- Kitti Gelle
- Szabolcs Iván
bibliography:
- 'biblio.bib'
title: The ordinal generated by an ordinal grammar is computable
---
Algebraic ordinals; Ordinal grammars; Parametric fixed-point equations over ordinals; Isomorphism of algebraic well-orderings
Introduction
============
Least solutions of finite systems of fixed point equations occur frequently in computer science. Some very well-known instances are the regular and context-free languages, rational and algebraic power series, the well-founded semantics of generalized logic programs, and the semantics of functional programs, just to name a few. A perhaps less-known instance is the notion of algebraic linear orders of [@Bloom:2010:MTC:1655414.1655555]. A linear ordering is algebraic if it is (isomorphic to) the first component of the least solution of a finite system of fixed point equations of the sort $$F_i(x_0,\ldots,x_{n_i-1})=t_i,\quad i=1,\ldots,n,$$ where $n_1=0$ and each $t_i$ is an expression composed of the function variables $F_j$, $j=1,\ldots,n$, the variables $x_0,\ldots,x_{n_i-1}$ which range over linear orders, the constant $1$ and the sum operation $+$. As an example, consider the following system from [@DBLP:journals/fuin/BloomE10]: $$\begin{aligned}
F_0 &= G(1)\\
G(x)&=x+G(F(x))\\
F(x)&=x+F(x)\end{aligned}$$ In this system, the function $F$ maps a linear order $x$ to $x+x+\ldots = x\times\omega$, and the function $G$ maps $x$ to $x+G(x\times \omega)=x+x\times\omega+G(x\times\omega^{2})=\ldots=x\times\omega^\omega$; thus the first component of the least solution of the system is $F_0=G(1)=\omega^\omega$.
If the system in question is parameterless, that is, $n_i=0$ for each $i$, then the ordering which it defines is called a regular ordering. An ordinal is called algebraic (regular, respectively) if it is algebraic (regular, resp.) as a linear order. It is known [@ITA_1980__14_2_131_0; @BLOOM2001533; @10.1007/978-3-540-73859-6_1; @DBLP:journals/fuin/BloomE10; @10.1007/978-3-642-29344-3_25] that an ordinal is regular if and only if it is smaller than $\omega^\omega$ and is algebraic if and only if it is smaller than $\omega^{\omega^\omega}$.
To prove the latter statement, the authors of [@DBLP:journals/fuin/BloomE10] applied an approach first used by Courcelle [@ITA_1978__12_4_319_0]: every countable linear order is isomorphic to the frontier of some (possibly) infinite (say, binary) tree. Frontiers of infinite binary trees in turn correspond to prefix-free languages over the binary alphabet, equipped with the lexicographic ordering. Moreover, algebraic (regular, resp.) ordinals are exactly the lexicographic orderings of context-free (regular, resp.) prefix-free languages [@COURCELLE198395] (prefix-free being optional here as each language can be effectively transformed to a prefix-free order-isomorphic one for both the regular and the algebraic case). Thus, studying lexicographic orderings of prefix-free regular or context-free languages can give insight into regular or algebraic linear orders. The works [@BLOOM2001533; @6a0baa0d4e6744d38956d22057b410ce; @BLOOM200555; @10.1007/978-3-540-73859-6_1; @COURCELLE198395; @ITA_1980__14_2_131_0; @LOHREY201371; @ITA_1986__20_4_371_0] deal with regular linear orders this way; in particular, [@LOHREY201371] shows that the isomorphism problem for regular linear orders is decidable in polynomial time. The study of the context-free case was initiated in [@10.1007/978-3-540-73859-6_1], and further developed in [@DBLP:journals/fuin/BloomE10; @doi:10.1142/S0129054111008155; @ESIK2011107; @10.1007/978-3-642-22321-1_19; @10.1007/978-3-642-29344-3_25; @CARAYOL2013285; @KUSKE201446].
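As a toy illustration of this correspondence (our own example, not taken from the cited works), the prefix-free language $L=\{1^n 0 : n\ge 0\}$ over $\{0,1\}$ is well-ordered by the lexicographic order, with order type $\omega$:

```python
def words(n_max):
    """The first n_max words of the prefix-free language L = { 1^n 0 : n >= 0 }."""
    return ['1' * n + '0' for n in range(n_max)]

def is_prefix_free(lang):
    """No word of the language is a proper prefix of another."""
    return not any(u != v and v.startswith(u) for u in lang for v in lang)

L = words(6)
assert is_prefix_free(L)
# Python's sorted() on strings is exactly the lexicographic order with '0' < '1';
# the chain "0" < "10" < "110" < ... has order type omega.
assert sorted(L) == L
```

Replacing the language by the words of a context-free prefix grammar gives the lexicographic orderings studied in the paper.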
Highlighting the results from these works that are tightly connected to the current paper: the case of regular linear orders is well-understood, even their isomorphism problem (that is, whether two regular linear orders, given by two finite sets of fixed-point equations, are isomorphic) is decidable. For algebraic linear orders, there are negative results: it is already undecidable whether an algebraic linear ordering is dense, thus (as there are exactly four dense countable linear orders up to isomorphism) the isomorphism problem of algebraic linear orders is undecidable. On the other hand, deciding whether an algebraic linear order is scattered, or a well-order, is decidable. The frontier of decidability of the isomorphism problem of algebraic linear orderings is an interesting question: for the general case it is undecidable, while for the case of regular ordinals it is known to be decidable by [@LOHREY201371] and [@Khoussainov:2005:ALO:1094622.1094625]. In [@DBLP:journals/fuin/BloomE10], it was shown that a system of equations defining an algebraic ordering can be effectively transformed (in polynomial time) to a so-called prefix grammar $G$ (a context-free grammar whose nonterminals each generate a prefix-free language), such that the lexicographic order of the language generated by $G$ is isomorphic to the algebraic ordering in question. If the ordering is a well-ordering (i.e. the system defines an algebraic ordinal), then the grammar we get is called an ordinal grammar, that is, a prefix grammar generating a well-ordered language with respect to the lexicographic ordering.
In this paper we show that given an ordinal grammar, the order type of the lexicographic ordering of the language it generates is computable (that is, we can effectively construct its Cantor normal form). Hence, applying the above transformation we get that the Cantor normal form of any algebraic ordinal is computable from its fixed-point system presentation, thus in particular, the isomorphism problem of algebraic ordinals is decidable.
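For completeness, the Cantor normal form referred to above represents every ordinal $\alpha>0$ uniquely as a finite decreasing sum of $\omega$-powers:

```latex
% Cantor normal form: the exponents and coefficients are uniquely determined.
\alpha \;=\; \omega^{\beta_1}\cdot n_1 + \omega^{\beta_2}\cdot n_2 + \cdots
           + \omega^{\beta_k}\cdot n_k,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k,\quad n_i \in \mathbb{N}_{>0} .
```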
Notation
========
When $n\geq 0$ is an integer, $[n]$ denotes the set $\{1,\ldots,n\}$. (Thus, $[0]$ is another notation for the empty set $\emptyset$.)
Linear orders, ordinals {#linear-orders-ordinals .unnumbered}
-----------------------
In this paper we consider countable linear orderings. A good reference on the topic is [@rosenstein]. A linear ordering $(I,<)$ is a set $I$ equipped with a strict linear order: an irreflexive, transitive and trichotomous relation $<$. When the order $<$ is clear from the context, we omit it. Set-theoretic properties of $I$ are lifted to $(I,<)$, thus we can say that a linear order is finite, countable etc. When $(I_1,<_1)$ and $(I_2,<_2)$ are linear orders, their (ordered) sum is $(I_1,<_1)+(I_2,<_2)=(I_1\uplus I_2,<)$ with $x<y$ if and only if either $x\in I_1$ and $y\in I_2$, or $x,y\in I_1$ and $x<_1y$, or $x,y\in I_2$ and $x<_2y$. A linear ordering
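A minimal computational sketch of the ordered sum (our own construction, using tagged pairs for the disjoint union $I_1\uplus I_2$): every element of $I_1$ precedes every element of $I_2$, and elements with the same tag compare by their original order.

```python
from functools import cmp_to_key

def ordered_sum(I1, lt1, I2, lt2):
    """Return (elements, less-than) of the ordered sum (I1,<1) + (I2,<2)."""
    elems = [(1, x) for x in I1] + [(2, x) for x in I2]
    def lt(a, b):
        ta, xa = a
        tb, xb = b
        if ta != tb:
            return ta < tb          # all of I1 comes before all of I2
        return lt1(xa, xb) if ta == 1 else lt2(xa, xb)
    return elems, lt

# Usage: sum of ({0,1,2}, <) and ({'a','b'}, <) -- I1 elements come first.
I, lt = ordered_sum([2, 0, 1], lambda a, b: a < b,
                    ['b', 'a'], lambda a, b: a < b)
key = cmp_to_key(lambda a, b: -1 if lt(a, b) else (1 if lt(b, a) else 0))
assert sorted(I, key=key) == [(1, 0), (1, 1), (1, 2), (2, 'a'), (2, 'b')]
```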
---
abstract: |
The results in [@O2] (see [@O1] for the quasistatics regime) consider the Helmholtz equation with fixed frequency $k$ and, in particular, imply that, for $k$ outside a discrete set of resonant frequencies and given a source region $D_a\subset \RR^d$ ($d=2,3$) and $u_0$, a solution of the homogeneous scalar Helmholtz equation in a set containing the control region $D_c\subset \RR^d$, there exists an infinite class of boundary data on $\partial D_a$ so that the radiating solution to the corresponding exterior scalar Helmholtz problem in $\RR^d\setminus D_a$ will closely approximate $u_0$ in $D_c$. Moreover, it will have vanishingly small values beyond a certain large enough “far-field" radius $R$ (see Figure \[fig:mainsetup\] for a geometric description).
In this paper we study the minimal energy solution of the above problem (e.g. the solution obtained by using Tikhonov regularization with the Morozov discrepancy principle) and perform a detailed sensitivity analysis. In this regard we discuss the stability of the minimal energy solution with respect to measurement errors, as well as the feasibility of the active scheme (power budget and accuracy) depending on: the mutual distances between the antenna, the control region and the far-field radius $R$; the value of the regularization parameter; the frequency; and the location of the source.
author:
- Mark Hubenthal
- Daniel Onofrei
bibliography:
- 'controlpaper.bib'
title: Sensitivity analysis for active control of the Helmholtz equation
---
Introduction
============
During recent years, there has been a growing interest in the development of feasible strategies for the control of acoustic and electromagnetic fields with one possible application being the construction of robust schemes for sonar or radar cloaking.
One main approach controls fields in the regions of interest by changing the material properties of the medium in certain surrounding regions ([@Chan3; @Chan2; @Cummer; @Green2; @Green1; @Gunther-pr; @Pendry] and references therein). Several alternative techniques are proposed in the literature (other than transformation optics strategies) such as: plasmonic designs (see [@Alu] and references therein), strategies based on anomalous resonance phenomena (see [@Mil1; @Mil3; @Mil2]), conformal mapping techniques (see [@Ulf2; @Ulf1]), and complementary media strategies (see [@Chan]).
In the applied community, active designs for the manipulation of fields appear to have occurred initially in the context of low-frequency acoustics (or active noise cancellation). Especially notable are the pioneering works of Lueg [@Lueg] (feed-forward control of sound) and Olson & May [@Olson-May] (feedback control of sound). The reviews [@Elliot; @Fuller; @Tsynkov; @T1; @Peake; @Peterson], provide detailed accounts of past and recent developments in acoustic active control.
In the context of cloaking, the [**[*interior*]{}**]{} strategy proposed in [@Miller] employs a continuous active layer on the boundary of the control region while the [**[*exterior*]{}**]{} scheme discussed in [@OMV4; @OMV1; @OMV2; @OMV3] (see also [@CTchan-num]), uses a discrete number of active sources located in the exterior of the control region to manipulate the fields. The active exterior strategy for 2D quasistatics cloaking was introduced in [@OMV1], and, based on *a priori* information about the incoming field, the authors constructively described how one can create an almost zero field control region with very small effect in the far field. However, the proposed strategy did not work for control regions close to the active source. It “cloaked" large objects only when they were far enough from the source region (see [@OMV4]) and was not adaptable to three space dimensions. The finite frequency case was studied in the last section of [@OMV1] and in [@OMV3] (see also [@OMV4] for a recent review) where three (or four in 3D) active sources were needed to create a zero field region in the interior of their convex hull, while creating a very small scattering effect in the far field. The broadband character of the proposed scheme was numerically observed in [@OMV2]. All the above results were obtained assuming large amplitude and highly oscillatory currents on the active source regions. In this regard, in [@Norris] (see also [@Devaney0; @Miller]) the authors presented theoretical and numerical evidence that increasing the number of sources will decrease the power needed on each source and thus increase the feasibility of the scheme. Experimental designs and testing of active cloaking schemes in various regimes are reported in [@Du; @Ma; @Eleftheriades1; @Eleftheriades2].
In a recent development in [@O1], a general analytical approach based on the theory of boundary layer potentials is proposed for the active control problem in the quasi-static regime. By using the same integral equation approach, in [@O2] we extended the results presented in [@O1] to the active control problem for the exterior scalar Helmholtz equation. In particular, we characterized an infinite class of boundary functions on the source boundary $\partial D_a$ so that we achieve the desired manipulation effects in several mutually disjoint exterior regions. The method is novel in the sense that instead of using microstructures, exterior active sources modeled with the help of the above boundary controls are employed for the desired control effects. Such exterior active sources can represent velocity potential, pressure or currents.
In the current paper we study the active control problem in the context of cloaking, where one antenna $D_a$ protects a given control region $D_c$ from far field interrogation on $\partial B_{R}(\mathbf{0})$, with $R \gg 1$ (see Figure \[fig:mainsetup\]). We make use of the results in [@O2] and present a detailed sensitivity and feasibility study for the minimal norm solution of the problem.
The paper is organized as follows: In Section \[sec:background\] we recall the general result obtained in [@O2] in the context of exterior active cloaking. In Section \[sec:stability\] we present an $L^2$ conditional stability result for the minimal norm solution with respect to measurement errors of the incoming field. In Section \[sec:numerics\] we present the numerical details of the Tikhonov regularization algorithm with the Morozov discrepancy principle for the computation of the minimal norm solution of the exterior active cloaking problem in two dimensions. We will numerically observe the fact that the scheme requires large antenna powers in the far field and we will provide numerical support for our theoretical stability results. An important part of this section will be focused on the sensitivity analysis, where we will study: the dependence of the control results as a function of mutual distances between the antenna, control region and far field region; and the broadband character of our scheme in the near field region. Finally, in Section \[sec:conclusions\] we highlight the main results of the paper and discuss current and future challenges and extensions of our research.
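As a hedged numerical sketch of the regularization scheme mentioned above (a generic discretized linear model $Ax=b$ of our own, not the paper's actual boundary operators), Tikhonov regularization with the Morozov discrepancy principle picks the regularization parameter $\mu$ so that the residual matches the noise level $\delta$:

```python
import numpy as np

def tikhonov(A, b, mu):
    """Minimizer of ||A x - b||^2 + mu ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + mu * np.eye(n), A.conj().T @ b)

def morozov(A, b, delta, mu_lo=1e-12, mu_hi=1e2, iters=60):
    """Geometric bisection on mu: the residual grows monotonically with mu,
    so find mu with ||A x_mu - b|| ~ delta (Morozov discrepancy principle)."""
    for _ in range(iters):
        mu = np.sqrt(mu_lo * mu_hi)
        r = np.linalg.norm(A @ tikhonov(A, b, mu) - b)
        if r < delta:
            mu_lo = mu      # residual too small: regularize more strongly
        else:
            mu_hi = mu
    return mu

# Synthetic test problem (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
noise = 0.01 * rng.standard_normal(40)
b = A @ x_true + noise
delta = np.linalg.norm(noise)

mu = morozov(A, b, delta)
x = tikhonov(A, b, mu)
assert abs(np.linalg.norm(A @ x - b) - delta) < 0.1 * delta
```

In the paper's setting $A$ would be the discretized double layer potential operator $K$ and $b$ the prescribed control data.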
Background {#sec:background}
==========
In this section we will recall the main result regarding the active exterior control problem for the Helmholtz equation obtained in [@O2]. We will focus only on the case where one active external source (antenna) $D_a$ protects a control region $D_c$ from an interrogating far field and maintains an overall small signature beyond a disk of large enough radius $R$.
The general setup for this question will be as follows. Let $B_{R} \subset \mathbb{R}^{d}$ be the ball of radius $R > 0$. We assume $\B0\in D_{a} \subset B_{R}$ is the region inside a single antenna with sufficiently smooth boundary $\partial D_{a}$. We also let $D_{c} \Subset B_{R}$ be the control region, which is assumed to satisfy $\overline{D_{c}} \cap \overline{D_{a}} = \emptyset$ (see Figure \[fig:mainsetup\]). The numerical simulations in the current work are performed for the two dimensional case but the methods are adaptable to the three dimensional setting as well. Consider the function space $$\Xi = L^{2}(\partial D_{c}) \times L^{2}(\partial B_{R}),$$ endowed with the scalar product $$(\phi,\psi)_{\Xi} = \int_{\partial D_{c}}\phi_{1}(\mathbf{y})\overline{\psi}_{1}(\mathbf{y})\,dS_{\mathbf{y}} + \int_{\partial B_{R}} \phi_{2}(\mathbf{y})\overline{\psi}_{2}(\mathbf{y})\,dS_{\mathbf{y}},$$ which is a Hilbert space. For the remainder of the paper we will assume that every $L^2$ space of complex valued functions will be endowed with the usual inner product. As in [@O2] consider $K: L^{2}(\partial D_{a}) \to \Xi$, the double layer potential operator restricted to $\partial D_{c}$ and $\partial B_{R}$, respectively, defined by $$\label{EQ:K}
K\phi(\Bx,\Bz) = (K_{1}\phi(\Bx), K_{2}\phi(\Bz)), \quad \phi \in L^{2}(\partial D_{a}),$$ where $$\begin{aligned}
K_
---
abstract: 'The extremal coefficient function (ECF) of a max-stable process $X$ on some index set $T$ assigns to each finite subset $A \subset T$ the effective number of independent random variables among the collection $\{X_t\}_{t \in A}$. We introduce the class of Tawn–Molchanov processes that is in a 1:1 correspondence with the class of ECFs, thus also proving a complete characterization of the ECF in terms of negative definiteness. The corresponding Tawn–Molchanov process turns out to be exceptional among all max-stable processes sharing the same ECF in that its dependency set is maximal [w.r.t. ]{}inclusion. This entails sharp lower bounds for the finite dimensional distributions of arbitrary max-stable processes in terms of its ECF. A spectral representation of the Tawn–Molchanov process and stochastic continuity are discussed. We also show how to build new valid ECFs from given ECFs by means of Bernstein functions.'
address: 'Institute of Mathematics, University of Mannheim, 68131 Mannheim, Germany.\'
author:
-
-
title: 'An exceptional max-stable process fully parameterized by its extremal coefficients'
---
Introduction {#sect:intro}
============
Besides the class of [square integrable processes]{}, the class of temporal or spatial *max-stable processes* is of common interest in stochastics and statistics, [cf. ]{}[@dehaan_84; @ginehahnvatan_90; @wangstoev_10; @blanchetdavison_11; @buishandetalii_08; @naveauetalii_09], for example. In spite of considerable differences between these two classes, for example, the non-existence of the first moments in case of max-stable processes with *unit Fréchet marginals*, connections between the two classes have been made, for instance, by the *extremal Gaussian process* [@schlather_02] and the *Brown–Resnick process* [@kabluchkoetalii_09], which are parameterized by a [correlation function]{} and a [variogram]{}, respectively.
Naturally, extremal dependence measures such as the *extremal coefficients* [@smith_90; @schlathertawn_02], the *(upper) tail dependence coefficients* [@beirlantetalii_03; @davismikosch_09; @falk_05; @colesheffernantawn_99] or other special cases of the *extremogram* [@davismikosch_09] are appropriate summary statistics for max-stable processes. In this article, we capture the full set of extremal coefficients of a max-stable process $X=\{X_t\}_{t \in T}$ on some space $T$ in the so-called *extremal coefficient function (ECF)* $\theta$, which assigns to each finite subset $A$ of $T$ the effective number of independent variables among the collection $\{X_t\}_{t \in A}$. We introduce a subclass of max-stable processes that is parameterized by the ECF, and thus reveal some analogies to *Gaussian processes* and *positive definite functions* as follows:
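Concretely, for a simple max-stable process with unit Fréchet marginals the ECF is characterized by (standard definition, [cf. ]{}[@schlathertawn_02]):

```latex
% theta(A) interpolates between 1 (complete dependence) and |A| (independence):
\mathbb{P}\Big(\max_{t \in A} X_t \le x\Big)
  \;=\; \exp\!\big(-\theta(A)/x\big),
\qquad x > 0,\ A \in \mathcal{F}(T) .
```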
Among (zero mean) square integrable processes, the subclass of *Gaussian processes* takes a unique role, since it is in a 1–1 correspondence with the set of *covariance functions*, which are precisely the *positive definite functions*. This fact can be proven by means of Kolmogorov’s extension theorem and is illustrated in the following graph:
[Diagram: the 1–1 correspondence between positive definite functions $C$ and zero mean Gaussian processes $Z^*(C)$.]
In case $T$ is a metric space, the Gaussian process $Z^*(C)$ is continuous in the mean square sense (and then also stochastically continuous) if and only if the covariance function $C$ is continuous if and only if $C$ is continuous on the diagonal ([cf. ]{}[@scheuerer_10], Theorem 5.3.3). Well-known operations on the set of positive definite functions $C$, and hence on the corresponding Gaussian processes $Z^*(C)$, include convex combinations and pointwise limits. Moreover, *Bernstein functions* play an important role for the construction of positive definite functions.
In our case, the crucial role of zero mean Gaussian processes is taken by the class of *Tawn–Molchanov processes (TM processes)*, which are in fact the spatial generalization of the multivariate *max-linear* model of [@schlathertawn_02]. Using Kolmogorov’s extension theorem, we shall see that each ECF $\theta$ (of some max-stable process) uniquely determines a TM process $X^*(\theta)$ having the same ECF (Theorem \[thm:ECF\_ND\]). Alongside, we generalize a multivariate result [@molchanov_08], Corollary 1, to the spatial setting, proving that the ECFs coincide with the functions $\theta$ on ${\mathcal{F}}(T)$ (the *set of finite subsets* of $T$) that are normalized to $\theta(\varnothing)=0$ and $\theta(\{t\})=1$ for $t \in T$ and that are *negative definite* (or equivalently *completely alternating*) in a sense to be explained below ([cf. ]{}Definition \[def:ND\_CA\]). This can be illustrated in analogy to the above sketch:
[Diagram: the 1–1 correspondence between ECFs $\theta$ and TM processes $X^*(\theta)$.]
Having identified the ECF $\theta$ as a negative definite quantity allows for several immediate consequences: First, we obtain an *integral representation* of $\theta$ as a mixture of maps $A \mapsto\mathbh{1}_{A \cap Q \neq\varnothing}$ (Corollary \[cor:intrep\]) and derive a *spectral representation* for the corresponding TM process $X^*(\theta)$ (Theorem \[thm:spectralrep\]). Second, we consider operations on ECFs that allow to build new ECFs from given ones. We find that ECFs allow for convex combinations and pointwise limits (Corollaries \[cor:Theta\_convex\] and \[cor:Theta\_compact\]) and that the class of *Bernstein functions* operates on ECFs (Corollary \[cor:opBernstein\]). We also recover the “triangle inequalities” for $\theta$ from [@cooleyetalii_06], Proposition 4, and see that the inequalities therein correspond to three specific choices of a Bernstein function, whereas we may plug in arbitrary Bernstein functions to obtain further “triangle inequalities” (Corollary \[cor:triangleineq\]).
For $T$ being a metric space, we discuss *stochastic continuity*: The TM process $X^*(\theta)$ is stochastically continuous if and only if $\theta$ is continuous ([cf. ]{}Definition \[def:ECF\_cts\]) if and only if the bivariate map $(s,t) \mapsto\theta(\{s,t\})$ is continuous if and only if the bivariate map $(s,t) \mapsto\theta(\{s,t\})$ is continuous on the diagonal (Theorem \[thm:ECprocess\_cty\]).
Finally, we address the exceptional role of the TM processes among simple max-stable processes. To this end, Molchanov’s *dependency set* ${{\mathcal K}}$ [@molchanov_08] is transferred to max-stable processes $X$. It comprises the finite dimensional distributions (f.d.d.) of $X$ (Lemma \[lemma:DepSetprops\]). Now, let ${{\mathcal K}}^*(\theta)$ denote the dependency set of the process $X^*(\theta)$. Then we identify ${{\mathcal K}}^*(\theta)$ as intersection of halfspaces that are directly given by the ECF $\theta$ (Theorem \[thm:starDepSet\]). It turns out that ${{\mathcal K}}^*(\theta)$ is exceptional among the dependency sets ${{\mathcal K}}$ of all max-stable processes sharing the same ECF $\theta$, since ${{\mathcal K}}^*(\theta)$ is maximal [w.r.t. ]{}inclusion as illustrated in Figure \[fig:ballDepSet\]. Since inclusion of dependency sets corresponds to stochastic ordering, this observation leads to sharp inequalities for the [f.d.d. ]{}of max-stable processes in terms of its ECF $\theta$ (Corollary \[cor:fddinequalities\]).
The text is structured as follows. After the introductory Section \[sect:foundations\], the characterization of ECFs and the existence of TM processes is established in Section \[sect:TMprocess\]. Section \[sect:consequences\] collects several immediate consequences and related results, while Section \[sect:depset\] exhibits the exceptional role of TM processes. Sections \[sect:consequences\] and \[sect:depset\] can be read independently.
![Examples of dependency sets in a trivariate setting: a “typical” dependency set ${{\mathcal K}}$ (left) and a dependency set ${{\mathcal K}}^*$ stemming from a TM process (right). It is shown that ${{\mathcal K}}\subset{{\mathcal K}}^*$ (middle). For further details, see the introduction, Example \[example:ball\], Lemma \[lemma:DepSet\] and Theorem \[thm:starDepSet\].[]{data-label="fig:ballDepSet"}](567f01.eps)
Foundations and definitions {#sect:foundations}
===========================
Notation for max-stable processes and ECFs
------------------------------------------
A stochastic process $X=\{X_t
---
author:
- 'S. D. P. Vitenti'
- 'M. Penna-Lima'
title: A general reconstruction of the recent expansion history of the universe
---
Introduction {#sec:introduction}
============
Many indications of the accelerated expansion of the universe come from distance measurements, such as the distance modulus of type Ia supernovae (SNe Ia) [@Riess1998; @Perlmutter1999]. In the last two decades, several models have been proposed in order to explain this phenomenon and, in general, they can be classified into dynamic and kinematic models. Assuming general relativity, the former are described by adding a fluid, Dark Energy (DE), with different proposals providing different DE equations of state (EoS) (for a review, see [@Joyce2015] and references therein). Another common dynamic approach is to modify the geometric setting of the gravitational theory instead of the energy-momentum tensor, as in high-dimensional models [@Dvali2000] and $f(R)$ theories [@Sotiriou2010; @Nojiri2011]. These approaches are labeled as dynamic in the sense that there are differential equations of motion for the metric, whose modifications consist in altering the source term or the equation of motion itself.
In the context of kinematic models, the expansion history of the universe can be probed without assuming any theory of gravitation nor its matter content, and one only needs to define the space-time metric to study it. Considering the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, the recent expansion of the universe is described in terms of the scale factor and its $n$-order derivatives with respect to time, such as the Hubble, deceleration and jerk functions [@Turner2002c; @Visser2004; @Rapetti2007; @Zhai2013], as well as in terms of the luminosity distance [@Daly2003; @Benitez-Herrera2012] (and references therein).[^1]
Since the only unknown in this metric is the scale factor (and possibly the spatial curvature), once one of the kinematic functions is determined, the others can be found by integrating and/or differentiating it. Therefore, all kinematic functions are related. Which function will be chosen to be reconstructed depends on the questions one wants to answer.[^2] For example, one can model the luminosity distance by a linear piecewise function and obtain a statistically sound and unbiased fit using observational data. However, this study will not contribute at all to the understanding of the recent accelerated expansion since, in this case, the deceleration function is assumed zero in the entire redshift interval.[^3]
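As an illustration of how the kinematic functions are tied together, the deceleration function can be recovered numerically from a given Hubble function via $q(z) = (1+z)H'(z)/H(z) - 1$. The fiducial flat $\Lambda$CDM form below is only an assumed example for the sketch, not a model adopted in this work:

```python
import numpy as np

def H(z, H0=70.0, Om=0.3):
    """Fiducial flat LCDM Hubble function (an assumed example)."""
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

z = np.linspace(0.0, 2.3, 2001)
Hz = H(z)
# deceleration from the Hubble function: q(z) = (1+z) H'(z)/H(z) - 1
q_num = (1.0 + z) * np.gradient(Hz, z) / Hz - 1.0

# analytic q(z) for the same fiducial model, as a cross-check
E2 = 0.3 * (1.0 + z)**3 + 0.7
q_ana = 1.5 * 0.3 * (1.0 + z)**3 / E2 - 1.0
```

With these fiducial values $q(0) = -0.55$, and the transition from deceleration to acceleration ($q=0$) occurs near $z \approx 0.67$.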
Keeping the above idea in mind, the reconstruction of a kinematic function has been addressed using different methods. To understand some of these different methodologies, we divide the problem into two parts: (i) the theory underlying the observable quantities and (ii) the relation between the observables and the data, including the data probability distribution.
Regarding (i), the kinematic model provides a perfect description of the observables, without taking into account any noise or errors in the measurements. For the sake of argument, suppose that (ii) is not part of the problem, i.e., the data are perfectly known. In this case, we could use a parametric function and adjust its parameters such that the observable (hopefully) matches the data points, or we could use the data points to determine the observable function using, for example, interpolation. In this sense, we say that the analysis is model-dependent when a given parametric function is chosen a priori and model-independent when we use the data to determine it.
There are also two main procedures to treat (ii). We can assume a given probability distribution for the data and, consequently, the only problem left is to determine the observable curve, which can be done in a model-dependent or independent way, as discussed above. In statistics texts, this is described as a parametric method. On the other hand, we can follow an even more conservative path and not impose a given probability distribution for the data. This approach, known as non-parametric, also uses the data to reconstruct their own probability distribution.
In the model-dependent parametric approach, one assumes a priori a specific functional form of kinematic quantities, such as the deceleration function $q(z)$, and a probability distribution of the data [@Riess2004; @Shapiro2006; @Avgoustidis2009; @Nair2012]. A feature of this strategy is that its results have potentially smaller error bars than those of the other approaches; the price is that the set of assumptions introduced can lead to biased results. A natural improvement to this is to apply a model-independent approach, where one tries to reconstruct the curve while still using the assumed distribution for the data. Among these approaches is the Principal Component Analysis (PCA), in which the kinematic function is described in terms of a set of basis functions and the data are used to determine which subset of this basis is better constrained. Then, the function is reconstructed by using this subset [@Huterer2003; @Shapiro2006; @Clarkson2010; @Ishida2011; @Ruiz2012; @Benitez-Herrera2013].
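A minimal numerical sketch of the PCA idea — a toy setup with an assumed bin basis, noise level and "true" function, not the analysis of any of the cited works — expands the function in piecewise-constant bins, builds the Fisher matrix from simulated data, and keeps only the best-constrained eigenmodes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: q(z) expanded in piecewise-constant bins; each simulated
# datum measures the value in its bin with Gaussian noise.
n_bin, n_obs, sigma = 10, 200, 0.3
z_edges = np.linspace(0.0, 2.0, n_bin + 1)
z_mid = 0.5 * (z_edges[:-1] + z_edges[1:])
q_true = -0.55 + 0.5 * z_mid            # assumed toy truth

z_obs = rng.uniform(0.0, 2.0, n_obs)
A = np.zeros((n_obs, n_bin))            # design matrix (one bin per datum)
A[np.arange(n_obs), np.digitize(z_obs, z_edges) - 1] = 1.0
d = A @ q_true + rng.normal(0.0, sigma, n_obs)

# Fisher matrix and its eigenmodes (tiny prior keeps it invertible)
F = A.T @ A / sigma**2 + 1e-8 * np.eye(n_bin)
w, V = np.linalg.eigh(F)
order = np.argsort(w)[::-1]             # best-constrained modes first
w, V = w[order], V[:, order]

q_ml = np.linalg.solve(F, A.T @ d / sigma**2)   # full maximum-likelihood fit
k = 6                                           # keep the k best modes
q_pca = V[:, :k] @ (V[:, :k].T @ q_ml)
```

The truncation to $k < n_{\rm bin}$ modes is what suppresses the noisiest directions, at the cost of a controlled bias.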
Another possibility is to use smoothing methods [@Shafieloo2006; @Shafieloo2012]. In this model-independent non-parametric case, only mild assumptions are made about the data and, usually, no assumption is made about the model. This allows a direct translation of the data into a kinematic curve. Still in this context, we also have the Gaussian Process (GP), in which one chooses to model directly the probability distribution of the kinematic function itself [@Holsclaw2010; @Seikel2012]. For a more complete list of non-parametric methods see [@Montiel2014] and references therein.
Recovering both the probability distribution of the data and a reconstructed kinematic function requires a large amount of data and, in practice, current observational cosmology does not seem to have reached this level yet. This is evinced by the results obtained so far in the literature [@Ruiz2012; @Shafieloo2012; @Seikel2012; @Montiel2014]. Regarding the data, there is a good perspective of increasingly improving their probabilistic descriptions, since different error sources, such as the systematic ones, are being included in their modeling (e.g., [@Conley2011; @Betoule2014]). This presents an additional challenge to the non-parametric methods, as they must incorporate all the error sources in their reconstruction.
Even in a model-independent and non-parametric approach, the estimated curves are not free from assumptions. Each method has some internal choices of parameters. Currently in the literature, these parameters are obtained using the observational data. However, as we usually have only one set of data, doing so will calibrate the method for this one particular realization of the data. In this case, there is no way to know whether this calibration provides the best balance between bias and variance. This difficulty can be circumvented using different realizations of the **same** data set. For a given calibration, i.e., for a given choice of the internal parameters, the method is applied to a large number of simulations, obtaining the bias and variance for this calibration. Then, repeating this process for different calibrations, one can find the one best suited to the chosen data set. In other words, the internal parameters in these reconstructions must not be related to one particular realization of the data, but to their probability distribution.
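The calibration loop just described can be sketched with a simple Monte Carlo; the kernel smoother, the toy "truth" and the noise model below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_q(z):                         # assumed toy truth
    return -0.55 + 0.5 * z

def smooth(z_obs, d, z_eval, width):
    """Gaussian-kernel (Nadaraya-Watson) smoother."""
    w = np.exp(-0.5 * ((z_eval[:, None] - z_obs[None, :]) / width)**2)
    return (w * d).sum(axis=1) / w.sum(axis=1)

z_eval = np.linspace(0.2, 1.8, 20)
scores = {}
for width in (0.05, 0.15, 0.4, 1.0):   # candidate internal parameters
    est = []
    for _ in range(200):               # realizations of the same data set
        z_obs = rng.uniform(0.0, 2.0, 50)
        d = true_q(z_obs) + rng.normal(0.0, 0.3, 50)
        est.append(smooth(z_obs, d, z_eval, width))
    est = np.array(est)
    bias2 = (est.mean(axis=0) - true_q(z_eval))**2
    var = est.var(axis=0)
    scores[width] = float((bias2 + var).mean())   # mean squared error
best = min(scores, key=scores.get)
```

Too small a width inflates the variance, too large a width biases the estimate; the selected `best` is a property of the data distribution, not of one realization.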
This idea can be extended to the study of the statistical properties of the data. For example, in [@Montiel2014], among other results, the authors apply a bootstrap-like procedure to calibrate the smoothing parameter applied to the data. This kind of analysis can provide insightful information about the statistical properties of the data when little is known about their relationships.
In this work, we use the current available observational data for small redshifts ($z \lesssim 2.3$) and their likelihoods, namely, the Sloan Digital Sky Survey-II and Supernova Legacy Survey 3 years (SDSS-II/SNLS3) combined Joint Light-curve Analysis (JLA) SNe Ia sample [@Betoule2014], baryon acoustic oscillation (BAO) data [@Beutler2011; @Padmanabhan2012; @Kazin2014; @Ross2014] and $H(z)$ measurements [@Stern2010; @Riess2011; @Moresco2012; @Busca2013]. Currently, there is not enough data to perform a full model-independent and non-parametric reconstruction of the recent evolution of the universe. Therefore, we use the usual likelihood for these data, but, to be conservative, we reconstruct $q(z)$ along with some astrophysical parameters of SN Ia, the drag scale (present in the BAO likelihood) and the Hubble parameter $H_0$.
Besides the above data, there is also a wealth of data concerning the large scale structure connected to the perturbations around a FLRW metric, such as the temperature fluctuations of the cosmic microwave background (CMB) [@Hinshaw2013; @Planck2015]. Since we assume no dynamic model, we would have to propose a kinematic one for the perturbations. Such a model is not feasible, as it would require a set of functions of both time and space. In principle, one could also directly use derived observables, e.g., the CMB distance priors [@Komatsu2011], to fit the background model. However, these parameters are obtained in the context of a specific model, e.g., $\Lambda$CDM. Thus this model would be indirectly reintroduced in the results. For this reason, we
---
author:
- |
Robert Fleischer\
CERN, Switzerland\
E-mail:
- |
\
TU Muenchen, Germany\
E-mail:
title: '$b \to d$ Penguins: CP Violation, General Lower Bounds on the Branching Ratios and Standard Model Tests'
---
Introduction
============
Flavour-changing neutral-current (FCNC) processes, possible in the Standard Model (SM) only through loop diagrams, are an extremely important probe for new physics (NP). The good agreement between experiment and theory in processes induced by $b\to s$ FCNCs has already put important constraints on physics beyond the SM. Due to the excellent work of the $B$-factories, we are now entering the era where $b\to d$ penguin-induced processes – typically suppressed by a factor of 20 with respect to the corresponding $b\to s$ penguin transitions – can be used to test the SM more rigorously than it was possible before.
The flavour structure of the SM, more specifically the order of magnitude of the individual elements of the Cabibbo–Kobayashi–Maskawa (CKM) matrix, allows us to derive certain relationships between different observables in $b\to d$-induced decays, and between $b\to s$- and $b\to d$-related observables. These relationships allow us to test the SM in those cases where the corresponding observables have already been measured and to make predictions where observations are still missing.
$B_d^0\to K^0\bar K^0$: CP Violation and the Branching Ratio
============================================================
In the SM, we can write the amplitude for the decay $B_d^0\to K^0\bar K^0$ as $$\label{ampl-BdKK}
A(B_d^0\to K^0\bar K^0)=\lambda^{(d)}_u {\cal P}_u^{K\!K} +
\lambda^{(d)}_c {\cal P}_c^{K\!K} + \lambda^{(d)}_t {\cal P}_t^{K\!K},$$ where the $\lambda^{(d)}_q \equiv V_{qd}V_{qb}^\ast$ are CKM factors, and the ${\cal P}_q^{K\!K}$ denote the strong amplitudes of penguin topologies with internal $q$-quark exchanges, which receive tiny contributions from colour-suppressed electroweak (EW) penguins and are fully dominated by QCD penguin processes. Eliminating $\lambda^{(d)}_t$ with the help of the unitarity relation $\lambda^{(d)}_t=-\lambda^{(d)}_u-\lambda^{(d)}_c$ of the CKM matrix, we can write the amplitude as $$\label{ampl-BdKK-lamt}
A(B^0_d\to K^0\bar K^0)=\lambda^3A{\cal P}_{tc}^{K\!K}
\left[1-\rho_{K\!K} e^{i\theta_{K\!K}}e^{i\gamma}\right],$$ where ${\cal P}_{tc}^{K\!K}\equiv {\cal P}_t^{K\!K}-{\cal P}_c^{K\!K}$, and $\rho_{K\!K} e^{i\theta_{K\!K}}$ is a function of the ${\cal P}_q^{K\!K}$ that we treat as an unknown hadronic parameter.
The direct and mixing-induced CP asymmetries ${\cal A}_{\rm CP}^{\rm dir}(B_d\to K^0\bar K^0)$ and ${\cal A}_{\rm CP}^{\rm mix}(B_d\to K^0\bar K^0)$ are functions of [*only*]{} $\rho_{K\!K}$, $\theta_{K\!K}$, the angle $\gamma$ of the unitarity triangle, and (in the latter case) the $B^0_d$–$\bar B^0_d$ mixing phase $\phi_d$; the same is true for the normalized branching ratio $\langle B \rangle$, where phase-space and CKM factors as well as $|{\cal P}_{tc}^{K\!K}|^2$ have been factored off.
For fixed values of $\gamma$ and $\phi_d$, $\rho_{K\!K}$ and $\theta_{K\!K}$ then span a surface in the ${\cal A}_{\rm CP}^{\rm dir}$–${\cal A}_{\rm CP}^{\rm
mix}$–$\langle B \rangle$ observable space, shown in Fig. 1 for $\phi_d = 47^\circ$ and $\gamma=65^\circ$. (The fringe is defined by $\rho_{K\!K}=1$, the numbers give the value for $\theta_{K\!K}$.)
In the SM, any measurement of the three observables has to lie on this surface, which is theoretically clean. Sufficiently accurate measurements of the branching ratio will give strong constraints on possible values for the asymmetries.
The form of the surface implies a theoretical [*lower*]{} bound for $\langle B \rangle$ that can be converted into a lower bound for $\mbox{BR}(B_d\to K^0\bar K^0)$ using input from $b\to s$ penguin decays (see [@FR1] for details). With the help of this lower bound, the recent measurement of $B_d\to K^0\bar K^0$ [@BK0K0exp] was correctly predicted in [@FR1]. Using the latest experimental input and the central values of the factorizable $SU(3)$-breaking parameters, we update the bound in (3) of [@FR1] to ${\rm BR}(B^0_d \to \bar K^0 K^0)> 1.43\,^{+0.17}_{-0.25}$, nicely consistent with the old result and the recent measurements (see Table \[BRtable\]).
We observe that the measured ${\rm BR}(B^0_d \to \bar K^0 K^0)$ is right at the lower theoretical bound (bottom of the surface in Fig. 1). This implies a value of $\rho_{K\!K}$ significantly different from 0, with a small phase $\theta_{K\!K}$; $\rho_{K\!K}$ can be related to a hadronic $B\to \pi K$ parameter through $\rho_{\rm c}=\epsilon \rho_{K\!K}$, where $\epsilon\equiv\lambda^2/(1-\lambda^2)=0.053$. This quantity is usually neglected. However, a value of $\rho_{\rm c}\sim 0.05$, as suggested by ${\rm BR}(B^0_d \to \bar K^0 K^0)$, would be rather welcome in the analysis of the $B\to \pi K$ system [@UPDATE].
General Lower Bounds on the Branching Ratios of $b\to d$ Penguin Processes
==========================================================================
The mechanism that provided the lower bound on ${\rm BR}(B^0_d \to \bar K^0 K^0)$ is actually more general. We will now first use it to derive lower bounds on $b\to d \gamma$ processes, and then discuss the general $b\to d$ penguin case. The amplitude for the decay $\bar B \to \rho\gamma$ can be written as $$\label{Ampl-Brhogam}
A(\bar B \to \rho\gamma)=c_\rho \lambda^3 A {\cal P}_{tc}^{\rho\gamma}
\left[1-\rho_{\rho\gamma}e^{i\theta_{\rho\gamma}}e^{-i\gamma}\right],$$ where $c_\rho=1/\sqrt{2}$ and 1 for $\rho=\rho^0$ and $\rho^\pm$, respectively, and $A=|V_{cb}|/\lambda^2$. Moreover, ${\cal P}_{tc}^{\rho\gamma}\equiv{\cal P}_t^{\rho\gamma}-{\cal P}_c^{\rho\gamma}$, where ${\cal P}_t^{\rho\gamma}$ and ${\cal P}_c^{\rho\gamma}$ are matrix elements of operators from the standard weak effective Hamiltonian (see [@FR2] for details). $\rho_{\rho\gamma}e^{i\theta_{\rho\gamma}}$ is again a hadronic parameter that we will treat as essentially unknown. Let us now use the information offered by the $b\to s$ counterpart of our $b\to d$ transition, which is well measured and takes an amplitude of the following form: $$\label{Ampl-BKastgam}
A(\bar B \!\to\! K^\ast \!\gamma)\!=-\!
\frac{\lambda^3 \! A {\cal P}_{tc}^{K^\ast\!\gamma}}{\sqrt{\epsilon}} \!
\left[1\!+\!\epsilon\rho_{K\!^\ast\!\gamma}e^{i\theta_{K\!^\ast\!\gamma}}
e^{-i\!\gamma}\right]\!,$$ where $\epsilon$ was introduced above. The ratio of the corresponding BRs is then given by $$\label{rare-ratio}
\frac{\mbox{BR}(\bar B \to \rho
\gamma)}{\mbox{BR}(\bar B \to K^\ast \gamma)}=\epsilon
\left[\frac{\Phi_{\rho\gamma}}{\Phi_{K\!^\ast\gamma}}\right]
\left|\frac{{\cal P}_{tc}^{\rho\
October 2001 PAR–LPTHE 01/?\
[**Universality of coupled Potts models**]{}\
[**Vladimir S. Dotsenko (1), Jesper Lykke Jacobsen (2),\
Xuan Son Nguyen (1), and Raoul Santachiara (1)**]{}\
[**(1)**]{} [*LPTHE*]{}[^1], [*Universit[é]{} Pierre et Marie Curie, Paris VI\
Universit[é]{} Denis Diderot, Paris VII\
Boîte 126, Tour 16, 1$^{\it er}$ [é]{}tage\
4 place Jussieu, F-75252 Paris Cedex 05, France.*]{}\
[**(2)**]{} [*Laboratoire de Physique Théorique et Modèles Statistiques,\
Université Paris-Sud, Bâtiment 100, F-91405 Orsay, France.*]{}
**ABSTRACT**
> [We study systems of $M$ Potts models coupled by their local energy density. Each model is taken to have a distinct number of states, and the permutational symmetry $S_M$ present in the case of identical coupled models is thus broken initially. The duality transformations within the space of $2^M-1$ multi-energy couplings are shown to have a particularly simple form. The selfdual manifold has dimension $D_M = 2^{M-1}-1$. Specialising to the case $M=3$, we identify a unique non-trivial critical point in the three-dimensional selfdual space. We compare its critical exponents as computed from the perturbative renormalisation group with numerical transfer matrix results. Our main objective is to provide evidence that at the critical point of three different coupled models the symmetry $S_3$ is restored.]{}
Introduction {#sec:intro}
============
In the study of coupled models, and also of disordered models in their replica formulation, the permutation group symmetry $S_M$ is supposed to play an essential role [@ludwig; @djlp; @dns]. Namely, the interaction part of the action for a set of $M$ identical coupled models $$A_{\rm int} \sim \int {\rm d}^{2}x \; g \sum_{a \neq b} \varepsilon_{a}(x)\,\varepsilon_{b}(x)$$ is explicitly invariant with respect to any permutation of the models. Here $a$ and $b$ are replica indices, and $\{\varepsilon_a(x)\}$ designates the set of local energy operators. In the lattice definition of such coupled models the interaction part of the Hamiltonian takes a similar form, with only $\int {\rm d}^2x$ replaced by a summation and $\{\varepsilon_a(x)\}$ by an appropriate lattice expression[^2].
When one introduces asymmetric couplings, by generalising the common coupling constant $g$ to a matrix $g_{ab}$, $$A_{\rm int} \sim \int {\rm d}^{2}x \sum_{a \neq b} g_{ab}\, \varepsilon_{a}(x)\, \varepsilon_{b}(x)$$ \[A\_int\] a perturbative renormalisation group (RG) analysis reveals that the $S_M$ symmetry is restored at the fixed point. A detailed study of this scenario, within the $\epsilon$-type perturbative RG for coupled Potts models, was carried out in [@ls]. Supposing all components of $g_{ab}$ to stay of order $\epsilon$, their initial values being all positive[^3], it was shown in [@ls] that the only non-trivial fixed point, having one attractive direction, all other directions being repulsive, is that with $g_{ab} \equiv g$.[^4] This type of restoration of the symmetry $S_M$ could be called “soft universality”.
In this paper we are going to argue for a “strong universality” in the criticality of coupled models. We shall mainly be interested in coupling $M=3$ different models[^5], via (\[A\_int\]), with $\{\varepsilon_a\}$ belonging to Potts models with different numbers of states $\{q_1,q_2,q_3\}$. This breaks the permutational symmetry in a “strong” sense. Still, the RG calculations show the existence of a single fixed point with all $\{g_{ab}\}$ being positive, as is the case for identical models.
To our knowledge, Simon [@simon] was the first to apply the RG analysis to a set of different coupled Potts models. The most general model studied by this author was that of $M_1$ Potts models with $q_1$ states and $M_2$ Potts models with $q_2$ states ($q_1 \neq q_2$), all of them being coupled. After determining the fixed point structure, he computed the dimensions of the spin operators, as well as the RG equations for the energy operators to two-loop order. The effect of disorder on these coupled systems was also analysed.
Here we generalise the RG calculations of [@simon] to the case of three different coupled Potts models $(q_1 \neq q_2 \neq q_3)$. We shall compute the dimensions of energy operators, with a special focus on the symmetry of the theory, at the non-trivial fixed point that generalises the one found in [@djlp] for three identical models. Within the space of couplings $\{g_{ab}\}$, the new fixed point is stable in one direction and unstable in the others, the topology of the RG flows being similar to those of [@djlp]. But there is also one apparent difference: The permutational symmetry has disappeared, the coupled models being different.
The purpose of this paper will be to provide evidence, by using various methods, that at the fixed point of three different coupled models the apparently lost symmetry $S_3$ is restored, implying a “strong universality”.
This symmetry restoration cannot be observed on the level of the initial action (\[A\_int\]), as discussed above, nor is it visible in the perturbative RG treatment, or in the Hamiltonian of the explicit lattice realisation. This is because $\{\varepsilon_{a}\}$ are the energy operators of different models, with different scaling dimensions in particular.
The way we shall check for the restoration of the symmetry is by looking at the spectrum of scaling dimensions at the new fixed point, within the sector of energy operators. Like in the case of identical models [@ludwig; @djlp; @dns], the RG analysis implies that the three energy operators of the decoupled models $\{\varepsilon_1(x),\varepsilon_2(x),\varepsilon_3(x)\}$ will rearrange as three particular linear combinations so as to form the new primary operators at the fixed point of the coupled models.
In the case of identical models the corresponding linear combinations are easy to guess on symmetry grounds. The irreducible representations (irreps) of the group $S_3$ in the basis $\{\varepsilon_1(x),\varepsilon_2(x),\varepsilon_3(x)\}$ consist of a (symmetric) singlet $$\varepsilon_{\rm S}(x)=\varepsilon_{1}(x)+\varepsilon_{2}(x)+\varepsilon_{3}(x)$$ \[symm\] and an (antisymmetric) doublet $$\varepsilon_{\rm A_1}(x)=\varepsilon_{1}(x)-\varepsilon_{2}(x)~, \qquad
\varepsilon_{\rm A_2}(x)=\varepsilon_{1}(x)+\varepsilon_{2}(x)-2\varepsilon_{3}(x)$$ \[antisym\] that act as the new primary operators at the fixed point [@ludwig; @djlp; @dns]. The fact that the operators $\varepsilon_{\rm A_1}$ and $\varepsilon_{\rm A_2}$ belong to the same two-dimensional irrep means that their dimensions must coincide: $\Delta(\varepsilon_{\rm A_1})=\Delta(\varepsilon_{\rm A_2})$. These are however in general different from the dimension $\Delta(\varepsilon_{\rm S})$ of the one-dimensional irrep.
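The decomposition into a singlet and a two-dimensional invariant subspace can be checked numerically; the explicit (unnormalized) doublet basis used below is the conventional choice and is an assumption here:

```python
import itertools
import numpy as np

# Check that the symmetric singlet is S_3-invariant and that the
# doublet combinations span a two-dimensional invariant subspace.
singlet = np.array([1.0, 1.0, 1.0])
doublet = np.array([[1.0, -1.0, 0.0],
                    [1.0, 1.0, -2.0]])
Q, _ = np.linalg.qr(doublet.T)      # orthonormal basis of the doublet plane

max_residual = 0.0
for perm in itertools.permutations(range(3)):
    P = np.eye(3)[list(perm)]       # permutation matrix acting on (e1, e2, e3)
    assert np.allclose(P @ singlet, singlet)
    for v in doublet:
        w = P @ v                   # permuted doublet vector
        # residual after projecting back onto the doublet plane
        max_residual = max(max_residual, np.linalg.norm(w - Q @ (Q.T @ w)))
```

The residual vanishes because any permutation preserves the component sum, so vectors orthogonal to the singlet stay orthogonal to it.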
When coupling different models, the corresponding linear combinations of $\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}$ will have more complicated coefficients which have to be calculated by the RG technique. One will find something of the form: $$\varepsilon^{\ast}_{i}(x)=\sum_{a=1}^{3} c_{ia}\,\varepsilon_{a}(x)~, \qquad i=1,2,3$$ \[lin-comb\] Since the initial dimensions $\Delta(\varepsilon_{1}),\Delta(\varepsilon_{2}), \Delta(\varepsilon_{3})$ of the decoupled models differ, one may expect that the critical dimensions (RG eigenvalues) of the newly formed primary operators $\varepsilon_1^*,\varepsilon_2^*,\varepsilon_3^*$ might all be different.
Our argument is that, in the case of coupling different models, it is not the linear combinations (\[lin-comb\]) which have to be examined to analyse the symmetry, but rather the spectrum of their critical dimensions. Permuting $\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}$ in the combinations (\[lin-comb\]) does not make much sense because they are different. To permute $\varepsilon_{1}^{\ast}, \varepsilon_{2}^{\ast},
\varepsilon_{3}^{\ast},$ one first has to know their properties, their scaling dimensions, to decide if it makes sense or not.
The conclusion will be that one has to study the spectrum of dimensions at the new fixed point. This provides representation-independent information, independent of the way one has defined the theory initially, such as by its action (Hamiltonian) in Eq. (\[A\_int\]).
If the symmetry $S_{3}$ is restored then the dimensions $\Delta(\varepsilon^{\ast}_{1})$, $\Delta(\varepsilon^{\ast}_{2})$, $\Delta(\varepsilon^{\ast}_{3})$ should form a singlet and a doublet, as is the case when one couples initially identical models.
---
abstract: 'A new parameter-free method is proposed for treatment of single-particle resonances in the real-energy continuum shell model. This method yields quasi-bound states embedded in the continuum which provide a natural generalization of weakly bound single-particle states.'
address: 'Grand Accélérateur National d’Ions Lourds (GANIL),CEA/DSM – CNRS/IN2P3, BP 55027, F-14076 Caen Cedex 05, France'
author:
- 'J.B. Faes and M. P[ł]{}oszajczak'
title: 'New method for extracting quasi-bound states from the continuum'
---
Introduction {#intro}
============
Many-body states of the nuclear shell model (SM) are linear combinations of Slater determinants built of bound single-particle (s.p.) states. Thus, the SM describes bound many-body systems which are isolated from the external environment of scattering states and decay channels. A natural generalization of the SM for weakly bound or unbound many-body systems, the so-called Gamow shell model (GSM), has been formulated recently in the Berggren ensemble consisting of bound s.p. states, s.p. resonant states (Gamow [@Gam28] or Siegert [@Sie39] states), and the complex-energy s.p. continuum states [@Mic02a; @Bet02]. A complete set of s.p. states can be defined in the Berggren ensemble [@Ber68; @Ber93]. The complete set of many-body states is then given by all Slater determinants spanned by s.p. states of the complete s.p. set in the Berggren ensemble [@Mic02a; @Bet02]. The theoretical framework defined in this way gives a full description of the interplay between scattering states, resonances and bound states in the many-body wave function, without imposing a limit on the number of particles in the scattering continuum. On the other hand, the asymptotic decay channels in the GSM are not individually resolved. Hence, this approach cannot be applied to a description of nuclear reactions and remains a tool for nuclear structure studies.
The unification of structure and reaction theories is possible in the continuum shell model (CSM) formalism [@Mah69; @Bar77; @Phi77], including the recently developed shell model embedded in the continuum (SMEC) [@Ben99; @Ben00; @Oko03; @Rot05]. In this formalism, the s.p. basis includes bound states and the real-energy continuum states. Feshbach’s projection technique [@Fes58], used in the CSM and SMEC, allows one to describe on the same footing the nuclear reactions, including the rearrangement term, and the nuclear structure of well-bound, weakly-bound or unbound nuclear states.
Bound and scattering s.p. states define two orthogonal subspaces $q_0$ and $p_0$, respectively. The s.p. resonances do not belong to the Hilbert space and have to be regularized before including them in the SMEC framework. The regularization procedure consists of including s.p. resonances in a discrete part of the spectrum after removing the scattering tails which are included in the embedding scattering continuum. These regularized resonances are usually called the quasi-bound states embedded in the continuum (QBSEC). The new subspaces: $q$ including bound and QBSEC s.p. states, and $p$ including non-resonant scattering states and scattering tails of regularized resonances, are subsequently reorthogonalized.
The many-body states in SMEC are given by all Slater determinants spanned by s.p. states in $q$ and $p$. Many-body states with all particles occupying bound and QBSEC s.p. states span the $\mathcal{Q} \equiv \mathcal{Q}_0$ subspace of the Hilbert space. The complement subspace $\mathcal{P}$ includes many-body states with one or more particles in the scattering states, hence: $\mathcal{P} \equiv \sum_{i=1}^{A}\mathcal{Q}_i$, where $\mathcal{Q}_{i}$ projects on the space with $i$ particles in the continuum.
The number of particles in the scattering continuum provides a natural hierarchy of approximations in the CSM. The technical complications associated with the inclusion of multiparticle continua and complex asymptotic channels are such that none of the early CSM or SMEC studies considered more than one particle in the continuum. Only recently has the two-particle continuum been included in the SMEC for the description of the two-proton radioactivity [@Rot05]. The general framework unifying the reaction theory and the structure theory for problems with any number of particles in the scattering continuum has been worked out as well [@Fae07].
One of the key elements in the practical implementation of the SMEC scheme is the definition of the orthogonal subspaces $q$, $p$ and, consequently, the $\mathcal{Q}_0, \mathcal{Q}_1, \mathcal{Q}_2, \dots$ many-body subspaces. This definition is associated with the extraction of regularized resonances from the s.p. scattering continuum. In all previous SMEC applications (for an extensive list of applications see Refs. [@Ben99; @Ben00; @Oko03; @Rot05]) as well as in the CSM studies of the Dresden group [@Bar77; @Rot91], the Wang and Shakin method was employed [@Wan70]. In this method, the construction of a QBSEC depends on a few parameters: the (real) energy of a scattering wave function which is then regularized and removed from the continuum, and the parameters of the cutting function (Heaviside or Fermi functions) chopping off the tail of this wave function. The QBSECs so defined are auxiliary, artificial objects which do not correspond to any solution of the Schrödinger equation.
Unfortunately, the CSM/SMEC results concerning, e.g., the partial decay widths, depend on the parameters of the cutting function. This ambiguity cannot be totally removed even if the condition on the s.p. width is applied in choosing the cutting radius [@Bar77]. In practical applications, the radius of the cutting function is selected close to the top of the Coulomb barrier [@Oko03; @Rot05], which yields a sensible prescription for narrow s.p. resonances. Problems with the choice of the cutting function and the extraction of QBSECs appear for broad resonances. Moreover, the QBSECs obtained using the Wang and Shakin method [@Wan70] do not have the correct asymptotic behavior.
In this paper, we present a new method of regularizing s.p. resonances which is unambiguous, parameter-free and yields states with the correct bound-state asymptotics. These bound states embedded in the non-resonant continuum are called the [*anamneses*]{} of resonances in the space of square-integrable functions (${\cal L}^2$-functions).
In Sect. \[radicons\], we provide a short discussion of different radial solutions of the Schrödinger equation which are used in the construction of a complete s.p. basis in SMEC. An unambiguous determination of the resonance anamneses is presented in Sect. \[new\_qbsec\], and Sect. \[non\_res\_cont\] is devoted to the discussion of the consequences of the extraction of resonance anamneses for the non-resonant, real-energy continuum states. Together, bound s.p. states, anamneses of s.p. resonances and the regularized real-energy s.p. scattering states provide the complete s.p. basis in the Hilbert space. This basis can be used to obtain the complete many-body basis in CSM/SMEC studies.
Applications of the new method are discussed in Sect. \[applications\]. We shall present examples of the anamneses for resonances of different widths and angular momenta $\ell$, both for neutrons and protons. General features of the resonance anamneses, such as the energy dependence of both the root-mean-square (RMS) radius and the matching point between inner and outer solutions of the Schrödinger equation, will be analyzed as well. Finally, the main conclusions will be given in Sect. \[conclusion\].
Radial Schrödinger equation: the general considerations {#radicons}
=======================================================
Let us consider a spherical potential $V(r)$ describing an interaction between a nucleon and a target nucleus. In the center of mass coordinates, the one-body radial wave function $u(r)$ of the relative motion is the solution of the Schrödinger equation: $$\begin{aligned}
\label{local_schr}
\left[ \frac{d^{2}}{dr^{2}}+k^{2}-\frac{\ell(\ell+1)}{r^{2}}-\frac{2\mu}{\hbar^{2}}V(r)\right]u_{k,\ell}(r) = 0 ~ \ , \end{aligned}$$ where $\mu$ is the reduced mass and $\ell$ is the relative angular momentum. In the following, we shall omit the angular momentum index $\ell$ to simplify the notation.
We are interested in three kinds of solutions of Eq. (\[local\_schr\]):
- The scattering solutions, which form a continuum for real and positive momenta $k$. These solutions are regular at $r=0$ and asymptotically take a form: $$\begin{aligned}
\label{asym_scatt}
u_{k}(r) &\sim& kr\Big{(}C^{-}h^{-}(kr)+C^{+}h^{+}(kr)\Big{)} ~ \ , \end{aligned}$$ where $h^{\
---
author:
- 'Steven V. Fuerst and Kinwah Wu'
date: 'Received: '
title: 'Radiation Transfer of Emission Lines in Curved Space-Time'
---
Introduction
============
The strong X-rays observed in active galactic nuclei (AGN) and some X-ray binaries are believed to be powered by accretion of material into black holes. The curved space-time around the black hole influences not only the accretion hydrodynamics but also the transport of radiation from the accretion flow.
Emission lines from thin Keplerian disks around non-relativistic stellar objects generally have two symmetric peaks (Smak 1969), corresponding to the approaching and receding line-of-sight velocities due to disk rotation. Because of various relativistic effects, lines from accretion disks around black holes do not always have symmetrical double-peak profiles. The accretion flow near a black hole is often close to the speed of light, and emission is relativistically boosted. The blue peak of the line therefore becomes stronger and sharper. Moreover, the strong gravity near the black-hole event horizon causes time dilation, which shifts the line to lower energies. Emission lines from accretion disks around black holes appear to be broad, with a very extended red wing and a narrow, sharp blue peak (see e.g. the review by Fabian et al. (2000) and references therein). Furthermore, gravitational lensing can produce multiple images and self-occultation, further modifying the emission line profile.
Various methods have been used to calculate the profiles of emission lines from accretion disks around black holes. The methods can be roughly divided into three categories. We now discuss each of them briefly. The first method uses a transfer function to map the image of the accretion disk onto a sky plane (Cunningham 1975, 1976). The accretion disk is assumed to reside in the equatorial plane. It is Keplerian and geometrically thin, but optically thick. The space-time metric around the black hole is first specified, and the energy shift of the emission (photons) from each point on the disk surface is then calculated. A parametric emissivity law for the disk emission is usually used — typically, a simple power-law which decreases radially outward. The specific intensity at each point in the sky plane is determined from the energy shift and the corresponding specific intensity at the disk surface, using the Lorentz-invariant property. The transfer-function formulation (Cunningham 1975, 1976) has been applied to line calculations in settings ranging from thin accretion rings (e.g. Gerbal & Pelat 1981) and accretion disks around Schwarzschild (e.g. Laor 1991) and rotating (Kerr) black holes (e.g. Bromley, Chen & Miller 1997). The second method makes use of the impact parameter of photon orbits around Schwarzschild black holes (e.g. Fabian et al. 1989; Stella 1990; Kojima 1991). The transfer function in this method is described in terms of elliptical functions, which are derived semi-analytically. The Jacobian of the transformation from the accretion disk to sky plane is, however, determined numerically via infinitesimal variations of the impact parameter (Bao 1992). The method can be generalized to the case of rotating black holes by using additional constants of motion (Viergutz 1993; Bao, Hadrava & Ostgaard 1994; Fanton et al. 1997; Cadez, Fanton & Calvani 1998). 
The third method simply considers direct integration of the geodesics to determine the photon trajectories and energy shifts (Dabrowski et al. 1997; Pariev & Bromley 1998; Reynolds et al. 1999).
These calculations have shown how the dynamics of the accretion flow around the black hole and the curved space-time shape the line profiles. Various other aspects of the radiation processes, e.g. reverberation and reflection (Reynolds et al. 1999) and disk warping (Cadez et al. 2003) were also investigated using the methods described above. The results obtained from these calculations have provided us with a basic framework for interpreting X-ray spectroscopic observations, in particular, the peculiar broad Fe K$\alpha$ lines in the spectra of AGN, e.g. MCG-6-30-15 (Tanaka et al. 1995). While existing studies have put emphasis on the energy shift of the emission, transport effects such as extinction have been neglected. Resonant absorption (scattering) by ambient material can greatly modify the disk emission line profile. This effect was already demonstrated in a study by Ruszkowski & Fabian (2000), in which a simple rotating disk-corona provides the resonant scattering.
Here, we present ray-tracing calculations of spectra from relativistic flows in curved space-time. We include line-of-sight extinction and emission explicitly in the formulation. The radiative-transfer equation is derived from the Lorentz-invariant form of the conservation law. It reduces to the standard classical radiative-transfer equation in the non-relativistic limit. The formulation can incorporate dynamical and geometric models for the line-of-sight absorbing and emitting material. As an illustration, we calculate the spectra from thin accretion disks and thick accretion tori around rotating black holes. The emitted spectra include a power law continuum together with a line. This emission is resonantly scattered by the line-of-sight material. We include the contribution from higher-order images and allow for self-occultation.
We organize the paper as follows. In §2 we present the derivation of the transfer equation. In §3 we construct the equation of motion for free particles in a Kerr space-time and for force-constrained particles for some simple parametric models. In §4 we construct a thin disk and a thick torus model. In §5 we generalize this by adding absorption due to a distribution of absorbing clouds. In §6 we present the results from the models where either the emission geometry (tori) or the absorption (clouds) is important.
Radiative-Transfer Equation
===========================
Throughout this paper, we adopt the usual convention $c=G=h=1$ for the speed of light, gravitational constant and Planck constant. The interval in space-time is specified by $$\label{metric}
d\tau^2 = g_{\alpha \beta} dx^{\alpha} dx^{\beta} $$ where $g_{\alpha \beta}$ is the metric.
Consider a bundle of particles which fill a phase-space volume element $$\label{bundle}
{d\cal{V}} \equiv dx\,dy\,dz\,dp^x\,dp^y\,dp^z\ ,$$ where $dx\,dy\,dz (\equiv dV)$ is the three-volume and $dp^x\ dp^y\ dp^z$ is the momentum range, at a given time $t$. Liouville’s Theorem reads $$\frac{d{\cal V}}{d\lambda} = 0$$ (see Misner, Thorne & Wheeler 1973), with $\lambda$ here being the affine parameter for the central ray in the bundle. The volume element $d{\cal V}$ is thus Lorentz invariant.
The distribution function for the particles in the bundle, $F(x^i, p^i)$ is given by $$F(x^i, p^i) = {dN \over d{\cal V}}\ ,$$ where $dN$ is the number of particles in the three-volume. Since $dN/d{\cal V}$ is Lorentz invariant, $F(x^i, p^i)$ is Lorentz invariant. From equation (\[bundle\]), we have $$\label{rawfluxdefn}
F={dN\over p^2 dV\,dp\,d\Omega}\ ,$$ where $p^2\,dp\,d\Omega=dp^x\,dp^y\,dp^z$. For massless particles, $v = c = 1$ and $\vert p\vert=E$. The number of photons in the given spatial volume is therefore the number of photons flowing through an area $dA$ in a time $dt$. It follows that $$\label{flux_inv}
F={dN\over E^2dA\,dt\,dE\,d\Omega}\ .$$ Recall that the specific intensity of the photons is $$\label{inten_inv}
I_\nu={E dN\over dA\,dt\,dE\,d\Omega}\ .$$ By inspection of equations (\[flux\_inv\]) and (\[inten\_inv\]), we obtain $$F=\frac{I_\nu}{E^3}=\frac{I_\nu}{\nu^3}\ ,$$ where $\nu ~(= E)$ is the frequency of the photon. We will use this Lorentz invariant intensity, ${\cal I}\equiv F$, in the radiative transfer formulation.
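The practical consequence of the invariance of $I_\nu/\nu^3$ is a one-line transformation of specific intensity between frames. A minimal sketch (the function name is mine):

```python
def transform_intensity(i_nu_emit, nu_emit, nu_obs):
    """Specific intensity seen by an observer, from the frame invariance of
    F = I_nu / nu^3 derived above: I_obs / nu_obs^3 = I_emit / nu_emit^3."""
    return i_nu_emit * (nu_obs / nu_emit) ** 3
```

For example, a gravitational redshift factor $\nu_{\rm obs}/\nu_{\rm emit}=1/2$ dims the observed intensity by $2^3=8$.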
In a linear medium, extinction is proportional to the intensity, and the emission is independent of the intensity of the incoming radiation. The radiative transfer equation is therefore $$\label{classradtrans}
\frac{d{\cal I}}{d s}=-\chi{\cal I} + \eta\left(\frac{\nu_0}{\nu}\right)^3 \ ,$$ where $\chi$ is the absorption coefficient, $\eta$ is the emission coefficient and $ds$ is the length element the ray traverses. The equation in this form is defined in the observer’s frame, and the absorption and emission coefficients are related to their counterparts in the rest frame with respect to the medium via $$\begin{aligned}
\label{chframe}
\chi&=&\left(\frac{\nu_0}{\nu}\right)\chi_0 \ , \\
\eta&=&\left(\frac{\nu}{\
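The transfer equation (\[classradtrans\]) can be integrated step by step along a discretised ray. Below is a minimal explicit-Euler sketch; the function name is mine, and the emission-coefficient transformation $\eta=(\nu/\nu_0)^2\eta_0$ is the standard relation, assumed here because the corresponding line above is cut off:

```python
def integrate_ray(chi0, eta0, g, ds):
    """Explicit-Euler solution of d(I)/ds = -chi I + eta (nu0/nu)^3 for the
    Lorentz-invariant intensity I.  chi0, eta0: rest-frame absorption and
    emission coefficients at each step along the ray; g: local energy-shift
    factor nu/nu0.  chi = chi0/g follows Eq. (chframe); eta = g^2 eta0 is
    the standard transformation (an assumption of this sketch)."""
    I = 0.0
    for c0, e0, gi in zip(chi0, eta0, g):
        chi = c0 / gi          # observer-frame absorption coefficient
        eta = gi**2 * e0       # observer-frame emission coefficient
        I += (-chi * I + eta / gi**3) * ds
    return I
```

For a uniform static medium ($g=1$) this converges to the familiar source-function limit $I\to\eta_0/\chi_0$ as the optical depth grows.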
---
abstract: 'We study the $\beta$ analogue of the nonintersecting Poisson random walks. We derive a stochastic differential equation for the Stieltjes transform of the empirical measure process, which can be viewed as a dynamical version of Nekrasov’s equation in [@MR3668648 Section 4]. We find that the empirical measure process converges weakly in the space of càdlàg measure-valued processes to a deterministic process, characterized by the quantized free convolution, as introduced in [@MR3361772]. For suitable initial data, we prove that the rescaled empirical measure process converges weakly in the space of distributions acting on analytic test functions to a Gaussian process. The means and the covariances are universal, and coincide with those of $\beta$-Dyson Brownian motions with the initial data constructed by the Markov-Krein correspondence. Our proof relies on integrable features of the generators of the $\beta$-nonintersecting Poisson random walks, the method of characteristics, and a coupling technique for Poisson random walks.'
address: |
Harvard University\
E-mail: jiaoyang@math.harvard.edu
author:
- Jiaoyang Huang
bibliography:
- 'References.bib'
title: '$\beta$-Nonintersecting Poisson Random Walks: Law of Large Numbers and Central Limit Theorems'
---
Introduction
============
$\beta$-nonintersecting Poisson random walks
--------------------------------------------
Let $\tilde{{\bm{x}}}(t)=(\tilde x_1(t),\tilde x_2(t),\cdots, \tilde x_n(t))$ be the continuous-time *Poisson random walk* on ${\mathbb{Z}}_{{\geqslant}0}^n$, i.e., each particle independently jumps to the neighboring right site with rate $n$. The generator of $\tilde{{\bm{x}}}(t)$ is given by $$\begin{aligned}
\tilde {{{\mathcal}L}}^n f(\tilde {{\bm{x}}})=\sum_{i=1}^nn\left(f(\tilde{{\bm{x}}}+{\bm{e}}_i)-f(\tilde {{\bm{x}}})\right),\end{aligned}$$ where $\{{\bm{e}}_i\}_{1{\leqslant}i{\leqslant}n}$ is the standard vector basis of ${{\mathbb R}}^n$. The process $\tilde {{\bm{x}}}(t)$ conditioned on its particles never colliding is the *nonintersecting Poisson random walk*, denoted by ${{\bm{x}}}(t)=(x_1(t),x_2(t),\cdots,x_n(t))$. The nonintersecting event has probability zero, and therefore the conditioning needs to be defined through a limiting procedure, which is performed in [@MR1887625]. The nonintersecting Poisson random walk is a continuous time Markov process on $$\begin{aligned}
{\mathbb{W}}^n_1=\{({\lambda}_1+(n-1),{\lambda}_2+(n-2),\cdots,{\lambda}_n): ({\lambda}_1,{\lambda}_2,\cdots,{\lambda}_n)\in {\mathbb{Z}}_{{\geqslant}0}^n, {\lambda}_1{\geqslant}{\lambda}_2{\geqslant}\cdots{\geqslant}{\lambda}_n{\geqslant}0\},\end{aligned}$$ with generator $$\begin{aligned}
{{{\mathcal}L}}_1^n f({{\bm{x}}})=n\sum_{i=1}^n\frac{V({{\bm{x}}}+{\bm{e}}_i)}{V({{\bm{x}}})}\left(f({{\bm{x}}}+{\bm{e}}_i)-f({{\bm{x}}})\right)=n\sum_{i=1}^{n}\left(\prod_{j:j\neq i}\frac{x_i-x_j+1}{x_i-x_j}\right)\left(f({{\bm{x}}}+{\bm{e}}_i)-f({{\bm{x}}})\right),\end{aligned}$$ where $V({{\bm{x}}})=\prod_{1{\leqslant}i<j{\leqslant}n}(x_i-x_j)$ is the Vandermonde determinant in the variables $x_1,x_2,\cdots,x_n$.
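The generator ${{{\mathcal}L}}_1^n$ translates directly into a simulation: each particle jumps right with rate $n\prod_{j\neq i}(x_i-x_j+1)/(x_i-x_j)$, which vanishes exactly when the jump would cause a collision. A minimal Gillespie-type sketch (function names are mine):

```python
import random

def jump_rates(x):
    """Jump rates of the nonintersecting Poisson random walk from the
    generator above: particle i jumps one step right with rate
        n * prod_{j != i} (x_i - x_j + 1) / (x_i - x_j),
    which is zero whenever the jump would land on another particle."""
    n = len(x)
    rates = []
    for i in range(n):
        r = float(n)
        for j in range(n):
            if j != i:
                r *= (x[i] - x[j] + 1) / (x[i] - x[j])
        rates.append(r)
    return rates

def gillespie_step(x, rng=random):
    """One jump of the walk (x is a list in W^n_1): exponential waiting time
    with the total rate, then move one particle chosen in proportion to its
    rate."""
    rates = jump_rates(x)
    total = sum(rates)
    dt = rng.expovariate(total)
    u, acc = rng.random() * total, 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= u:
            return x[:i] + [x[i] + 1] + x[i + 1:], dt
    return x, dt
```

From the densely packed configuration $(2,1,0)$ only the top particle can move, with rate $3\cdot 2\cdot\tfrac{3}{2}=9$.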
If instead of the Poisson random walk, we start from $n$ independent Brownian motions with mean $0$ and variance $t/n$, then the same conditioning leads to the celebrated *Dyson Brownian motion* with $\beta=2$, which describes the stochastic evolution of eigenvalues of a Hermitian matrix under independent Brownian motion of its entries. For general $\beta>0$, the *$\beta$-Dyson Brownian motion* ${{\bm{y}}}(t)=(y_1(t), y_2(t),\cdots, y_n(t))$ is a diffusion process solving $$\begin{aligned}
\label{e:DBM1}
{{\rm d}}y_i(t)=\sqrt{\frac{2}{\beta n}}{{\rm d}}{{\mathcal B}}_i(t)+\frac{1}{n}\sum_{j\neq i}\frac{1}{y_i(t)-y_j(t)}{{\rm d}}t,\quad i=1,2,\cdots, n,\end{aligned}$$ where $\{({{\mathcal B}}_1(t), {{\mathcal B}}_2(t),\cdots, {{\mathcal B}}_n(t))\}_{t{\geqslant}0}$ are independent standard Brownian motions, and $\{{{\bm{y}}}(t)\}_{t>0}$ lives on the Weyl chamber ${\mathbb{W}}^n=\{({\lambda}_1,{\lambda}_2,\cdots,{\lambda}_n): {\lambda}_1>{\lambda}_2>\cdots>{\lambda}_n\}$.
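The SDE (\[e:DBM1\]) can be discretised in the obvious way. A minimal Euler–Maruyama sketch (function name and step size are mine; for small interparticle gaps the naive scheme can break the ordering, so the time step must resolve the drift):

```python
import math
import random

def dyson_step(y, dt, beta=2.0, rng=random):
    """One Euler-Maruyama step of the beta-Dyson Brownian motion above:
        dy_i = sqrt(2/(beta n)) dB_i + (1/n) sum_{j != i} dt/(y_i - y_j)."""
    n = len(y)
    sigma = math.sqrt(2.0 / (beta * n))
    drift = [sum(1.0 / (y[i] - y[j]) for j in range(n) if j != i) / n
             for i in range(n)]
    noise = [sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) for _ in range(n)]
    return [y[i] + drift[i] * dt + noise[i] for i in range(n)]
```

As $\beta\to\infty$ the noise term vanishes and the particles follow the deterministic Coulomb repulsion, which gives a quick check of the drift.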
The nonintersecting Poisson random walk can be viewed as a discrete version of the Dyson Brownian motion with $\beta=2$. For general $\beta>0$, we fix $\theta=\beta/2$ and define the *$\beta$-nonintersecting Poisson random walk*, denoted by ${{\bm{x}}}(t)=(x_1(t),x_2(t),\cdots,x_n(t))$, as a continuous time Markov process on $$\begin{aligned}
\label{e:defWtheta}
{\mathbb{W}}^n_\theta=\{({\lambda}_1+(n-1)\theta,{\lambda}_2+(n-2)\theta,\cdots,{\lambda}_n): ({\lambda}_1,{\lambda}_2,\cdots,{\lambda}_n)\in {\mathbb{Z}}_{{\geqslant}0}^n, {\lambda}_1{\geqslant}{\lambda}_2{\geqslant}\cdots{\geqslant}{\lambda}_n{\geqslant}0\},\end{aligned}$$ with generator $$\begin{aligned}
\label{e:generator}
{{{\mathcal}L}}^n_\theta f({{\bm{x}}})=\theta n\sum_{i=1}^n\frac{V({{\bm{x}}}+{\bm{e}}_i)}{V({{\bm{x}}})}\left(f({{\bm{x}}}+{\bm{e}}_i)-f({{\bm{x}}})\right)=\theta n\sum_{i=1}^{n}\left(\prod_{j:j\neq i}\frac{x_i-x_j+\theta}{x_i-x_j}\right)\left(f({{\bm{x}}}+{\bm{e}}_i)-f({{\bm{x}}})\right).\end{aligned}$$
In the beautiful article [@MR3418747], Gorin and Shkolnikov constructed certain multilevel discrete Markov chains whose top level dynamics coincide with the $\beta$-nonintersecting Poisson random walks. However, we use slightly different notation, and speed up time by a factor of $n$. In [@MR3418747], the $\beta$-nonintersecting Poisson random walks are constructed as stochastic dynamics on Young diagrams. We recall that a Young diagram $\bm\lambda$ is a non-increasing sequence of integers $$\begin{aligned}
\bm\lambda=({\lambda}_1,{\lambda}_2,{\lambda}_3, \cdots), \quad \lambda_1{\geqslant}\lambda_2{\geqslant}{\lambda}_3{\geqslant}\cdots{\geqslant}0.\end{aligned}$$ We denote $\ell_{{\bm{\lambda}}}$ the number of non-empty rows in ${\bm{\lambda}}$, i.e. ${\lambda}_{\ell_{{\bm{\lambda}}}}>0, {\lambda}_{\ell_{{\bm{\lambda}}}+1}={\lambda}_{\ell_{{\bm{\lambda}}}+2}=\cdots =0$, and $|{\bm{\lambda}}|=\sum_{i=1}^{\ell_{{\bm{\lambda}}}}{\lambda}_i$ the number of boxes in ${\bm{\lambda}}$. Let ${\mathbb{Y}}^n$ denote the set of all Young diagrams with at most $n$ rows, i.e. $\ell_{{\bm{\lambda}}}{\leqslant}n$. A box $\Box\in {\bm{\lambda}}$ is a pair of integers, $$\begin{aligned}
\Box=(i,j)\in {\bm\lambda}, \text{ if and only if } 1{\leqslant}i{\leqslant}\ell_\lambda, 1{\leqslant}j{\leqslant}\lambda_i.\end{aligned}$$ We denote ${\bm{\lambda}}'$ the transposed diagram of $\bm\lambda$, defined by $$\begin{aligned}
\lambda_j'=|\{i: 1{\leqslant}j{\leqslant}\lambda_i\}|, \quad 1{\leqslant}j{\leqslant}{\lambda}_1.\end{aligned}$$ For a box $\Box=(i,j)\in {\bm{\lambda}}$, its arm $a_\Box$, leg $l
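The diagram operations just defined are easy to compute. A small sketch using $1$-indexed boxes (the transpose follows the definition above; the arm and leg lengths use the standard conventions $a_\Box=\lambda_i-j$ and $l_\Box=\lambda'_j-i$, assumed here since the defining line is cut off):

```python
def transpose(lam):
    """Transposed Young diagram: lam'_j = #{i : lam_i >= j}, as defined above.
    lam is a list of non-increasing positive row lengths."""
    if not lam:
        return []
    return [sum(1 for li in lam if li >= j) for j in range(1, lam[0] + 1)]

def arm(lam, i, j):
    """Arm length of box (i, j), 1-indexed: a = lam_i - j (standard convention)."""
    return lam[i - 1] - j

def leg(lam, i, j):
    """Leg length of box (i, j): l = lam'_j - i (standard convention)."""
    return transpose(lam)[j - 1] - i
```

For $\bm\lambda=(4,2,1)$ one has $\bm\lambda'=(3,2,1,1)$, and the box $(1,2)$ has arm $2$ and leg $1$.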
---
abstract: 'We construct a class of anomaly-free supersymmetric $U(1)^\prime$ models that are characterized by family non-universal $U(1)^\prime$ charges motivated from $E_6$ embeddings. The family non-universality arises from an interchange of the standard roles of the two $SU(5)$ ${\bf 5}^*$ representations within the ${\bf 27}$ of $E_6$ for the third generation. We analyze $U(1)^\prime$ and electroweak symmetry breaking and present the particle mass spectrum. The models, which include additional Higgs multiplets and exotic quarks at the TeV scale, result in specific patterns of flavor-changing neutral currents in the $b\to s$ transitions that can accommodate the presently observed deviations in this sector from the SM predictions.'
---
EFI-09-25\
MADPH-09-1546\
[**Phenomenological Implications of Supersymmetric Family Non-universal $U(1)^\prime$ Models**]{}

[Lisa L. Everett$^{a}$, Jing Jiang$^{a}$, Paul G. Langacker$^{b}$ and Tao Liu$^{c}$]{}
Introduction
============
Extensions of the Standard Model of particle physics (SM) with an additional anomaly-free gauged $U(1)^\prime$ symmetry broken at the TeV scale are arguably some of the most well-motivated candidates for new physics (for a review, see [@Langacker:2008yv]). Such symmetries are theoretically motivated, as they represent the simplest augmentations of the SM gauge sector and are ubiquitous within string and/or grand unified theories. While the phenomenology of such $Z^\prime$ gauge bosons depends on the details of the couplings of the $Z^\prime$ to the SM fermions, current limits from direct and indirect searches indicate typical lower bounds of order $800-900$ GeV on the $Z^\prime$ mass and an upper bound of $\sim 10^{-3}$ on the $Z-Z^\prime$ mixing angle [@Erler:2009jh]. For a reasonable range of couplings, the presence of such TeV scale $Z^\prime$ bosons should be easily discernable at present and forthcoming colliders such as the Tevatron and the Large Hadron Collider (LHC).
Within the context of supersymmetric theories, a plethora of $U(1)^\prime$ models have been proposed, including scenarios motivated by grand unified theories (GUTs) such as $SO(10)$ and $E_6$ and scenarios motivated from string compactifications of heterotic and/or Type II theories (see [@Langacker:2008yv] for a review). Recent models also include scenarios in which the $U(1)^\prime$ mediates supersymmetry breaking [@Langacker:2007ac], plays a role in the generation of neutrino masses [@durmus] and/or spontaneous $R$-parity violation [@pavel], or provides a portal to a hidden/secluded sector (for reviews, see [@Langacker:2009im; @Goodsell:2009xc]). Though the details of the $U(1)^\prime$ charge assignments are model-dependent, generically the cancellation of $U(1)^\prime$ anomalies requires an enlargement of the matter content to include SM exotics and SM singlets with nontrivial $U(1)^\prime$ charges. In these theories, the SM singlets also typically play an important role in triggering the low-scale breaking of the $U(1)^\prime$ gauge symmetry.
In most models of this type, the $U(1)^\prime$ charges of the quarks and leptons are family universal. Though this feature is desirable for the first and second generations due to the strong constraints from flavor-changing neutral currents (FCNCs), there is still room for departures from family universality for the charges of the third generation. In fact, this often occurs in string constructions if the families result from different embeddings (see e.g., [@Cleaver:1998gc; @Blumenhagen:2005mu]). Indeed, though many of the results from the $B$ factories have indicated a strong degree of consistency with the Cabibbo-Kobayashi-Maskawa (CKM) predictions of the SM, there are hints of non-SM FCNC patterns within the $b\to s$ transitions for both $\Delta B=1$ and $\Delta B=2$ processes at the level of a few standard deviations [@Bona:2008jn]. Of the many options for new physics models that can explain this discrepancy, family non-universal $U(1)^\prime$ models are interesting in that they are theoretically well-motivated scenarios that lead to tree-level FCNC, as opposed to scenarios in which the new physics contributions are loop-suppressed [@Langacker:2000ju]. A recent model-independent analysis of $Z^\prime$-mediated FCNC in the $b\to s$ transitions showed that this general framework can accommodate the data [@Barger:2009eq; @Barger:2009qs]. (Related analyses include [@zprimefcnc; @zprimefcnc2]). However, it is optimal to consider the bounds on specific family non-universal $U(1)^\prime$ models in addition to the fully model-independent results.
Our purpose in this paper is to construct and analyze supersymmetric anomaly-free family non-universal $U(1)'$ models (which we will denote as NUSSM models). Our strategy in building this class of NUSSM models is to exploit the well-known fact that in $E_6$ models, there are two options for embedding the down quarks and lepton doublets in the ${\bf 5^*}$ representation of $SU(5)$, which is related to the fact that the down-type Higgs and the lepton doublets have the same gauge quantum numbers. By choosing one embedding for the first and second generations and the alternative embedding for the third generation, we can obtain anomaly-free models in which the additional family non-universal $U(1)^\prime$ is given by a particular linear combination of the usual $U(1)_\psi$ and $U(1)_\chi$ of $E_6$-inspired models.
This paper is structured as follows. We begin by outlining our basic procedure and presenting the resulting classes of anomaly-free family non-universal $U(1)^\prime$ models. In the following section, we analyze the gauge symmetry breaking and comment on general features of the mass spectrum. We next turn to an analysis of the implications of these models for FCNC in the $b\to s $ transitions, then provide our concluding remarks.
$E_6$-Motivated Family Non-universal $U(1)^\prime$ Models (NUSSMs)
==================================================================
\[table1\]
In $U(1)^\prime$ models, the cancellation of gauge anomalies generally implies that additional fermions are present in the theory (see e.g., [@Langacker:2008yv]). To motivate the presence of these additional fermions and construct simple anomaly-free family non-universal models, our approach is to exploit the properties of $E_6$ embeddings of the SM fermions and Higgs fields in grand unified theories. Recall that in $E_6$ models, the SM particles are embedded in the fundamental ${\bf 27}$ representations. With respect to the two-step breaking scheme of $E_6$ to its $SO(10)$ and $SU(5)$ subgroups $$E_6\rightarrow SO(10)\times U(1)_\psi \rightarrow SU(5)\times U(1)_\chi \times U(1)_\psi,$$ the ${\bf 27}$ has the decomposition $${\bf 27}= {\bf 16}+{\bf 10}+{\bf 1}=({\bf 10}+{\bf 5^*}+{\bf 1})+({\bf 5}+{\bf 5^*})+{\bf 1},$$ with respect to the representations of $SO(10)$ and $SU(5)$, respectively. Hence, the ${\bf 27}$ has two ${\bf 5^*}$ multiplets; these representations are used to embed the down-type $SU(2)$-singlet quarks with the lepton doublets and exotic $SU(2)$-singlet quarks with down-type Higgs doublets. A standard choice for model-building is to have the down-type quarks and lepton doublets of all three SM families in the ${\bf 5^*}$ of the ${\bf 16}$, though models with the SM down-type quarks and lepton doublets in the other ${\bf 5^*}$ have also been considered in the literature [@alternative; @Athron:2009bs]. We will assign the down-type quark singlets and lepton doublets of the first and second generations to be in the ${\bf 5^*}$ of the ${\bf 16}$, and the associated particles of the third generation to be in the ${\bf 5^*}$ of the ${\bf 10}$, as shown in Table \[table1\]. 
The matter content of these theories thus includes the following fields: (i) the SM first and second families $\{ \Psi_{10}^i, \Psi_{5^*}^i, \Psi_1^i \}$, Higgs plus exotic fields $\{\sigma^i_5,\sigma^i_{5^*}\}$, and singlets $\sigma_0^i$ ($i=1,2$ is a family index), and (ii) the SM third family $\{ \Phi_{10}, \Sigma_{5^*}, \Sigma_0\}$, Higgs and exotics $\{ \Sigma_5,\Phi_{5^*} \}$, and singlet $\Phi_1$. The Higgs sector of the theory thus generically has multiple Higgs doublets and singlets beyond those of the MSSM.[^1]
---
abstract: 'We present astrochemical photo-dissociation region models in which cosmic ray attenuation has been fully coupled to the chemical evolution of the gas. We model the astrochemical impact of cosmic rays, including those accelerated by protostellar accretion shocks, on molecular clouds hosting protoclusters. Our models with embedded protostars reproduce observed ionization rates. We study the imprint of cosmic ray attenuation on ions for models with different surface cosmic ray spectra and different star formation efficiencies. We find that abundances, particularly those of ions, are sensitive to the treatment of cosmic rays. We show that the “classic” treatment of cosmic rays underpredicts the column densities of ions by an order of magnitude. We also test two common chemistry approximations used to infer ionization rates. We conclude that the approximation based on the H$_3^+$ abundance underpredicts the ionization rate except in regions where the cosmic rays dominate the chemistry. Our models suggest the chemistry in dense gas will be significantly impacted by the increased ionization rates, leading to a reduction in molecules such as NH$_3$ and causing H$_2$-rich gas to become \[C II\] bright.'
author:
- 'Brandt A.L. Gaches'
- 'Stella S.R. Offner'
- 'Thomas G. Bisbas'
bibliography:
- 'crchem1.bib'
title: 'The Astrochemical Impact of Cosmic Rays in Protoclusters I: Molecular Cloud Chemistry'
---
Introduction {#sec:intro}
============
Molecular cloud dynamics and chemistry are sensitive to the ionization fraction. The chemistry of molecular clouds is dominated by ion-neutral reactions [@watson1976] and thus controlled by the ionization fraction. The gas (kinetic) temperature of a typical molecular cloud with an average H-nucleus number density of $n\approx10^3\,{\rm cm}^{-3}$ is approximately 10 K for cosmic-ray ionization rates $\zeta\lesssim10^{-16}\,{\rm s}^{-1}$ [@bisbas2015; @bisbas2017], thus rendering neutral-neutral reactions inefficient. Ionization in molecular clouds is produced in three different ways: UV radiation, cosmic rays (CRs), and X-ray radiation. Ultraviolet radiation, from external O- and B-type stars and internal protostars, does not penetrate very far into the cloud due to absorption by dust. However, cosmic rays, which are relativistic charged particles, travel much further into molecular clouds and dominate the ionization fraction when $A_V \geq 5\,{\rm mag}$ [@mckee1989; @strong2007; @grenier2015]. CR-driven chemistry is initiated by ionized molecular hydrogen, ${\rm H}_2^+$ [@dalgarno2006]. The ion-neutral chemistry rapidly follows: $${\rm CR + H_2 \rightarrow H_2^+ + e^- + CR'}$$ $${\rm H_2^+ + H_2 \rightarrow H_3^+ + H,}$$ where CR$'$ is the same particle as CR but with a lower energy. The ejected electron from the first reaction can have an energy greater than the ionization potential of H$_2$ and thus cause further ionization. Once H$_3^+$ forms, more complex chemistry follows, thereby creating a large array of hydrogenated ions: $${\rm X + H_3^+ \rightarrow [XH]^+ + H_2}.$$
Both HCO$^+$ and N$_2$H$^+$, important molecules used to map the dense gas in molecular clouds, form this way, with X being CO and N$_2$, respectively. These species are also used to constrain the cosmic ray ionization rate (CRIR) [e.g., @caselli1998; @ceccarelli2014]. OH$^+$ and H$_n$O$^+$ are also formed this way through H$_3^+$ and H$^+$ [@hollenbach2012]. In addition, at low column densities ($A_V<1\,{\rm mag}$), which are typical of the boundaries of molecular clouds, the non-thermal motions between ions and neutrals may overcome the energy barrier of the reaction $${\rm C^+ + H_2 \rightarrow CH^+ + H,}$$ leading to an enhancement of the CO column density [@federman1996; @visser2009] and a shift of the H[i]{}-to-H$_2$ transition to higher $A_V$ [@bisbas2019].
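The logic connecting the H$_3^+$ abundance to the CRIR can be made concrete with a toy steady-state balance. The sketch below assumes every CR ionization of H$_2$ promptly yields H$_3^+$ and that destruction is dominated by proton transfer to CO; the function names, the CO abundance $x_{\rm CO}$, and the rate coefficient $k_{\rm CO}$ are illustrative assumptions, and electron recombination is neglected:

```python
def h3p_steady_state(zeta, x_co=1e-4, k_co=2e-9):
    """Steady-state H3+ number density (cm^-3) from the reaction chain above
    in a simplified limit:
        zeta * n(H2) = k_co * n(CO) * n(H3+),  with n(CO) = x_co * n(H2).
    Note that n(H3+) is then independent of n(H2).  x_co (CO abundance) and
    k_co (proton-transfer rate coefficient, cm^3 s^-1) are assumed values."""
    return zeta / (k_co * x_co)

def zeta_from_h3p(n_h3p, x_co=1e-4, k_co=2e-9):
    """Invert the same balance to infer the CRIR (s^-1) from an observed
    H3+ density -- the spirit of the H3+-based approximation."""
    return n_h3p * k_co * x_co
```

For $\zeta=10^{-17}\,{\rm s}^{-1}$ this gives $n({\rm H}_3^+)=5\times10^{-5}\,{\rm cm}^{-3}$; the point of such toy balances is that they fail wherever the neglected destruction channels matter, which is one reason the approximations tested in this paper can misestimate the CRIR.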
The ionization fraction controls the coupling of the magnetic fields to the gas, influencing non-ideal magnetohydrodynamic (MHD) effects such as ambipolar diffusion [@mckee2007]. These non-ideal effects can play a significant role in the evolution of the cores and disks of protostars. On galactic scales, numerical simulations have shown that CRs can help drive large outflows and winds out of the galaxy [e.g., @girichidis2016]. Our study focuses on the impact of CRs on Giant Molecular Cloud scales, which are typically not fully resolved in such simulations.
There have been a plethora of studies modeling the impact of CRs on chemistry and thermal balance [i.e., @caselli1998; @bell2006; @meijerink2011; @bayet2011; @clark2013; @bisbas2015]. However, in these studies, and the vast majority of astrochemical models, the CRIR is held constant throughout the cloud, despite the recognition that CRs are attenuated and modulated while traveling through molecular gas [@schlickeiser2002; @padovani2009; @schlickeiser2016; @padovani2018]. Galactic-CRs, thought to be accelerated in supernova remnants or active galactic nuclei, are affected by hadronic and Coulombic energy losses and screening mechanisms that reduce the flux with increasing column density [@strong2001; @moskalenko2005; @evoli2017]. The modulation of CRs has not previously been included within astrochemical models of molecular clouds due to the difficulty in calculating the attenuation and subsequent decrease in the CRIR [@wakelam2013; @cleeves2014].
Given that CRs are thought to be attenuated, it is expected that the ionization rate should decline within molecular clouds. However, recent observations do not universally show a lower ionization rate. [@favre2017] inferred the CRIR towards 9 protostars and found a CRIR consistent with the rate inferred for galactic CRs. The OMC-2 FIR 4 protocluster, hosting a bright protostar, is observed to have a CRIR 1000 times higher than the expected rate from galactic CRs [@ceccarelli2014; @fontani2017; @favre2018]. [@gaches2018b] show that this system can be modelled assuming the central source is accelerating protons and electrons within the accretion shocks on the protostar’s surface. In general, accreting, embedded protostars may accelerate enough CRs to cancel the effect of the attenuation of external CRs at high column densities, producing a nearly constant ionization rate throughout the cloud [@padovani2016; @gaches2018b].
Typical accretion shocks and shocks generated by protostellar jets satisfy the physical conditions necessary to accelerate protons and electrons [@padovani2016; @gaches2018b]. Accretion shocks in particular are a promising source of CRs since they are strong, with velocities exceeding 100 km/s and temperatures of millions of degrees Kelvin [@hartmann2016]. [@gaches2018b] calculated the spectrum of accelerated protons in protostellar accretion shocks and the attenuation through the natal core assuming that the CRs free-stream outwards. These models suggest that clusters of a few hundred protostars accelerate enough CRs into the surrounding cloud to exceed the ionization rate from Galactic CRs.
In this study, we explore the effects of protostellar CRs on molecular cloud chemistry by employing the model of [@gaches2018b]. We implement an approximation for CR attenuation into the astrochemistry code [3d-pdr]{} [@bisbas2012] to account for CR ionization rate gradients. We investigate the signatures of a spatially varying ionization rate. We further explore the impact of protostellar CR sources and their observable signatures.
The layout of the paper is as follows. In §\[sec:methods\] we present the CR and protostellar models and describe the implementation of CR attenuation into [3d-pdr]{}. We discuss our results in §\[sec:res\]. Finally, in §\[sec:disc\] we create observational predictions and compare them to observations.
Methods {#sec:methods}
=======
Protocluster Model {#sec:proto_model}
------------------
We generate protoclusters following the method of [@gaches2018a], where the model cluster is parameterized by the number of stars and gas surface density, N$_*$ and $\Sigma_{\rm cl}$, respectively. These parameters are connected to the star formation efficiency $\varepsilon_g = M_*/M_{\rm gas}$, where M$_{\rm gas}$ is related to $\Sigma_{\rm cl}$ following [@mckee2003] by $\Sigma_{\rm cl} = \frac{M_{\rm gas}}{\pi R^2}$, where the cloud radius, $R$, is determined from the density distribution (see §\[sec:dens\]). We model protoclusters with surface densities in the range $0.1 \leq \Sigma_{\rm cl} \leq 10$ g cm$^{-2}$ and with a number of stars in the range $10^2 \leq N_* \leq 10^4$. In this parameter space, we always consider $\varepsilon_g \leq 25\%$.
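The bookkeeping connecting $(N_*, \Sigma_{\rm cl}, \varepsilon_g)$ to a cloud size can be sketched in a few lines. The mean stellar mass, the function name, and the choice of solar-mass/parsec units are assumptions of this illustration (the paper quotes $\Sigma_{\rm cl}$ in g cm$^{-2}$):

```python
import math

def cloud_radius(n_star, sigma_cl, eps_g, mbar=0.5):
    """Cloud radius implied by Sigma_cl = M_gas / (pi R^2), with
    M_* = n_star * mbar (mbar: assumed mean stellar mass, Msun) and
    M_gas = M_* / eps_g.  sigma_cl is taken in Msun/pc^2 here (an assumed
    unit choice); returns R in pc."""
    m_gas = n_star * mbar / eps_g
    return math.sqrt(m_gas / (math.pi * sigma_cl))
```

For example, $N_*=10^3$, $\varepsilon_g=0.1$ and $\bar m=0.5\,M_\odot$ give $M_{\rm gas}=5000\,M_\odot$, from which the radius follows directly.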
We generate $N_{\rm cl} = 20$ protoclusters for each point in the parameter space and adopt the average CR spectrum for the chemistry modelling. We use the Tapered Turbulent Core (TTC) accretion history model [@mckee2003
---
abstract: 'The XMM-OM instrument extends the spectral coverage of the XMM-Newton observatory into the ultraviolet and optical range. It provides imaging and time-resolved data on targets simultaneously with observations in the EPIC and RGS. It also has the ability to track stars in its field of view, thus providing an improved post-facto aspect solution for the spacecraft. An overview of the XMM-OM and its operation is given, together with current information on the performance of the instrument.'
author:
- 'K. O. Mason, A. Breeveld, R. Much, M. Carter, F. A. Cordova, M. S. Cropper, J. Fordham, H. Huckle, C. Ho, H. Kawakami, J. Kennea, T. Kennedy, J. Mittaz, D. Pandel, W. C. Priedhorsky, T. Sasseen, R. Shirey, P. Smith, J.-M. Vreux'
title: 'The XMM-Newton Optical/UV Monitor Telescope'
---
Introduction
============
The Optical/UV Monitor Telescope (XMM-OM) is a standalone instrument that is mounted on the mirror support platform of XMM-Newton (Jansen et al. 2001) alongside the X-ray mirror modules. It provides coverage between 170 nm and 650 nm of the central 17 arc minute square region of the X-ray field of view (FOV), permitting routine multiwavelength observations of XMM targets simultaneously in the X-ray and ultraviolet/optical bands. Because of the low sky background in space, XMM-OM is able to achieve impressive imaging sensitivity compared to a similar instrument on the ground, and can detect a $B=23.5$ magnitude A-type star in a 1000 s integration in “white” light (6 sigma). It is equipped with a set of broadband filters for colour discrimination. The instrument also has grisms for low-resolution spectroscopy, and an image expander (Magnifier) for improved spatial resolution of sources. Fast timing data can be obtained on sources of interest simultaneously with image data over a larger field.
In the following sections we give an overview of the instrument followed by an account of its operation in orbit and the instrument characteristics.
Instrument overview
===================
The XMM-OM consists of a Telescope Module and a separate Digital Electronics Module, of which there are two identical units for redundancy (see Fig. 1). The Telescope Module contains the telescope optics and detectors, the detector processing electronics and power supply. There are two distinct detector chains, again for redundancy. The Digital Electronics Module houses the Instrument Control Unit, which handles communications with the spacecraft and commanding of the instrument, and the Data Processing Unit, which pre-processes the data from the instrument before it is telemetered to the ground.
Optics
------
The XMM-OM uses a Ritchey Chrétien telescope design modified by field flattening optics built into the detector window. The f/2 primary mirror has a 0.3 m diameter and feeds a hyperboloid secondary which modifies the f-ratio to 12.7. A $45^{\circ}$ flat mirror located behind the primary can be rotated to address one of the two redundant detector chains. In each chain there is a filter wheel and detector system. The filter wheel has 11 apertures, one of which is blanked off to serve as a shutter, preventing light from reaching the detector. Another seven filter locations house lenticular filters, six of which constitute a set of broad band filters for colour discrimination in the UV and optical between 180 nm and 580 nm (see Table 2 for a list of filters and their wavelength bands). The seventh is a “white light” filter which transmits light over the full range of the detector to give maximum sensitivity to point sources. The remaining filter positions contain two grisms, one optimised for the UV and the other for the optical range, and a $\times$4 field expander (or Magnifier) to provide high spatial resolution in a 380–650 nm band of the central portion of the FOV.
Detector
--------
The detector is a microchannelplate-intensified CCD (Fordham et al. 1992). Incoming photons are converted into photoelectrons in an S20 photocathode deposited on the inside of the detector window. The photoelectrons are proximity focussed onto a microchannelplate stack, which amplifies the signal by a factor of a million, before the resulting electrons are converted back into photons by a P46 phosphor screen. Light from the phosphor screen is passed through a fibre taper which compensates for the difference in physical size between the microchannelplate stack and the fast-scan CCD used to detect the photons. The resulting photon splash on the CCD covers several neighbouring CCD pixels (with a FWHM of approximately 1.1 CCD pixels, if fitted with a Gaussian). The splash is centroided, using a 3×3 CCD pixel subarray, to yield the position of the incoming photon to a fraction of a CCD pixel (Kawakami et al. 1994). An active area of 256×256 CCD pixels is used, and incoming photon events are centroided to 1/8th of a CCD pixel to yield 2048×2048 pixels on the sky, each 0.4765 arc seconds square. In this paper, to avoid confusion, CCD pixels (256×256 in the FOV) will be referred to explicitly, while a pixel refers to a centroided pixel (2048×2048 in the FOV). As described later, images are normally taken with pixels binned 2×2 or at full sampling.
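The centroiding step can be sketched as a centre-of-mass estimate over the 3×3 CCD-pixel subarray, quantized to 1/8 of a CCD pixel. This is a minimal illustration only: the flight electronics implement their own algorithm, and the function below (name, interface, centre-of-mass estimator) is an assumption, not the actual XMM-OM code.

```python
import numpy as np

def centroid_splash(ccd, row, col, subpix=8):
    """Centre-of-mass centroid of the photon splash in the 3x3 CCD-pixel
    subarray around (row, col), quantized to 1/subpix of a CCD pixel.
    Returns sky-pixel coordinates (2048x2048 grid for a 256x256 CCD area)."""
    patch = ccd[row - 1:row + 2, col - 1:col + 2].astype(float)
    dy, dx = np.mgrid[-1:2, -1:2]          # offsets relative to the centre
    cy = (patch * dy).sum() / patch.sum()
    cx = (patch * dx).sum() / patch.sum()
    return int(round((row + cy) * subpix)), int(round((col + cx) * subpix))
```

A symmetric splash centred on CCD pixel (10, 12) thus maps to sky pixel (80, 96) at 1/8-pixel sampling.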
The CCD is read out rapidly (every 11 ms if the full CCD format is being used) to maximise the coincidence threshold (see sect. 5.2).
Telescope mechanical configuration
----------------------------------
The XMM-OM telescope module consists of a stray light baffle and a primary and secondary mirror assembly, followed by the detector module, detector processing electronics and telescope module power supply unit. The separation of the primary and secondary mirrors is critical to achieving the image quality of the telescope. The separation is maintained to a level of 2 $\mu$m by invar support rods that connect the secondary spider to the primary mirror mount. Heat generated by the detector electronics is transferred to the baffle by heat pipes spaced azimuthally around the telescope, and radiated into space. In this way the telescope module is maintained in an isothermal condition, at a similar temperature to the mirror support platform. This minimizes changes in the primary/secondary mirror separation due to thermal stresses in the invar rods. Fine focussing of the telescope is achieved through two sets of commandable heaters. One set of heaters is mounted on the invar support rods. When these heaters are activated, they cause the rods to expand, separating the primary and secondary mirrors. A second set of heaters on the secondary mirror support brings the secondary mirror closer to the primary when activated. The total range of fine focus adjustment available is $\pm10\mu$m.
The filter wheel is powered by a stepper motor, which drives the wheel in one direction only. The filters are arranged taking into account the need to distribute the more massive elements (grisms, Magnifier) uniformly across the wheel.
Digital Electronics Module
--------------------------
There are two identical Digital Electronics Modules (DEM) serving respectively the two redundant detector chains. These units are mounted on the mirror support platform, separate from the telescope module. Each DEM contains an Instrument Control Unit (ICU) and a Digital Processing Unit (DPU). The ICU commands the XMM-OM and handles communications between the XMM-OM and the spacecraft.
The DPU is an image processing computer that digests the raw data from the instrument and applies a non-destructive compression algorithm before the data are telemetered to the ground via the ICU. The DPU supports two main science data collection modes, which can be used simultaneously. In Fast Mode, data from a small region of the detector are assembled into time bins. In Image Mode, data from a large region are extracted to create an image. These modes are described in more detail in the next sect. The DPU autonomously selects up to 10 guide stars from the full XMM-OM image and monitors their position in detector coordinates at intervals that are typically set in the range 10–20 seconds, referred to as a tracking frame. These data provide a record of the drift of the spacecraft during the observation accurate to $\sim 0.1$ arc second. The drift data are used within the DPU to correct Image Mode data for spacecraft drift (see sect. 5.5).
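The use of tracking frames to correct Image Mode data for spacecraft drift can be illustrated with a simplified sketch. The actual DPU algorithm and its data formats are not described here, so `drift_correct`, the event/tracking tuple layouts, and the step-wise (most-recent-frame) drift application are all assumptions.

```python
def drift_correct(events, tracking):
    """Shift each photon event (t, x, y) by the spacecraft drift (dx, dy)
    measured from guide stars at the most recent tracking frame.
    `tracking` is a time-ordered list of (t_frame, dx, dy)."""
    corrected = []
    for t, x, y in events:
        dx = dy = 0.0
        for tf, fx, fy in tracking:
            if tf <= t:          # latest tracking frame at or before the event
                dx, dy = fx, fy
            else:
                break
        corrected.append((t, x - dx, y - dy))
    return corrected
```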
Observing with XMM-OM
=====================
Specifying windows
------------------
The full FOV of XMM-OM is a 17×17 arc minute square, covering the central portion of the X-ray FOV. Within this field the observer can define a number of data collection windows around targets or fields of interest. Up to five different Science Windows can be defined, with the restriction that their boundaries may not overlap. However, one window can be completely contained within another.
Because of constraints on the telemetry rate available, it is not possible to transmit the full data on every photon that XMM-OM detects. Instead a choice has to be made between image coverage and time resolution. Thus two types of Science Window can be defined, referred to as Image Mode and Fast Mode. A maximum of two of the five available science windows can be Fast Mode.
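The window rules stated above (at most five Science Windows, at most two of them Fast Mode, and boundaries that may not overlap although full nesting is allowed) can be checked with a small validator. The tuple layout and function name below are our own, not part of the instrument software.

```python
def windows_valid(windows, max_windows=5, max_fast=2):
    """Check Science Window constraints as stated in the text.
    Each window is (x0, y0, x1, y1, mode) with mode 'image' or 'fast'."""
    if len(windows) > max_windows:
        return False
    if sum(1 for w in windows if w[4] == 'fast') > max_fast:
        return False
    for i, a in enumerate(windows):
        for b in windows[i + 1:]:
            ax0, ay0, ax1, ay1 = a[:4]
            bx0, by0, bx1, by1 = b[:4]
            disjoint = ax1 <= bx0 or bx1 <= ax0 or ay1 <= by0 or by1 <= ay0
            a_in_b = bx0 <= ax0 and by0 <= ay0 and ax1 <= bx1 and ay1 <= by1
            b_in_a = ax0 <= bx0 and ay0 <= by0 and bx1 <= ax1 and by1 <= ay1
            if not (disjoint or a_in_b or b_in_a):
                return False
    return True
```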
*Image Mode emphasizes spatial coverage at the expense of timing information. Images can be taken at the full sampling of the instrument or binned by a factor of
---
abstract: 'Given a disk $O$ in the plane called the objective, we want to find $n$ small disks $P_1,\ldots,P_n$ called the pupils such that $\bigcup_{i,j=1}^nP_i\ominus P_j\supseteq O$, where $\ominus$ denotes the Minkowski difference operator, while minimizing the number of pupils, the sum of the radii or the total area of the pupils. This problem is motivated by the construction of very large telescopes from several smaller ones by so-called Optical Aperture Synthesis. In this paper, we provide exact, approximate and heuristic solutions to several variations of the problem.'
author:
- |
Trung Nguyen$^1$[^1], Jean-Daniel Boissonnat$^1$,\
Fréderic Falzon$^2$ and Christian Knauer$^3$\
\
[$^1$Geometrica project, INRIA Sophia Antipolis, France]{}\
[$^2$Research department, Alcatel Alenia Space, France]{}\
[$^3$Institut für Informatik, Freie Universität Berlin, Germany]{}
bibliography:
- 'bibfile.bib'
title: 'A disk-covering problem with application in optical interferometry'
---
Introduction {#sec:intro}
============
The resolution power of a telescope is proportional to the diameter of its pupil. A simple calculation shows that we would need a telescope with a diameter of approximately $20\,$m to observe the Earth from a high orbit [@NBBFT06]. Needless to say, such an instrument would not be suited to observation from space. To avoid building such large pupils, Optical Aperture Synthesis is used to synthesize (very) large pupils by interferometrically combining several smaller ones [@disrupt05] (see Fig. \[fig:instr\]). The auto-correlation support (ACS) of a system of pupils is the set of all observable spatial frequencies.
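As a rough sanity check of the quoted aperture, the Rayleigh criterion $\theta = 1.22\lambda/D$ gives a diameter of a few tens of metres under illustrative assumptions: 550 nm light, geostationary distance, and metre-scale ground resolution. These numbers are ours, not the paper's.

```python
# Rayleigh criterion: theta = 1.22 * lambda / D.
# Illustrative assumptions (not from the paper):
WAVELENGTH = 550e-9   # m, visible light
DISTANCE = 3.6e7      # m, geostationary altitude (~36,000 km)
RESOLUTION = 1.0      # m, assumed ground resolution

def required_diameter(wavelength, distance, resolution):
    """Pupil diameter needed to resolve `resolution` at `distance`."""
    theta = resolution / distance       # required angular resolution (rad)
    return 1.22 * wavelength / theta

D = required_diameter(WAVELENGTH, DISTANCE, RESOLUTION)   # ~24 m
```

With these inputs the required diameter comes out at roughly 24 m, the same order as the $\approx 20\,$m quoted above.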
The underlying problem can be stated in geometric terms as follows. Given an objective $O$, assumed to be a disk, design a set of disks $\mathcal{P}=\{P_1,\ldots,P_n\}$ such that its ACS $\mathcal{D}$ entirely covers the objective while minimizing some cost function. Here $\mathcal{D} = \bigcup_{i,j=1}^n(P_i\ominus P_j)$, where $\ominus$ denotes the Minkowski difference operator. The cost function may include the number of pupils, the sum of the radii or the total area of the pupils, etc. This problem is a variant of the disk-covering problem. To the best of our knowledge, the variant we consider is new, and the interferometry problem has not been considered before from a geometric perspective. This paper is a follow-up of our initial investigation [@NBBFT06]. The reader interested in the general disk-covering problem or some other variants can refer to [@alt2006mcc; @CB05; @booth2003cac].
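Since the Minkowski difference of two disks is itself a disk, $P_i\ominus P_j = D(c_i - c_j,\ \rho_i + \rho_j)$, a brute-force coverage test can be sketched by sampling the objective. This is only a heuristic check of the covering condition, not the exact Apollonius-diagram decision procedure developed later in the paper.

```python
import math

def acs_covers(pupils, objective, n_r=40, n_t=120):
    """Heuristic test: sample the objective disk on a polar grid and check
    every sample lies in some difference disk P_i - P_j.
    Disks are (cx, cy, r) tuples."""
    ox, oy, orad = objective
    # For disks, P_i - P_j is the disk centred at c_i - c_j
    # with radius rho_i + rho_j.
    diffs = [(xi - xj, yi - yj, ri + rj)
             for (xi, yi, ri) in pupils for (xj, yj, rj) in pupils]
    for i in range(n_r + 1):
        r = orad * i / n_r
        for k in range(n_t):
            t = 2.0 * math.pi * k / n_t
            x, y = ox + r * math.cos(t), oy + r * math.sin(t)
            if not any((x - cx) ** 2 + (y - cy) ** 2 <= (rad + 1e-9) ** 2
                       for cx, cy, rad in diffs):
                return False
    return True
```

For a single pupil of radius $\rho$, the ACS is a disk of radius $2\rho$ around the origin, which the sampler reproduces.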
\[fig:instr\]
![[Examples of using Optical Aperture Synthesis to synthesize large pupils [@disrupt05]]{}[]{data-label="fig:SOO-const"}](soo-ex.ps "fig:"){height="3cm"} ![[Examples of using Optical Aperture Synthesis to synthesize large pupils [@disrupt05]]{}[]{data-label="fig:SOO-const"}](multi2.ps "fig:"){height="3cm"}
The outline of this paper is as follows. In section \[sec:apollonius\_diagram\], we introduce Apollonius diagrams (additively weighted Voronoi diagrams) which play a central role in our study, and use them to decide whether the objective is covered. Section \[sec:three\_pupils\] deals with the case of three pupils for which we provide an optimal solution. We describe in section \[sec:approximation\_number\_pupils\] a constant-factor approximation algorithm for the case where the pupils are restricted to have the same radius. In section \[sec:fixed-center\_problem\], we consider the centers of the pupils to be given and provide efficient algorithms to minimize the sum of the radii or the total area of the pupils under the constraint that the ACS covers the objective. Finally, section \[sec:fixed-radius\_problem\] considers the problem where the radii of the pupils are known but their positions are unknown.
Apollonius diagrams and the decision problem {#sec:apollonius_diagram}
============================================
Apollonius diagrams (aka Additively weighted Voronoi diagrams)
--------------------------------------------------------------
Let $\mathcal{D}=\{ D_1,\ldots ,D_N\}$ be a set of $N$ disks in the plane. We denote by $c_i$ the center of $D_i$ and by $\rho_i$ its radius. Let $\|.\|$ denote the Euclidean distance and $\partial S$ denote the boundary of a subset of points $S$. The [*distance*]{} of a point $x$ to the circle $\partial D_i$ is defined as $$\delta_i(x)=\|x-c_i\|-\rho_i.$$
For a point $x$, $\delta_i(x)$ is negative, zero or positive depending on whether $x$ lies inside, on the boundary of, or outside $D_i$. The [*Apollonius cell*]{} of $D_i$ consists of the points whose distance to $\partial D_i$ is less than or equal to their distance to any other circle of $\mathcal{D}$: $$A_i=\{x\in\mathbb{R}^2\mid \delta_i(x)\leq \delta_j(x), j=1,\ldots,N \}.$$
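The distance $\delta_i$ and cell membership translate directly into code. This is a didactic sketch; for real computations one would use a robust library implementation of the Apollonius diagram.

```python
import math

def delta(x, disk):
    """Additively weighted distance from point x to the circle bounding
    disk = (cx, cy, rho): negative inside, zero on the boundary,
    positive outside."""
    cx, cy, rho = disk
    return math.hypot(x[0] - cx, x[1] - cy) - rho

def apollonius_cell_index(x, disks):
    """Index of the Apollonius cell containing x (ties broken by index)."""
    return min(range(len(disks)), key=lambda i: delta(x, disks[i]))
```

Note that the weighting matters: a large disk can "win" points that lie beyond the Euclidean midpoint between the two centres.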
![[An Apollonius diagram of 8 disks in the Euclidean plane. The black disk has no cell.]{}[]{data-label="fig:power_diagram"}](apo_graph.ps){height="4cm"}
Unlike the case of points, it is possible that a disk may have an empty cell. This happens when the disk is inside another disk. The one-dimensional connected sets of points that belong to exactly two Apollonius cells are called [*Apollonius edges*]{}, while points that belong to at least three Apollonius cells are called [*Apollonius vertices*]{}. The collection of the cells, edges and vertices forms the [*Apollonius diagram*]{} of $\mathcal{D}$, denoted by $Apo(\mathcal{D})$ (see Fig. \[fig:power\_diagram\]). The Apollonius diagram $Apo(\mathcal{D})$ can be computed in time $O(N\log N)$ which is worst-case optimal [@KY02], and robust and efficient implementations exist [@cgal]. More information on Apollonius diagrams can be found in [@BWY06; @KY02]. We start by stating some properties of Apollonius diagrams. Let $B_{ij}$ define the bisector of two disks $D_i$ and $D_j$ $$B_{ij}=\{x\in\mathbb{R}^2\mid \delta_i(x) = \delta_j(x) \}.$$
\[lem:bisector\_is\_linear\] The restrictions of $\delta_i$ and $\delta_j$ to $B_{ij}$ are unimodal functions. More precisely, these functions decrease linearly to a minimum and then increase linearly.
Consider two disks $D_i$ and $D_j$ with radii $\rho_i, \rho_j$ and centers, w.l.o.g., $c_i = (-c, 0)$ and $c_j = (c, 0)$. The bisector of $D_i$ and $D_j$ is a sheet of the hyperbola whose equation is $$\frac{x^2}{a^2} - \frac{y^2}{c^2 - a^2} = 1,$$ where $a = |\rho_j - \rho_i|/2$. Then the distance of a point with abscissa $x$ on the hyperbola to $c_i$ is a linear function of $x$: $d = \pm(ex + a)$, where $e = \frac{c}{a}$ is the eccentricity of the hyperbola and the sign is positive if $\rho_i\leq \rho_j$ and negative otherwise.
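The lemma can be checked numerically: sampling points on the hyperbolic bisector and evaluating $\delta_i$ shows a constant slope of magnitude $e$ in the abscissa. The parameter values below are arbitrary illustrative choices.

```python
import math

def bisector_deltas(rho_i=0.5, rho_j=1.5, c=3.0, n=19):
    """Sample the bisector of D_i (centre (-c, 0), radius rho_i) and
    D_j (centre (c, 0), radius rho_j), with rho_i < rho_j.  The bisector
    is the hyperbola branch nearer the smaller disk; returns the sampled
    x-coordinates and the delta_i values there."""
    a = (rho_j - rho_i) / 2.0
    b2 = c * c - a * a
    xs = [-(a + 0.1 * (k + 1)) for k in range(n)]
    deltas = []
    for x in xs:
        # y from the hyperbola equation x^2/a^2 - y^2/(c^2 - a^2) = 1
        y = math.sqrt((x * x / (a * a) - 1.0) * b2)
        di = math.hypot(x + c, y) - rho_i
        dj = math.hypot(x - c, y) - rho_j
        assert abs(di - dj) < 1e-9        # the point is on the bisector
        deltas.append(di)
    return xs, deltas
```

With $c_i$ at $(-c, 0)$ and the branch sampled towards negative $x$, the measured slope $\mathrm{d}\delta_i/\mathrm{d}x$ is $-e$, confirming the linearity claimed in the proof.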
\[cor:cover\_cell\_by\_minidisk\] Any arc $pq$ contained in the edge of a cell $A_i$ is included in the smallest disk of center $c_i$ that contains $p$ and $q$.
Since the distance function to $D_i$ of the points on arc $pq$ is unimodal by Lemma \[lem:bisector\_is\_linear\], it reaches a maximum at $p$ or $q$. Hence any disk with center $c_i$ that contains $p$ and $q$ covers the whole arc.
The Apollonius cell $A_i$ is included in the disk centered at $c_i$ that contains the set of its vertices.
If $A_i$ is unbounded we are done. Otherwise, as $A_i$ is star-shaped [@BWY06], it is included in a disk if its edges are.
---
abstract: 'In this paper, we present an online two-level vehicle trajectory prediction framework for urban autonomous driving where there are complex contextual factors, such as lane geometries, road constructions, traffic regulations and moving agents. Our method combines high-level policy anticipation with low-level context reasoning. We leverage a long short-term memory (LSTM) network to anticipate the vehicle’s driving policy (e.g., forward, yield, turn left, turn right, etc.) using its sequential history observations. The policy is then used to guide a low-level optimization-based context reasoning process. We show that it is essential to incorporate the prior policy anticipation due to the multimodal nature of the future trajectory. Moreover, contrary to existing regression-based trajectory prediction methods, our optimization-based reasoning process can cope with complex contextual factors. The final output of the two-level reasoning process is a continuous trajectory that automatically adapts to different traffic configurations and accurately predicts future vehicle motions. The performance of the proposed framework is analyzed and validated in an emerging autonomous driving simulation platform (CARLA).'
author:
- 'Wenchao Ding and Shaojie Shen [^1]'
bibliography:
- 'paper.bib'
title: |
**Online Vehicle Trajectory Prediction using Policy Anticipation Network\
and Optimization-based Context Reasoning**
---
Introduction {#sec:introduction}
============
In recent years, there has been growing interest in building fully autonomous vehicles. Such vehicles must anticipate other traffic participants accurately so that their planned motions are neither too aggressive nor too conservative. To achieve this goal, autonomous vehicles are expected to reason about the behavior and intentions of surrounding vehicles and subsequently predict the future trajectories of these vehicles.
Given an urban driving environment where there are complex latent factors such as lane geometries, traffic regulations, road constructions and dynamical agents, the complexity of the prediction problem is high. Under such a scenario, there are two challenges to be addressed. First, given the complex environment, it is essential to consider the multimodal nature of the future trajectory [@lee2017desire]. For example, at the intersection as depicted in Fig. \[fig:motivation\_example\], there are two distinct choices, moving forward and turning left, which result in totally different future trajectories. Second, the prediction method must be highly flexible and able to easily adapt to the complex contextual factors.
Many handcrafted prediction models, such as [@agamennoni2012estimation; @laugier2011probabilistic; @lefevre2012evaluating; @havlak2014discrete], may lack flexibility and require refactoring when a new contextual factor is introduced. Meanwhile, other methods, especially the popular RNN-based models [@kim2017probabilistic; @alahi2016social], treat the trajectory prediction as a pure regression problem in spite of the multimodal nature of the future trajectory. We are therefore motivated to develop a flexible trajectory prediction framework which can easily adapt to various complex urban environments while incorporating high-level intentions to enhance the prediction accuracy.
In this paper, we propose an *online* two-level vehicle trajectory prediction framework. We develop a policy anticipation network using a long short-term memory (LSTM) network to anticipate high-level policies of vehicles (such as moving forward, yielding, turning and lane changing) based on sequential past observations. Given the high-level policy, we propose an optimization-based context reasoning process in which the complex contextual information is naturally encoded in a multi-layer cost map structure. A policy interpreter is set up to bridge the high-level and low-level reasoning by transforming the policy into an initial trajectory guess for the non-linear optimization. The policy anticipation network is used to capture the intention and guide the trajectory prediction process. Our optimization-based context reasoning process can easily adapt to different traffic configurations by transforming different factors into a unified notation of cost.
The motivation for modeling trajectory prediction as an optimization problem is that human drivers internally balance their maneuvers in terms of the “cost”. For example, driving through red lights or breaking speed limits would risk receiving penalties, and human drivers have an inborn ability to balance various kinds of costs during driving. The optimization-based reasoning process can be easily extended by adding another cost term to the unified cost map structure.
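A minimal sketch of this idea, assuming a generic waypoint parameterization, numerical gradients, and toy cost layers — this is not the paper's solver, cost maps or weights:

```python
import numpy as np

def predict_trajectory(init_guess, cost_layers, iters=200, lr=0.05, eps=1e-4):
    """Refine a policy-interpreted initial trajectory by gradient descent
    on a sum of cost terms.  init_guess is an (N, 2) array of waypoints;
    each layer maps a trajectory to a scalar cost."""
    traj = np.array(init_guess, dtype=float)

    def total_cost(t):
        return sum(layer(t) for layer in cost_layers)

    for _ in range(iters):
        base = total_cost(traj)
        grad = np.zeros_like(traj)
        for idx in np.ndindex(*traj.shape):
            t2 = traj.copy()
            t2[idx] += eps
            grad[idx] = (total_cost(t2) - base) / eps   # numerical gradient
        traj -= lr * grad
    return traj

# Illustrative cost layers (a real system would rasterize lane geometry,
# obstacles and traffic rules into cost-map layers):
smooth = lambda t: np.sum(np.diff(t, n=2, axis=0) ** 2)   # smoothness
lane = lambda t: np.sum(t[:, 1] ** 2)                     # lane centre at y=0
```

Adding a new contextual factor then amounts to appending another cost layer, which is the flexibility argument made in the text.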
The idea of modeling drivers as optimizing agents is not new [@wolf2008artificial; @abbeel2004apprenticeship; @bahram2016combined; @sadigh2016planning], especially in the field of imitating human driving behaviors using inverse reinforcement learning (IRL). However, from the prediction perspective, the multimodal nature of the future trajectory [@lee2017desire; @deo2018convolutional] is not well modeled by the optimization process. For example, the non-linear optimization process may converge to either of the two possible intentions in Fig. \[fig:motivation\_example\]. To this end, we propose the policy anticipation network, which guides the optimization process to the anticipated high-level intention. Note that our optimization-based context reasoning can also incorporate the IRL technique for weight tuning, which is left as important future work.
We summarize the contributions of this paper as follows:
- An online two-level trajectory prediction framework which incorporates the multimodal nature of future trajectories.
- A highly flexible optimization-based context reasoning process which incorporates a multi-layer cost map structure to encode various contextual factors.
- Integration of the vehicle trajectory prediction framework and presentation of the results on accuracy, efficiency, and flexibility in various traffic configurations.
The related literature is reviewed in Sect. \[sec:related\_works\]. A system overview is given in Sect. \[sec:overview\]. The main methodology is presented in Sect. \[sec:policy\] and Sect. \[sec:optimization\]. The implementation details and experimental results are provided in Sect. \[sec:implementation\] and Sect. \[sec:results\]. Conclusions and future work are given in Sect. \[sec:conclusion\].
Related Works {#sec:related_works}
=============
The problem of vehicle trajectory prediction has been actively studied in the literature. As concluded in [@lefevre2014survey], there are three levels of prediction models, namely, physics-based, maneuver-based and interaction-aware motion models. Physics-based motion models use dynamic and kinematic vehicle models to propagate future states [@ammoun2009real; @brannstrom2010model]. However, the prediction results only hold for the very short-term (less than one second). Maneuver-based motion models are more advanced in the sense that the model may forecast relatively complex maneuvers, such as lane change and turns at intersections, by revealing the maneuver pattern. Many of the works on this level present a probabilistic framework to account for the uncertainty and variation of the motion patterns, such as Gaussian processes (GPs) [@tran2014online; @laugier2011probabilistic], Monte Carlo sampling [@eidehall2008statistical], Gaussian mixture models (GMMs) [@havlak2014discrete] and hidden Markov models [@aoude2012driver]. However, they typically assume vehicles are independent entities and fail to model interactions within the context and with other agents.
Interaction-aware models, on the other hand, take the driving context and vehicle interactions into account, and most of them, such as [@gindele2015learning; @lefevre2012evaluating] and [@agamennoni2012estimation], are based on dynamic Bayesian networks (DBNs). Though these methods are context-aware, they require refactoring the models when considering a new contextual factor. Our method belongs to the interaction-aware level. Compared to the DBN-based prediction methods, our method is more flexible and can be easily adapted to different traffic configurations.
It is notable that recurrent neural networks (RNNs) and their variants, such as LSTM networks, have recently been applied to predict or track moving targets, as in [@kim2017probabilistic; @khosroshahi2016surround] and [@ondruska2016deep]. Our policy anticipation network shares a similar structure with [@khosroshahi2016surround]. But the fundamental difference is that the network in [@khosroshahi2016surround] is only used to analyze the maneuver pattern at an intersection and cannot actively predict the future trajectories. Many learning-based end-to-end trajectory prediction models [@kim2017probabilistic; @alahi2016social; @deo2018convolutional] lack the ability to encode the contextual information. In [@lee2017desire], Lee *et al.* suggest combining IRL with an environment feature map to learn the interaction with contextual factors. However, this requires a large amount of training data to generalize due to the high complexity of the model. Also, it is hard to learn the interaction in some rare driving situations, such as red light offences.
System Overview {#sec:overview}
===============
The overview of our vehicle trajectory prediction framework is shown in Fig. \[fig:framework\]. During the high-level reasoning, the sequential state observations are fed to the policy anticipation network, which provides the future policy that a vehicle is likely to execute. Together with the map information, the policy can be properly interpreted in the driving context and a reference prediction is generated and fed to the optimization-based context reasoning process. The optimization process renders various environment observations and encodes them into the multi-layer cost map structure. A non-linear optimization process is then conducted to generate the predicted vehicle trajectory.
Policy Anticipation and Interpretation {#sec:policy}
---
abstract: 'The spectra of two early B-type supergiant stars in the Sculptor spiral galaxy NGC 300 are analysed by means of non-LTE line blanketed unified model atmospheres, aimed at determining their chemical composition and the fundamental stellar and wind parameters. For the first time a detailed chemical abundance pattern (He, C, N, O, Mg and Si) is obtained for a B-type supergiant beyond the Local Group. The derived stellar properties are consistent with those of other Local Group B-type supergiants of similar types and metallicities. One of the stars shows a near solar metallicity while the other one resembles more a SMC B supergiant. The effects of the lower metallicity can be detected in the derived wind momentum.'
author:
- 'Miguel Alejandro Urbaneja, Artemio Herrero[^1] , Fabio Bresolin, Rolf-Peter Kudritzki[^2] , Wolfgang Gieren[^3] and Joachim Puls[^4]'
title: 'Quantitative spectral analysis of early B-type supergiants in the Sculptor galaxy NGC 300[^5]'
---
Introduction
============
The 8-10 meter class telescopes and their new generation instruments make it possible to extend quantitative stellar spectroscopy beyond the Local Group. Early B-type supergiant stars are ideal targets for detailed spectroscopy even at low resolution (R$\sim$1000). Their blue spectra are rich in metal features, which allows the analysis of chemical species like C, N, O, Si and Mg. Although our knowledge of the evolution of massive stars still has open questions, most recent works indicate that the blue luminous supergiants do not show any contamination of their oxygen surface abundances during the early stages of their evolution, neither the O-types [@villamariz2002], nor the B-types [@smartt1997; @monteverde2000; @smartt2002], nor the A-types [@venn1995; @takeda1998; @przybilla2002], which enables a direct comparison between the stellar oxygen abundances and those derived from H II regions. This has become extremely important, especially in the extragalactic field where oxygen is used as the primary metallicity indicator, because at high metallicity (larger than approx. 0.5 solar) strong line methods must be used, for which the choice of the calibration strongly influences the derived abundances [@kewley2002; @pilyugin2002]. In addition to chemical abundance studies, blue luminous stars have strong radiatively driven mass outflows which can provide information on extragalactic distances by means of the Wind Momentum - Luminosity Relationship, WLR [@kudritzki2000 and references therein].
Recently, within a wide program aimed at the spectroscopic study of luminous blue stars beyond the Local Group, first steps have been taken for A-type supergiants in NGC 3621 [6.7 Mpc away, @bresolin2001]. Quantitative spectroscopy has been shown to be possible for A-type supergiants [@bresolin2002a] and Wolf-Rayet stars [@bresolin2002b] in NGC 300, 2.02 Mpc away in the Sculptor group. Here we report the first quantitative analysis of B-type supergiants (hereafter B-Sg) outside the Local Group, presenting the detailed chemical pattern along with the stellar parameters and the wind properties. The technique will be applied in a forthcoming paper to a large set of early B-Sg located at several galactocentric distances in order to derive radial abundance gradients of the $\alpha$-elements. Combined with the results of a similar study of A-type supergiants, it will provide a wealth of information on the chemical evolution of the host galaxy NGC 300.
Observations
============
The stars are part of a spectroscopic survey of photometrically selected blue luminous supergiants in the Sculptor galaxy NGC 300, obtained at the VLT with the FORS multiobject spectrograph and described in detail by @bresolin2002a, which presents a spectral catalog of 70 luminous blue supergiants in the blue region ($\sim$ 4000 - 5000 Å). The selected stars are identified as B-12 and A-9 in that spectral catalog (see their Table 2 and finding charts). In September 2001, spectra of the H$\alpha$ region were obtained in order to measure the mass-loss rates, giving complete coverage of the 3800 - 7200 Å wavelength range at R$\sim$1000 resolution. The reader is referred to @bresolin2002a for a detailed description of the observations and reduction process, as well as for the photometry and the spectral classification of the stars.
Spectral analysis
=================
The spectra of early B-Sg are dominated by the lines, followed by /, /, / and , in addition to H and lines. At high resolution it is possible to detect some other metal lines of , / and but, due to their intrinsic weakness, these lines have no influence on the analysis at low resolution and could hardly be used to fix the abundance of such elements. Fig. \[fig1\] shows the high resolution, high S/N ratio (R$\sim$15000, SNR$\sim$350) blue spectrum of the Galactic supergiant HD14956 (B1.5Ia), and the same spectrum degraded to the resolution of the NGC 300 data, R$\sim$1000 (labeled [*\#d*]{} in the figure). We have also included the identification of the more important lines. As can be seen, only a few strong lines remain isolated at that low resolution, so the analysis must be based on the comparison of the observed spectra to a set of model atmospheres that include a vast number of lines in the calculation of the emergent fluxes. We have taken into account more than two hundred metal lines in the 3800 - 6000 Å wavelength range. It is important to include extensive metal line lists because some spectral features are formed by the contribution of several chemical species (e.g. the strong blend of O, N and C at $\sim$ 4650 Å). We have excluded some strong isolated lines because our atomic models do not consider the levels involved in these transitions. Nevertheless, these lines are isolated and have no influence on the results.
Even considering the noise effects in the lower resolution FORS spectra (displayed also in Fig. \[fig1\]), strong metal features can still be detected and used for a detailed chemical abundance analysis. In particular a wealth of information can be extracted from the selected regions at 4070, 4320, 4420 (), 4550 - 4570 (), 4600 - 4660 (, , and ) and 5010 ( and ).
Atmosphere models
-----------------
We use the newest version of the FASTWIND code [first presented by @santolayarey1997], which solves the radiation transfer in a moving medium by means of suitable approximations that simplify the numerical treatment of the problem without affecting the physical significance of the results. The atmospheric structure is treated in a consistent way, assuming a $\beta$-velocity law in the wind and ensuring a smooth transition between the “photosphere” and the “wind”; the temperature structure is approximated by means of [*non-LTE Hopf functions*]{} carefully chosen to ensure flux conservation to better than 2 % at any depth point; rate equations are solved in the co-moving frame scheme, with the coupling between the radiation field and the rate equations solved using local ALOs [following @puls1991]. This new version includes the effects of [*line blanketing*]{}. The reader is referred to Puls et al. (2003, in preparation) for a detailed description. We have analysed two Galactic stars, 10 Lac (O9V) and HD209975 (O9.5Ib), in order to compare our results with the ones obtained with other codes. In the case of 10 Lac, our results agree with the recent ones by @herrero2002 [see their comparison to the results by Hubeny et al. 1998]. The derived parameters for HD209975 are consistent with the results by @villamariz2002, who used plane-parallel models with line blocking.
A model is prescribed by the effective temperature $T_{eff}$, the surface gravity [*log g*]{}, the stellar radius $R_*$ (all three quantities are defined at $\tau_{Ross.} = 2/3$), the mass-loss rate $\dot{M}$, the wind terminal velocity $v_\infty$, the $\beta$ exponent of the wind velocity law, the He abundance $Y_{He}$, the microturbulent velocity $v_{turb}$ and, in the case of B-type stars, the [*Si*]{} abundance. The $T_{eff}$ is well determined from the triplet and the blends of (with at 4090 Å and with / at 4120 Å), and the surface gravity from the Balmer hydrogen lines, provided that the mass-loss rate information is extracted from the H$\alpha$ profile. An important issue concerns the wind terminal velocity, which must be adopted from a spectral type - v$_\infty$ empirical calibration [@haser1995; @kudritzki2000]. The assumed terminal velocity affects the derived $\dot{M}$ and the [*log g*]{}. However, with the combined information from H$\alpha$ and H$\beta$, the mass-loss rate and v$_\infty$ can be constrained to yield reasonable uncertainties in [*log g*]{}. The stellar radius is derived interactively from the absolute magnitude, deduced from the apparent magnitude after adopting a distance modulus [$\mu =
26.53$, @freedman2001], and the model emergent flux [@kud
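The last step of this procedure is simple arithmetic and can be sketched as follows. This is one standard route to the radius, going through a bolometric correction rather than through the model emergent flux used here, and every input number below (apart from the distance modulus quoted in the text) is an illustrative placeholder, not a fitted value from this work:

```python
import numpy as np

def stellar_radius(m_V, mu, bc, teff):
    """Stellar radius in solar radii from apparent V magnitude m_V,
    distance modulus mu, bolometric correction bc and effective
    temperature teff, via L = 4 pi R^2 sigma T^4.
    Solar references: M_bol,sun = 4.74 mag, T_eff,sun = 5777 K."""
    M_bol = m_V - mu + bc                    # absolute bolometric magnitude
    log_L = (4.74 - M_bol) / 2.5             # log10 of L / L_sun
    return 10.0 ** (0.5 * (log_L - 4.0 * np.log10(teff / 5777.0)))

# Placeholder inputs for a B supergiant at the NGC 300 distance modulus:
print(stellar_radius(m_V=19.0, mu=26.53, bc=-2.0, teff=22000.0))
```

With these placeholder numbers the routine returns a radius of a few tens of solar radii, the expected order of magnitude for an early B supergiant.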
---
abstract: 'Auger recombination is a non-radiative process in which the recombination energy of an electron-hole pair is transferred to a third charge carrier. It is a common effect in colloidal quantum dots, where it quenches the radiative emission with an Auger recombination time below nanoseconds. In self-assembled QDs, Auger recombination has been observed with a much longer recombination time, on the order of microseconds. Here, we use two-color laser excitation on the exciton and trion transition in resonance fluorescence on a single self-assembled quantum dot to monitor in real time every quantum event of the Auger process. Full counting statistics on the random telegraph signal give access to the cumulants and demonstrate the tunability of the Fano factor from a Poissonian to a sub-Poissonian distribution by Auger-mediated electron emission from the dot. The Auger process can therefore be used to optically tune the charge carrier occupation of the dot via the incident laser intensity, independently of the electron tunneling from the reservoir controlled by the gate voltage. Our findings are not only highly relevant for the understanding of the Auger process; they also demonstrate the potential of the Auger effect for precisely controlling the charge state of a quantum system by optical means.'
author:
- 'P. Lochner'
- 'A. Kurzmann'
- 'J. Kerski'
- 'P. Stegmann'
- 'J. König'
- 'A. D. Wieck'
- 'A. Ludwig'
- 'A. Lorke'
- 'M. Geller'
title: 'Real-time detection of every Auger recombination in a self-assembled quantum dot'
---
Keywords: Quantum dots, Resonance fluorescence, Auger recombination, Full counting statistics, Random telegraph signal\
The excitonic transitions in self-assembled quantum dots (QDs) [@Bimberg1999; @Petroff2001] perfectly realize a two-level system in a solid-state environment. These transitions can be used to generate single photon sources [@Michler2000; @Yuan2001] with high photon indistinguishability[@Santori2002; @Matthiesen2013], an important prerequisite for using quantum dots as building blocks in (optical) quantum information and communication technologies[@Kimble2008; @Ladd2010]. Moreover, self-assembled QDs are still one of the best model systems to study, in an artificial atom, the carrier dynamics[@Kurzmann2016b; @Geller2019], the spin- and angular-momentum properties[@Bayer2000; @Vamivakas2009] and charge carrier interactions[@Labud2014]. One important effect of carrier interactions is the Auger process: An electron-hole pair recombines and, instead of emitting a photon, the recombination energy is transferred to a third charge carrier, which is then energetically ejected from the QD[@Kharchenko1996; @Efros1997; @Fisher2005; @Jha2009]. This is a common effect, mostly studied in colloidal QDs, where it quenches the radiative emission with recombination times on the order of picoseconds to nanoseconds[@Vaxenburg2015; @Klimov2000; @Park2014]. This limits the efficiency of optical devices containing QDs like LEDs[@Caruge2008; @Cho2009] or single photon sources[@Brokmann2004; @Michler2000a; @Lounis2000]. In self-assembled QDs, Auger recombination was speculated to be absent, and only recently, it was directly observed in optical measurements on a single self-assembled QD coupled to a charge reservoir with recombination times on the order of microseconds[@Kurzmann2016]. As a single Auger process is a quantum event, it is unpredictable and only the statistical evaluation of many processes gives access to the physical information of the recombination process[@Levitov1996; @Blanter2000].
The most in-depth evaluation - the so-called full counting statistics - becomes possible when each single quantum event in a time trace is recorded. Such real-time detection in optical experiments on a single self-assembled QD has until now only been shown for the statistical process of electron tunneling between the QD and a charge reservoir, where tunneling and spin-flip rates could be tuned by the applied electric and magnetic field[@Kurzmann2019].
Here, Auger recombination in a single self-assembled QD is investigated by optical real-time measurements of the random telegraph signal. With the technique of two-laser excitation, we are able to detect every single quantum event of the Auger recombination. These events take place in the single QD, leaving the quantum dot empty until single-electron tunneling into the QD from the charge reservoir takes place again. This reservoir is coupled to the QD with a small tunneling rate on the order of ms$^{-1}$. The laser intensity, exciting the trion transition, precisely controls the electron emission by the Auger recombination and, hence, the average occupation with an electron. It also tunes the Fano factor from a Poissonian to a sub-Poissonian distribution, which we observe by analyzing the random telegraph signal with methods of full counting statistics.
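The connection between the two rates and the counting statistics can be illustrated with a toy Monte Carlo simulation. This is a sketch, not the analysis performed in this work: it models the dot as a simple alternating renewal process with exponential waiting times, where an "emission" event ends each occupied period and the dot then waits for refilling from the reservoir:

```python
import numpy as np

rng = np.random.default_rng(1)

def count_emissions(gamma_out, gamma_in, T, n_windows):
    """Count emission events per time window for an alternating two-state
    cycle: occupied --gamma_out--> empty --gamma_in--> occupied.
    Waiting times in both states are assumed exponential."""
    counts = np.zeros(n_windows, dtype=int)
    for w in range(n_windows):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / gamma_out)   # occupied: wait for emission
            if t > T:
                break
            n += 1
            t += rng.exponential(1.0 / gamma_in)    # empty: wait for refilling
            if t > T:
                break
        counts[w] = n
    return counts

# Fano factor F = Var(n)/<n> of the emission counts for two refill rates.
T, N = 500.0, 2000
for g_in in (0.05, 1.0):
    c = count_emissions(1.0, g_in, T, N)
    print(g_in, round(c.var() / c.mean(), 2))
# A slow refill rate leaves the counting statistics close to Poissonian
# (F near 1), while comparable rates give a sub-Poissonian F near 1/2.
```

The long-time Fano factor of such a renewal process is $(\gamma_\text{out}^2+\gamma_\text{in}^2)/(\gamma_\text{out}+\gamma_\text{in})^2$, which reproduces the tunability between the Poissonian and sub-Poissonian regimes described above.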
The investigated sample was grown by molecular beam epitaxy (MBE) with a single layer of self-assembled In(Ga)As QDs embedded in a p-i-n diode (see Supporting Information for details). A highly n-doped GaAs layer acts as charge reservoir, which is coupled to the QDs via a tunneling barrier, while a highly p-doped GaAs layer defines an epitaxial gate[@Ludwig2017]. An applied gate voltage $V_\text{G}$ shifts energetically the QD states with respect to the Fermi energy in the electron reservoir and controls the charge state of the dots by electron tunneling through the tunneling barrier. The sample is integrated into a confocal microscope setup within a bath cryostat at 4.2K for resonant fluorescence (RF) measurements (see Methods).
Figure \[1\] shows the RF of the neutral exciton (X^0^) and the negatively charged exciton, called trion (X^-^). An RF measurement as function of gate voltage in Figure \[1\]**b** shows the fine-structure split exciton[@Hoegele2004] with an average linewidth of about 1.8$\upmu$eV at low excitation intensity ($1.6\cdot10^{-3}\,\upmu$W/$\upmu$m$^2$). Please note that this measurement was recorded at a laser energy where the exciton comes into resonance at negative gate voltages, because the measurement conditions were best there. The quantum-confined Stark effect shifts the exciton resonance X^0^ for higher gate voltages to higher frequencies, up to 325.760THz, as seen in Figure \[1\]**a**. This quadratic Stark shift of the two exciton transitions[@Li2000] is indicated by two white lines. At a voltage of about 0.375V (dashed vertical line in Fig. \[1\]**a**), the electron ground state in the dot is in resonance with the Fermi energy in the charge reservoir. An electron tunnels into the QD and the exciton transition vanishes, while the trion transition can be excited at lower frequencies, from 324.5095THz to 324.5115THz.
The spectrum of the exciton (blue dots) and the trion transition (red dots) under two-laser excitation is shown in Figure \[1\]**c**. The trion transition is measured at a laser frequency of 324.511THz (corresponding to the red line, “Laser 1” in Fig. \[1\]**a**) and a laser excitation intensity of $8\cdot10^{-6}\,\upmu$W/$\upmu$m$^2$ at a gate voltage of 0.515V. The exciton spectrum in Figure \[1\]**c** was obtained simultaneously by a second laser (“Laser 2”) on the exciton transition (blue line in Fig. \[1\]**a** at 325.7622THz) with a laser excitation intensity of $1.6\cdot10^{-3}\,\upmu$W/$\upmu$m$^2$, as the Auger recombination with rate $\gamma_\text{a}$ leads to an empty QD until an electron tunnels into the dot from the reservoir with rate $\gamma_\text{In}=\gamma_\text{In}^0+\gamma_\text{In}^\text{X}$. This rate comprises the tunneling into the empty dot $\gamma_\text{In}^0$ and the tunneling into the dot charged with an exciton $\gamma_\text{In}^\text{X}$[@Seidl2005] (see Fig. \[1\]**d** for a schematic representation). This has been explained previously in Kurzmann et al.[@Kurzmann2016], with the important conclusion that the ratio of trion to exciton intensity in equilibrium measurements is governed by the ratio of the Auger to the tunneling rate, $\gamma_\text{a}/\gamma_\text{In}$. As the tunneling rate $\gamma_\text{In}$ in the sample used here is in the range of ms$^{-1}$, the Auger rate $\gamma_\text{a}$ exceeds the tunneling rate by more than two orders of magnitude (see below). As a consequence, the intensity of the trion transition in equilibrium is more than two orders of magnitude smaller than that of the exciton transition.
The interplay between
---
address:
- 'Universität Essen, FB6 Mathematik, 45117 Essen, Germany'
- 'University of Michigan, Ann Arbor, MI 48109, USA'
author:
- Manuel Blickle
- Robert Lazarsfeld
bibliography:
- 'MultiplierNotes.bib'
title: |
An Informal introduction to\
multiplier ideals
---
[^1]
Introduction
============
Given a smooth complex variety $X$ and an ideal (or ideal sheaf) ${{\mathfrak{a}}}$ on $X$, one can attach to ${{\mathfrak{a}}}$ a collection of *multiplier ideals* ${{{\ensuremath{{\mathcal{J}}}}}({{{\mathfrak{a}}}^c})}$ depending on a rational weighting parameter $c > 0$. These ideals, and the vanishing theorems they satisfy, have found many applications in recent years. In the global setting they have been used to study pluricanonical and other linear series on a projective variety ([@Demailly93c], [@Angehrn-Siu95a], [@Siu98a], [@Ein-Lazarsfeld97a], [@ELNull], [@Demailly99b]). More recently they have led to the discovery of some surprising uniform results in local algebra ([@ELS1], [@ELS2], [@ELSV]). The purpose of these lectures is to give an easy-going and gentle introduction to the algebraically-oriented local side of the theory.
Multiplier ideals can be approached (and historically emerged) from three different viewpoints. In commutative algebra they were introduced and studied by Lipman [@lip.adj] in connection with the Briançon-Skoda theorem.[^2] On the analytic side of the field, Nadel [@Nadel90] attached a multiplier ideal to any plurisubharmonic function, and proved a Kodaira-type vanishing theorem for them.[^3] This machine was developed and applied with great success by Demailly, Siu and others. Algebro-geometrically, the foundations were laid in passing by Esnault and Viehweg in connection with their work involving the Kawamata-Viehweg vanishing theorem. More systematic developments of the geometric theory were subsequently undertaken by Ein, Kawamata and the second author. We will take the geometric approach here.
The present notes follow closely a short course on multiplier ideals given by the second author at the Introductory Workshop for the Commutative Algebra Program at the MSRI in September 2002[^4]. The three main lectures were supplemented with a presentation by the first author on multiplier ideals associated to monomial ideals (which appears here in §3). We have tried to preserve in this write-up the informal tone of these talks: thus we emphasize simplicity over generality in statements of results, and we present very few proofs. Our primary hope is to give the reader a feeling for what multiplier ideals are and how they are used. For a detailed development of the theory from an algebro-geometric perspective we refer to Part Three of the forthcoming book [@PAG]. The analytic picture is covered in Demailly’s lectures [@Dem.Mult].
We conclude this Introduction by fixing the set-up in which we work and giving a brief preview of what is to come. Throughout these notes, $X$ denotes a smooth affine variety over an algebraically closed field $k$ of characteristic zero and $R = k[X]$ is the coordinate ring of $X$, so that $X = {{\operatorname{Spec}}}R$. We consider a non-zero ideal ${{\mathfrak{a}}}\subseteq k[X]$ (or equivalently a sheaf of ideals ${\mathfrak{a}}\subseteq {{\ensuremath{{\mathcal{O}}}}}_X$). Given a rational number $c \geq 0$ our plan is to define and study the multiplier ideal $${{\ensuremath{{\mathcal{J}}}}}(c \cdot {\mathfrak{a}})\ =\ {{\ensuremath{{\mathcal{J}}}}}({\mathfrak{a}}^c) \ \subseteq \ k[X].$$ As we proceed, there are two ideas to keep in mind. The first is that ${{\ensuremath{{\mathcal{J}}}}}({\mathfrak{a}}^c)$ measures in a somewhat subtle manner the singularities of the divisor of a typical function $f$ in ${\mathfrak{a}}$: for fixed $c$, “nastier" singularities are reflected by “deeper" multiplier ideals. Secondly, ${{{\ensuremath{{\mathcal{J}}}}}({{{\mathfrak{a}}}^c})}$ enjoys remarkable formal properties arising from the Kawamata-Viehweg-Nadel Vanishing theorem. One can view the power of multiplier ideals as arising from the confluence of these facts.
The theory of multiplier ideals described here has striking parallels with the theory of tight closure developed by Hochster and Huneke in positive characteristic. Many of the uniform local results that can be established geometrically via multiplier ideals can also be proven (in more general algebraic settings) via tight closure. For some time the actual connections between the two theories were not well understood. However very recent work [@HaraYosh], [@Takagi.MultTest] of Hara-Yoshida and Takagi has generalized tight closure theory to define a so called test ideal $\tau({\mathfrak{a}})$, which corresponds to the multiplier ideal ${{\ensuremath{{\mathcal{J}}}}}({\mathfrak{a}})$ under reduction to positive characteristic. This provides a first big step towards identifying concretely the links between these theories.
Concerning the organization of these notes, we start in §2 by giving the basic definition and several examples. Multiplier ideals of monomial ideals are discussed in detail in §3. Invariants arising from multiplier ideals, with some applications to uniform Artin-Rees numbers, are taken up in §4. Section 5 is devoted to a discussion of some basic results about multiplier ideals, notably Skoda’s theorem and the restriction and subadditivity theorems. We consider asymptotic constructions in §6, with applications to uniform bounds for symbolic powers following [@ELS1].
We are grateful to Karen Smith for suggestions concerning these notes.
Definition and Examples {#sec.defex}
=======================
As just stated, $X$ is a smooth affine variety of dimension $n$ over an algebraically closed field of characteristic zero, and we fix an ideal ${{\mathfrak{a}}}\subseteq k[X]$ in the coordinate ring of $X$. Very little is lost by focusing on the case $X = {{\ensuremath{\mathbb{C}}}}^n$ of affine $n$-space over the complex numbers ${{\ensuremath{\mathbb{C}}}}$, so that ${{\mathfrak{a}}}\subseteq {{\ensuremath{\mathbb{C}}}}[x_1, \ldots, x_n]$ is an ideal in the polynomial ring in $n$ variables.
Log resolution of an ideal
--------------------------
The starting point is to realize the ideal ${{\mathfrak{a}}}$ geometrically.
A *log resolution* of an ideal sheaf ${\mathfrak{a}}\subseteq {{\ensuremath{{\mathcal{O}}}}}_X$ is a proper, birational map $\mu: Y
{\xrightarrow{\ \ }}X$ whose exceptional locus is a divisor $E = \text{Exceptional}(\mu)$ such that
1. $Y$ is non-singular.
2. ${\mathfrak{a}}\cdot {{\ensuremath{{\mathcal{O}}}}}_{Y} =\mu^{-1}{\mathfrak{a}}= {{\ensuremath{{\mathcal{O}}}}}_{Y}(-F)$ with $F=\sum r_iE_i$ an effective divisor.
3. $F+E$ has simple normal crossing support.
Recall that a (Weil) divisor $D=\sum \alpha_i D_i$ has simple normal crossing support if each of its irreducible components $D_i$ is smooth, and if locally analytically one has coordinates $x_1,\ldots,x_n$ of $Y$ such that ${{\operatorname{Supp}}}D=\sum D_i$ is defined by the equation $x_1\cdots x_a=0$ for some $a$ between $1$ and $n$. In other words, all the irreducible components of $D$ are smooth and intersect transversally. The existence of a log resolution for any sheaf of ideals on any variety over a field of characteristic zero is essentially Hironaka’s celebrated result on resolution of singularities [@Hironaka.ResSing]. Nowadays there are more elementary constructions of such resolutions, for instance [@Bierstone-Milman97], [@EncVill.Desing] or [@Paranjape].
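For orientation, we record here, slightly ahead of its formal introduction, the standard definition that a log resolution feeds into. With $F$ as above and $K_{Y/X} = K_Y - \mu^* K_X$ the relative canonical divisor, the multiplier ideal is

```latex
\mathcal{J}(c \cdot \mathfrak{a})
  \;=\; \mu_*\,\mathcal{O}_Y\bigl( K_{Y/X} - \lfloor c\, F \rfloor \bigr),
\qquad \text{where } \mathfrak{a}\cdot\mathcal{O}_Y = \mathcal{O}_Y(-F).
```

In the blow-up example below one has $F = 2E$ and $K_{Y/X} = E$, so for $c = 1$ this gives $\mu_*\mathcal{O}_Y(-E) = (x,y)$, the maximal ideal of the origin.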
Let $X={{\ensuremath{\mathbb{A}}}}^2={{\operatorname{Spec}}}k[x,y]$ and ${\mathfrak{a}}=
(x^2,y^2)$. Blowing up the origin in ${{\ensuremath{\mathbb{A}}}}^2$ yields $$Y = {\mathit{Bl}}_0({{\ensuremath{\mathbb{A}}}}^2) {\xrightarrow{\ \mu\ }} {{\ensuremath{\mathbb{A}}}}^2=X.$$ Clearly, $Y$ is nonsingular. Computing on the chart for which the blowup $\mu$ is a map from ${{\ensuremath{\mathbb{A}}}}^2 {\xrightarrow{\ \ }}{{\ensuremath{\mathbb{A}}}}^2$ given by $(u,v)
\mapsto (u,uv)$ shows that ${\mathfrak{a}}\cdot {{\ensuremath{{\mathcal{O}}}}}_{Y} = {{\ensuremath{{\mathcal{O}}}}}_{Y}(-2E)$. On the described chart we have ${\mathfrak{a}}\cdot {{\ensuremath{{\mathcal{O}}}}}_Y = (u^2,u^2v^2)=(u^2)$ and $(u=0)$ is the equation of
---
abstract: |
The goal of branch length estimation in phylogenetic inference is to estimate the divergence time between a set of sequences based on compositional differences between them. A number of software packages are currently available that facilitate branch length estimation for homogeneous and stationary evolutionary models. Homogeneity of the evolutionary process imposes fixed rates of evolution throughout the tree. In complex data problems this assumption is likely to call the results of the analyses into question.
In this work we propose an algorithm for parameter and branch length inference in discrete-time Markov processes on trees. This broad class of nonhomogeneous models comprises the general Markov model and all its submodels, including both stationary and nonstationary models. Here, we adapt the well-known Expectation-Maximization algorithm and present a detailed performance study of this approach for a selection of nonhomogeneous evolutionary models. We conduct an extensive performance assessment on multiple sequence alignments simulated under a variety of settings. We demonstrate high accuracy of the tool in parameter estimation and branch length recovery, proving the method to be a valuable tool for phylogenetic inference in real-life problems. ${\texttt{Empar}}$ is an open-source C++ implementation of the methods introduced in this paper and is the first tool designed to handle nonhomogeneous data.
[Keywords: nucleotide substitution models; branch lengths; maximum-likelihood; expectation-maximization algorithm.]{}
author:
- 'A. M. Kedzierska$^{1,2}$'
- 'M. Casanellas$^{2,**}$'
title: '${\texttt{Empar}}$: EM-based algorithm for parameter estimation of Markov models on trees.'
---
Assuming that an evolutionary process can be represented in a phylogenetic tree, the tips of the tree are assigned operational taxonomic units (OTUs) whose composition is known. Here, the OTUs are thought of as the DNA sequences of either a single or distinct taxa. Internal vertices represent ancestral sequences and inferring the branch lengths of the tree provides information about the speciation time.
Choice of the evolutionary model and the method of inference have a direct impact on the accuracy and consistency of the results [@SulSwo97; @Fel78; @BruHal99; @Pen94; @HueHil93; @Schwartz2010]. Assuming that the sites of a multiple sequence alignment (MSA) are independent and identically distributed (the i.i.d. hypothesis: all sites undergo the same process without affecting each other), the evolution of a set of OTUs along a phylogenetic tree ${\tau}$ can be modeled by the evolution of a single character under a hidden Markov process on ${\tau}$.
Markovian evolutionary processes assign a conditional substitution (transition) matrix to every edge of ${\tau}$. Most current software packages are based on continuous-time Markov processes, where the transition matrix associated to an edge $e$ is given in the form $\exp(Q^e t_e)$, where $Q^e$ is an instantaneous mutation rate matrix. Although in some cases the rate matrices are allowed to vary between different lineages (cf. [@Galtier1998],[@YY99]), it is not uncommon to equate them to a *homogeneous* rate matrix $Q$, which is constant for every lineage in ${\tau}$.
Relaxing the homogeneity assumption is an important step towards increased reliability of inference (see [@mitoch]). In this work, we consider a class of processes more general than the homogeneous ones: the discrete-time Markov processes. If ${\tau}$ is rooted, these models are given by a root distribution $\pi,$ and a set of transition matrices $A^e$ (e.g. chap. 8 of @Semple2003). The transition matrices $A^e$ can freely vary for distinct edges and are not assumed to be of exponential form, thus are highly applicable in the analyses of non-homogeneous data. Among these models we find the general Markov model (${\mathtt{GMM}}$) and all its submodels, e.g. discrete-time versions of the Jukes-Cantor model (denoted as ${\mathtt{JC69}^{\ast}}$), Kimura two-parameters (${\mathtt{K80}^{\ast}}$) and Kimura 3-parameters models (${\mathtt{K81}^{\ast}}$), and the strand symmetric model ${\mathtt{SSM}}$. Though the discrete-time models provide a more realistic fit to the data [@YY99; @Ripplinger2008; @Ripplinger2010], their complexity requires a solid inferential framework for accurate parameter estimation. In continuous-time models, *maximum-likelihood estimation* (MLE) was found to outperform Bayesian methods [@Schwartz2010]. The most popular programs of phylogenetic inference (PAML [@Yang1997], PHYLIP [@Felsenstein1989], PAUP\* [@PAUP]) are restricted to the homogeneous models. Though more realistic, the use of nonhomogeneous models in phylogenetic inference is not yet an established practice. Recently, [@Jayaswal2011] proposed two new non-homogeneous models. With the objective of testing stationarity, homogeneity and inferring the proportion of invariable sites, the authors propose an iterative procedure based on the *Expectation Maximization* (${\mathtt{EM}}$) algorithm to estimate parameters of the non-homogeneous models (cf. [@barryhartigan87]). The ${\mathtt{EM}}$ algorithm was formally introduced by [@Dempster1977] (cf. @Hartley1958). 
It is a popular tool to handle incomplete data problems or problems that can be posed as such (e.g. missing data problems, models with latent variables, mixture or cluster learning). This iterative procedure globally optimizes all the parameters conditional on the estimates of the hidden data and computes the maximum likelihood estimate in scenarios where, unlike in the fully-observed model, the analytic solution to the likelihood equations is intractable. An exhaustive list of references and applications can be found in [@Tanner1996], and more recently in [@Ambroise1998]. Here, we build on the work of [@Jayaswal2011] and present ${\texttt{Empar}}$, an MLE method based on the ${\mathtt{EM}}$ algorithm which allows for estimating the parameters of the (discrete-time) Markov evolutionary models. ${\texttt{Empar}}$ is an implementation suitable for phylogenetic trees on any number of leaves and currently includes the following evolutionary models: ${\mathtt{JC69}^{\ast}}$, ${\mathtt{K80}^{\ast}}$, ${\mathtt{K81}^{\ast}}$, ${\mathtt{SSM}}$ and ${\mathtt{GMM}}.$
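The E- and M-steps in this setting can be made concrete with a toy fit on the smallest nontrivial tree: two leaves joined at a hidden root, with a free root distribution and one transition matrix per edge. This is a sketch in the spirit of, but entirely independent from, the ${\texttt{Empar}}$ implementation; the monotone increase of the log-likelihood it checks is the characteristic guarantee of the ${\mathtt{EM}}$ algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def em_two_leaf(data, n_states=4, n_iter=40):
    """EM for a two-leaf star tree with a hidden root: parameters are the
    root distribution pi and one transition matrix per edge (a tiny
    instance of the general Markov model).
    data[s] = (state of leaf 1, state of leaf 2) at alignment site s."""
    x1, x2 = data[:, 0], data[:, 1]
    h1 = np.eye(n_states)[x1]            # one-hot encoded leaf observations
    h2 = np.eye(n_states)[x2]
    pi = np.full(n_states, 1.0 / n_states)
    A1 = rng.dirichlet(np.ones(n_states), size=n_states)
    A2 = rng.dirichlet(np.ones(n_states), size=n_states)
    logliks = []
    for _ in range(n_iter):
        # E-step: posterior distribution of the hidden root state per site.
        joint = pi[None, :] * (h1 @ A1.T) * (h2 @ A2.T)   # (sites, root states)
        site_lik = joint.sum(axis=1)
        logliks.append(np.log(site_lik).sum())
        post = joint / site_lik[:, None]
        # M-step: expected counts yield the updated parameter estimates.
        pi = post.mean(axis=0)
        A1 = post.T @ h1
        A1 /= A1.sum(axis=1, keepdims=True)
        A2 = post.T @ h2
        A2 /= A2.sum(axis=1, keepdims=True)
    return pi, A1, A2, logliks

# Simulate 2000 i.i.d. sites from known parameters, then refit with EM.
true_pi = np.array([0.1, 0.2, 0.3, 0.4])
true_A = np.full((4, 4), 0.05) + 0.80 * np.eye(4)   # rows sum to one
root = rng.choice(4, size=2000, p=true_pi)
leaf = lambda: np.array([rng.choice(4, p=true_A[r]) for r in root])
data = np.stack([leaf(), leaf()], axis=1)
pi, A1, A2, ll = em_two_leaf(data)
print(np.all(np.diff(ll) >= -1e-6))   # EM log-likelihood never decreases
```

A production implementation additionally has to sum efficiently over all $4^{|Int({\tau})|}$ hidden patterns on larger trees and to respect the constraints of the structured submodels, which is where the design of a tool like ${\texttt{Empar}}$ becomes nontrivial.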
We test the proposed method on simulated data and analyze the accuracy of the parameter and branch length recovery. The tests are conducted in a setting analogous to that of [@Schwartz2010] and evaluate the performance of ${\texttt{Empar}}$ on four- and six-taxon trees with several sets of branch lengths, under the ${\mathtt{JC69}^{\ast}}$ and ${\mathtt{K81}^{\ast}}$ models and varying alignment lengths. We present an in-depth theoretical study, investigating the dependence of the performance on factors such as model complexity, size of the tree, positioning of the branches, data and total tree lengths.
Our findings suggest that the method is a reliable tool for parameter inference for small sets of taxa, with the best results obtained for shorter branches.
The algorithm underlying ${\texttt{Empar}}$ was implemented in C++ and is freely available to download at <http://genome.crg.es/cgi-bin/phylo_mod_sel/AlgEmpar.pl>.
METHODS {#methods .unnumbered}
=======
Models {#models .unnumbered}
------
We fix a set of $n$ taxa labeling the leaves of a rooted tree ${\tau}$. We denote the set of all nodes of ${\tau}$ by $N({\tau})$, the set of leaves by $L({\tau})$, the set of interior nodes by $Int({\tau}),$ and the set of edges by $E({\tau}).$ We are given a DNA multiple sequence alignment (MSA) associated to the taxa in ${\tau}$ and a discrete-time Markov process on ${\tau}$ associated to an evolutionary model ${\mathcal{M}}$, where the nodes in ${\tau}$ are discrete random variables with values in the set of nucleotides $\{{\mathtt{A}},{\mathtt{C}},{\mathtt{G}},{\mathtt{T}}\}$. We assume that all sites in the alignment are i.i.d. and model evolution per site as follows: for each edge $e$ of ${\tau}$ we collect the conditional probabilities $P(y|x,e)$ (nucleotide $x$ being replaced by $y$ at the descendant node of $e$) in a transition matrix $A^e=(P(y|x,e))_{x,y}$; $\pi=(\pi_{{\mathtt{A}}},\pi_{{\mathtt{C}}},\pi_{{\mathtt{G}}},\pi_{{\mathtt{T}}})$ is the distribution of nucleotides at the root $r$ of ${\tau}$ and $\xi=\{\pi, (A^e)_{e}\}$ the set of continuous parameters of ${\mathcal{M}}$ on ${\tau}$. We denote by $X$ the set of $4^n$ possible patterns at the leaves and $Y$ the set of $4^{|Int({\tau})|}$ possible patterns at the interior nodes of ${\tau}.$ In what follows, the joint probability of observing $\textbf{x}=(x_l)_{l \in L({\tau})} \in X$ at the leaves and nucleotides $\textbf{y}=(y_v)_{v\in
Int({\tau})} \in Y$ at the interior nodes in ${\tau}$ is calculated as $$p_{\textbf{x},\textbf{y}}(\xi)=\pi_{y_r}\prod_{v \in N({\tau
---
abstract: 'We propose driven dissipative Majorana platforms for the stabilization and manipulation of robust quantum states. For Majorana box setups, in the presence of environmental electromagnetic noise and with tunnel couplings to quantum dots, we show that the time evolution of the Majorana sector is governed by a Lindblad master equation over a wide parameter regime. For the single-box case, arbitrary pure states (‘dark states’) can be stabilized by adjusting suitable gate voltages. For devices with two tunnel-coupled boxes, we outline how to engineer dark spaces, i.e., manifolds of degenerate dark states, and how to stabilize fault-tolerant Bell states. The proposed Majorana-based dark space platforms rely on the constructive interplay of topological protection mechanisms and the autonomous quantum error correction capabilities of engineered driven dissipative systems. Once a working Majorana platform becomes available, only standard hardware requirements are needed to implement our ideas.'
author:
- 'Matthias Gau,$^{1,2}$ Reinhold Egger,$^{1}$ Alex Zazunov,$^{1}$ and Yuval Gefen$^{2}$'
title: Towards dark space stabilization and manipulation in driven dissipative Majorana platforms
---
Introduction {#sec1}
============
It has been known for a long time that the dynamics of open quantum systems subject to external driving forces and coupled to environmental modes (‘heat bath’) can be described by master equations [@Weiss2007; @Breuer2006; @Gardiner2004]. For a Markovian bath, the memory time of the bath represents the shortest time scale of the problem. The master equation is then of Lindblad type [@Lindblad1976; @Lindblad1983], where a Hamiltonian describes the coherent time evolution of the system’s density matrix and a Lindbladian captures the dissipative dynamics. (We here use ‘Lindbladian’ for the dissipator terms in the master equations below.) The Lindblad equation is the most general Markovian master equation which preserves the trace and positive semi-definiteness of the density matrix.
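The structure of such a master equation can be illustrated with a minimal numerical sketch: generic spontaneous decay of a two-level system, not the Majorana-specific Lindbladian derived later in the paper. The example integrates the Lindblad equation by simple Euler steps and verifies two of its defining properties, trace preservation and the expected exponential decay:

```python
import numpy as np

# Two-level system, basis ordering (|g>, |e>).
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def lindblad_step(rho, H, L, gamma, dt):
    """One Euler step of the Lindblad master equation
    d rho/dt = -i[H, rho] + gamma (L rho L^+ - (1/2){L^+ L, rho})."""
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * (-1j * comm + gamma * diss)

gamma, dt, n_steps = 1.0, 1e-3, 2000             # evolve to t = 2 / gamma
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
H = 0.0 * sx                                     # no drive: pure spontaneous decay
for _ in range(n_steps):
    rho = lindblad_step(rho, H, sm, gamma, dt)
print(np.trace(rho).real)   # trace stays equal to one
print(rho[1, 1].real)       # excited population decays as exp(-gamma t)
```

A dark state in the sense used below is a density matrix annihilated by the full generator; here the ground state $|g\rangle\langle g|$ plays that role, since both the commutator and the dissipator vanish on it.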
A major development over the past two decades has come from the realization that driven dissipative (DD) quantum systems can be stabilized in a pure quantum state by appropriate engineering of the driving fields and of the coupling to the dissipative environment [@Plenio1999; @Beige2000; @Plenio2002; @Diehl2008; @Kraus2008; @Diehl2010; @Diehl2011; @Bardyn2013; @Zanardi2014; @Albert2014; @Jacobs2014; @Albert2016; @Goldman2016; @Wiseman2010]. Such states are eigenstates of the corresponding Lindbladian with zero eigenvalue, i.e., the operation of the Lindbladian leaves them inert. We therefore will refer to these DD stabilized states as *dark states* in what follows. Rather than viewing the coupling to a dissipative environment as foe (e.g., leading to decoherence of quantum states and undermining the utilization of similar platforms for quantum information processing), the combined effect of drive and dissipation can thus be harnessed to engineer quantum-coherent pure states. Going beyond dark states, the stabilization of a *dark space* [@Iemini2015; @Iemini2016; @Santos2020] — a manifold spanned by multiple degenerate dark states — raises the prospects of employing such systems as viable platform for quantum information processing. Reference [@Touzard2018] reports on recent experimental results in this direction.
Using trapped ions or superconducting qubits, the above ideas have already allowed for first qubit stabilization experiments [@Geerlings2013; @Lu2017; @Touzard2018], for the implementation of quantum simulators [@Barreiro2011; @Schindler2013], and for the generation of selected highly entangled multi-particle states [@Shankar2013; @Leghtas2013; @Reiter2016; @Liu2016]. Systems composed of many coupled qubits stabilized by DD mechanisms could eventually result in universal quantum computation platforms [@Verstraete2009; @Fujii2014], where fault tolerance is the consequence of autonomous error correction [@Terhal2015] due to the engineered dissipative environment, without the need for active feedback [@Wiseman2010; @Kerckhoff2010; @Murch2012; @Kapit2015; @Kapit2016]. Recent experimental progress on autonomous error correction in DD qubit systems has been described in Refs. [@Leghtas2013; @Liu2016; @Reiter2017; @Puri2019]. At present, reported fidelities in DD qubit setups (which by construction are stable in time) are typically below 90$\%$ for state stabilization, with significantly lower fidelities for single- or two-qubit gate operations.
Another important and at first glance unrelated development towards the (so far elusive) goal of fault-tolerant universal quantum computation comes from the field of topological quantum computation [@Nayak2008]. By using topological quasiparticles [@Wen2017] for encoding and processing quantum information, the latter is nonlocally distributed in space and thereby protected against local environmental fluctuations. In general terms, for practically useful and scalable DD systems with multiple degenerate dark states, the coupling to the environment has to be carefully engineered such that it is blind to all system operators acting within the targeted dark space manifold [@Facchi2000]. It will thus be imperative to avoid residual (uncontrolled and unwanted) noise sources. In that regard, platforms harboring topological quasiparticles may offer a key advantage since they should come with a strongly reduced intrinsic sensitivity to residual environmental fluctuations as compared to conventional systems. The simplest candidate for topological quasiparticles is given by Majorana bound states (MBSs), which are localized zero-energy states in topological superconductors. For Majorana reviews, see Refs. [@Alicea2012; @Leijnse2012; @Beenakker2013; @Sarma2015; @Aguado2017; @Lutchyn2018; @Zhang2019a]. Topological codes relying on MBSs have so far been discussed in the context of active error correction [@Alicea2011; @Terhal2012; @Hyart2013; @Vijay2015; @Aasen2016; @Landau2016; @Plugge2016; @Plugge2017; @Karzig2017; @Litinski2017; @Wille2019], where periodically repeated stabilizer measurements are needed for fault tolerance. It remains an important challenge to devise feasible and scalable Majorana platforms exploiting passive error correction strategies, where DD mechanisms serve to continuously measure the system in a way that the desired highly entangled many-body quantum state becomes stabilized automatically, see, e.g., Ref. [@Herold2017]. 
While this ambitious goal is beyond the scope of our work, we here analyze related questions for DD systems with up to eight MBSs.
For a mesoscopic floating (not grounded) topological superconductor harboring four MBSs, strong charging effects [@Fu2010] imply that the ground state is doubly degenerate under Coulomb valley conditions (see Sec. \[sec2a\] for details). Such a superconducting island is therefore a good candidate for a topologically protected Majorana qubit, named Majorana box qubit [@Plugge2017] or tetron [@Karzig2017]. Thanks to the nonlocal Majorana encoding of quantum information, such a qubit allows for unique addressability options via electron cotunneling when quantum dots (QDs) or normal leads are attached to the island by tunneling contacts, see also Refs. [@Gau2018; @Munk2019]. Majorana qubits have not yet been experimentally realized. However, the recent emergence of new Majorana platforms (see, e.g., Refs. [@Liu2018b; @Zhang2018b; @Wang2018b; @Sajadi2018; @Ghatak2018; @Murani2019]) in addition to the semiconductor nanowire platform mainly explored so far [@Lutchyn2018; @Zhang2019a] indicates that they may be available in the foreseeable future. We note that alternative Majorana qubit designs have been put forward, e.g., in Refs. [@Terhal2012; @Hyart2013; @Aasen2016]. Many of the ideas discussed below can be adapted to those setups as well.
Motivation and goals of this work
---------------------------------
We here show that once available, Majorana box devices yield highly attractive platforms for implementing DD protocols aimed at the realization of dark states and/or dark spaces. The driving field is applied to the tunnel link connecting a pair of QDs, and dissipation is due to environmental electromagnetic noise. To the best of our knowledge, apart from a distantly related proposal for the DD stabilization of Majorana-based quantum memories [@Bardyn2016], no studies of DD Majorana systems have appeared in the literature so far. We note that the DD engineering of MBSs in cold-atom based Kitaev chains [@Diehl2011; @Bardyn2013; @Goldman2016] differs from our ideas: We consider topological superconductors harboring native MBSs, and then subject the resulting Majorana systems to DD stabilization and manipulation protocols targeting dark states and/or dark spaces. Our unique platform enables us to employ QDs as external knobs to be used not only for state engineering but also for state manipulation.
Our motivation for designing and studying novel DD stabilization and manipulation schemes using Majorana platforms rests on several arguments and expectations:
1. Since uncontrolled environmental effects are largely suppressed by topological protection mechanisms, one may reach higher fidelities than those reported so far for DD dark state or dark space implementations using conventional
---
author:
- 'E. Daddi'
- 'F. Valentino'
- 'R. M. Rich'
- 'J. D. Neill'
- 'M. Gronke'
- 'D. O’Sullivan'
- 'D. Elbaz'
- 'F. Bournaud'
- 'A. Finoguenov'
- 'A. Marchal'
- 'I. Delvecchio'
- 'S. Jin'
- 'D. Liu'
- 'A. Calabro'
- 'R. Coogan'
- 'C. D’Eugenio'
- 'R. Gobat'
- 'B. S. Kalita'
- 'P. Laursen'
- 'D.C. Martin'
- 'A. Puglisi'
- 'E. Schinnerer'
- 'V. Strazzullo'
- 'T. Wang'
date: 'Received / Accepted '
title: 'Three Lyman-$\alpha$ emitting filaments converging to a massive galaxy group at z=2.91: a case for cold gas infall'
---
Introduction
============
A fundamental phenomenon required to explain the evolution of massive galaxies at high redshifts is the efficient accretion of cold gas streaming along filaments, surviving the shocks at the virial radii of their massive halos and delivering the required fuel to galaxies (Dekel et al. 2009; Kereš et al. 2005). This scenario is intimately connected to our current understanding of the star formation and growth of galaxies at [*cosmic noon*]{} $1<z<3$ (and earlier), whose key observational features might be summarized with two basic tenets: the existence of tight correlations between the stellar mass and star formation rates (SFRs) in galaxies (the so-called Main Sequence of star formation; Noeske et al 2007; Elbaz et al 2007; Daddi et al 2007; and many other works) and the systematic increase of gas fractions along with specific SFRs as a function of redshift for typical Main Sequence galaxies (Daddi et al 2008; 2010; Tacconi et al 2010; Magdis et al 2012; Genzel et al. 2015; plus many others). The finding that star-forming galaxies at these redshifts are much more common than quiescent systems (e.g., Ilbert et al 2010), coupled to the tight Main Sequence correlations, implies that star formation in galaxies occurs and persists over timescales much longer than their typical stellar doubling times and gas consumption timescales, which requires constant replenishment of their gas reservoirs (e.g., Lilly et al 2013).
Cold accretion frameworks quite satisfactorily account for this observational evidence, as they predict that cold material, nearly ready to form stars, accretes at rates proportional to the hosting halo mass (Neistein & Dekel 2008; Dekel et al. 2013), thus naturally resulting in Main-Sequence-like behaviour (as recognised by theory even before observational confirmation, see e.g. Finlator et al. 2006). Also, accretion rates at fixed mass are predicted to evolve rapidly with redshift, with trends (scaling as $(1+z)^{\alpha}$ with $\alpha\approx2$–3) that correspond well to the evolving behaviour of the Main Sequence normalization (e.g., Sargent et al 2012) and gas fractions (Magdis et al 2012; Genzel et al 2015). Not everything is fully reconciled: for example, a tension between predicted and observed star formation rates in typical galaxies at [*cosmic noon*]{} has persisted for over a decade (e.g., Daddi et al. 2007), but it is generally understood as due to limitations in the modelling of feedback and the subsequent implications for gas consumption and the baryon cycle (Somerville & Davé 2015).
Now, despite more than a decade of effort, direct, convincing observational confirmation of the existence of such cold accreting gas is still lacking, so that the theory is necessarily being questioned. On the observational side it appears that outflows are actually widespread in the circumgalactic gas around galaxies, with hardly any sign of inflows (Steidel et al. 2010). From the theoretical side, the latest generation of high-resolution simulations now calls into question whether streams can survive the interaction with the hot baryons in halos and remain stable (Nelson et al 2015; Mandelker et al 2019). Also, numerical simulations of cold streams have been questioned for not having the resolution required to capture the small-scale gas physics (Cornuault et al. 2018), making it unclear whether predictions can be taken quantitatively. This uncertainty on the feeding of galaxy activity also limits our understanding of feedback processes (e.g., Gabor & Bournaud 2014; Dekel & Mandelker 2014).
It is widely recognised that the most promising avenue to reveal these cold gas streams is through their collisionally excited emission (Dijkstra & Loeb 2009; Goerdt et al. 2010; Rosdahl & Blaizot 2012), possibly enhanced by hydrodynamical instabilities (e.g., Mandelker et al 2020a). It is much more difficult to ascertain whether any observed extended emission is due to collisions or rather to recombinations following photo-ionization from star formation and/or AGN activity. Even more fundamental is the difficulty of properly distinguishing whether the emission comes from outflowing or infalling gas, given that broadly they would give rise to similar instability-driven phenomenology (e.g., Cornuault et al. 2018).
Giant nebulae are now routinely discovered around QSOs at redshifts $2<z<4$ (e.g., Borisova et al 2016; Arrigoni Battaia et al 2019; Cai et al 2019; O’Sullivan et al 2020), with detections as high as $z\sim6.6$ (Farina et al. 2019), and could potentially provide large samples to statistically search for the role of infall. Filamentary structures, sometimes found in QSO nebulae (Cantalupo et al. 2014; Hennawi et al. 2015), might be consistent with gas infall (e.g., Martin et al. 2015a; 2019). However, it is not easy to rule out alternative interpretations: outflows (Fiore et al. 2017; Guo et al 2020; Veilleux et al 2020) overshadow the expected infall in luminous QSO hosting halos by orders of magnitude in both energy and gas flows (see the quantitative discussion later in this work). Also, the emission there is certainly photoionized by the hard UV photons emerging from the QSO, making it prohibitive to gauge not only whether any gravitationally driven emission is present at all in QSO nebulae, but also whether any infall is actually taking place.
Filaments shining in Lya emission have recently been found also in the SSA22a-LAB1 protocluster environment (Umehata et al. 2020), but the giant nebula does not appear to be consistent with arising from infall (Herenz et al. 2020), and the filaments are situated at locations that are currently impossible to connect directly to individual dark matter (DM) halos. Both the QSO filaments and the SSA22a-LAB1 filaments are remarkably extended over Mpc scales, much more than any putative hosting halo virial radius. These features led these studies to assert connections to the cosmic web. However, theory predicts that cold streams should have a detectable Lya surface brightness only within the virial radius of massive halos, where the hot gas can efficiently confine them and enhance their density (Dekel et al. 2009; Dijkstra & Loeb 2009). We conclude that the nature and origin of the filaments reported to date are therefore in doubt.
A critical test for models would then be the search for cold accreting gas in distant and massive halos, in environments where the contrast with competing mechanisms for gas flows and for powering the detectable Lya emission is maximal. The first requirement follows from the fact that the dark and baryonic matter accretion rates increase with both the halo mass and redshift (Neistein & Dekel 2008; Dekel et al. 2009), and is not trivial to address when considering that massive halos become rarer in the distant Universe because of their hierarchical assembly. Moreover, the necessity to exclude alternative mechanisms suggests a move away from extreme sources such as QSOs, focusing on structures where the black hole and star formation activities proceed at a standard pace. Both lines of argument point to high-redshift clusters or groups as ideal testbeds for comparing theory to observations and searching for the evidence of cold accreting gas, as already seminally suggested in Valentino et al. (2015; see their Fig. 17 and related discussion) and Overzier et al. (2016; see their Fig. 11 and related discussion). This is because such clusters/groups would provide the opportunity to search for non-photoionized Lya emission in an environment where the role of outflows could be minimal and where filaments could be studied in connection to the halo they are streaming into, thus enabling quantitative comparison to cold accretion theory. This work presents one such plausible candidate.
Following the serendipitous discovery (Valentino et al. 2016) of a giant halo centered on the X-ray detected cluster CL 1449 at $z=1.99$, we have pursued this avenue and started systematic observations of several structures at $2<z<3.5$ with the Keck Cosmic Web Imager (KCWI), searching for
[**Comment on “Domain Structure in a Superconducting Ferromagnet”**]{}\
According to Faurè and Buzdin [@FB] in a superconducting ferromagnet a domain structure with a period small compared with the London penetration depth $\lambda$ can arise. They claim that this contradicts the conclusion of Ref. that ferromagnetic domain structure in the Meissner state of a superconducting ferromagnet is absent at equilibrium. Actually, there is no contradiction: The results of Ref. have only been misunderstood.
First of all it is necessary to properly define what is a ferromagnetic domain structure. A distinctive feature of a ferromagnetic state is a nonzero average spontaneous magnetization $\vec M$ in a [*macroscopic*]{} volume. This takes place even in a ferromagnet with domains, since in ferromagnets the domain size $l$ is macroscopic. It depends on the size and shape of the sample and on the orientation of $\vec M$ with respect to the sample surface. For example, in a ferromagnetic slab of thickness $L$, but infinite in other directions, there are no domains if $\vec M$ is parallel to the slab surface. But if $\vec M$ is normal to the surface, the stripe domains of the macroscopic size $l \propto \sqrt{L}$ appear at equilibrium [@LL].
On the other hand, from the very beginning of studying the coexistence of the ferromagnetism and superconductivity it was known that competition between ferromagnetism and superconductivity may lead to structures with periodic variation of the $\vec M$ direction in space. The period of these structures is determined by the intrinsic parameters of the material, is normally smaller than $\lambda$, and does not depend on the size and shape of the sample. Appearance of this structure means that ferromagnetism has lost competition with superconductivity and the “superconducting ferromagnet” is not a ferromagnet in a strict sense: this is an antiferromagnetic structure with a large but finite period. Various types of such structures were known: the cryptoferromagnetic alignment of Anderson and Suhl [@AS], the spiral structure of Blount and Varma [@BV], or the domain structure of Krey [@Krey]. One can find these and other references in the review [@BB] cited in Ref. . The second paragraph in Ref. clearly emphasized the difference between the ferromagnetic macroscopic domains and these structures (let us call them intrinsic domain structures) and specifically warned that the paper addressed the case when the material is stable with respect to formation of intrinsic domains.
Faurè and Buzdin [@FB] considered the intrinsic domain structure, which was analyzed by Krey [@Krey] more than 30 years ago. They rederived the structure parameters obtained by him. The domain size given by Faurè and Buzdin in Eq. (7), $l \sim \tilde w^{1/3} \lambda^{2/3}$, coincides with that given by Krey in his Eq. (30) (apart from notations). Here $\tilde w \sim (K/2\pi M^2) \delta$, $K$ is the energy of the easy-axis anisotropy, and $\delta$ is the domain-wall thickness. The condition for formation of this structure obtained by Krey also coincides with that of Faurè and Buzdin: $\lambda > \tilde w$. Thus in the limit $L\to \infty$ they obtained the intrinsic domain structure in the state which is globally antiferromagnetic. The structure can appear in any sample whatever its demagnetization factors are, in particular, in the slab of thickness $L$ independently of whether $\vec M$ is normal or parallel to the slab plane. Certainly the results of Ref. cannot be relevant for this state, as clearly warned there. Faurè and Buzdin claimed that their results for thin slabs (small $L$) disagree with Ref. , though Ref. did not consider finite-$L$ corrections at all, addressing (like Refs. ) only the macroscopic limit, when $L$ exceeds any intrinsic scales (including $\lambda$) or any combination of them. Only then does the difference between intrinsic domains and [*macroscopic*]{} domains have a clear meaning.
Though time and again Faurè and Buzdin stressed contradiction to Ref. , in reality they confirmed its conclusion: If the superconducting ferromagnet is stable with respect to formation of intrinsic domains, macroscopic domains also do not appear. They claim that the area of stability, for which the analysis of Ref. is relevant, corresponds to “the nonrealistic limit of vanishing $\lambda$”. In reality Krey’s stability condition $\lambda < \tilde w \sim (K/2\pi M^2)\delta$ (not $\lambda \ll \tilde w$!) is not so severe and allows the values of $\lambda$ essentially larger than the domain-wall width $\delta$. Indeed, the ratio $K/2\pi M^2$, which is called the quality factor of the magnetic material, can be rather high. This is required for various applications of magnetic materials [@MS]. The quality factor is especially high for weak ferromagnetism, which is the most probable case for the coexistence of ferromagnetism and superconductivity.
E.B. Sonin\
Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel\
PACS numbers: 75.60.Ch, 74.25.Ha, 74.90.+n
[99]{}
M. Faurè and A.I. Buzdin, Phys. Rev. Lett [**94**]{}, 187202 (2005). E.B. Sonin, Phys. Rev. B [**66**]{}, 100504(R) (2002). L.D. Landau and E.M. Lifshitz, [*Electrodynamics of Continuous Media*]{} (Pergamon Press, Oxford, 1984). P.W. Anderson and H. Suhl, Phys. Rev. [**116**]{}, 898 (1959). E.L. Blount and C.M. Varma, Phys. Rev. Lett [**42**]{}, 1079 (1979). U. Krey, Intern. J. Magnetism, [**3**]{}, 65 (1972). L. N. Bulaevskii, A. I. Buzdin, M. L. Kulić, and S. V. Panjukov, Adv. Phys. [**34**]{}, 176 (1985). A.P. Malozemoff and J.C. Slonczewski, [*Magnetic Domain Walls in Bubble Materials*]{} (Academic Press, N.Y., 1979).
---
abstract: |
Let $U$ be an operator in a Hilbert space $\mathcal{H}_{0}$, and let $\mathcal{K}\subset\mathcal{H}_{0}$ be a closed and invariant subspace. Suppose there is a period-$2$ unitary operator $J$ in $\mathcal{H}_{0}$ such that $JUJ=U^{\ast}$, and $PJP\geq0$, where $P$ denotes the projection of $\mathcal{H}_{0}$ onto $\mathcal{K}$. We show that there is then a Hilbert space $\mathcal{H}\left( \mathcal{K}\right) $, a contractive operator $W\colon\mathcal{K}\rightarrow\mathcal{H}\left( \mathcal{K}\right) $, and a selfadjoint operator $S=S\left( U\right) $ in $\mathcal{H}\left( \mathcal{K}\right) $ such that $W^{\ast}W=PJP$, $W$ has dense range, and $SW=WUP$. Moreover, given $\left( \mathcal{K},J\right) $ with the stated properties, the system $\left( \mathcal{H}\left( \mathcal{K}\right) ,W,S\right) $ is unique up to unitary equivalence, and subject to the three conditions in the conclusion. We also provide an operator-theoretic model of this structure where $U|_{\mathcal{K}}$ is a pure shift of infinite multiplicity, and where we show that $\ker\left( W\right) =0$. For that case, we describe the spectrum of the selfadjoint operator $S\left( U\right) $ in terms of structural properties of $U$. In the model, $U$ will be realized as a unitary scaling operator of the form$$f\left( x\right) \longmapsto f\left( cx\right) ,\qquad c>1,$$ and the spectrum of $S\left( U_{c}\right) $ is then computed in terms of the given number $c$.
address: |
Department of Mathematics\
The University of Iowa\
Iowa City, IA 52242-1419\
U.S.A.
author:
- 'Palle E. T. Jorgensen'
bibliography:
- 'jorgen.bib'
title: Diagonalizing operators with reflection symmetry
---
[^1]
\[Int\]Introduction
===================
The paper is motivated by two problems one from mathematical physics, and the other from the interface of integral transforms and interpolation theory. The first problem is that of changing the spectrum of an operator, or a one-parameter group of operators, with a view to getting a new spectrum with physical desiderata (see, e.g., [@Seg98]), for example creating a mass gap, and still preserving quasi-equivalence of the two underlying operator systems. In the other problem we study how Hilbert space functional completions change under the variation of a parameter in the integral kernel of the transform in question. The motivating example here is derived from a certain version of the Segal–Bargmann transform. For more detail on the background and the applications alluded to in the Introduction, we refer to the two previous joint papers [@JoOl98] and [@JoOl99], as well as [@Nee94] and [@Hal98].
Let $U$ be an operator in a Hilbert space $\mathcal{H}_{0}$, and let $J$ be a period-$2$ unitary operator in $\mathcal{H}_{0}$ such that$$JUJ=U^{\ast}. \label{eqInt.1}$$ We think of (\[eqInt.1\]) as a reflection symmetry for the given operator $U$. In this case, $U$ and its adjoint $U^{\ast}$ have the same spectrum, but, of course, $U$ need not be selfadjoint. Nonetheless, we shall think of (\[eqInt.1\]) as a notion which generalizes selfadjointness. As an example, let the Hilbert space $\mathcal{H}_{0}=L^{2}\left( \mathbb{T}\right) $, $$\left( Uf\right) \left( z\right) =zf\left( z\right) ,\qquad f\in L^{2}\left( \mathbb{T}\right) ,\;z\in\mathbb{T}, \label{eqInt.2ins}$$ and$$Jf\left( z\right) =f\left( \bar{z}\right) . \label{eqInt.3ins}$$ The space $L^{2}\left( \mathbb{T}\right) $ is defined from Haar measure on the circle group $\mathbb{T}=\left\{ z\in\mathbb{C}\mathrel{;}\left| z\right| =1\right\} $. It is clear that (\[eqInt.1\]) then holds. If $\mathcal{K}=H^{2}\left( \mathbb{T}\right) $ is the Hardy space of functions, $f\left( z\right) =\sum_{n=0}^{\infty}c_{n}z^{n}$, with $\left\| f\right\| ^{2}=\sum_{n=0}^{\infty}\left| c_{n}\right| ^{2}<\infty$, then we also have$$PJP\geq0 \label{eqInt.2bis}$$ where $P$ denotes the projection onto $H^{2}\left( \mathbb{T}\right) $. In fact$$\left\langle f,Jf\right\rangle =\left| c_{0}\right| ^{2}, \label{eqInt.3bis}$$ where $\left\langle \,\cdot\,,\,\cdot\,\right\rangle $ denotes the inner product in $L^{2}\left( \mathbb{T}\right) $. While our result applies to the multiplicity-one shift, this is a degenerate situation, and the nontrivial applications are for the case of infinite multiplicity.
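The identity (\[eqInt.3bis\]) can be checked numerically: sampling a Hardy-space polynomial $f(z)=\sum_{n\geq0}c_{n}z^{n}$ on the unit circle, the inner product $\left\langle f,Jf\right\rangle$ collapses to $\left| c_{0}\right| ^{2}$, since $(Jf)(z)=f(\bar z)$ contains only nonpositive frequencies. A minimal sketch, not part of the original argument (the coefficients below are arbitrary illustrative choices):

```python
import numpy as np

# f(z) = sum_{n>=0} c_n z^n, a Hardy-space element (polynomial for simplicity)
c = np.array([1.5, -0.3 + 2.0j, 0.7j, 0.25])   # illustrative c_0, c_1, c_2, c_3

M = 512                                        # quadrature points on the circle
z = np.exp(2j * np.pi * np.arange(M) / M)

f = np.polyval(c[::-1], z)                     # f(z) on the unit circle
Jf = np.polyval(c[::-1], np.conj(z))           # (Jf)(z) = f(z-bar)

# <f, Jf> = (1/2*pi) * integral of conj(f(z)) (Jf)(z) d(theta), as a Riemann sum;
# only the m = n = 0 Fourier mode survives, giving |c_0|^2
inner = np.mean(np.conj(f) * Jf)

assert np.isclose(inner, abs(c[0]) ** 2)
print(round(inner.real, 6))                    # → 2.25
```

The computation makes the positivity $PJP\geq0$ concrete: on $H^{2}\left( \mathbb{T}\right) $ it rests entirely on the single mode $c_{0}$.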
There is in fact an infinite-multiplicity version of the above which we proceed to describe. Let $0<s<1$ be given, and let $\mathcal{H}_{s}$ be the Hilbert space whose norm $\left\| f\right\| _{s}$ is given by$$\left\| f\right\| _{s}^{2}=\int_{\mathbb{R}}\int_{\mathbb{R}}\overline{f\left( x\right) }\,\left| x-y\right| ^{s-1}f\left( y\right) \,dx\,dy. \label{eqInt.4}$$ Let $a\in\mathbb{R}_{+}$ be given, and set$$\left( U\left( a\right) f\right) \left( x\right) =a^{s+1}f\left( a^{2}x\right) . \label{eqInt.5}$$ It is clear that then $a\mapsto U\left( a\right) $ is a unitary representation of the multiplicative group $\mathbb{R}_{+}$ acting on the Hilbert space $\mathcal{H}_{s}$. It can be checked that $\left\| f\right\| _{s}$ in (\[eqInt.4\]) is finite for all $f\in C_{c}\left( \mathbb{R}\right) $ ($=$ the space of compactly supported functions on the line). Now let $\mathcal{K}$ ($=\mathcal{K}_{s}$) be the closure of $C_{c}\left( -1,1\right) $ in $\mathcal{H}_{s}$ relative to the norm $\left\| \,\cdot\,\right\| _{s}$ of (\[eqInt.4\]). It is then immediate that $U\left( a\right) $, for $a>1$, leaves $\mathcal{K}_{s}$ invariant, i.e., it restricts to a semigroup of isometries $\left\{ U\left( a\right) \mathrel{;}a>1\right\} $ acting on $\mathcal{K}_{s}$. Setting$$\left( Jf\right) \left( x\right) =\left| x\right| ^{-s-1}f\left( \frac{1}{x}\right) ,\qquad x\in\mathbb{R}\setminus\left\{ 0\right\} , \label{eqInt.6}$$ we check that $J$ is then a period-$2$ unitary in $\mathcal{H}_{s}$, and that $$JU\left( a\right) J=U\left( a\right) ^{\ast}=U\left( a^{-1}\right) \label{eqInt.7}$$ and$$\left\langle f,Jf\right\rangle _{\mathcal{H}_{s}}\geq0,\qquad\forall\,f\in\mathcal{K}_{s}, \label{eqInt.8}$$ where $\left\langle \,\cdot\,,\,\cdot\,\right\rangle _{\mathcal{H}_{s}}$ is the inner product $$\left\langle f_{1},f_{2}\right\
---
abstract: 'We discuss the properties of the distributions of energies of minima obtained by gradient descent in complex energy landscapes. We find strikingly similar phenomenology across several prototypical models. We particularly focus on the distribution of energies of minima in the analytically well-understood $p$-spin-interaction spin glass model. We numerically find non-Gaussian distributions that resemble the Tracy-Widom distributions often found in problems of random correlated variables, and non-trivial finite-size scaling. Based on this, we propose a picture of gradient descent dynamics that highlights the importance of a first-passage process in the eigenvalues of the Hessian. This picture provides a concrete link to problems in which the Tracy-Widom distribution is established. Aspects of this first-passage view of gradient-descent dynamics are generic for non-convex complex landscapes, rationalizing the commonality that we find across models.'
author:
- 'Horst-Holger Boltz'
- Jorge Kurchan
- 'Andrea J. Liu'
title: Fluctuation Distributions of Energy Minima in Complex Landscapes
---
Introduction
============
The notion of an underlying complex energy landscape in glassy, disordered systems is useful [@goldstein; @stillinger; @wales; @onuchic; @heuer1997; @*heuer2008; @krzakala2007; @berthier2011; @charbonneau2014] to the extent that the landscape can be reduced to relatively few properties that are relevant to observed phenomena. The complexity, which counts stationary points in the landscape (minima, saddles, maxima) is an example of such a property. An energy landscape is complex if the number of stationary points depends exponentially on the system size.
An intuitive approach to probing complexity is to do a naive search for minima using gradient descent. [@numrec] One follows an initial configuration along the (negative) gradient flow of the energy until a stationary point (vanishing gradient) is found. Because a numerical descent almost certainly ends in a minimum, gradient descent does not only constitute the simplest form of physical dynamics in a complex landscape, a quench to zero temperature, but also the most intuitive and simplest form of optimization. If one starts with flatly sampled random initial positions (corresponding to infinite-temperature $T=\infty$ configurations), gradient descent has the added advantage of sampling local minima with a probability that can be calculated because it is proportional to the volumes of their basins of attraction [@xu2011; @frenkel2017]. Finally, in addition to being a local optimization strategy, gradient descent is also the archetypal greedy algorithm, particularly if one considers a discretized version as one does with any numerical implementation: in every time step the displacement with the largest expected loss in energy is chosen. Within the field of glassy systems, gradient descent is used to obtain “inherent structures” [@stillinger1984; @*stillinger1985; @sciortino1999; @sastry2001; @debenedetti2001], *i.e.* the minima at the bottom of the local basin of attraction around which the system thermally fluctuates, while in machine learning, gradient descent is the original go-to learning strategy [@lecun2015]. Gradient descent is also used to obtain jammed packings of repulsive soft spheres, which are the least stable packings that are mechanically rigid [@ohern2003; @goodrich2014].
Here we look at the shape of the distribution of minima obtained by gradient descent for several different models, with particular focus on the spherical $p$-spin-interaction spin glass. Such distributions, for example for jamming, have been assumed to be Gaussian [@ohern2003]. Our central finding is that for all of these models, the distributions are non-Gaussian with non-trivial tail exponents on one side that are consistent with the Tracy-Widom distribution. We rationalize this finding with a novel perspective that might be the starting point for an eventual analytical approach.
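For orientation, the Tracy-Widom distribution referenced here is most familiar as the law of the scaled largest eigenvalue of large random matrices. A minimal self-contained sketch (our illustration, not code from this work; the matrix size and sample count are arbitrary small-scale choices) draws GOE matrices and checks that the largest eigenvalue concentrates at the semicircle edge $2\sqrt{N}$, around which the $O(N^{-1/6})$ fluctuations follow the Tracy-Widom law:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 100, 50

lam_max = np.empty(samples)
for t in range(samples):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2)      # GOE: symmetric, off-diagonal variance 1
    lam_max[t] = np.linalg.eigvalsh(H)[-1]

# Semicircle edge at 2*sqrt(N); the finite-N mean sits slightly below it,
# shifted by N^(-1/6) times the (negative) mean of the Tracy-Widom-1 variable.
print(lam_max.mean() / np.sqrt(N))  # close to 2
```

A histogram of `lam_max` already shows the characteristic asymmetry of Tracy-Widom, with a heavier right tail than left, which is the same tail structure discussed for the minima distributions below.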
In Sec. II we introduce the models studied. We then present our numerical results in Sec. III and use established results for the $p$-spin model in Sec. IV to formulate a toy process that allows us to understand these numerical results. We close in Sec. V with some final remarks on the applicability of these ideas to other contexts.
Models & Complexity
===================
We study various models with complex landscapes. A unifying perspective is provided by all of them being random constraint-satisfaction problems, i.e. assemblies of equations or inequalities. Generically, the question of interest is whether a specific realization allows for an assignment of the variables that satisfies all constraints or whether there are frustrations (which are easily introduced in randomized problems) that prevent satisfaction of all of the constraints. Generically there is a (SAT/UNSAT) transition between a phase where a satisfying assignment is possible (SAT) and a phase where this is not possible (UNSAT) upon tweaking the hardness of the satisfaction problem, e.g. by changing a control parameter such as the ratio of (in-)equalities and variables. Versions with discrete (particularly Boolean) variables are of fundamental importance to computer science[@cook1971], whereas SAT/UNSAT transitions in continuous constraint-satisfaction problems are conjectured to form an important universality class [@franz2017] in statistical physics. The focus of our attention is the spherical $p$-spin model which we therefore introduce first, before the $k$-SAT, perceptron and jamming models.
#### The p-spin model.
Specifically we consider the spherical $p$-spin model[@crisanti1992; @kurchan1996]: *i.e.*, we have $N$ spins $S_i$ whose combined length is constrained to $\sum S_i^2=N$ (leaving effectively $N-1$ degrees of freedom) with an energy functional $$\begin{aligned}
H &= \sum_{i_1<i_2<\ldots<i_p} J_{i_1,i_2,\ldots,i_p} S_{i_1} S_{i_2} \ldots S_{i_p}\end{aligned}$$ containing random Gaussian couplings $J$ with mean zero and variance $\langle J^2\rangle_c = N/\#J$. Here, $\#J \sim N^p$ is the number of terms appearing in the energy functional (while adhering to the constraint of ascending indices). We use this convention to account for finite-size effects from lower-order terms, but ultimately only the scaling with $N$ is important. Note that particularly in the older physics literature a different convention is used that introduces an additional factor of two here. This energy is an extensive quantity scaling with system size and we therefore also introduce the corresponding intensive quantity $\varepsilon = E/N$. As the qualitative nature of the energy landscape defined by this functional is independent of $p$ for $p>2$ ($p=2$ corresponds to a convex eigenvalue problem and therefore only has a single, trivially global minimum), we choose to limit ourselves to the numerically most accessible case of $p=3$. Still, the cost of a simple evaluation of the energy inevitably scales as $N^p$.
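The quench protocol can be made concrete with the following sketch (our own illustrative implementation, not the authors' code; $N$, the step size, and the iteration count are arbitrary small-scale choices). It draws Gaussian couplings with variance $N/\#J$, runs gradient descent from an infinite-temperature initial condition, and re-projects onto the sphere $\sum_i S_i^2 = N$ after every step:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, p = 32, 3
nJ = N * (N - 1) * (N - 2) // 6          # number of couplings #J for p = 3

# Fully symmetric coupling tensor with one Gaussian J_{i<j<k} per triple
J = np.zeros((N, N, N))
for (i, j, k) in itertools.combinations(range(N), 3):
    v = rng.normal(0.0, np.sqrt(N / nJ))  # variance <J^2>_c = N / #J
    for a, b, c in itertools.permutations((i, j, k)):
        J[a, b, c] = v

def energy(S):
    # H = sum_{i<j<k} J_ijk S_i S_j S_k (the symmetrized sum overcounts by 3! = 6)
    return np.einsum('ijk,i,j,k->', J, S, S, S) / 6.0

def gradient(S):
    # dH/dS_l = (1/2) sum_{j,k} J_ljk S_j S_k
    return 0.5 * np.einsum('ljk,j,k->l', J, S, S)

# Flatly sampled (T = infinity) initial condition on the sphere sum_i S_i^2 = N
S = rng.standard_normal(N)
S *= np.sqrt(N) / np.linalg.norm(S)

eta = 0.05
for _ in range(2000):
    g = gradient(S)
    g -= (g @ S / N) * S                 # keep only the component tangent to the sphere
    S -= eta * g
    S *= np.sqrt(N) / np.linalg.norm(S)  # re-project onto the spherical constraint

print(energy(S) / N)                     # intensive energy of the minimum reached
```

At such a small $N$ the final intensive energy scatters strongly from sample to sample; the threshold value $\varepsilon_{th}=-\sqrt{8/3}\approx-1.63$ is only approached for much larger systems and long descent times.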
The energy scale $\varepsilon_{th}=-2\sqrt{(p-1)/p}=-\sqrt{8/3}$ is called the threshold energy as it constitutes the upper energy boundary below which an exponentially large number of stationary points exist. This is quantified by looking at the (cumulative) complexities. If we define $\mathcal{N}_k(\varepsilon)$ to be the number of stationary points of index $k$ with an (intensive) energy not larger than $\varepsilon$, the corresponding complexity $\Sigma(\varepsilon)$ is given by $$\begin{aligned}
\Sigma(\varepsilon) &= \frac{1}{N} \log \mathcal{N}_k(\varepsilon) \text{.} \label{eq:complex}\end{aligned}$$ The complexity was studied earlier within the TAP approach [@crisanti] and has been the subject of rigorous mathematical analysis in the limit of large $N$ [@auffinger]. Remarkably, a qualitatively similar structure has been found for rather small system sizes by numerical enumeration of the critical points [@mehta].
In this paper, we focus on the shape of the distribution of energies of minima, as obtained by gradient descent for finite systems. This corresponds to the shape of the normalized distribution corresponding to $\mathcal{N}_{k \equiv 0}(\varepsilon)$. The distribution of final energies found as a result of gradient descent for the $p$-spin model is shown in fig. \[fig:combo\_pdf\] (a).
For a suitable choice of couplings, the $p$-spin model provides a natural energy landscape for the optimization problem corresponding to a $k$-SAT decision problem [@mezard2002]. The model also provides insight into structural glasses [@cugliandolo1993]. It is also a valuable model in its own right. The overall gestalt of the energy landscape, as captured by the complexities, is the relevant property that drives interest in the $p$-spin model as a prototypical complex energy landscape. Physical systems usually have a well-defined notion of a ground-state energy, which sets a lower bound to the extensive number of minima. Additionally, the existence of an upper bound reflects the “over-frustration” of a complex system: it is exponentially hard to construct a state with an energy less favorable than some native scale.
#### The $k$-SAT model.
The prototypical satisfiability problem is that of Boolean (or propositional) satisfaction,
---
abstract: |
We study classically scale invariant models in which the Standard Model Higgs mass term is replaced in the Lagrangian by a Higgs portal coupling to a complex scalar field of a dark sector. We focus on models that are weakly coupled with the quartic scalar couplings nearly vanishing at the Planck scale. The dark sector contains fermions and scalars charged under dark $SU(2)\times
U(1)$ gauge interactions. Radiative breaking of the dark gauge group triggers electroweak symmetry breaking through the Higgs portal coupling. Requiring both a Higgs boson mass of 125.5 GeV and stability of the Higgs potential up to the Planck scale implies that the radiative breaking of the dark gauge group occurs at the TeV scale. We present a particular model which features a long-range abelian dark force. The dominant dark matter component is neutral dark fermions, with the correct thermal relic abundance, and in reach of future direct detection experiments. The model also has lighter stable dark fermions charged under the dark force, with observable effects on galactic-scale structure. Collider signatures include a dark sector scalar boson with mass $\lesssim 250$ GeV that decays through mixing with the Higgs boson, and can be detected at the LHC. The Higgs boson, as well as the new scalar, may have significant invisible decays into dark sector particles.
author:
- 'Wolfgang Altmannshofer$^1$, William A. Bardeen$^2$, Martin Bauer$^{2,3}$, Marcela Carena$^{2,3,4}$, Joseph D. Lykken$^2$'
title: 'Light Dark Matter, Naturalness, and the Radiative Origin of the Electroweak Scale'
---
Introduction \[sec:intro\]
==========================
The Standard Model (SM) is a renormalizable quantum field theory that makes unambiguous predictions for elementary particle processes over a very large range of energy scales. Apart from a possible metastable vacuum, the SM has no theoretical inconsistencies at least up to the Planck scale at which we expect gravity to become strong and quantum field theories to break down. If this scenario is realized in nature, the Higgs mass parameter seems artificially small compared to the Planck scale. However, in the SM itself the Higgs mass parameter is the only explicit scale in the theory, and therefore it is only multiplicatively renormalized [@Bardeen:1995kv].
An interesting modification of the SM is given by requiring that the Higgs mass term vanishes at some very high energy (UV) scale; in this case it will not be generated by SM radiative corrections at lower scales either. The tree-level potential has only a quartic term, and the full Lagrangian is classically scale invariant. Electroweak symmetry breaking could be triggered, in principle, by the one-loop corrections to the effective potential $$\begin{aligned}
\label{eq:veff1}
V_\mathrm{eff}(h)=\frac{\lambda}{2} h^4+B\,h^4\,\log(h^2/\mu^2)\,,\end{aligned}$$ in which $\mu$ denotes the renormalization scale and $B$ is a loop-suppressed function of the couplings. Such a possibility has been envisioned by Coleman and Weinberg [@Coleman:1973jx]. A very attractive feature of the Coleman-Weinberg (CW) symmetry breaking mechanism is that, for couplings of order 1 at some renormalization scale in the UV, $\mu=\mu_\mathrm{UV}$, the minimum of the potential appears at an exponentially smaller scale $$\begin{aligned}
\langle h \rangle \propto \mu_\mathrm{UV}\,
e^{-\lambda(\mu_\mathrm{UV})/B}\,.\end{aligned}$$ Therefore, similar to the large disparity between the Planck scale and the confinement scale of QCD, the large disparity between the Planck scale and the electroweak scale is explained through renormalization group running [@Gildener:1976ih].
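This hierarchy generation is easy to verify numerically. The sketch below uses illustrative coupling values (not values from this model): setting $dV_\mathrm{eff}/dh=0$ for the one-loop potential above gives $h_* = \mu\, e^{-(\lambda+B)/(4B)}$, so a loop-suppressed $B$ produces an exponentially small minimum, in line with the scaling quoted above (the $O(1)$ coefficient in the exponent depends on conventions).

```python
import numpy as np

def V(h, lam, B, mu):
    """One-loop CW-type potential V = (lam/2) h^4 + B h^4 log(h^2 / mu^2)."""
    return 0.5 * lam * h**4 + B * h**4 * np.log(h**2 / mu**2)

lam, B, mu = 1.0, 0.05, 1.0   # illustrative couplings; B is loop-suppressed

# scan in u = log(h) so the exponentially small minimum is resolved
u = np.linspace(-20.0, 1.0, 400001)
h = np.exp(u)
h_min = h[np.argmin(V(h, lam, B, mu))]

# solving dV/dh = 0 analytically: h* = mu * exp(-(lam + B) / (4 B))
h_star = mu * np.exp(-(lam + B) / (4.0 * B))
```

With these numbers $h_*/\mu \approx 5\times 10^{-3}$ already; shrinking $B$ further widens the hierarchy exponentially.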
However, in the SM the CW mechanism is ruled out. The dominant contribution to the effective potential comes from the top quark, which renders it unbounded from below, since it enters the coefficient $B$ with a negative sign. In order to overcome the top quark contribution and to reproduce the measured Higgs mass, one would need to extend the SM by bosonic degrees of freedom with sizable couplings to the Higgs [@Dermisek:2013pta; @Hill:2014mqa].
Another motivation for extending the SM is the strong observational evidence for dark matter (DM), plausibly in the form of weakly interacting heavy particles. Even in the absence of a Higgs mass parameter in the UV, such particles will generically introduce additive corrections to the Higgs mass parameter and spoil the CW dynamics in the absence of additional symmetries. This motivates an alternative implementation of the CW mechanism, first proposed by Hempfling [@Hempfling:1996ht]. In this model, the Higgs couples to one extra scalar, which through dynamics of a hidden sector undergoes CW symmetry breaking and communicates the corresponding mass scale through the Higgs portal to the SM. Dark matter can then be given by any of the new hidden sector fields that govern the renormalization group evolution of the scalar potential in the dark sector.
There has been a lot of recent interest in models that implement various aspects of these basic ideas [@Foot:2007iy; @Iso:2009ss; @Foot:2010av; @AlexanderNunneley:2010nw; @Iso:2012jn; @Englert:2013gz; @Farina:2013mla; @Heikinheimo:2013fta; @Hambye:2013dgv; @Carone:2013wla; @Farzinnia:2013pga; @Dermisek:2013pta; @Khoze:2013uia; @Tamarit:2013vda; @Gabrielli:2013hma; @Steele:2013fka; @Hashimoto:2013hta; @Holthausen:2013ota; @Hashimoto:2014ela; @Hill:2014mqa; @Radovcic:2014rea; @Khoze:2014xha; @Farzinnia:2014xia; @Pelaggi:2014wba]. Here we will focus on implementations with dark sectors that are fairly simple and thus predictive. In Section \[sec:mot\] we comment on issues of naturalness as applied to classically scale invariant modifications of the SM, without claiming to resolve these issues. In Section \[sec:TeVDM\] we show that, in extensions of the SM with no explicit mass scales, the combination of a Higgs mass term generated through CW symmetry breaking together with the restriction to have a stable vacuum up to the Planck scale generically sets an upper bound on the dark matter mass scale of the order of a few TeV. Furthermore, the CW mechanism requires sizable couplings for gauge fields in the hidden sector, so that the simplest models in the literature are in addition subject to a lower bound on the DM mass of several hundred GeV. In Section \[sec:model\] we present a model with additional fermions in the hidden sector that can be dark matter candidates with masses at the electroweak scale or below. In Sections \[sec:higgs\], \[sec:DM\] and \[sec:out\], we discuss the collider and dark matter phenomenology of the model. In Section \[sec:out\], we also comment on further implications of this model for the dynamics of galaxy structure formation and a possible first order electroweak phase transition. We conclude in Section \[sec:con\].
The one loop effective potential of the discussed model and the one loop beta functions of the dark sector couplings are collected in Appendices \[sec:Veff\] and \[sec:betafunctions\]. For the beta functions and anomalous dimensions, we follow the methods, conventions and notation of Machacek and Vaughn [@Machacek:1983tz; @Machacek:1983fi; @Machacek:1984zw], with the improvements and extensions introduced by Luo and Xiao [@Luo:2002ey; @Luo:2002ti; @Luo:2002iq]. For the effective potentials, we follow the methods and conventions of Martin [@Martin:2001vx]. There are slight differences of notation in the literature: for example compared to [@Degrassi:2012ry; @Buttazzo:2013uya], our scalar self-coupling is twice as large, and our convention for anomalous dimensions has the opposite sign.
Motivation {#sec:mot}
==========
A Coleman-Weinberg mechanism as the origin of electroweak symmetry breaking was first considered by Gildener and Weinberg [@Gildener:1976ih]. In the absence of the Higgs mass term, the Lagrangian of the SM exhibits classical scale invariance that is softly broken by quantum effects: the well-known scale anomaly. In UV completions of the SM, the physical thresholds associated with new massive states would constitute an explicit breaking of this symmetry. This introduces the need for a fine-tuning of the bare Higgs mass parameter against radiative corrections involving more massive particles. The fact that the Higgs mass parameter is not protected by a symmetry from these radiative corrections is known as the naturalness or hierarchy problem.
If the SM is UV completed by a conformal or supersymmetric (SUSY) theory, the Higgs mass parameter is radiatively stable above the scale at which this completion sets in; thus if this scale is not too high, the hierarchy problem is solved. This has led to the expectation that such a UV completion is realized in the vicinity of the electroweak scale. However, the new degrees of freedom predicted by either supersymmetric or conformal UV completions have not yet been observed. This raises the prospect that the UV scale at which they set in is considerably higher than the electroweak scale, leaving the natural
---
abstract: 'We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.'
author:
- 'D. López-Cámara, Brian J. Morsony, Mitchell C. Begelman, Davide Lazzati'
title: 'Three-dimensional AMR simulations of long-duration Gamma-Ray Burst jets inside massive progenitor stars'
---
Introduction {#sec:intro}
============
Long-duration gamma-ray bursts (GRBs) are produced by collimated relativistic outflows [@sari99] ejected in the core of massive stars at the end of their evolution [@w93; @hjorth03; @s03; @wb06]. Since their relativistic outflows have to propagate through their progenitor star material and exit the star before producing the gamma-ray photons, an outstanding issue with this scenario is to understand the mechanisms that prevent the entrainment of baryons in the light, hot jet [@mw99; @aloy00].
On the other hand, even if the jet-star interaction cannot slow down the jet, it has a strong impact on its dynamics [@mor07] and can supply enough energy to explode the star as a supernova [@khokhlov99; @mwh01; @wheeler02; @maeda03; @laz12]. In most cases, the study of the jet-star interaction has been performed numerically, with analytic models used only for guidance [@aloy02; @gomez04; @mor07; @matzner03; @brom11]. Even so, studying the propagation of a relativistic outflow that is continuously shocked by a much denser environment is not trivial since the length-scale of features in the relativistic material is typically $\sim R/\Gamma$ and therefore a large dynamical range is involved. When possible, adaptive mesh refinement (AMR) codes have been adopted [@mor07; @mor10; @laz09; @laz10; @laz11b; @nag11], and the simulations have been limited to two dimensions [@mw99; @aloy00; @mwh01; @zwm03; @miz06; @mor07; @mor10; @laz09; @laz10; @laz11b; @miz09; @nag11]. These studies have shown that even though the jet material is relativistic, the jet-head propagates sub-relativistically inside the star, thereby allowing causal contact between the bow shock at the head of the jet and the star. The shocked star material therefore drains at the sides of the jet producing a hot cocoon [@rr02; @laz05] instead of being entrained in the jet.
Two dimensional (2D) simulations can provide important answers to the outstanding questions listed above. However, they are plagued by artifacts due to the presence of a symmetry axis in the center of the jet. First, a plug of dense material accumulates in front of the jet head, slowing down its propagation and creating plumes of hot plasma at wide angles (see Figure 1 in @laz10 for an example). Second, recollimation shocks coming from the sides of the jet bounce strongly off the jet axis in 2D simulations, while they could dissipate more efficiently in a simulation at the natural dimensionality. Finally, the role of turbulence and instabilities cannot be properly explored in 2D simulations. @wang08 found that in some cases a three dimensional (3D) relativistic jet would break apart and not be able to produce a successful GRB (while in 2D it would produce a successful GRB).
While 3D simulations of GRB jets have been attempted in the past [@zwh04], they were performed with a fixed grid code, casting doubt on their capability to resolve the required small scales. A 3D test-case with AMR was presented by @wang08, but since the jet-progenitor evolution varied drastically as a function of the numerical resolution (unlike our study), not much could be inferred from their study. Thus, in this paper we present, for the first time, 3D adaptive mesh refinement (AMR) simulations of GRB jets crossing a pre-SN progenitor and then flowing through the interstellar medium.
This paper is organized as follows. We first describe the physics, initial setup, and the numerical simulations in Section \[sec:input\], followed by our results and discussion in Section \[sec:results\]. Conclusions are given in Section \[sec:conc\].
Physics, initial setup and simulations {#sec:input}
======================================
Physics and initial setup {#sec:phys&initsetup}
-------------------------
Following what now seems to be the generic model used for long GRBs [@mor07; @mor10; @laz09; @laz11a; @laz11b; @laz12; @lc09; @lc10; @lind10; @lind12; @nag11 for example], we consider the one-dimensional (1D) pre-supernova 16TI model from @wh06 as our initial stellar configuration. Initially (at the zero-age main sequence) model 16TI is a 16$\,M_\odot$ Wolf-Rayet star with 0.01$\,Z_\odot$ metallicity and $3.3 \times 10^{52}$ erg s equatorial angular momentum. The final outcome of this model is a pre-SN progenitor with 13.95$\,M_\odot$ and nearly half the size of the sun ($R_0 = 4.1 \times
10^{10}$ cm). Assuming spherical symmetry, the 1D density and pressure profiles were mapped onto a 3D configuration that we assumed to be initially without rotation. The internal energy and the temperature were calculated assuming a relativistic polytropic equation of state ($\gamma$=4/3). The pre-SN progenitor was immersed in an interstellar medium (ISM) with constant density ($\rho_{\rm{{ism}}}=10^{-10}$ g cm$^{-3}$). Even though a wind environment would probably be more appropriate, we note that within the size of our simulation the dynamical role of the ambient medium is negligible and the results are therefore insensitive to the chosen ambient medium profile.
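The mapping step described above (1D radial profile onto a 3D Cartesian grid under spherical symmetry, with a polytropic closure) can be sketched as follows. This is a toy illustration, not the 16TI data: the power-law density and pressure profiles, the grid sizes, and the choice to hold the pressure at its surface value outside the star are all placeholders; the closure used is the standard polytropic relation $e = p/[(\gamma-1)\rho]$ with $\gamma = 4/3$.

```python
import numpy as np

GAMMA = 4.0 / 3.0        # relativistic polytropic index
R0 = 4.1e10              # stellar radius [cm]
RHO_ISM = 1e-10          # ambient (ISM) density [g cm^-3]

# hypothetical 1D radial profile standing in for the 16TI model
r_1d = np.linspace(1e9, R0, 512)
rho_1d = 1e5 * (r_1d / 1e9) ** -2.5    # toy power-law density [g cm^-3]
p_1d = 1e21 * (r_1d / 1e9) ** -4.0     # toy pressure [erg cm^-3]

def map_to_3d(x, y, z):
    """Map the 1D profile onto a 3D Cartesian grid assuming spherical symmetry."""
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
    r = np.sqrt(X**2 + Y**2 + Z**2)
    # outside the star (r > R0) fill with the constant-density ISM;
    # pressure is simply held at its surface value here for illustration
    rho = np.interp(r, r_1d, rho_1d, right=RHO_ISM)
    p = np.interp(r, r_1d, p_1d, right=p_1d[-1])
    eint = p / ((GAMMA - 1.0) * rho)   # specific internal energy from the EOS
    return rho, p, eint

ax = np.linspace(-6e10, 6e10, 32)
rho, p, eint = map_to_3d(ax, ax, ax)
```

A production setup would also carry velocity (zero initially, per the no-rotation assumption) and derive the temperature from the same equation of state.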
A relativistic jet commencing its flow at the center of the pre-SN progenitor was imposed at all times as a boundary inflow condition. The jet was launched at the center of the star (in fact slightly above it), flowing upwards in the polar direction (x=z=0, y=R$_{\rm{{i}}}=$10$^9$cm). The imposed jet had a half-opening angle of $\theta_0$=10$^{{o}}$, a constant luminosity of L$_0=5.33
\times$10$^{50}$ erg s$^{-1}$, an initial Lorentz factor of $\Gamma_{\rm{{0}}}$=5, and a ratio of internal over rest-mass energy equal to $\eta_0$=80 [@mor07; @mor10; @laz09]. In order to break the 2D axial symmetry, the jet was made slightly asymmetric: we imposed a 1% density and pressure asymmetry on either side of a line in the XZ plane 40 degrees from the X axis. Unlike @wang08 (a 3D numerical study in which a two-dimensional symmetric initial setup was assumed), our initial setup resembles that of model 3A in @zwh04, enhanced with a small perturbation in the jet.
Numerical simulations {#sec:sims}
---------------------
In order to follow the temporal evolution of our initial setup, we solved the 3D gas-dynamic equations using the FLASH code (version 2.5) in Cartesian coordinates [@fryx00]. The simulation domain covered the top half of the pre-SN progenitor star as well as the ISM it is immersed in (see for example panel $a$ from Figure \[fig1\]). The boundaries were set at y$_{\rm{min}}$=10$^9$cm, y$_{\rm{ {max}}}$=2.4$\times 10^{11}$ cm, x$_{\rm{ {max}}}$=-x$_{\rm{ {min}}}$=6$\times 10^{10}$ cm, and z$_{\rm{ {max}}}$=-z$_{\rm{ {min}}}$=6$\times 10^{10}$ cm. Only the equatorial plane (y=y$_{\rm{ {min}}}$) was set with a reflective boundary condition; all the other boundaries were set with transmission conditions. We used a 10-level binary adaptive grid with cubic cells ($\Delta{x}=\Delta{y}=\Delta{z} \equiv \Delta$). The highest refinement level (also referred to as the finest resolution level) was accessible only at the core of the pre-SN star where the jet is injected and initially propagates. Moving away from the stellar core, the maximum level of refinement was progressively decreased. In
---
abstract: 'We consider a class of infinite-dimensional, modular, graded Lie algebras, which includes the graded Lie algebra associated to the Nottingham group with respect to its lower central series. We identify two subclasses of [*Nottingham Lie algebras*]{} as loop algebras of finite-dimensional simple Lie algebras of Hamiltonian Cartan type. A property of Laguerre polynomials of derivations, which is related to toral switching, plays a crucial role in our constructions.'
address:
- |
Dipartimento di Matematica e Applicazioni\
Università degli Studi di Milano - Bicocca\
via Cozzi 53\
I-20125 Milano\
Italy
- |
Dipartimento di Matematica\
Università degli Studi di Trento\
via Sommarive 14\
I-38050 Povo (Trento)\
Italy
author:
- Marina Avitabile
- Sandro Mattarei
bibliography:
- 'References.bib'
title: |
Nottingham [L]{}ie algebras with diamonds\
of finite and infinite type
---
Introduction
============
[*Nottingham Lie algebras*]{} owe their name to one remarkable special case: the graded Lie ring with the lower central series of the Nottingham group, over the field of $p$ elements, with $p>2$ [@Jenn; @John; @Cam]. Although the Nottingham group over ${\mathbb F}_p$ is a very complex object, its lower central series descends in a simple and regular way, the lower central quotients being generally one-dimensional, except for one quotient every $p-1$ being two-dimensional. The essential feature which leads to the generalization considered here is that the Nottingham group is a pro-$p$ group of [*width two*]{} and [*obliquity zero*]{}, in the language of [@KL-GP]. Pro-$p$ groups with those two properties were called [*thin*]{} in [@CMNS], and the properties carry over naturally to graded Lie algebras.
The following equivalent but more practical definition is available for Lie algebras. A [*thin Lie algebra*]{} is a graded Lie algebra $L=\bigoplus_{k=1}^{\infty}L_{k}$ over a field ${\mathbb F}$, with $\dim(L_1)=2$ and satisfying the [*covering property*]{} $$\label{eq:covering}
L_{i+1}=[u, L_{1}] \quad \textrm{for all $0 \neq u\in L_{i}$, for all $i\geq 1$}.$$ For simplicity, in this paper we supplement this definition with the assumption that $L$ has infinite dimension. It follows that a thin Lie algebra $L$ has trivial centre, is generated by $L_1$, and that homogeneous components can only have dimension one or two. Those of dimension two are called [*diamonds*]{} (for reasons relating to the lattice of open normal subgroups in the pro-$p$ group case, see [@CMNS]), but there are good reasons to grant the status of [*fake diamonds*]{} to certain one-dimensional components.
Diamonds are numbered in the order of occurrence (including the fake ones, properly recognized), starting with the [*first*]{} diamond $L_1$. It may happen that $L_1$ is the only diamond, but then $L$ belongs to the distinguished family of [*graded Lie algebras of maximal class*]{}, which were introduced in [@CMN]. Because those Lie algebras were completely classified in [@CN; @Ju:maximal] (in the infinite-dimensional case), we conveniently exclude them from the definition of thin Lie algebras. Then the degree $k$ where the second diamond occurs becomes the main parameter of a thin Lie algebra. According to [@AviJur] (but see the simpler proof in [@AviMat:A-Z Section 3], where an updated definition accommodates the peculiarities of characteristic two), the parameter $k$ can only take the values $3$, $5$, $q$ or $2q-1$, for some power $q$ of the characteristic $p$, when the latter is positive. Each of the last two cases comprises a large variety of thin Lie algebras, whose features are best described separately. The introduction of [@CaMa:Hamiltonian] contains a detailed exposition of the thin Lie algebras with second diamond in degree $2q-1$. The thin Lie algebras with second diamond in degree $q$ were named [*Nottingham Lie algebras*]{} in [@CaMa:Nottingham].
According to [@CaMa:Nottingham], each diamond past the first in a Nottingham Lie algebra is assigned a [*type,*]{} taking values in the underlying field augmented with infinity, in a unique way which we recall in Section \[sec:nott\]. The second diamond of a Nottingham Lie algebra always has type $-1$. Diamonds of type zero or one are really one-dimensional components, but it is convenient to allow them in certain degrees and dub them [*fake*]{}. Fake diamonds are recognized among the one-dimensional components from certain relations which hold in them. Because $1\equiv -1 \bmod 2$, in the special case of characteristic two the second diamond of a Nottingham Lie algebra would be fake, and so one needs some care to recognize a Nottingham Lie algebra as such. This and more peculiarities of Nottingham Lie algebras of characteristic two are discussed extensively in [@AviMat:A-Z], but in the present paper we conveniently assume $p>2$. We will, however, add separate remarks on how our results need to be modified in characteristic two.
Various diamond patterns are possible, and we describe them in Section \[sec:nott\]. There are Nottingham Lie algebras with all diamonds of infinite type, and others with all diamonds of finite type. The goal of this paper is a construction (whence an existence proof) for Nottingham Lie algebras having diamonds of both finite and infinite types. They have diamonds in all degrees congruent to $1$ modulo $q-1$, but only one diamond every $p^s$ diamonds has finite type (starting with the second diamond, of necessity). Furthermore, the finite diamond types follow an arithmetic progression, giving the algebras a periodic structure. A distinction arises according to whether this arithmetic progression is entirely contained in the prime field, or not. Fake diamonds only occur in the former case. This is actually the easier case, and its special case where the arithmetic progression is the constant sequence $-1$ was already considered in [@AviMat:A-Z].
We state and prove our main results in Sections \[sec:prime-field\] and \[sec:big-field\], respectively.
As for other examples of thin Lie algebras produced so far (including those with second diamond in degree $2q-1$), those with a periodic structure can be obtained through a [*loop algebra*]{} construction, starting from a finite-dimensional Lie algebra with a suitable cyclic grading. In most cases those finite-dimensional Lie algebras are simple (or close to simple) Lie algebras of the Cartan type $H$ (Hamiltonian). In this paper we need two types of simple Hamiltonian Lie algebras, of dimension a power of $p$ and two less than a power of $p$, whose definitions we recall in Section \[sec:Cartan\].
While the correct Lie algebra to employ is generally easy to guess for dimension reasons (where the absence of fake diamonds, or the presence of one or two in each period, calls for a Lie algebra of dimension a power of $p$, or one or two less), explicitly producing a suitable cyclic grading can be a real challenge without the proper tool. In lucky situations one can obtain the required cyclic gradings starting from the natural gradings of those Hamiltonian Lie algebras. In other cases one needs to pass to new gradings by what we may call a [*grading switching*]{}. This procedure is related to [*toral switching,*]{} a fundamental tool in the theory of modular Lie algebras, but it needs to be more general in one respect, because the cyclic grading of interest may not be associated to a torus in any way.
Like toral switching, grading switching is based on taking some version of an exponential of a derivation of the Lie algebra. A toral switching based on [*Artin-Hasse exponentials,*]{} which makes sense for nilpotent derivations $D$ and reduces to [*truncated exponentials*]{} $\sum_{i=0}^{p-1}D^i/i!$ when $D^p=0$, was described in [@Mat:Artin-Hasse]. In the present paper this would only allow one to deal with the case where the arithmetic progression of finite diamond types is contained in the prime field. In the other case we need a special instance of a completely general version of grading switching, valid for arbitrary derivations, which is developed in [@AviMat:Laguerre]. We provide the required details in Section \[sec:Laguerre\].
In his master’s thesis [@Sca:thesis], written under the direction of the first author, Claudio Scarbolo gave a construction for our Nottingham Lie algebras of Section \[sec:big-field\] in the case of characteristic two, based on direct calculations and taking advantage of certain peculiarities of the characteristic (see Remark \[rem:char2-big\]).
Nottingham Lie algebras {#sec:nott}
=======================
We summarize here the more complete discussion of Nottingham Lie algebras given in [@AviMat:A-Z Section 2], restricting ourselves to information which is essential to our present goals. Thus, suppose $L=\bigoplus_{i=1}^{\
---
abstract: 'The standard and renormalized coupled cluster methods with singles, doubles, and noniterative triples and their generalizations to excited states, based on the equation of motion coupled cluster approach, are applied to the $^4$He and $^{16}$O nuclei. A comparison of coupled cluster results with the results of the exact diagonalization of the Hamiltonian in the same model space shows that the quantum chemistry inspired coupled cluster approximations provide an excellent description of ground and excited states of nuclei. The bulk of the correlation effects is obtained at the coupled cluster singles and doubles level. Triples, treated noniteratively, provide the virtually exact description.'
author:
- 'K. Kowalski$^1$, D.J. Dean$^2$, M. Hjorth-Jensen$^3$, T. Papenbrock$^{2,4}$, and P. Piecuch$^{1,5}$'
title: Coupled cluster calculations of ground and excited states of nuclei
---
The description of finite nuclei requires an understanding of both ground- and excited-state properties based on a given nuclear Hamiltonian. While much progress has been made in employing the Green’s Function Monte Carlo [@pieper02] and no-core shell model [@bruce2] techniques, these methods appear to be limited to light nuclei. Given that present nuclear structure research facilities and the proposed Rare Isotope Accelerator will continue to open significant territory into regions of medium-mass and heavier nuclei, it becomes imperative to investigate methods that will allow for a description of medium-mass systems. Coupled cluster theory is a particularly promising candidate for such an endeavor due to its enormous success in quantum chemistry.
Coupled cluster theory originated in nuclear physics [@coester58; @coester60] around 1960. Early studies in the seventies [@kum78] probed ground-state properties in limited spaces with free nucleon-nucleon interactions available at the time. The subject was revisited only recently by Guardiola [*et al.*]{} [@bishop96], for further theoretical development, and by Mihaila and Heisenberg [@hm99], for coupled cluster calculations using realistic two- and three-nucleon bare interactions and expansions in the inverse particle-hole energy spacings. However, much of the impressive development in coupled cluster theory made in quantum chemistry in the last 15-20 years [@Bartlett95; @Paldus99; @comp_chem_rev00; @Piecuch02a; @Piecuch02b] still awaits applications to the nuclear many-body problem.
In this Letter, we apply quantum chemistry inspired coupled cluster methods to finite nuclei. We show that the coupled cluster approach is numerically inexpensive and accurate by comparing our results for $^{4}$He with results from exact diagonalization in a model space consisting of four major oscillator shells. For the first time, we apply coupled cluster theory to excited states in nuclei, exploiting the equation of motion coupled cluster formalism [@Stanton:1993; @Piecuch99]. We discuss several approximations within coupled cluster theory and also compute the ground state of the $^{16}$O nucleus within the same model space. We remind the reader that certain acronyms have become standard in quantum chemistry. For this reason, we use the same abbreviations in this Letter.
Coupled cluster theory is based on an exponential ansatz for the ground-state wave function $|\Psi_{0}\rangle=\exp(T) |\Phi\rangle$. Here $T$ is the cluster operator and $|\Phi\rangle$ is the reference determinant. In the CCSD (“coupled cluster with singles and doubles”) method, we truncate the many-body expansion of the cluster operator $T$ at two-body components. The truncated cluster operator $T^{\rm (CCSD)}$, used in the CCSD calculations, has the form [@purvis82]: $T^{\rm (CCSD)} = T_{1}
+ T_{2}$. Here $T_1=\sum_{i,a} t_a^i a^{a} a_{i}$ and $T_2= \frac{1}{4} \sum_{ij,ab} t_{ab}^{ij} a^{a} a^{b} a_{j} a_{i}$ are the singly and doubly excited clusters, with indices $i,j,k$ ($a,b,c$) designating the single-particle states occupied (unoccupied) in the reference Slater determinant $|\Phi\rangle$ and $a^{p}$ ($a_{p}$) representing the creation (annihilation) operators. We determine the singly and doubly excited cluster amplitudes $t_a^i$ and $t_{ab}^{ij}$, defining $T_1$ and $T_2$, respectively, by solving the nonlinear system of algebraic equations, $\langle \Phi_{i}^{a} | \bar{H}^{\rm (CCSD)}|\Phi\rangle = 0$, $\langle \Phi_{ij}^{ab} | \bar{H}^{\rm (CCSD)}|\Phi\rangle = 0$, where $\bar{H}^{{\rm (CCSD)}} = \exp(-T^{\rm (CCSD)}) \, H \, \exp(T^{\rm (CCSD)})$ is the similarity-transformed Hamiltonian and $|\Phi_{i}^{a}\rangle$ and $|\Phi_{ij}^{ab}\rangle$ are the singly and doubly excited Slater determinants, respectively. Once $T_1$ and $T_2$ amplitudes are determined, we calculate the ground-state CCSD energy $E_{0}^{\rm (CCSD)}$ as $\langle\Phi|\bar{H}^{{\rm (CCSD)}}|\Phi\rangle $. For the excited states $|\Psi_{K}\rangle$ and energies $E_{K}^{\rm (CCSD)}$ ($K > 0$), we apply the EOMCCSD (“equation of motion CCSD”) approximation, in which $$|\Psi_{K}\rangle=R_{K}^{\rm (CCSD)} \exp(T^{\rm (CCSD)}) |\Phi\rangle .
\label{eomfun}$$ Here $R_{K}^{\rm (CCSD)} = R_{0}+ R_{1} + R_{2}$ is a sum of the reference ($R_{0}$), one-body ($R_{1}$), and two-body ($R_{2}$) components that are obtained by diagonalizing the similarity-transformed Hamiltonian $\bar{H}^{{\rm (CCSD)}}$ in the same space of singly and doubly excited determinants $|\Phi_{i}^{a}\rangle$ and $|\Phi_{ij}^{ab}\rangle$ as used in the ground-state CCSD calculations [@Stanton:1993; @Piecuch99].
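The key structural fact exploited here is that $\bar{H}$ is related to $H$ by a similarity (not unitary) transformation: $\bar{H}$ is non-Hermitian, but its spectrum coincides with that of $H$, so diagonalizing $\bar{H}$ targets the same energies. The following is a toy numerical check of this property only, with a random symmetric matrix standing in for $H$ and a random nilpotent matrix standing in for the cluster operator $T$ (so that $\exp(T)$ is an exact finite polynomial, loosely mimicking a terminating cluster expansion); it is not a CCSD implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# toy Hermitian "Hamiltonian" in a small model space
H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)

# toy "cluster operator": strictly upper triangular => nilpotent (T^n = 0),
# so exp(T) is an exact finite polynomial (not a real cluster operator!)
T = np.triu(rng.normal(size=(n, n)), k=1)

def expm_nilpotent(T):
    """exp(T) for nilpotent T via the terminating power series."""
    n = T.shape[0]
    out, term = np.eye(n), np.eye(n)
    for k in range(1, n):
        term = term @ T / k
        out = out + term
    return out

E = expm_nilpotent(T)
Einv = expm_nilpotent(-T)      # exact inverse, since exp(-T) exp(T) = 1
Hbar = Einv @ H @ E            # similarity transform; generally non-Hermitian

ev_H = np.sort(np.linalg.eigvals(H).real)
ev_Hbar = np.sort(np.linalg.eigvals(Hbar).real)
```

The two sorted eigenvalue lists agree to numerical precision even though `Hbar` is not symmetric, which is exactly what makes the non-Hermitian diagonalization in EOMCC legitimate.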
The CCSD and EOMCCSD methods are expected to describe the bulk of the correlation effects with inexpensive computational steps that scale as $n_{o}^{2} n_{u}^{4}$, where $n_{o}$ ($n_{u}$) is the number of occupied (unoccupied) single-particle orbitals. While the inclusion of triply excited clusters $T_{3}$ and three-body excitation operators $R_{3}$ increases the accuracy of the method, the resulting full CCSDT (“T” stands for “triples”) [@Noga:1987a] and EOMCCSDT [@Kowalski:2001d] methods scale as $n_{o}^{3} n_{u}^{5}$ and are rather expensive. For this reason, we add the [*a posteriori*]{} corrections due to triples to the CCSD/EOMCCSD energies, which require $n_{o}^{3} n_{u}^{4}$ noniterative steps. The ground- and excited-state triples corrections, $\delta_{0}$ and $\delta_{K}$ ($K>0$), respectively, are calculated with the CR-CCSD(T) (“completely renormalized CCSD(T)”) approach [@Piecuch02a; @Piecuch02b; @Kowalski00; @Kowalski03] in which $$\delta_{K} = \mbox{$\frac{1}{36}$} \sum_{ijk,abc} \langle \tilde{\Psi}_{K} | \Phi_{ijk}^{abc} \rangle
\, {\cal M}_{abc}^{ijk}(K) / \Delta_{K} \;\; (K \geq 0).
\label{deltak}$$ Here $|\Phi_{ijk}^{abc}\rangle$ are the triply excited determinants and ${\cal M}_{abc}^{ijk}(K)$ are the generalized moments of the CCSD ($K=0$) and EOMCCSD ($K > 0$) equations [@Kowalski00; @Kowalski03; @Kowalski01], $${\cal M}_{abc}^{ijk}(K) =
\langle \Phi_{ijk}^{abc} | \bar{H}^{{\rm (CCSD)}}
S_{K}^{\rm (CCSD)} | \Phi \rangle \;,
\label{mk}$$ where $S^{\rm (CCSD)}_0=1$ and $S^{\rm (CCSD)}_K=R_K^{\rm (CCSD)}$ for $K > 0$. They can be calculated using the CCSD and EOMCCSD cluster and excitation operators $T^{\rm (CCSD)}$ and $R_{K}^{\rm (CCSD)}$, respectively. The $\Delta_{K}$ denominators are defined as $$\Delta_{K}=
\langle \tilde{\Psi}_{K} | S_{K}^{\rm (CCSD)} \exp(T^{\rm (CCSD)}) |\Phi\rangle \, .
\label{denomk}$$
---
abstract: 'We propose to synthesize feasible caging grasps for a target object through computing *Caging Loops*, closed curves defined in the *shape embedding space* of the object. Different from the traditional methods, our approach *decouples* caging loops from the surface geometry of target objects through working in the embedding space. This enables us to synthesize caging loops encompassing multiple topological holes, instead of always being tied to one specific handle, which could be too small to be graspable by the robot gripper. Our method extracts caging loops through a topological analysis of the distance field defined for the target surface in the embedding space, based on a rigorous theoretical study on the relation between caging loops and the field topology. Due to the decoupling, our method can tolerate incomplete and noisy surface geometry of an unknown target object captured on-the-fly. We implemented our method with a robotic gripper and demonstrate through extensive experiments that our method can synthesize reliable grasps for objects with complex surface geometry and topology and at various scales.'
author:
- 'Jian Liu$^{1}$, Shiqing Xin$^{1}$, Zengfu Gao$^{1}$, Kai Xu$^{2}$, Changhe Tu$^{1}$ and Baoquan Chen$^{1}$[^1]'
bibliography:
- 'reference.bib'
title: 'Caging Loops in Shape Embedding Space: Theory and Computation'
---
Introduction {#sec:intro}
============
As an important type of robot grasping, caging grasps [@Rodriguez-2011FromCT; @Wan-2013AN; @Diankov-2008ManipulationPW], as compared to force-closure grasps [@Zhu-PlanningFG2004; @Ding-Computing3O2000; @Borst-GraspingTD2003; @Ferrari-PlanningOG1992], are advantageous in handling target objects with unknown or uncertain surface geometry and/or friction properties. This makes caging grasps more practically applicable in a wide spectrum of real scenarios. We are especially interested in a simple yet effective type of caging grasp formed by *caging loops*. A caging loop is a closed curve in three dimensional space computed around some part of the target object and used to guide robot grippers to form a caging grasp.
Existing methods on 3D caging grasp are based either on the geometric (e.g. [@Zarubin-2013CagingCO]) or the topological (e.g. [@Pokorny-2013GraspingOW]) information of the target surface, or even both [@Kwok-2016RopeCA]. A common issue with these methods is that the computed caging curves depend strongly on the topological and geometrical features of the object, while being oblivious to the relative size between the target object and the gripper. Take the genus-4 Indonesian-Lady model in Fig. \[fig:teaser\], for example. The six handles on the model are all seemingly good candidates for grasping. However, when the size of the model is too small compared to the robot gripper, these handles are no longer graspable, since the holes may be too small for the fingers to pass through. In such cases, a more feasible grasp would enclose the object with a loop encompassing multiple handles (see Fig. \[fig:teaser\](top) and Fig. \[fig:scale\](a)).
Another issue with geometry-based caging curves is that they easily lead to non-convex spatial curves, which are ill-suited for guiding the gripper configuration. The example in Fig. \[fig:scale\](d) demonstrates such a case, where the gripper penetrates the object due to the non-convexity of the caging loop. Estimating a convex hull for the spatial loop still cannot guarantee a penetration-free configuration.
Motivation And Contribution
---------------------------
![Grasping a 3D-printed Indonesian-Lady model (top and middle rows) in two sizes. Our method is able to synthesize caging loops (red circles) encompassing multiple topological handles when the object is too small to be grasped by one handle (top row). When the object is large, our method naturally grasps one handle (middle row). The two cases are handled seamlessly by our method. The bottom row shows how a caging loop computed in the embedding space encloses the two handles of a pliers. The 3D objects are acquired by two RGBD cameras and reconstructed on-the-fly (middle column). As a reference, a human grasp is shown to the left for each object.[]{data-label="fig:teaser"}](teaser.png){width="0.95\columnwidth"}
These examples motivated us in seeking to “fill up” those small topological holes and “smooth out” the geometric details on the target surface, before computing caging curves. Therefore, we advocate computing caging loops in the *embedding space* of the target surface, through a topological analysis of the distance field defined for the target surface in the embedding space.
We conduct a theoretical study on the fundamental relationship between caging loops and Morse singularities (including minimal, maximal and saddle points) of a spatial distance function. Based on that, we develop an algorithm for caging loop extraction through saddle point detection and analysis, within a proper grasping space defined to account for the gripper size. Working with a distance function defined in the embedding space naturally decouples the shape of caging loops from the geometric details of the target surface, while still keeping them aware of the overall shape of the target object. The caging loops are properly placed and scaled based on the relative size of the gripper against the target shape, rather than always tied to a specific handle as in traditional approaches. Another benefit of working in the embedding space is tolerance of incomplete and noisy surface geometry of the target object. This makes our method especially suited for synthesizing grasps for unknown objects which are captured and reconstructed on-the-fly, with minimal robot observation effort. In our implementation (see Fig. \[fig:overview\]), two depth cameras are deployed to capture the target object from two (front and back) views. Even with such sparse capturing and low-quality reconstruction, our method can still synthesize feasible caging loops for robust grasping. We found that this simple idea leads to a robust and efficient algorithm, with theoretical guarantees. We implemented our algorithm in a grasping system composed of a Barrett WAM robotic arm with a three-finger gripper and two Xtion Pro RGB-D cameras. Only depth images are used for reconstructing the target surface, based on the depth fusion technique [@Newcombe-2011KinectFusionRD]. We have conducted numerous experiments with both synthetic and real examples to evaluate the performance of our method.
We show that our system is able to robustly grasp objects with complex surface geometry and topology and at various scales.
Our work makes the following contributions:
- We propose a novel method for caging grasp synthesis through topological analysis of shape-aware distance field defined in shape embedding space. The method is able to generate relative-scale-aware caging loops for unknown objects captured on-the-fly.
- We provide a rigorous study on the relation between the topology of distance field and caging loops, based on Morse theory, and derive a robust algorithm for caging loop estimation. We also provide a handful of provably effective techniques to reduce the computational cost of our method.
- We implement our method in a grasping system using a robot gripper, and conduct thorough evaluations and comparisons with both synthetic and real objects.
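The core computational step, locating Morse saddle points of a distance field in the embedding space, can be sketched in 2D. The example below is our own toy illustration, not the paper's implementation: the "object" is just two point blobs, the field is the distance to the nearest object point sampled on a grid, and a grid cell is flagged as a saddle candidate when the sign of the field difference around its 8-neighborhood ring changes at least four times.

```python
import numpy as np

# Toy "object": two blobs; the distance-field saddle between them marks
# where a loop wrapping around both would sit (a hypothetical stand-in
# for the surface samples used by the actual system).
pts = np.array([[-1.0, 0.0], [1.0, 0.0]])

xs = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(xs, xs, indexing="ij")
grid = np.stack([X, Y], axis=-1)

# Distance field: distance to the nearest object point.
d = np.min(np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :],
                          axis=-1), axis=2)

# 8-neighborhood offsets in cyclic order around a cell.
ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
        (1, 1), (1, 0), (1, -1), (0, -1)]

saddles = []
for i in range(1, 100):
    for j in range(1, 100):
        diffs = np.array([d[i + di, j + dj] - d[i, j] for di, dj in ring])
        signs = np.sign(diffs)
        changes = np.sum(signs != np.roll(signs, 1))
        if changes >= 4:               # discrete Morse saddle criterion
            saddles.append((xs[i], xs[j]))

saddles = np.array(saddles)
# The field min(|x - p1|, |x - p2|) has its saddle midway between the blobs.
print(saddles[np.argmin(np.linalg.norm(saddles, axis=1))])   # near (0, 0)
```

In 3D the same neighborhood test runs over a voxel grid, and each detected saddle seeds a candidate loop; the sketch only shows the detection step.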
![By decoupling caging loops from target surfaces, our method synthesizes feasible caging grasps for objects containing tiny topological handles (a) or presenting concave surface geometry (b); see the red circles and the corresponding grasps to the right. In contrast, the loops (yellow circles in c and d) computed over the target surfaces incur gripper-object collision; see the gripper parts in red color in the bottom row.[]{data-label="fig:topologyAndGeometry"}](topologyAndGeometry1.png "fig:"){width="0.95\linewidth"}\
\
![By decoupling caging loops from target surfaces, our method synthesizes feasible caging grasps for objects containing tiny topological handles (a) or presenting concave surface geometry (b); see the red circles and the corresponding grasps to the right. In contrast, the loops (yellow circles in c and d) computed over the target surfaces incur gripper-object collision; see the gripper parts in red color in the bottom row.[]{data-label="fig:topologyAndGeometry"}](topologyAndGeometry2.png "fig:"){width="0.95\linewidth"}\
\[fig:scale\]
Related Work
------------
Robot grasping is a long-standing yet actively studied research topic in the fields of robotics and vision. Force-closure and caging are two typical approaches that have been developed to synthesize grasps. Force-closure methods [@Bicchi-OnTC1995; @Miller-GraspitAV2004; @Liu-QualitativeTA1999; @Howard-OnTS1996; @Ding-Computing3O2000; @Zhu-PlanningFG2004] concentrate on finding a stable grasping configuration for the grippers in which a mechanical equilibrium is achieved. The advantage of such an approach is that the synthesized grasps are usually physically feasible. The method, however, requires that the 3D shape of the target model is known *a priori*, and it cannot tolerate surface defects such as missing data. Furthermore, the contact area between the gripper and the target surface is often small, leading to unsteady grasps.
---
abstract: 'Non classical rotational inertia observed in rotating supersolid $He^4$ can be accounted for by a gravitomagnetic London moment similar to the one recently reported in rotating superconductive rings.'
author:
- 'C. J. de Matos[^1]'
title: Gravitomagnetic London Moment in Rotating Supersolid $He^4$
---
Non Classical Rotational Inertia (NCRI) was predicted by London 50 years ago [@London]. It was eventually verified experimentally by Hess and Fairbank [@Hess], who set Helium in a suspended bucket into rotation above the critical temperature $T_c$ at which superfluidity sets in, and then cooled it (with the bucket still rotating at angular velocity $\omega$) through $T_c$. They found that, provided $\omega$ is less than a critical value $\omega_c$, the apparent moment of inertia of the helium - that is, the ratio of its angular momentum to $\omega$ - is not given by the classical value $I_{classical}$ but rather by $$I(T)=\frac{L}{\omega}=I_{classical}\Bigg[1-f_s(T)\Bigg]\label{1}$$ where $f_s(T)=\rho^*/\rho$ is the superfluid fraction. In liquid helium, $f_s(T)$ tends to 1 in the zero-temperature limit and to zero when $T=T_c$, $\rho^*$ and $\rho$ being respectively the mass density of superfluid helium, and the mass density of normal helium.
NCRI has recently been observed in rotating supersolid $He^4$ by Kim and Chan [@Kim]. They measured the resonance frequency of a torsional oscillator that contains an annulus of solid $He^4$. Below $230 mK$, the frequency experiences a relative increase that depends on the temperature and drive amplitude and reaches a maximum of about four parts in $10^5$. Having excluded other explanations by various control experiments, they conclude that the data indicate a change in the moment of inertia of the supersolid, which, according to Equ.(\[1\]), corresponds to a maximum supersolid fraction ($f_s(T)=\rho^*_{supersolid \, He^4}/\rho_{normal \, solid \, He^4}$) of approximately $0.017$ (see figure \[chan\]).
Tajmar and the author recently [@Tajmar1] [@de; @Matos1] [@Tajmar2] observed a gravitomagnetic London moment, $B_g \,
[Rad/s]$, in rotating superconductive rings. $$B_g=2\omega f_s(T)\label{2}$$ where $\omega$ is the angular velocity of the ring, and $f_s(T)=\rho^*/\rho$ is the Cooper pair fraction, $\rho^*$ being the Cooper pair mass density and $\rho$ the superconductor’s bulk density.
Assuming that a rotating supersolid $He^4$ crystal also exhibits a gravitomagnetic London moment $B_g$ proportional to the supersolid fraction, its angular momentum would be given by $$L=I_{classical}\Bigg[\omega-\frac{1}{2} B_g\Bigg]\label{3}$$ Substituting Equ.(\[2\]) into Equ.(\[3\]) we recover Equ.(\[1\]).
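The substitution is a one-line identity: with $B_g = 2\omega f_s(T)$, Equ.(\[3\]) gives $L = I_{classical}\,\omega\,\bigl[1 - f_s(T)\bigr]$, which is exactly Equ.(\[1\]). A minimal numerical check (the moment of inertia and angular velocity below are arbitrary illustrative values):

```python
import math

I_classical = 2.7e-7   # hypothetical moment of inertia (arbitrary units)
omega = 5.0            # angular velocity [rad/s]

for f_s in (0.0, 0.005, 0.017):   # supersolid fractions up to Kim and Chan's maximum
    B_g = 2.0 * omega * f_s                    # Equ. (2)
    L = I_classical * (omega - 0.5 * B_g)      # Equ. (3)
    # Equ. (1): apparent moment of inertia I(T) = L/omega = I_classical*(1 - f_s)
    assert math.isclose(L / omega, I_classical * (1.0 - f_s))
print("Equ. (2) substituted into Equ. (3) reproduces Equ. (1)")
```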
Therefore we conclude that the rotation of supersolid $He^4$ exhibits a gravitomagnetic London moment similar to the one observed in rotating superconductive rings, which can account for the observed NCRI in this physical system. Kim and Chan’s experiment would tend to confirm the existence of the gravitomagnetic London moment in rotating quantum materials.
[99]{}
London, F., “Superfluids”, Wiley, New York, 1954, Vol. II, p. 144
Hess, G. B., Fairbank, W. M., Phys. Rev. Lett., **19** 216 (1967)
Kim, E., Chan, W., Science, **305**, 1941 (2004)
Tajmar, M., Plesescu, F., Marhold, K., de Matos, C.J., “Experimental Detection of the Gravitomagnetic London Moment”, 2006, gr-qc/0603033
de Matos, C. J., “Gravitoelectromagnetism and Dark Energy in Superconductors”, to appear in Int. J. Mod. Phys. D, (2007). (also available gr-qc:/0607004)
Tajmar, M., Plesescu, F., Marhold, K., “Measurement of Gravitomagnetic and Acceleration Fields Around a Rotating Superconductor”, 2006, gr-qc/0610015
[^1]: ESA-HQ, European Space Agency, 8-10 rue Mario Nikis, 75015 Paris, France, e-mail: Clovis.de.Matos@esa.int
---
abstract: 'We show that for almost every polynomial $P(x,y)$ with complex coefficients, the difference of the logarithmic Mahler measures of $P(x,y)$ and $P(x,x^n)$ can be expanded in a type of formal series similar to an asymptotic power series expansion in powers of $1/n$. This generalizes a result of Boyd. We also show that such an expansion is unique and provide a formula for its coefficients. When $P$ has algebraic coefficients, the coefficients in the expansion are linear combinations of polylogarithms of algebraic numbers, with algebraic coefficients.'
address: |
Department of Mathematics\
Amherst College\
Amherst, MA 01002 USA
author:
- 'John D. Condon'
bibliography:
- 'mybib.bib'
title: Asymptotic expansion of the difference of two Mahler measures
---
Mahler measure, asymptotic expansions, polylogarithms
Introduction {#sec:intro}
============
For a nonzero Laurent polynomial $P\in{\ensuremath{\mathbb{C}}}\bigl[x_1^{\pm 1},\dotsc x_n^{\pm 1}\bigr]$, the *(logarithmic) Mahler measure* of $P$ is defined as $$\label{eq:multivar}
\begin{split}
m(P) &= \int_{0}^{1}\dotsi \int_{0}^{1}\log\bigl|P \bigl(\exp(2\pi i t_{1}),\ldots,\exp(2\pi i t_{n})\bigr)\bigr|
\, d t_{1}\cdots d t_{n} \\[2mm]
&= \frac{1}{(2\pi i)^n}\int_{{\mathbb{T}}^n}\log\bigl|P(x_1,\ldots,x_n)\bigr|
\,\frac{dx_1}{x_1}\cdots\frac{dx_n}{x_n},
\end{split}$$ where ${\mathbb{T}}$ is the unit circle in ${\ensuremath{\mathbb{C}}}$, oriented counter-clockwise. This integral is always finite, even if the zero set of $P$ intersects ${\mathbb{T}}^n$.
When $n=1$, Jensen’s formula implies that if $P(x)=a\prod_{j=1}^d (x-\alpha_j)$, then $$\label{eq:onevar}
m(P) = \log|a| + \sum_{j=1}^{d} {\mathop{\log^+\!}}{\lvert\alpha_{j}\rvert},$$ where, for $r>0$, ${\mathop{\log^+\!}}(r){\mathrel{\mathop:}=}\log\bigl(\max\{r, 1\}\bigr)$. The latter construct (or actually, its exponential) was first studied by D. H. Lehmer [@lehmer] in the 1930s. Mahler introduced the measure bearing his name three decades later [@mahler].
Boyd [@boyd2] established the following connection (generalized by Lawton [@lawton]) between multivariable and single-variable Mahler measure values.
\[thm:boyd\] For any nonzero Laurent polynomial $P(x,y)$ with complex coefficients, $$m\bigl(P(x,y)\bigr) = \lim_{n\to\infty} m\bigl(P(x,x^n)\bigr).$$
Also in [@boyd2], Boyd proved the following result, which shows the rate at which the above limit converges in the case of $P(x,y)=1+x+y$:
\[prop:boydasymp\] For all positive integers $n$, $$\label{eq:boydexpn}
m(1+x+x^n) - m(1+x+y) = \frac{c(n)}{n^2} + {\mathop{O}_{}}\biggl(\frac{1}{n^3}\biggr),$$ where $c(n)$ depends only on $n$ mod 3: $$c(n)=\begin{cases}
\phantom{-}\sqrt{3} \pi/18 & \mathrm{if}\ n\equiv 0,1 {\allowbreak\mkern10mu({\operator@font mod}\,\,3)} \\[2 mm]
-\sqrt{3} \pi/6 & \mathrm{if}\ n\equiv 2 {\allowbreak\mkern10mu({\operator@font mod}\,\,3)}. \\[2 mm]
\end{cases}$$
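Boyd's limit and the $c(n)/n^2$ rate are easy to check numerically. By Jensen's formula, $m(1+x+x^n)$ is the sum of $\log|\alpha_j|$ over the roots outside the unit circle, while $m(1+x+y) \approx 0.3230659$ (Smyth's evaluation). The sketch below is our own illustration, not from the paper:

```python
import numpy as np

def mahler_one_var(coeffs):
    """m(P) for a one-variable P via Jensen's formula: log|a| plus the sum
    of log|alpha_j| over roots with |alpha_j| > 1.  `coeffs` is ordered
    highest degree first, as for numpy.roots."""
    roots = np.roots(coeffs)
    big = roots[np.abs(roots) > 1]
    return np.log(np.abs(coeffs[0])) + np.sum(np.log(np.abs(big)))

def m_trinomial(n):
    """m(1 + x + x^n)."""
    c = np.zeros(n + 1)
    c[0] = 1.0      # x^n
    c[-2] = 1.0     # x
    c[-1] = 1.0     # 1
    return mahler_one_var(c)

m_inf = 0.3230659472   # m(1+x+y), Smyth's evaluation
for n in (20, 50, 200):
    print(n, m_trinomial(n) - m_inf)   # differences shrink like c(n)/n^2
```

For $n = 20, 50, 200$ (all $\equiv 2 \pmod 3$) the printed differences are negative and shrink roughly by the expected factor $1/n^2$, consistent with $c(n) = -\sqrt{3}\pi/6$ for this residue class.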
Motivated by these results, we examine the difference between these Mahler measures.
For a nonzero Laurent polynomial $P(x,y)$ and a positive integer $n$, let $\Delta_{n}(P) {\mathrel{\mathop:}=}m\bigl(P(x,x^n)\bigr)-m\bigl(P(x,y)\bigr)$.
The right side of (\[eq:boydexpn\]) could be thought of as the beginning of a formal series for $\Delta_{n}(1+x+y)$ of the form $\sum_{k=2}^\infty c_k(n)/n^k$. We will find such an expression for $\Delta_{n}(P)$, for $P=1+x+y$ as well as many other two-variable polynomials.
Such a formal series cannot quite be called an asymptotic power series in $n$, in the sense of [@erdelyi], in that the coefficients in such a series should be independent of $n$. But our coefficients will have a structure that will, in particular, make them bounded as functions of $n$, occasionally depending only on $n {\allowbreak\mkern6mu({\operator@font mod}\,\,m)}$ for some integer $m$.
Statement of results
====================
Unless stated otherwise, all variables and functions are complex-valued. ${\ensuremath{\mathbb{N}}}$ will denote the set of positive integers. For $P\in{\ensuremath{\mathbb{C}}}[z_1,\ldots,z_n]$, let $Z(P)$ denote the affine zero set of $P$ in ${\ensuremath{\mathbb{C}}}^{n}$.
${\mathop{\mathrm{Li}_{k}}}(z)$ will denote the principal branch of the $k$-th polylogarithm function [@lewin]. For $k\ge 2$ (which is all we will need) and ${\lvert z\rvert}\le 1$, this is given by $${\mathop{\mathrm{Li}_{k}}}(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^k}.$$
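For $|z| \le 1$ and $k \ge 2$ the defining series can be summed directly. A quick sanity check (our illustration) against the classical special values $\mathrm{Li}_2(1) = \pi^2/6$ and $\mathrm{Li}_2(1/2) = \pi^2/12 - (\log 2)^2/2$:

```python
import math

def polylog(k, z, terms=200000):
    """Li_k(z) by direct summation of the defining series (valid for
    |z| <= 1 and k >= 2; convergence is slow near |z| = 1)."""
    return sum(z**n / n**k for n in range(1, terms + 1))

print(abs(polylog(2, 1.0) - math.pi**2 / 6))                          # ~1/terms
print(abs(polylog(2, 0.5) - (math.pi**2 / 12 - math.log(2)**2 / 2)))  # tiny
```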
We will say that a function $\omega:{\ensuremath{\mathbb{R}}}\to{\ensuremath{\mathbb{R}}}$ is *quasiperiodic* if it is the sum of finitely many continuous, periodic functions.
Quasiperiodic functions are clearly bounded (although this is no longer true if the summand functions are not assumed to be continuous [@keleti]).
For a function $f:{\ensuremath{\mathbb{N}}}\to{\ensuremath{\mathbb{R}}}$, we will say that a formal series $\sum_{r=0}^\infty c_r(n)/n^r$ is an *asymptotic pseudo-power series* (or *a.p.p.s.*) *expansion* of $f(n)$ (in powers of $1/n$, as $n\to\infty$) if, for each nonnegative integer $r$, $c_r(n)$ is the restriction to the nonnegative integers of a quasiperiodic function of $n$, and if for all positive integers $n$ and $k$, $$f(n)=\sum_{r=0}^{k-1} \frac{c_r(n)}{n^r} + {\mathop{O_{f,k}}}\biggl(\frac{1}{n^{k}}\biggr).$$ (The subscripts on “${\mathop{O}_{}}$” indicate that the implied constant depends on those subscripts). We will denote this by writing $f(n) {\overset{*}{\sim}}\sum_{r=0}^{\infty} c_r(n)/n^r$.
Asymptotic power series expansions in powers of $1/n$, as defined in [@erdelyi], are the same as a.p.p.s. expansions in which the coefficients $c_r(n)$ do not depend on $n$. We will refer to these as *true* asymptotic power series expansions, for contrast.
True asymptotic power series expansions for a function are uniquely determined by that function. For our series, quasiperiodicity of the coefficients is enough to rescue uniqueness.
\[prop:uniqueness\] If a function $f:{\ensuremath{\mathbb{N}}}\to{\ensuremath{\mathbb{R}}}$ has an a.p.p.s. expansion in powers of $1/n$, that expansion is unique.
The following is our main result.
\[thm:main\] Let $P(x,y)\in{\ensuremath{\mathbb{C}}}[x,y]$ be such that $P$ and ${{\partial P/\partial y}}$ do not have a common zero on ${\mathbb{T}}\times{\ensuremath{\mathbb{C}}}$. Then $\Delta_{n}(P) = m\bigl(P(x,x^n)\bigr)-m\bigl(P(x,y)\bigr)$ has an a.p.p.s. expansion in powers of $1/n$.
---
abstract: 'In this paper, we consider wireless sensor networks (WSNs) with sensor nodes exhibiting clustering in their deployment. We model the coverage region of such WSNs by Boolean Poisson cluster models (BPCM) where sensors nodes’ location is according to a Poisson cluster process (PCP) and each sensor has an independent sensing range around it. We consider two variants of PCP, in particular [Matérn ]{}and Thomas cluster process to form Boolean [Matérn ]{}and Thomas cluster models. We first derive the capacity functional of these models. Using the derived expressions, we compute the sensing probability of an event and compare it with sensing probability of a WSN modeled by a Boolean Poisson model where sensors are deployed according to a Poisson point process. We also derive the power required for each cluster to collect data from all of its sensors for the three considered WSNs. We show that a BPCM WSN has less power requirement in comparison to the Boolean Poisson WSN, but it suffers from lower coverage, leading to a trade-off between per-cluster power requirement and the sensing performance. A cluster process with desired clustering may provide better coverage while maintaining low power requirements.'
author:
- '[^1]'
title: 'On the Coverage Performance of Boolean-Poisson Cluster Models for Wireless Sensor Networks'
---
Introduction
============
In WSNs, sensors are deployed over a region, such as a forest or wetlands, to form a wireless network and exchange data to sense an event. A WSN may have a central hub to facilitate joint detection. There are two essential aspects of WSNs. The first is coverage: maximizing the region covered by the sensors, termed the coverage or sensing region. This ensures that at least one sensor can detect the target event with a certain probability. The second is minimizing energy consumption, as wireless sensors have a limited power budget. Sensors can form small clusters, with each cluster having one head that acts as a gateway to the central hub [@iyengar2016distributed]. In this hierarchical network, sensors transmit their sensing data to their local cluster heads, which then communicate it to the central hub to make the sensing decision jointly. Such clustering can reduce the power requirement of nodes, but can degrade overall coverage. The deployment of sensors in a WSN is generally random. Hence, the tools of stochastic geometry can be applied to model and analyze WSNs. One popular process to model the coverage area of a WSN is the Boolean-Poisson process. The Boolean-Poisson process is defined as the union of independent random objects with their centers located according to a Poisson point process (PPP) [@haenggibook; @liu2004study; @baek2007spatial]. The random objects denote the individual coverage regions of the sensors, while the centers denote the sensors’ locations. Owing to the mathematical tractability of PPPs, the Boolean-Poisson process is simple yet powerful for deriving performance metrics of WSNs, such as the probability that a location is not covered and the expected area of the uncovered region [@BaccelliBook; @chiu2013stochastic; @flint2017wireless]. The capacity functional of the Boolean-Poisson process, which characterizes the sensing probability of an event, was studied in [@pandey2018modeling].
As the underlying process of the Boolean Poisson model is a PPP, the locations of sensor nodes are independent of each other in this model. In some scenarios, the deployment of sensors is not entirely independent and the sensors may exhibit clustering in their deployment. This may be due to the ease of deploying sensors in small groups, or to facilitate the communication between a sensor and its gateway by decreasing their mutual distance. The Poisson cluster process (PCP) can be used to model the locations of sensors in such scenarios [@mekikis2018connectivity]. The two important variants of PCP are the [Matérn ]{}cluster process (MCP) and the Thomas cluster process (TCP). The characterization of the contact and nearest-neighbor distance distributions for these processes is presented in [@pandey2019contact; @afshang2017nearesttcp]. To model the coverage region of WSNs exhibiting such clustering, we propose to use Boolean Poisson cluster models (BPCM), where the underlying process modeling sensors’ locations is a PCP and each sensor has an independent sensing region around it. There has been limited work to characterize BPCM, [*e.g.*]{} [@last1999empty]. However, the coverage and sensing performance of WSNs that are deployed according to BPCM has not been studied in detail.\
In this paper, we consider three WSNs that are deployed according to MCP, TCP, and PPP, respectively. The coverage area of these WSNs can be modeled using Boolean MC, Boolean TC, and Boolean Poisson models (or processes). We first derive the capacity functional of the Boolean MC and TC models. Using these expressions, we then compute the sensing probability of an event with a compact spread area. We also provide simple bounds for the Boolean MC model to help derive insights into the system. We also derive the power required for each cluster to collect data from all of its sensors for the three considered WSNs. Finally, we perform a comparative analysis of these three deployments. We show that clustering decreases the coverage area and sensing probability, especially in the case of sensors with large individual sensing regions. However, it also reduces the power requirement of sensors. In scenarios where sensors have limited power, clustered deployments can provide better coverage performance.
System Model
============
In this paper, we consider a wireless sensor network deployed over $\mathbb{R}^2$ space. The locations of the sensors are modeled by a point process $\Phi$ with density $\lambda$. Each sensor has a sensing range around it denoted by ${\mathsf{S}}_i$ and assumed to be independent of other sensors. The total covered region (the region which falls inside the sensing region of at least one sensor) is given as $$\begin{aligned}
\Psi= \bigcup_{{\mathbf{z}}_i \in \Phi} {\mathbf{z}}_i+{\mathsf{S}}_i,\end{aligned}$$ which is known as a Boolean process (or Boolean model) and is a special case of the germ-grain model. Each point is termed a germ, with its sensing region as its grain.
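For the Boolean-Poisson case with disks of fixed radius $r_s$, the probability that a fixed location is covered has the closed form $1 - \exp(-\lambda \pi r_s^2)$, which makes a useful Monte-Carlo sanity check. The sketch below is our own illustration (parameter values are arbitrary) and simulates the model on a torus to avoid edge effects:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r_s, W = 0.2, 1.0, 50.0        # density, sensing radius, window size

covered = 0
trials = 2000
for _ in range(trials):
    n = rng.poisson(lam * W * W)                 # number of sensors (PPP)
    sensors = rng.uniform(0, W, size=(n, 2))
    target = np.array([W / 2, W / 2])            # probe location
    # toroidal distance to avoid boundary effects
    delta = np.abs(sensors - target)
    delta = np.minimum(delta, W - delta)
    if np.any(np.hypot(delta[:, 0], delta[:, 1]) <= r_s):
        covered += 1

p_sim = covered / trials
p_theory = 1.0 - np.exp(-lam * np.pi * r_s**2)   # point-coverage probability
print(p_sim, p_theory)
```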
Sensor network
--------------
We assume that the sensors are clustered: the network consists of many cluster heads, with each cluster head responsible for controlling and communicating with the sensors assigned to it. Such a network can be modeled using a cluster process. A cluster process consists of daughter point processes centered at parent points whose locations are themselves governed by a point process. Let $\Phi_{{\mathrm{p}}}=\{{\mathbf{x}}_i: \forall i \in \mathbb{N}\}$ be a parent point process, where ${\mathbf{x}}_i$ is the location of the $i$-th parent point (modeling the location of a cluster center or cluster head in the WSN) in $\mathbb{R}^2$. For each point ${\mathbf{x}}_i$, there is an associated daughter point process $\Phi^{(i)}_{{\mathrm{d}}}=\{{\mathbf{y}}_j^{(i)}:\forall j\in \mathbb{N}\}$, where ${\mathbf{y}}_j^{(i)}$ is the location of the $j$-th daughter point. The absolute locations of these points are given as ${\mathbf{z}}_{ij}={\mathbf{x}}_i+{\mathbf{y}}_j^{(i)}.$ The daughter point processes are independent and identically distributed. The PP modeling the sensors’ locations is then the union of all the daughter points $$\begin{aligned}
\Phi&=\bigcup_{{\mathbf{x}}_i\in \Phi_{{\mathrm{p}}}}\{{\mathbf{x}}_i+\Phi_{{\mathrm{d}}}^{(i)}\},\\
&=\left\{{\mathbf{z}}_{ij}:{\mathbf{z}}_{ij}={\mathbf{x}}_i+{\mathbf{y}}_j^{(i)},{\mathbf{x}}_i\in\Phi_{\mathrm{p}},{\mathbf{y}}_j^{(i)}\in\Phi_{{\mathrm{d}}}^{(i)} \,\forall i,j \right\},\end{aligned}$$ and is known as a cluster process. It is clear from the above discussion that the cluster head is the parent point of all sensors in the cluster. It is intuitive to keep the cluster head at the center of the cluster and as close as possible to the sensors in the cluster, to minimize the energy required for communication. We now consider three PPs to model the sensor locations of the WSN:
### [Matérn ]{}cluster process
In MCP, $\Phi_{{\mathrm{p}}}$ is a homogeneous PPP with intensity $\lambda_{\mathrm{p}}$. Each $\Phi_{{\mathrm{d}}}^{(i)}$ is a finite PPP within a ball ${\mathcal{B}}({\mathrm{o}},r_{\mathrm{d}})$. The mean number of points in each daughter point process is $m$ and therefore the intensity $\lambda_{\mathrm{d}}({\mathbf{y}})$, of each daughter point process will be $\frac{m}{\pi r_{\mathrm{d}}^2}{\mathbbm{1}}(||{\mathbf{y}}||\leq r_{\mathrm{d}})$. Total density of the PP is ${\lambda_{\mathrm{M}}}={\lambda_{\mathrm{p}}}m$.
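An MCP is straightforward to sample: draw parents from a PPP, then a Poisson($m$) number of daughters uniformly in the ball ${\mathcal{B}}({\mathrm{o}},r_{\mathrm{d}})$ around each parent. The sketch below is our own illustration (all parameter values are arbitrary); it estimates the covered area fraction of the resulting Boolean MC model on a torus and compares it with a Boolean Poisson model of the same total density $\lambda_{\mathrm{M}}=\lambda_{\mathrm{p}}m$, illustrating that clustering lowers coverage:

```python
import numpy as np

rng = np.random.default_rng(2)
# window, parent density, mean daughters, cluster radius, sensing radius
W, lam_p, m, r_d, r_s = 40.0, 0.05, 4, 2.0, 1.0

def coverage_fraction(sensors, r_s, W, grid=200):
    """Fraction of the torus [0,W)^2 within distance r_s of some sensor."""
    xs = (np.arange(grid) + 0.5) * W / grid
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    covered = np.zeros(len(pts), dtype=bool)
    for s in sensors:
        d = np.abs(pts - s)
        d = np.minimum(d, W - d)
        covered |= np.hypot(d[:, 0], d[:, 1]) <= r_s
    return covered.mean()

# --- Matern cluster process ---
n_par = rng.poisson(lam_p * W * W)
parents = rng.uniform(0, W, size=(n_par, 2))
daughters = []
for p in parents:
    k = rng.poisson(m)
    r = r_d * np.sqrt(rng.uniform(size=k))        # uniform in a disk
    th = rng.uniform(0, 2 * np.pi, size=k)
    daughters.append((p + np.stack([r * np.cos(th), r * np.sin(th)],
                                   axis=1)) % W)
mcp = np.concatenate(daughters)

# --- PPP of the same total density lam_p * m ---
ppp = rng.uniform(0, W, size=(rng.poisson(lam_p * m * W * W), 2))

cov_mcp = coverage_fraction(mcp, r_s, W)
cov_ppp = coverage_fraction(ppp, r_s, W)
print(cov_mcp, cov_ppp)    # clustered coverage is (typically) lower
```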
### Thomas cluster process
In TCP, $\Phi_{{\mathrm{p}}}$ is a homogeneous PPP with intensity $\lambda_{\mathrm{p}}$. Each $\Phi_{{\mathrm{d}}}^{(i)}$ is a non-uniform PPP with the intensity $$\begin{aligned}
\lambda_{{\mathrm{d}}}({\mathbf{y}})=\frac{m}{2\pi\sigma^2}\exp\left(-\frac{y^2}{2\sigma^2}\right),\end{aligned}$$ where $m$ is the mean number of points in each daughter point process.
---
abstract: 'The influence of a thermodynamic constraint on the critical finite-size scaling behavior of three-dimensional Ising and XY models is analyzed by Monte-Carlo simulations. Within the Ising universality class constraints lead to Fisher renormalized critical exponents, which modify the asymptotic form of the scaling arguments of the universal finite-size scaling functions. Within the XY universality class constraints lead to very slowly decaying corrections inside the scaling arguments, which are governed by the specific heat exponent $\alpha$. If the modification of the scaling arguments is properly taken into account in the scaling analysis of the data, finite-size scaling functions are obtained, which are [*independent*]{} of the constraint as anticipated by analytic theory.'
author:
- Michael Krech
title: 'Critical finite-size scaling with constraints: Fisher renormalization revisited'
---
Introduction
============
The theoretical investigation of classical spin systems has played a key role in the understanding of phase transitions, critical behavior, scaling, and universality [@Amit78; @Parisi88]. In particular, the classical Ising, the XY, and the Heisenberg model are the most relevant spin models in three dimensions. Each of these simple models represents a universality class which, apart from the spatial dimensionality and the range of the interactions, is characterized by the number of components of the order parameter, e.g., the magnetization in the case of ferromagnetic models. Real systems, however, suffer from various kinds of imperfections, e.g., lattice defects, impurities, or vacancies. In an experiment designed to probe critical behavior as a function of temperature, the presence of, say, impurities on the lattice constitutes a thermodynamic constraint, because in a given sample the impurity concentration will remain constant during the temperature scans. According to the concepts of thermodynamics the impurity concentration $n_i$ can be written as the derivative of the grand canonical potential with respect to the chemical potential $\mu_i$ of the impurities, where other parameters like the temperature and the volume of the system are kept fixed. Now the question arises how the critical singularities in the grand canonical potential are affected when the thermodynamic ensemble is changed from ’fixed $\mu_i$’ to ’fixed $n_i$’, where the location of the critical temperature $T_c$ depends on the particular values of $\mu_i$ or $n_i$, respectively. The answer to this question was given a long time ago by Michael Fisher [@Fisher68].
Provided that the critical singularities have their usual form in the ’fixed $\mu_i$’ ensemble, the constraint $n_i = const.$ amounts to a reparameterization of the reduced temperature $t = (T-T_c(n_i))/T_c(n_i)$ of the [*constrained*]{} system in terms of the reduced temperature $\tau = (T-T_c(\mu_i))/T_c(\mu_i)$ of the [*unconstrained*]{} system according to [@Fisher68] $$\label{ttau}
t = a \tau + b \tau |\tau|^{-\alpha} + \dots ,$$ where $a$ and $b$ are nonuniversal constants and the dots indicate higher-order contributions. Apart from the linear term, (\[ttau\]) contains a singular contribution which is characterized by the critical exponent $1 - \alpha$ of the entropy density. Which of the two terms in (\[ttau\]) is the leading one for $t, \tau \to 0$ depends on the sign of $\alpha$. Within the Ising universality class in $d = 3$ dimensions $\alpha \simeq 0.109$ [@LGZJ85] so that $|\tau| \sim |t|^{1/(1-\alpha)}$ to leading order, and therefore the critical exponents $\beta$ (order parameter), $\gamma$ (susceptibility), and $\nu$ (correlation length) of the unconstrained system undergo ’Fisher renormalization’ in the constrained system according to [@Fisher68] $$\label{fren}
\beta \to \beta' = \beta / (1 - \alpha), \quad
\gamma \to \gamma' = \gamma / (1 - \alpha), \quad
\nu \to \nu' = \nu / (1 - \alpha) .$$ The specific heat exponent $\alpha$ requires a more careful analysis, because the specific heat is the temperature derivative of the entropy which in addition to the ’renormalization’ displayed in (\[fren\]) causes a sign change $$\label{frena}
\alpha \to \alpha' = -\alpha / (1 - \alpha) .$$ Note that analytic background contributions to the entropy of the unconstrained system become [*singular*]{} in the constrained system due to the singularity in the reparameterization given by (\[ttau\]).
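As a quick numerical illustration of (\[fren\]) and (\[frena\]), not part of the original analysis, the following sketch evaluates the Fisher-renormalized exponents for representative $d=3$ Ising values ($\alpha \simeq 0.109$, $\beta \simeq 0.326$, $\gamma \simeq 1.237$, $\nu \simeq 0.630$; the inputs are quoted for illustration only):

```python
# Fisher renormalization of the critical exponents under a constraint:
#   beta, gamma, nu  ->  exponent / (1 - alpha)
#   alpha            ->  -alpha / (1 - alpha)
def fisher_renormalize(alpha, beta, gamma, nu):
    f = 1.0 / (1.0 - alpha)
    return {"alpha": -alpha * f, "beta": beta * f,
            "gamma": gamma * f, "nu": nu * f}

# Representative d = 3 Ising exponents (illustrative input values).
primed = fisher_renormalize(alpha=0.109, beta=0.326, gamma=1.237, nu=0.630)
for name, value in primed.items():
    print(f"{name}' = {value:+.4f}")
```

Note that the Rushbrooke relation $\alpha + 2\beta + \gamma = 2$ survives the renormalization, since $-\alpha + 2\beta + \gamma = 2(1-\alpha)$ is divided by the same factor $1 - \alpha$.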
Within the XY universality class in $d = 3$ the exponent $\alpha$ is negative [@LGZJ85], where probably the best current estimate $\alpha \simeq -0.013$ is obtained from an experiment on $^4$He near the superfluid transition [@LSNCI96]. For negative $\alpha$ the linear term on the r.h.s. of (\[ttau\]) is the dominating one for $\tau \to 0$. However, within the XY universality class $\alpha$ is so small that in practice the singular term in (\[ttau\]) can never be neglected. Instead, the singular contribution to (\[ttau\]) gives rise to very slowly decaying correction terms which must not be confused with Wegner corrections to scaling. These correction terms have to be considered in any scaling analysis in order to obtain correct values for the critical exponents.
If the system is finite, which is necessarily the case for any Monte Carlo simulation, all critical singularities are rounded, i.e., all quantities are analytic functions of the thermodynamic parameters [@Fisher71], so that a thermal singularity as shown in (\[ttau\]) does not occur. Critical finite-size rounding effects in, e.g., a cubic box $L^d$ are captured by [*universal*]{} finite-size scaling functions [@Fisher71; @Barber83] which restore all critical singularities in the limit $L \to \infty$. Following the line of argument in [@Fisher68], (\[ttau\]) then has to be replaced by $$\label{ttaufL}
t = a \tau + \tau |\tau|^{-\alpha} f(\tau L^{1/\nu}) ,$$ where $f(x)$ is the finite-size scaling function of the entropy density and $x = \tau L^{1/\nu}$ is a convenient choice of its scaling argument. For $\tau \to 0$ at finite $L$ the singular prefactor of $f(x)$ in (\[ttaufL\]) must be cancelled so that one has $f(x) = A |x|^{\alpha} +
\dots$ in the limit $x \to 0$, where $A$ is a nonuniversal constant such that $f(x)/A$ is a [*universal*]{} function of its argument. To leading order in $\tau$ the reparameterization of the reduced temperature $t$ of the constrained system is therefore [*linear*]{} in the reduced temperature $\tau$ of the unconstrained system and one finds $$\label{ttauL}
t = \tau (a + A L^{\alpha/\nu}).$$ According to (\[ttauL\]) the finite-size scaling argument $x$ in the constrained system is given by $$\label{x}
x = \tau L^{1/\nu} = t L^{1/\nu} / (a + A L^{\alpha/\nu}),$$ where the [*shape*]{} of the finite-size scaling functions is maintained [@VD], i.e., the presence of the constraint [*only*]{} affects the form of the scaling argument $x$. For $\alpha > 0$ (\[x\]) asymptotically reduces to $x = t L^{1/\nu'}/A$ for large $L$ in accordance with Fisher renormalization (see (\[fren\])). For $\alpha < 0$ (\[x\]) captures the aforementioned slowly decaying corrections to the asymptotic critical behavior in the XY universality class when a thermodynamic constraint is present. Note that $A > 0$ for the Ising universality class and that $A < 0$ for the XY universality class.
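To see how slowly these corrections decay, one can evaluate the scaling argument (\[x\]) directly. In the sketch below the nonuniversal constants are set to the placeholder values $a = A = 1$ and XY-like exponents are used, so the numbers are purely illustrative:

```python
# Constrained finite-size scaling argument:
#   x = t * L**(1/nu) / (a + A * L**(alpha/nu)),
# with the nonuniversal constants a, A set to placeholder values.
def scaling_argument(t, L, alpha, nu, a=1.0, A=1.0):
    return t * L ** (1.0 / nu) / (a + A * L ** (alpha / nu))

# XY-like exponents: alpha ~ -0.013 and nu = (2 - alpha)/3 from
# hyperscaling in d = 3.
alpha = -0.013
nu = (2.0 - alpha) / 3.0
for L in (8, 64, 512, 4096):
    print(L, L ** (alpha / nu))   # the very slowly decaying correction
```

Even at $L = 4096$ the correction factor $L^{\alpha/\nu}$ has only decayed to roughly $0.85$, which is the practical reason why the singular term in (\[ttau\]) can never be neglected within the XY universality class.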
In the remainder of this paper a simple spin model is introduced which can be efficiently simulated with existing Monte Carlo algorithms both with and without constraints in three dimensions. For the Ising and the XY version of the model, finite-size scaling according to (\[x\]) is tested for the modulus of the order parameter, the susceptibility, and the specific heat.
Model and simulation method
===========================
The model system which is investigated here can be described as an $O(N-1)$ symmetric classical ’planar’ ferromagnet in a transverse magnetic field. The model Hamiltonian reads $$\label{H}
{\cal H} = -J \sum_{\langle i j \rangle}
\sum_{x=1}^{N-1} S_i^x S_j^x - h \sum_i S_i^N,$$ where $\langle i j \rangle$ denotes a nearest neighbor pair of spins on a simple cubic lattice in $d = 3$ dimensions. The lattice contains $L$ lattice sites in each direction and in order to avoid surface effects periodic boundary conditions are applied. Each spin $\vec{S}_i = \left(S_i^1,S_i^2,\dots,S_i^N\right)$ is a classical $N$-component unit vector, i.e., $|\vec{S}_i| = 1$ for each lattice site $i$. The magnetic field $h$ in (\[H\]) only acts on the $N$th spin component.
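A minimal sketch of how the energy of a spin configuration can be evaluated for this Hamiltonian (periodic boundaries via array shifts; the Monte Carlo updates themselves are not shown, and the parameter values are illustrative):

```python
import numpy as np

# Energy of a spin configuration under the model Hamiltonian: an
# O(N-1)-symmetric exchange of the first N-1 spin components plus a
# transverse field h coupled to the N-th component, on a periodic
# L x L x L simple cubic lattice.
def energy(spins, J=1.0, h=0.5):
    """spins: array of shape (L, L, L, N), each spin of unit norm."""
    E = 0.0
    for axis in range(3):                        # the three lattice directions
        shifted = np.roll(spins, -1, axis=axis)  # periodic neighbours
        # exchange couples only the first N-1 components
        E -= J * np.sum(spins[..., :-1] * shifted[..., :-1])
    E -= h * np.sum(spins[..., -1])              # transverse-field term
    return E

# Fully polarized transverse configuration: the exchange term vanishes
# and only the field term survives.
L, N = 4, 2   # N = 2 gives the Ising-like version of the model
up = np.zeros((L, L, L, N))
up[..., -1] = 1.0
print(energy(up))   # -> -h * L**3 = -32.0
```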
---
abstract: |
[Multilayer relationships among entities and information about entities must be accompanied by the means to analyze, visualize, and obtain insights from such data. We present open-source software ([`muxViz`]{}) that contains a collection of algorithms for the analysis of multilayer networks, which are an important way to represent a large variety of complex systems throughout science and engineering. We demonstrate the ability of [`muxViz`]{} to analyze and interactively visualize multilayer data using empirical genetic, neuronal, and transportation networks. Our software is available at <https://github.com/manlius/muxViz>.]{} [multilayer networks; software; visualization; multiplex networks; interconnected networks]{}\
2000 Math Subject Classification: 91D30, 05C82, 76M27
author:
- |
Manlio De Domenico[^1]\
[ *Departament d’Enginyeria Informática i Matemátiques,* ]{}\
[ *Universitat Rovira I Virgili, 43007 Tarragona, Spain* ]{}
- |
Mason A. Porter\
[ *Oxford Centre for Industrial and Applied Mathematics,* ]{}\
[ *Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK* ]{}\
and\
[ *CABDyN Complexity Centre, University of Oxford, Oxford OX1 1HP, UK* ]{}
- |
Alexandre Arenas\
[ *Departament d’Enginyeria Informática i Matemátiques,* ]{}\
[ *Universitat Rovira I Virgili, 43007 Tarragona, Spain* ]{}
bibliography:
- 'muxviz\_final.bib'
title: 'MuxViz: A Tool for Multilayer Analysis and Visualization of Networks'
---
Introduction
============
Although the study of networks is old, the analysis of complex systems has benefited particularly during the last two decades from the use of networks to model large systems of interacting agents [@newman2010]. Such efforts have yielded numerous insights in many areas of science and technology [@kitano2002computational; @de2002modeling; @barabasi2004network; @sharan2007network; @beyer2007integrating; @sporns2014contributions; @colizza2006role; @gomez2007dynamical; @gomez2008spreading; @eagle2009inferring; @lazer2009life; @balcan2009multiscale; @kitsak2010identification; @aral2012identifying; @vespignani2012modelling].
In the case of biological networks, connections among genes, proteins, neurons, and other biological entities can indicate that they are part of the same biological pathway or exhibit similar biological functions. Network representations focus on connectivity, and they have now become a paradigmatic way to investigate the organization and functionality of cells [@jeong2000large; @jeong2001lethality; @shen2002network; @maslov2002specificity; @tong2004global; @guimera2005functional; @rosenfeld2005gene; @chen2006wiring; @goh2007human; @costanzo2010genetic], synaptic connectivity [@van1996chaos; @sporns2004motifs; @buzsaki2004neuronal; @sporns2004organization; @mantini2007electrophysiological; @bullmore2009complex; @seeley2009neurodegenerative; @bassett2011dynamic; @nicosia2013remote; @nicosia2013phase], and more. There are also myriad applications to other types of systems (e.g., in sociology, transportation, physics, and more) [@Wasserman1994Social; @boccaletti2006complex; @newman2010; @barthelemy2011spatial; @Holme2012Temporal; @kivela2013multilayer].
In parallel, a large variety of computational techniques have been developed to analyze (and visualize) networks and the information that they encode. In biology, for example, such methods have become important tools for attempting to understand and represent cell functionality. However, although the standard network paradigm has been very successful, it has a fundamental flaw: it forces the aggregation of multilayer information to construct network representations that include only a single type of connection between pairs of entities. This can lead to misleading results, and it is becoming increasingly apparent that a more complicated representation is necessary [@kivela2013multilayer].
Recently, a novel mathematical framework to model and analyze multilayer relationships and their dynamics was developed [@mucha2010community; @dedomenico2013mathematical]. In this framework, one represents the underlying network topology and interaction weights as a *multilayer network*, in which entities can exhibit different relationships simultaneously and can exist on different “layers”. Multilayer networks can encode much richer information than what is possible using the individual layers separately (which is what is usually done). This, in turn, provides a suitable framework for versatile and sophisticated analyses that have already been used successfully to reveal multilayer community structure [@mucha2010community] and to measure important nodes and the correlations between them [@dedomenico2013mathematical; @dedomenico2013centrality; @nicosia2013correlations; @battiston2014structural]. However, to meet the requirements of an operational toolbox to be applied to the analysis of [complex]{} systems, it is of paramount importance to also develop open-source software to visualize multilayer networks and represent the results of analyzing such networks in a meaningful way.
Multilayer networks have already yielded fascinating insights and are experiencing burgeoning popularity. For example, there have been numerous studies to attempt to understand how interdependencies (e.g., [@buldyrev2010catastrophic; @brummitt2012suppressing]), other multilayer structures (e.g., [@lee2012correlated; @radicchi2013abrupt; @cardillo2013emergence; @dedomenico2013centrality; @cozzo2013clustering; @nicosia2013correlations; @cellai2013percolation; @battiston2014structural]), dynamics (e.g., [@yaugan2012analysis; @gomez2012evolution; @cozzo2012stability; @cozzo2013contact; @dedomenico2014navigability]), and control (e.g., [@mario-review2010]) can improve understanding of complex interacting systems. See the recent review article [@kivela2013multilayer] for extensive discussions and a thorough review of results.
The increasing use of more complicated network representations has yielded a new set of challenges: how should one visualize, analyze, and interpret multilayer data? Although there has been progress in numerous applications, many of the key results have concentrated on data from examples like social and transportation networks [@kivela2013multilayer]. Multilayer analysis has rarely been exploited in the investigation of biological networks — even though such a perspective is clearly relevant — and we believe that the lack of appropriate software has contributed to this situation. For example, in a recent study, the genetic and protein-protein interaction networks of *Saccharomyces cerevisiae* were investigated simultaneously [@costanzo2010genetic] to uncover connection patterns. Costanzo *et al.* [@costanzo2010genetic] also reported that genetic interactions have an overlap of 10–20% with protein-protein interaction pairs, which is significantly higher than the $3\%$ overlap that they expected based on a random null model. This suggests that many positive and negative interactions occur between — rather than within — complexes and pathways [@costanzo2010genetic] and thereby gives an important example of how exploiting multilayer information might improve understanding of biological structure and functionality.
Although the aforementioned overlap is an indication of correlation between a pair of networks, as has been the case in several studies of social and technological systems [@mucha2010community; @dedomenico2013centrality; @nicosia2013correlations; @battiston2014structural; @dedomenico2014navigability], the analysis of multilayer biological data would benefit greatly from techniques and diagnostics that are able to exploit, e.g., multiplexity (i.e., multiple different ways to interact) in available information.
Methods
=======
The primary contributions of the present work are to address the computational challenge of analysis and visualization of multilayer information by providing a practical methodology, and accompanying software that we call [`muxViz`]{}, for the analysis and the visualization of multilayer networks. In Appendix \[supp:note:5\], we give technical details about the [`muxViz`]{} software.
Visualization {#supp:note:1}
-------------
In multilayer networks, nodes can exist in several layers simultaneously; an entity that exists in multiple layers has “replicas” of its node on the other layers, and these replicas are connected to each other via interlayer edges. One can visualize a multilayer network in [`muxViz`]{} either using explicit layers or as an edge-colored multigraph [@kivela2013multilayer], in which edges are “colored” according to the different types of relationships that they represent (see Fig. \[fig:fig1suppl\] for examples of genetic and neuronal multilayer networks).
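One standard algebraic encoding of such a node-aligned multiplex network, shown here as an illustrative sketch rather than the internal representation used by [`muxViz`]{}, is the supra-adjacency matrix: intralayer adjacency matrices form the diagonal blocks, and interlayer replica couplings fill the off-diagonal blocks.

```python
import numpy as np

# Supra-adjacency matrix of a node-aligned multiplex network:
# intralayer adjacency matrices on the diagonal blocks, and identity
# blocks (weight omega) coupling each node to its replicas in the
# other layers.
def supra_adjacency(layers, omega=1.0):
    n = layers[0].shape[0]   # number of nodes per layer
    m = len(layers)          # number of layers
    S = np.zeros((n * m, n * m))
    for a, A in enumerate(layers):
        S[a*n:(a+1)*n, a*n:(a+1)*n] = A            # intralayer block
        for b in range(m):
            if b != a:                              # replica coupling
                S[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)
    return S

# Two toy layers on the same three nodes (hypothetical data).
L1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float)
S = supra_adjacency([L1, L2], omega=0.5)
print(S.shape)  # (6, 6)
```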
The [`muxViz`]{} software focuses predominantly on “multiplex networks”, which refer to networks with multiple relational types and which are arguably
---
abstract: 'Using magnetocapacitance data in tilted magnetic fields, we directly determine the chemical potential jump in a strongly correlated two-dimensional electron system in silicon when the filling factor traverses the spin and the cyclotron gaps. The data yield an effective $g$ factor that is close to its value in bulk silicon and does not depend on filling factor. The cyclotron splitting corresponds to the effective mass that is strongly enhanced at low electron densities.'
author:
- 'V. S. Khrapai, A. A. Shashkin, and V. T. Dolgopolov'
title: |
Direct measurements of the spin and the cyclotron gaps\
in a 2D electron system in silicon
---
A two-dimensional (2D) electron system in silicon metal-oxide-semiconductor field-effect transistors (MOSFETs) is remarkable due to strong electron-electron interactions. The Coulomb energy overpowers both the Fermi energy and the cyclotron energy in accessible magnetic fields. The Landau-level-based considerations of many-body gaps [@ando; @yang], which are valid in the weakly interacting limit, cannot be directly applied to this strongly correlated electron system. In a perpendicular magnetic field, the gaps for charge-carrying excitations in the spectrum should originate from cyclotron, spin, and valley splittings and be related to a change of at least one of the following quantum numbers: Landau level, spin, and valley indices. However, the gap correspondence to a particular single-particle splitting is not obvious [@brener], and the origin of the excitations is unclear. In a recent theory [@iordan], the strongly interacting limit has been studied, and it has been predicted that in contrast to the single-particle picture, the many-body gap to create a charge-carrying (iso)spin texture excitation at integer filling factor is determined by the cyclotron energy. This is also in contrast to the square-root magnetic field dependence of the gap expected in the weakly interacting limit [@ando; @yang].
A standard experimental method for determining the gap value in the spectrum of the 2D electron system in a quantizing magnetic field is activation energy measurements at the minima of the longitudinal resistance [@englert; @klein; @dol88; @usher]. Its disadvantage is that it yields a mobility gap which may be different from the gap in the spectrum. In Si MOSFETs, the activation energy as a function of magnetic field was reported to be close to half of the single-particle cyclotron energy for filling factor $\nu=4$, while decreasing progressively for the higher $\nu$ cyclotron gaps [@englert; @klein; @dol88]. At low electron densities, an interplay was observed between the cyclotron and the spin gaps, manifested by the disappearance of the cyclotron ($\nu=4$, 8, and 12) minima of the longitudinal resistance [@krav00]. On the contrary, for the 2D electrons in GaAs/AlGaAs heterostructures, the activation energy at $\nu=2$ exceeded half the single-particle cyclotron energy by about 40% [@usher]. Another, direct method for determining the gap in the spectrum is measurement of the chemical potential jump across the gap [@smith85; @aristov; @valley]. It was applied to the 2D electrons in GaAs [@smith85] and gave cyclotron gap values corresponding to the band electron mass [@aristov]. Recently, the method has been used to study the valley gap at the lowest filling factors in the 2D electron system in silicon which has been found to be strongly enhanced and increase linearly with magnetic field [@valley; @rem].
The effective electron mass, $m$, and $g$ factor in Si MOSFETs have been determined lately from measurements of the parallel magnetic field of full spin polarization in this electron system and of the slope of the metallic temperature dependence of the conductivity in zero magnetic field [@gm]. It is striking that the effective mass becomes strongly enhanced with decreasing electron density, $n_s$, while the $g$ factor remains nearly constant and close to its value in bulk silicon. This result is consistent with accurate measurements of $m$ at low $n_s$ by analyzing the temperature dependence of the Shubnikov-de Haas oscillations in weak magnetic fields in the low-temperature limit [@us; @rem1]. A priori it is unknown whether or not the so-determined values $g$ and $m$ correspond to the spin and the cyclotron splittings in strong perpendicular magnetic fields.
In this paper, we report the first measurements of the chemical potential jump across the spin and the cyclotron gaps in a 2D electron system in silicon in tilted magnetic fields using a magnetocapacitance technique. We find that (i) the $g$ factor is close to its value in bulk silicon and does not change with filling factor, in contrast to the strong dependence of the valley gap on $\nu$; and (ii) the cyclotron splitting is determined by the effective mass that is strongly enhanced at low electron densities. We also verify the systematics of the gaps in that the measured $\nu=4$, 8, and 12 cyclotron gap decreases with parallel magnetic field component by the same amount as the $\nu=2$, 6, and 10 spin gap increases.
Measurements were made in an Oxford dilution refrigerator with a base temperature of $\approx 30$ mK on high-mobility (100)-silicon MOSFETs (with a peak mobility close to 2 m$^2$/Vs at 4.2 K) having the Corbino geometry with diameters 250 and 660 $\mu$m. The gate voltage was modulated with a small ac voltage of 15 mV at frequencies in the range 2.5 – 25 Hz, and the imaginary current component was measured with high precision using a current-voltage converter and a lock-in amplifier. Care was taken to reach the low frequency limit where the magnetocapacitance, $C(B)$, is not distorted by lateral transport effects. A dip in the magnetocapacitance at integer filling factor is directly related to a jump, $\Delta$, of the chemical potential across a corresponding gap in the spectrum of the 2D electron system, and therefore we determine $\Delta$ by integrating $C(B)$ over the dip in the low temperature limit where the magnetocapacitance saturates and becomes independent of temperature [@valley].
Typical magnetocapacitance traces taken at different electron densities, temperatures, and tilt angles are displayed in Fig. \[fig1\] near the filling factor $\nu=hcn_s/eB_\perp=4$ and $\nu=6$. The magnetocapacitance shows narrow minima at integer $\nu$ which are separated by broad maxima, the oscillation pattern reflecting the modulation of the thermodynamic density of states, $D$, in quantizing magnetic fields: $1/C=1/C_0+1/Ae^2D$ (where $C_0$ is the geometric capacitance between the gate and the 2D electrons, and $A$ is the sample area) [@smith85]. As the magnetic field is increased, the maximum $C$ approaches the geometric capacitance indicated by the dashed lines in Fig. \[fig1\]. Since the magnetocapacitance $C(B)<C_0$ around each maximum is almost independent of magnetic field, this results in asymmetric minima of $C(B)$, the asymmetry being more pronounced for $\nu=4$, 8, and 12. The chemical potential jump at integer $\nu=\nu_0$ is determined by the area of the dip in $C(B)$:
$$\Delta=\frac{Ae^3\nu_0}{hcC_0}\int_{\text{dip}}\frac{C_{\text{ref}}-
C}{C}dB_\perp, \label{Delta}$$
where $C_{\text{ref}}$ is a step function that is defined by two reference levels corresponding to the capacitance values at the low and high field edges of the dip as shown by the dotted line in Fig. \[fig1\]. The so-determined $\Delta$ is smaller than the level splitting by the level width. The latter is extracted from the data by substituting $(C_0-C_{\text{ref}})/C$ for the integrand in Eq. (\[Delta\]) and integrating for the case of resolved levels between the magnetic fields $B_1=hcn_s/e(\nu_0+1/2)$ and $B_2=hcn_s/e(\nu_0-1/2)$.
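In practice, the integral in Eq. (\[Delta\]) is evaluated numerically from the measured trace. The following sketch applies trapezoidal quadrature to synthetic data; the physical prefactor is set to unity and the dip is a hypothetical Gaussian, both assumptions made purely for illustration.

```python
import numpy as np

# Evaluate the chemical-potential jump Delta by trapezoidal quadrature
# of (C_ref - C)/C over the magnetocapacitance dip. The physical
# prefactor A e^3 nu_0 / (h c C_0) is set to 1 here, and the dip is a
# synthetic Gaussian standing in for measured data.
def chemical_potential_jump(B, C, C_ref, prefactor=1.0):
    f = (C_ref - C) / C
    return prefactor * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(B)))

B = np.linspace(3.0, 5.0, 2001)   # perpendicular field sweep (arb. units)
C0 = 1.0                          # geometric capacitance
C = C0 * (1.0 - 0.3 * np.exp(-((B - 4.0) / 0.2) ** 2))  # dip at integer nu
C_ref = np.full_like(B, C0)       # flat reference level for this example
delta = chemical_potential_jump(B, C, C_ref)
print(f"Delta (arb. units) = {delta:.4f}")
```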
Tilting the magnetic field allows us to verify the systematics of the gaps in the spectrum and probe the lowest-energy charge-carrying excitations. As the thickness of the 2D electron system in Si MOSFETs is small compared to the magnetic length in accessible fields, the parallel field couples largely to the electrons’ spins while the orbital effects are suppressed [@simonian]. Therefore, the variation of a gap with $B_\parallel$ should reflect the change in the excitation energy as the Zeeman splitting, $g\mu_BB$, is increased: the excitation energy change is determined by the difference between the spin projections onto the magnetic field for the ground and the lowest excited states. Within the single-particle picture, e.g., one can expect that with increasing $B_\parallel$ at fixed $B_\perp$, the spin gap will increase, the valley gap will stay constant, and the cyclotron gap, which is given by the difference between the cyclotron splitting and the sum of the spin and the valley splittings, will decrease. In contrast, for spin textures (so
---
abstract: 'In this paper, we study the mathematical structure and numerical approximation of elliptic problems posed in a (3D) domain $\Omega$ when the right-hand side is a (1D) line source $\Lambda$. The analysis and approximation of such problems is known to be non-standard as the line source causes the solution to be singular. Our main result is a splitting theorem for the solution; we show that the solution admits a split into an explicit, low-regularity term capturing the singularity, and a high-regularity correction term $w$ being the solution of a suitable elliptic equation. The splitting theorem states the mathematical structure of the solution; in particular, we find that the solution has anisotropic regularity. More precisely, the solution fails to belong to $H^1$ in the neighbourhood of $\Lambda$, but exhibits piecewise $H^2$-regularity parallel to $\Lambda$. The splitting theorem can further be used to formulate a numerical method in which the solution is approximated via its correction function $w$. This approach has several benefits. Firstly, it recasts the problem as a 3D elliptic problem with a 3D right-hand side belonging to $L^2$, a problem for which the discretizations and solvers are readily available. Secondly, it makes the numerical approximation independent of the discretization of $\Lambda$. Thirdly, it improves the approximation properties of the numerical method. We consider here the Galerkin finite element method, and show that the singularity subtraction then recovers optimal convergence rates on uniform meshes, i.e., without needing to refine the mesh around each line segment. The numerical method presented in this paper is therefore well-suited for applications involving a large number of line segments. We illustrate this by treating a dataset (consisting of $\sim 3000$ line segments) describing the vascular system of the brain.'
address:
- 'Department of Mathematics, University of Bergen, Norway. '
- 'Department of Mathematics, Karlstad University, Sweden. '
- 'Department of Mathematics, Technical University of Munich, Germany. '
author:
- 'Ingeborg G. Gjerde'
- Kundan Kumar
- 'Jan M. Nordbotten'
- Barbara Wohlmuth
date: 'October 25, 2018'
title: Splitting method for elliptic equations with line sources
---
[^1]
Introduction
============
Decomposition and regularity properties
=======================================
Numerical methods
=================
Numerical results
=================
**Acknowledgements** The authors thank J. Reichenbach and A. Deistung for bringing our attention to the data used in section \[sec:num-brain\] [@brain], and E. Hanson and E. Hodneland for providing us with the data segmentation and tree extraction.
Conclusions
===========
We studied an elliptic equation having line sources in a 3D domain. The line sources act as Dirac measures defined on lines, causing the solution to be singular on the lines themselves. Central to this work is the result that the solution admits a split into a singular and a regular part. This allows us to study the nature of the solution as well as to develop a numerical algorithm for solving the problem. Mathematically, we see that the solution has anisotropic regularity: it is smooth along the line source, while the line singularity acts as a Dirac point measure in a 2D domain. Our numerical approach solves for the regular part only and therefore obtains optimal convergence rates. We illustrate our approach for several numerical examples, including a data set describing the vascular system of a human brain. Our solution approach is mesh-independent and can be adapted to a variety of discretizations. \[sec:conclusions\]
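The 2D character of the singularity can be illustrated for the model case of an infinite straight line, where the singular term reduces, in the transverse coordinates, to the 2D fundamental solution $-\log(r)/(2\pi)$; the sketch below verifies numerically that this term is harmonic away from the line. (The explicit singular term used in the paper for finite line segments is more involved; this is a simplified, assumed model case.)

```python
import math

# For an infinite straight line source, the singular part of the split
# reduces in the two transverse coordinates to the 2D fundamental
# solution of the Laplacian (a Dirac point measure in a 2D domain):
#   G(x, y) = -log(r) / (2*pi),  r = sqrt(x**2 + y**2).
def G(x, y):
    return -math.log(math.hypot(x, y)) / (2.0 * math.pi)

def laplacian(f, x, y, h=1e-3):
    """Five-point finite-difference Laplacian of f at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

# Away from the line (r > 0) the singular part is harmonic, so the
# correction w = u - G solves an equation with a smooth right-hand side.
print(abs(laplacian(G, 0.3, 0.4)))  # small: zero up to discretization error
```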
[^1]: This work was partially supported by the Research Council of Norway, project number 250223, and Deutsche Forschungsgemeinschaft, grant number WO-671/11-1.
---
abstract: 'Small non-autonomous perturbations around an equilibrium of a nonlinear delayed system are studied. Under appropriate assumptions, it is shown that the number of $T$-periodic solutions lying inside a bounded domain $\Omega\subset \R^N$ is, generically, at least $|\chi \pm 1|+1$, where $\chi$ denotes the Euler characteristic of $\Omega$. Moreover, some connections between the associated fixed point operator and the Poincaré operator are explored.'
author:
- 'P. Amster'
- 'M. P. Kuna'
- 'G. Robledo'
title: Multiple solutions for periodic perturbations of a delayed autonomous system near an equilibrium
---
Introduction
============
Let $\Omega\subset \R^N$ be a bounded domain with smooth boundary. An elementary result from the theory of ODEs establishes that if a smooth function $G:\overline\Omega\to \R^N$ is inwardly pointing over $\partial\Omega$, that is $$\label{hart-weak}
\langle G(x),\nu(x)\rangle <0 \qquad x\in \partial\Omega,$$ where $\nu(x)$ denotes the outer normal at $x$, then the solutions of the autonomous system of ordinary differential equations $$u'(t)=G(u(t))$$ with initial data $u(0)=u_0\in \overline \Omega$ are defined and remain inside $\Omega$ for all $t>0$.
[Now, let us denote the space of $T$–periodic continuous functions as $$C_T:=\{u\in C(\R,\R^N):u(t+T)=u(t)\}$$ and, for given $p\in C_{T}$, consider the non-autonomous system $$u'(t)=G(u(t)) + p(t).$$]{}
[If $\overline\Omega$ has the fixed point property, then the above system has at least one $T$-periodic orbit, provided that $\|p\|_\infty$ is small.]{} This is a straightforward consequence of the fact that the time-dependent vector field $G(x)+ p(t)$ is still inwardly pointing for all $t$; hence, the set $\overline \Omega$ is invariant for the associated flow and thus the Poincaré operator given by $Pu_0:=u(T)$ is well defined for $u_0\in \overline\Omega$ and satisfies $P(\overline\Omega)\subset \overline\Omega$.
More generally, observe that, when (\[hart-weak\]) is assumed, the homotopy defined by $h(x,s):= sG(x) - (1-s)\nu(x)$ with $s\in [0,1]$ does not vanish on $\partial\Omega$; whence $$deg_B(G,\Omega,0) = deg_B(-\nu,\Omega,0),$$ where $deg_B$ stands for the Brouwer degree. Thus, it follows from [@hopf] that $deg_B(G,\Omega,0)=(-1)^N\chi(\Omega)$, where $\chi(\Omega)$ denotes the Euler characteristic of $\Omega$.
It is worth recalling (see, *e.g.*, [@wecken]) that if $\overline \Omega$ has the fixed point property, then $\chi(\Omega)$ is different from $0$. This follows easily in the present setting from the fact that if $\chi(\Omega)=0$ then one can construct a field $G$ satisfying (\[hart-weak\]) that does not vanish in $\Omega$. Since $\overline\Omega$ has the fixed point property, there exist (non-constant) $T$-periodic solutions of all periods which, in turn, implies that $G$ vanishes, a contradiction. Interestingly, the converse of the result in [@wecken] is not true; that is, one can easily find $\Omega$ with nonzero Euler characteristic such that $\overline \Omega$ does not have the fixed point property. For such a domain, the Poincaré map obviously has a fixed point (because $G$ vanishes in $\Omega$). This yields the conclusion that a fixed point-free map in $C(\overline \Omega,\overline\Omega)$ cannot belong to the closure of the set of all the Poincaré maps associated with the homotopy class of $-\nu$.
Now suppose, independently of the value of $\chi(\Omega)$, that $G$ vanishes at some point $e\in \Omega$, namely, that $e$ is an equilibrium point of the autonomous system. It is well known that if $M:=DG(e)$ is nonsingular, then the degree of $G$ over any small neighbourhood $V$ of $e$ is well defined and coincides with $s(M)$, where $$\label{sM}
s(M):= sgn ({\rm det}(M)).$$ Thus, if $s(M)$ is different from $(-1)^N\chi(\Omega)$, then the excision property of the degree implies that the system has at least another equilibrium point in $\Omega\setminus \overline V$. Furthermore, it follows from Sard’s lemma that, for almost all values $\overline p$ in a neighbourhood of $0\in \R^N$, the mapping $G + \overline p$ has at least $\Gamma$ different zeros in $\Omega$, with $$\label{Gamma}
\Gamma=\Gamma(M):=|\chi(\Omega)- (-1)^{N} s(M)| + 1.$$
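For concreteness, the generic lower bound $\Gamma$ of (\[Gamma\]) is elementary to evaluate from $\chi(\Omega)$, $N$, and $\det M$; the following sketch (with illustrative inputs) does so:

```python
# Generic lower bound on the number of zeros, combining
#   s(M) = sgn(det M)   with   Gamma = |chi - (-1)**N * s(M)| + 1.
def s(det_M):
    """Sign of det M (M is assumed nonsingular, so det_M != 0)."""
    return 1 if det_M > 0 else -1

def gamma_bound(chi, N, det_M):
    return abs(chi - (-1) ** N * s(det_M)) + 1

# Example: N = 2, a domain with chi(Omega) = 1 (e.g. a disc) and a
# saddle-type equilibrium with det M < 0 forces at least 3 solutions:
print(gamma_bound(chi=1, N=2, det_M=-1.0))  # |1 - (-1)| + 1 = 3
```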
Thus, one might expect that if $p\in C(\R,\R^N)$ is $T$-periodic and $\|p\|_\infty$ is small, then the number of $T$-periodic solutions of the non-autonomous system is generically greater than or equal to $\Gamma$. Here, ‘generically’ should be understood in the sense of Baire category; that is, the property is valid for all $p$ (close to the origin) in the space of continuous $T$-periodic functions except for a meager set. It can be shown, indeed, that the fixed point index of the Poincaré map $P$ at $e$ is equal to $(-1)^Ns(M)$ and, moreover, a homotopy argument shows that the degree of $P$ over $\Omega$ is equal to $\chi(\Omega)$. Details are omitted because the result follows from the main theorem of the present paper.
For several reasons, the situation is different for the delayed system $$\label{ec}
u'(t) = g(u(t),u(t-\tau))$$ where, for simplicity, we shall assume that $g:\overline\Omega\times \overline\Omega\to \R^N$ is continuously differentiable. In the first place observe that, due to the delay, the condition that the field $G(x):=g(x,x)$ points inwards does not necessarily prevent solutions with initial data $x_0:=\phi\in C([-\tau,0],\overline\Omega)$ from eventually leaving $\overline\Omega$. However, taking into account that $$|u(t_0-\tau)- u(t_0)|
\le \tau \max_{t\in [t_0-\tau,t_0]} |u'(t)|,$$ it follows that the flow-invariance property, now over the set $C([-\tau,0],\overline\Omega)$, is retrieved under the stronger assumption
$$\label{hart}
\langle g(x,y),\nu(x)\rangle < 0 \qquad (x,y)\in
\mathcal A_\tau
(\Omega)$$
where $$\mathcal A_\tau
(\Omega):= \{ (x,y)\in \partial\Omega\times \overline\Omega: |y-x|\le \tau\|g\|_{\infty}\}.$$
In the second place, the previous considerations regarding the Poincaré map become less obvious, since the latter is now defined not over $\overline\Omega$ but over the metric space $C([-\tau,0],\overline\Omega)$. In connection with this fact, we recall that the characteristic equation for autonomous linear delayed systems is transcendental (a so-called quasipolynomial equation), so there typically exist infinitely many complex characteristic values.
Throughout the paper, we shall assume as before that system (\[ec\]) has an equilibrium point $e\in \Omega$, that is, such that $g(e,e)=0$. This necessarily occurs when $\chi(\Omega)\neq 0$, although this latter condition shall not be imposed.
Denote by $A,B\in \R^{N\times N}$ the respective matrices $D_xg(e,e)$ and $D_yg(e,e)$. Again, if $A+B$ is nonsingular and $s(A+B)$ is different from $(-1)^N\chi(\Omega)$, then the system has at least one extra equilibrium point in $\Omega$; furthermore, the number of equilibria in $\Omega$ is generically greater than or equal to $\Gamma$. This is readily verified by writing the set of all the functions $g\in C^1(\overline\Omega\times\overline\Omega,\R^N)$ satisfying (\[hart\]) as the union of the closed sets $$X_n:=\left\{g\in C^1(\overline\Omega\times\overline\Omega,\R^N):
\langle g(x,y),\nu(x)\rangle \le -\frac 1n \quad\hbox{for $(x, y)\in \mathcal A_\tau
(\Omega)$}
\right\}$$ and noticing that $X_n\cap
---
abstract: 'We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.'
author:
- H Rudolf Fiebig
title: Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method
---
\[sec:intro\]Introduction
=========================
Numerical simulations of quantum chromodynamics (QCD) on a Euclidean space-time lattice provide access to mass spectra of hadronic systems through the analysis of time correlation functions. In theory the latter are linear combinations of exponential functions $$C(t,t_0)=Z_1e^{-E_1(t-t_0)}+Z_2e^{-E_2(t-t_0)}+\ldots\,,
\label{exp2}$$ where the $E_n$ are the excitation energies of the system and the strength coefficients $$Z_n=|\langle n|\hat{\Phi}(t_0)|0\rangle|^2
\label{Zp}$$ are matrix elements of some vacuum-subtracted operator $\hat{\Phi}(t_0)=\Phi(t_0)-\langle 0|\Phi(t_0)| 0\rangle$ between the vacuum $|0\rangle$ and a ground or excited state $|n\rangle, n>0$. In practice the exponential model (\[exp2\]) is fitted to noisy numerical simulation ‘data’. The statistical quality of simulation data rarely is good enough for the two-exponential fit (\[exp2\]) to succeed. It is common practice to look at the large-$t$ behavior of the correlation function $C(t,t_0)$ in a $t$-interval where it is dominated by only one exponential, with the lowest energy, and then make a one-parameter fit to a plateau of the effective-mass function $\mu_{\rm eff}(t,t_0)=-{\partial}\ln C(t,t_0)/{\partial t}$. Possible discretizations are
$$\begin{aligned}
\mu_{\rm eff,0}(t,t_0)&=&-\ln\left(\frac{C(t+1,t_0)}{C(t,t_0)}\right)
\simeq m_{\rm eff,0}\label{eff0}\\
\mu_{\rm eff,1}(t,t_0)&=&\frac{C(t+1,t_0)}{C(t,t_0)}\simeq e^{-m_{\rm eff,1}}\label{eff1}\\
\mu_{\rm eff,2}(t,t_0)&=&\frac{C(t+1,t_0)-C(t-1,t_0)}{2C(t,t_0)}\label{eff2}\\
& &\simeq -\sinh(m_{\rm eff,2})\nonumber\\
\mu_{\rm eff,3}^2(t,t_0)&=&\frac{C(t+1,t_0)+C(t-1,t_0)-2C(t,t_0)}{C(t,t_0)}\nonumber\\
& &\simeq 2(\cosh(m_{\rm eff,3})-1)\label{eff3}\,.\end{aligned}$$
The expressions after the $\simeq$ are the values of $\mu_{\rm eff}$ for a pure plateau of mass $m_{\rm eff}$. The procedure implies the selection of consecutive time slices $t=t_1\ldots t_2$ for which $\mu_{\rm eff}={\rm const}$, within errors, and an appropriate fit. The selection of this so-called plateau is a matter of judgment. A condition for reliable results is that the correlation function (\[exp2\]) is dominated by just one exponential term, usually the ground state. The latter can be enhanced by the use of smeared operators [@Alexandrou:1994ti] and fuzzy link variables [@Alb87a]. This analysis procedure discourages consideration of excited states; in fact it will only produce reliable results if those are suppressed. Workarounds involve diagonalization of a correlation matrix of several operators or variational techniques [@Morningstar:1999rf]. Those, however, still rely on plateau selection without utilizing the information contained in the entire available time-slice range of a correlation function.
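As a concrete illustration, the discretizations (\[eff0\])–(\[eff3\]) can each be solved for the effective mass and evaluated numerically. The sketch below is not the analysis code of this work; it assumes a noiseless single-exponential correlator, on which all variants reproduce the input mass exactly.

```python
import numpy as np

def effective_masses(C):
    """Solve the discretizations eff0-eff3 for the effective mass.

    C is a 1-d array C(t), t = 0..T-1; the results cover t = 1..T-2
    so that both C(t+1) and C(t-1) exist for the symmetric variants.
    """
    t = np.arange(1, len(C) - 1)
    mu0 = -np.log(C[t + 1] / C[t])                         # eff0: log of the ratio
    mu1 = -np.log(C[t + 1] / C[t])                         # eff1 yields the same m once solved
    mu2 = np.arcsinh(-(C[t + 1] - C[t - 1]) / (2 * C[t]))  # eff2: sinh form
    mu3 = np.arccosh((C[t + 1] + C[t - 1]) / (2 * C[t]))   # eff3: cosh form
    return mu0, mu1, mu2, mu3

# A noiseless one-exponential correlator: every variant plateaus at m_true.
m_true = 0.4
tt = np.arange(16)
C = 1.3 * np.exp(-m_true * tt)
mu0, mu1, mu2, mu3 = effective_masses(C)
```

On real, noisy data the four variants differ at finite $t$, which is precisely why plateau selection becomes a matter of judgment.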
As lattice simulations of QCD now aim at excited hadron states, $N^\ast$’s for example [@Lee:1998cx; @Gockeler:2001db; @Sasaki:2001nf], this situation is unsatisfactory. Alternative methods employing Bayesian inference [@Jar96] are a viable option. The maximum entropy method (MEM), which involves a particular choice of the Bayesian prior probability, falls in this class. Bayesian statistics [@Box73] is a classic subject with a vast range of applications. However, application within the context of lattice QCD is relatively new [@Nakahara:1999vy; @Nakahara:1999bm; @Asakawa:2000pv; @Asakawa:2000tr; @Lepage:2001ym].
In this work we report on our experience using the MEM for extracting spectral mass density functions $\rho(\omega)$ from lattice-generated time correlators $$C(t,t_0)=\int d\omega\,\rho(\omega) e^{-\omega(t-t_0)}\,,
\label{Crho}$$ where a discrete set of time slices $t$ is understood. Discretization of the $\omega$-integral with reasonably fine resolution leads to an ill-posed problem where the number of parameter values $\rho(\omega)$ is (typically much) larger than the number of lattice data points $C(t,t_0)$. In the MEM an entropy term involving the spectral density is used as a Bayesian prior to infer $\rho(\omega)$ from the data.
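The mismatch between data and parameters can be made explicit by discretizing (\[Crho\]). The grids and the toy spectral density below are hypothetical choices for illustration only:

```python
import numpy as np

# Hypothetical grids: 12 time slices of data vs. 120 omega-bins of parameters.
t0 = 0
t = np.arange(1, 13)
omega, domega = np.linspace(0.05, 3.0, 120, retstep=True)

# Discretized kernel: C(t) = sum_k domega * rho(omega_k) * exp(-omega_k * (t - t0))
K = domega * np.exp(-np.outer(t - t0, omega))   # shape (12, 120)

# Toy spectral density with a ground-state and an excited-state peak
rho = np.exp(-((omega - 0.5) / 0.05) ** 2) + 0.4 * np.exp(-((omega - 1.2) / 0.05) ** 2)
C = K @ rho   # only 12 numbers constrain 120 unknowns

# rank(K) cannot exceed the number of time slices: direct inversion is ill-posed
rank = np.linalg.matrix_rank(K)
```

In practice the effective numerical rank of the exponential kernel is even smaller than the number of time slices, which is why additional prior information is indispensable.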
We here apply MEM analysis to sets of lattice correlation functions of a meson-meson system. Those particular simulations are aimed at learning about mechanisms of hadronic interaction. This will be discussed separately [@Fie02c]. The lattice data generated within that project involve local and nonlocal operators. They exhibit a wide range of statistical quality from ‘very good’ to ‘marginally acceptable’.
Our focus here is to utilize those data as a testing ground for Bayesian MEM analysis. In contrast to other works we apply simulated annealing to the solution of the Bayesian inference problem. The main aim of this work is to explore the feasibility of this approach for extracting masses from a lattice simulation using realistic lattice data, including excitations. For the most part this translates into studying the sensitivity of the method to its native parameters.
\[sec:BayesCF\]Bayesian Inference for Curve Fitting
===================================================
From a Bayesian point of view the spectral density function $\rho$ in (\[Crho\]) is a random variable subject to a certain probability distribution functional ${\cal P}[\rho]$. Solution of the curve fitting problem consists in finding the function $\rho$ which maximizes the conditional probability ${\cal P}[\rho\leftarrow C]$, the [*posterior probability*]{}, given a ‘measured’ data set $C$. Computation of $\rho$ is then based on Bayes’ theorem [@Jar96] $${\cal P}[\rho\leftarrow C]\, {\cal P}[C]
={\cal P}[C\leftarrow \rho]\, {\cal P}[\rho]\,,
\label{BayesT}$$ also known as ‘detailed balance’ in a different context. The functional ${\cal P}[C]$, the [*evidence*]{}, gives the probability of measuring a data set $C$. The conditional probability ${\cal P}[C\leftarrow \rho]$, the [*likelihood function*]{}, determines the probability of measuring $C$ given a spectral function $\rho$. Finally ${\cal P}[\rho]$, the Bayesian [*prior*]{}, defines a constraint on the spectral density function $\rho$. Its choice is a matter of judgment. Ideally, the prior should reflect the physics known about the system, for example an upper limit on the hadronic mass scale. The posterior probability is the product of the likelihood function and the prior ${\cal P}[\rho\leftarrow C] =
{\cal P}[C\leftarrow \rho]\, {\cal P}[\rho]/{\cal P}[C]$, where the [*evidence*]{} merely plays the role of a normalization constant [@Jar96]. Indeed, the normalization condition $\int[d\rho]{\cal P}[\rho\leftarrow C]=1$ applied to (\[BayesT\]) gives ${\cal P}[C] = \int[d\rho]{\cal P}[C\leftarrow \rho]\, {\cal P}[\rho]$. Thus, for a fixed $C$, we have $${\cal P}[\rho\leftarrow C]\propto{\cal P}[C\leftarrow \rho]\, {\cal P}[\rho]\,.
\label{BayesT3}$$ The curve fitting problem requires the product of the [*likelihood function*]{} and the [*prior*]{} function.
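To make the product concrete: with a Gaussian likelihood $\propto e^{-\chi^2/2}$ and the Shannon–Jaynes entropic prior $\propto e^{\alpha S}$ commonly used in MEM analyses, maximizing the posterior amounts to minimizing $Q = \chi^2/2 - \alpha S$ over positive $\rho$. The sketch below assumes this standard form; the grids, the flat default model and all names are illustrative, not taken from this paper.

```python
import numpy as np

def mem_objective(rho, C_data, Cinv, K, m_def, alpha):
    """Negative log posterior Q = chi^2/2 - alpha*S of the MEM.

    K maps a discretized spectral density to model correlator data;
    S is the Shannon-Jaynes entropy relative to a default model m_def.
    """
    r = K @ rho - C_data
    chi2 = r @ Cinv @ r                                  # Gaussian likelihood term
    S = np.sum(rho - m_def - rho * np.log(rho / m_def))  # entropic prior term
    return 0.5 * chi2 - alpha * S

# Hypothetical discretization: 8 time slices, 50 omega-bins
omega, domega = np.linspace(0.1, 2.0, 50, retstep=True)
t = np.arange(1, 9)
K = domega * np.exp(-np.outer(t, omega))
m_def = np.full_like(omega, 0.1)   # flat default model
C_data = K @ m_def                 # data generated by the default itself
Cinv = np.eye(len(t))              # unit covariance, for the sketch only

Q0 = mem_objective(m_def, C_data, Cinv, K, m_def, alpha=1.0)        # minimum: Q = 0
Q1 = mem_objective(1.5 * m_def, C_data, Cinv, K, m_def, alpha=1.0)  # any deviation costs
```

One would then minimize $Q$ over $\rho>0$, for example by the simulated annealing strategy advocated later in this paper.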
\[sec:spectralD\]Spectral density
---------------------------------
Our lattice data come from correlation functions built from heavy-light meson-meson operators $$\Phi_v=v_1\Phi_1+v_2\Phi_2\,,
\label{Phiv}$$ where $\Phi_1$ and $\Phi_2$ involve local and non-local meson-meson fields, respectively, at relative distance $r$, and $v$ are
---
author:
- Christopher Deninger
date:
title: Number theory and dynamical systems on foliated spaces
---
Introduction
============
In this paper we report on some developments in the search for a dynamical understanding of number theoretical zeta functions that have taken place since my ICM lecture [@D2]. We also point out a number of problems in analysis that will have to be solved in order to make further progress.
In section 2 we give a short introduction to foliations and their cohomology. Section 3 is devoted to progress on the dynamical Lefschetz trace formula for one-codimensional foliations mainly due to Álvarez López and Kordyukov. In section 4 we make the comparison with the “explicit formulas” in analytic number theory. Finally in section 5 we generalize the conjectural dynamical Lefschetz trace formula of section 3 to phase spaces which are more general than manifolds. This was suggested by the number theoretical analogies of section 4.
This account is written from an elementary point of view as far as arithmetic geometry is concerned, in particular motives are not mentioned. In spirit the present article is therefore a sequel to [@D1].
There is a different approach to number theoretical zeta functions using dynamical systems by A. Connes [@Co]. His phase space is a non-commutative quotient of the adèles. Although superficially related, the two approaches seem to be deeply different. Whereas Connes’ approach generalizes readily to automorphic $L$-functions [@So] but not to motivic $L$-functions, it is exactly the opposite with our picture. One may wonder whether there is some kind of Langlands correspondence between the two approaches.
I would like to thank the Belgium and German mathematical societies very much for the opportunity to lecture about this material during the joint BMS–DMV meeting in Liège 2001.
Foliations and their cohomology
===============================
A $d$-dimensional foliation ${{\mathcal F}}= {{\mathcal F}}_X$ on a smooth manifold $X$ of dimension $a$ is a partition of $X$ into immersed connected $d$-dimensional manifolds $F$, the “leaves”. Locally the induced partition should be trivial: Every point of $X$ should have an open neighborhood $U$ diffeomorphic to an open ball $B$ in ${{\mathbb{R}}}^a$ such that the leaves of the induced partition on $U$ correspond to the submanifolds $B \cap ({{\mathbb{R}}}^d \times \{ y \})$ of $B$ for $y$ in ${{\mathbb{R}}}^{a-d}$.
One of the simplest non-trivial examples is the one-dimensional foliation of the two-dimensional torus $T^2 = {{\mathbb{R}}}^2 / {{\mathbb{Z}}}^2$ by lines of irrational slope $\alpha$. These are given by the immersions $${{\mathbb{R}}}\hookrightarrow T^2 \; , \; t \mapsto (x + t \alpha , t) {\;\mathrm{mod}\;}{{\mathbb{Z}}}^2$$ parametrized by $x {\;\mathrm{mod}\;}{{\mathbb{Z}}}+ \alpha {{\mathbb{Z}}}$. In this case every leaf is dense in $T^2$ and the intersection of a global leaf with a small open neighborhood $U$ as above decomposes into countably many connected components. It is the global behaviour which makes foliations complicated. For a comprehensive introduction to foliation theory, the reader may turn to [@Go] for example.
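The density of a single leaf can also be checked numerically: intersecting the leaf through $x$ with the circle $\{t \equiv 0 \bmod 1\}$ gives the rotation orbit $x + n\alpha \bmod 1$, which equidistributes for irrational $\alpha$. The values of $\alpha$, $x$ and the sample size below are illustrative choices, not from the text.

```python
import numpy as np

alpha = np.sqrt(2.0)   # an irrational slope
x = 0.1                # base point of the leaf
n = np.arange(20000)
orbit = (x + n * alpha) % 1.0   # leaf intersected with the circle {t = 0 mod 1}

# Equidistribution: the largest gap between consecutive orbit points shrinks,
# so the single leaf comes close to every point of the circle (hence is dense in T^2).
gap = np.max(np.diff(np.sort(orbit)))
```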
To a foliation ${{\mathcal F}}$ on $X$ we may attach its tangent bundle $T {{\mathcal F}}$ whose total space is the union of the tangent spaces to the leaves. By local triviality of the foliation it is a sub vector bundle of the tangent bundle $TX$. It is integrable i.e. the commutator of any two vector fields with values in $T {{\mathcal F}}$ again takes values in $T{{\mathcal F}}$. Conversely a theorem of Frobenius asserts that every integrable sub vector bundle of $TX$ arises in this way.
Differential forms of order $n$ along the leaves are defined as the smooth sections of the real vector bundle $\Lambda^n T^* {{\mathcal F}}$, $${{\mathcal A}}^n_{{{\mathcal F}}} (X) = \Gamma (X, \Lambda^n T^* {{\mathcal F}}) \; .$$ The same formulas as in the classical case define exterior derivatives along the leaves: $$d^n_{{{\mathcal F}}} : {{\mathcal A}}^n_{{{\mathcal F}}} (X) \longrightarrow {{\mathcal A}}^{n+1}_{{{\mathcal F}}} (X) \; .$$ They satisfy the relation $d^{n+1}_{{{\mathcal F}}} {\mbox{\scriptsize $\,\circ\,$}}d^n_{{{\mathcal F}}} = 0$ so that we can form the leafwise cohomology of ${{\mathcal F}}$: $$H^n_{{{\mathcal F}}} (X) = {\mathrm{Ker}\,}d^n_{{{\mathcal F}}} / {\mathrm{Im}\,}d^{n-1}_{{{\mathcal F}}} \; .$$ For our purposes these invariants are actually too subtle. We therefore consider the reduced leafwise cohomology $${\bar{H}}^n_{{{\mathcal F}}} (X) = {\mathrm{Ker}\,}d^n_{{{\mathcal F}}} / \overline{{\mathrm{Im}\,}d^{n-1}_{{{\mathcal F}}}} \; .$$ Here the quotient is taken with respect to the topological closure of ${\mathrm{Im}\,}d^{n-1}_{{{\mathcal F}}}$ in the natural Fréchet topology on ${{\mathcal A}}^n_{{{\mathcal F}}} (X)$. The reduced cohomologies are nuclear Fréchet spaces. Even if the leaves are dense, already ${\bar{H}}^1_{{{\mathcal F}}} (X)$ can be infinite dimensional.
The cup product pairing induced by the exterior product of forms along the leaves turns ${\bar{H}}^{{\raisebox{0.05cm}{$\scriptscriptstyle \bullet$}}}_{{{\mathcal F}}} (X)$ into a graded commutative ${\bar{H}}^0_{{{\mathcal F}}} (X)$-algebra.
The Poincaré Lemma extends to the foliation context and implies that $$H^n_{{{\mathcal F}}} (X) = H^n (X, {{\mathcal R}}) \; .$$ Here ${{\mathcal R}}$ is the sheaf of smooth real valued functions which are locally constant on the leaves. In particular $${\bar{H}}^0_{{{\mathcal F}}} (X) = H^0_{{{\mathcal F}}} (X) = H^0 (X, {{\mathcal R}})$$ consists only of constant functions if ${{\mathcal F}}$ contains a dense leaf.
For the torus foliation above with $\alpha \notin {{\mathbb{Q}}}$ we therefore have ${\bar{H}}^0_{{{\mathcal F}}} (T^2) = {{\mathbb{R}}}$. Some Fourier analysis reveals that ${\bar{H}}^1_{{{\mathcal F}}} (T^2) \cong {{\mathbb{R}}}$. The higher cohomologies vanish since almost by definition we have $$H^n_{{{\mathcal F}}} (X) = 0 \quad \mbox{for all} \; n > d = \dim {{\mathcal F}}\; .$$ For a smooth map $f : X \to Y$ of foliated manifolds which maps leaves into leaves, continuous pullback maps $$f^* : {{\mathcal A}}^n_{{{\mathcal F}}_Y} (Y) \longrightarrow {{\mathcal A}}^n_{{{\mathcal F}}_X} (X)$$ are defined for all $n$. They commute with $d_{{{\mathcal F}}}$ and respect the exterior product of forms. Hence they induce a continuous map of reduced cohomology algebras $$f^* : {\bar{H}}^{{\raisebox{0.05cm}{$\scriptscriptstyle \bullet$}}}_{{{\mathcal F}}_Y} (Y) \longrightarrow {\bar{H}}^{{\raisebox{0.05cm}{$\scriptscriptstyle \bullet$}}}_{{{\mathcal F}}_X} (X) \; .$$ A (complete) flow is a smooth ${{\mathbb{R}}}$-action $\phi : {{\mathbb{R}}}\times X \to X , (t,x) \mapsto \phi^t (x)$. It is called ${{\mathcal F}}$-compatible if every diffeomorphism $\phi^t : X \to X$ maps leaves into leaves. If this is the case we obtain a linear ${{\mathbb{R}}}$-action $t \mapsto \phi^{t*}$ on ${\bar{H}}^n_{{{\mathcal F}}} (X)$ for every $n$. Let $$\Theta : {\bar{H}}^n_{{{\mathcal F}}} (X) \longrightarrow {\bar{H}}^n_{{{\mathcal F}}} (X)$$ denote the infinitesimal generator of $\phi^{t*}$: $$\Theta h = \lim_{t\to 0} \frac{1}{t} (\phi^{t*} h -h ) \; .$$ The limit exists and $\Theta$ is continuous in the Fréchet topology. As $\phi^{t*}$ is an algebra endomorphism of the ${{\mathbb{R}}}$-algebra ${\bar{H}}^{{\raisebox{0.05cm}{$\scriptscriptstyle \bullet$}}}_{{{\mathcal F}}} (X)$ it follows that $\Theta$ is an ${{\mathbb{R}}}$-linear derivation. Thus we have $$\label{eq:1}
\Theta (h_1 \cup h_2) = \Theta h_1 \cup h_2 + h_1 \cup \Theta h_2$$ for all $h_1 , h_2$ in ${\bar{H}}^{{\raisebox{0.05cm}{$\scriptscriptstyle \bullet$}}}_{{{\mathcal F}}} (X)$.
For arbitrary foliations the reduced leafwise cohomology does not seem to have a good structure theory. For Riemannian fol
---
abstract: 'Dynamical models for 17 early-type galaxies in the Coma cluster are presented. The galaxy sample consists of flattened, rotating as well as non-rotating early-types including cD and S0 galaxies with luminosities between $M_B = -18.79$ and $M_B = -22.56$. Kinematical long-slit observations cover at least the major and minor axis and extend to $1-4 \, \,{r_\mathrm{eff}}$. Axisymmetric Schwarzschild models are used to derive stellar mass-to-light ratios and dark halo parameters. In every galaxy the best fit with dark matter matches the data better than the best fit without. The statistical significance is over 95 percent for 8 galaxies, around 90 percent for 5 galaxies and for four galaxies it is not significant. For the highly significant cases systematic deviations between observed and modelled kinematics are clearly seen; for the remaining galaxies differences are more statistical in nature. Best-fit models contain 10-50 percent dark matter inside the half-light radius. The central dark matter density is at least one order of magnitude lower than the luminous mass density, independent of the assumed dark matter density profile. The central phase-space density of dark matter is often orders of magnitude lower than in the luminous component, especially when the halo core radius is large. The orbital system of the stars along the major-axis is slightly dominated by radial motions. Some galaxies show tangential anisotropy along the minor-axis, which is correlated with the minor-axis Gauss-Hermite coefficient $H_4$. Changing the balance between data-fit and regularisation constraints does not change the reconstructed mass structure significantly: model anisotropies tend to strengthen if the weight on regularisation is reduced, but the general property of a galaxy to be radially or tangentially anisotropic, respectively, does not change. 
This paper aims to set the basis for a subsequent detailed analysis of luminous and dark matter scaling relations, orbital dynamics and stellar populations.'
author:
- |
J. Thomas$^{1,2}$[^1], R. P. Saglia$^{2}$, R. Bender$^{1,2}$, D. Thomas$^{3}$, K. Gebhardt$^{4}$, J. Magorrian$^{5}$, E. M. Corsini$^{6}$ and G. Wegner$^{7}$\
$^{1}$Universitätssternwarte München, Scheinerstraße 1, D-81679 München, Germany\
$^{2}$Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstraße, D-85748 Garching, Germany\
$^{3}$Institute of Cosmology and Gravitation, Mercantile House, University of Portsmouth, Portsmouth, PO1 2EG, UK\
$^{4}$Department of Astronomy, University of Texas at Austin, C1400, Austin, TX78712, USA\
$^{5}$Theoretical Physics, Department of Physics, University of Oxford, 1 Keble Road, Oxford U.K., OX1 3NP\
$^{6}$Dipartimento di Astronomia, Università di Padova, vicolo dell’Osservatorio 3, I-35122 Padova, Italy\
$^{7}$Department of Physics and Astronomy, 6127 Wilder Laboratory, Dartmouth College, Hanover, NH 03755-3528, USA
date: 'Accepted 1988 December 15. Received 1988 December 14; in original form 1988 October 11'
title: 'Dynamical modelling of luminous and dark matter in 17 Coma early-type galaxies'
---
\[firstpage\]
stellar dynamics – galaxies: elliptical and lenticular, cD – galaxies: kinematics and dynamics — galaxies: structure
Introduction {#mass:outline}
============
Elliptical galaxies are numerous among the brightest galaxies and they harbour a significant fraction of the present-day stellar mass in the universe [@Fuk98; @Ren06]. Key parameters for the understanding of elliptical galaxy formation and evolution are, among others, the central dark matter density, the scaling radius of dark matter, the stellar mass-to-light ratio and the distribution of stellar orbits. While the concentration of the dark matter halo puts constraints on the assembly epoch [@nfw96; @J00; @W02], the orbital state contains imprints of the assembly mechanism of ellipticals [e.g. @vanAl82; @Her92; @Her93; @Wei96; @Dub98; @Nab03; @Jes05].
Information about elliptical galaxy masses is in principle offered through various channels. The analysis of X-ray halo temperatures, the kinematics of occasional gas discs and galaxy-galaxy lensing provide evidence for extended dark matter halos around early-type galaxies (e.g. @Ber93 [@Piz97; @Loe99; @Oos02; @Hoe04; @Fuk06; @Hum06; @Kle06; @Man06]). These methods do not constrain the inner halo profiles strongly, however. At non-local redshifts strong lensing configurations allow a detailed reconstruction of the mass enclosed inside, say, ${r_\mathrm{eff}}$ (e.g. @Kee01 [@Kop06]). None of the above-mentioned observational channels is sensitive to dynamical galaxy parameters, such as the distribution of stellar orbits.
Dynamical modelling of stellar kinematics has the unique advantage that it allows reconstruction of both the mass structure and the orbital state of a galaxy. High-quality observations of the line-of-sight velocity distributions (LOSVDs) out to several ${r_\mathrm{eff}}$ are needed for this purpose. To overcome the problems of measuring absorption line kinematics in the faint outskirts of ellipticals, discrete kinematical tracers such as planetary nebulae or globular clusters can be used to additionally constrain the mass distribution (e.g. @Sag00 [@R03; @Pie06]).
Since stars in galaxies behave collisionlessly to first order, the distribution of stellar orbits is not known a priori and very general dynamical methods are required to probe all the degrees of freedom in the orbital system. So far only one large sample of 21 round, non-rotating giant ellipticals has been probed for dark matter considering at least the full range of [*spherical*]{} models [@Kr00]. These models predict circular velocity curves constant to about 10 per cent and equal luminous and dark matter somewhere inside $1 - 3 \, {r_\mathrm{eff}}$. Reconstructed halos of these models are $\sim 25$ times denser than in comparably bright spirals, which indicates a $\sim 3$ times higher formation redshift [@G01]. Not all apparently round objects need to be intrinsically spherical; some may be face-on flattened systems.
Apparently flattened ellipticals have not yet been addressed in much generality. Primarily, because [*axisymmetric*]{} modelling is required to account for intrinsic flattening, inclination effects and rotation. Fully general axisymmetric models involve three integrals of motion, one of which – the non-classical so-called third integral – is not given explicitly in most astrophysically relevant potentials. Only recently, sophisticated numerical methods such as Schwarzschild’s orbit superposition technique [@S79] have provided fully general models involving all relevant integrals of motion. Dynamical studies of samples of elliptical galaxies using this technique are, however, based on kinematical data inside $r \la {r_\mathrm{eff}}$ [@Geb03; @Cap05] and dark matter is not considered.
The present paper is part of a project aimed to analyse the luminous and dark matter distributions as well as the orbital structure in a sample of flattened Coma ellipticals. The data for this project has been collected over the last years and consists of ground-based as well as (archival and new) HST imaging and measurements of line-of-sight velocity distributions (LOSVDs) along various position angles out to $1-4 \, {r_\mathrm{eff}}$ (@Meh00; @Weg02; @Cor07). The implementation of our modelling machinery, which is an advanced version of the axisymmetric Schwarzschild code of @Ric88 and @Geb00 has been described in detail in @Tho04 [@Tho05]. In the present paper we survey the models of the whole sample. This sets the basis for subsequent investigations of luminous and dark matter scaling relations and stellar populations in elliptical galaxies (Thomas et al. 2007a, in preparation).
In Sec. \[sec:obs\] the observations are summarised and the modelling is outlined in Secs. \[sec:setup\] and \[sec:res\]. The mass structure of our models and the orbital anisotropies are described in Secs. \[sec:mass\] and \[sec:aniso\], respectively. Phase-space distribution functions for luminous and dark matter are the subject of Secs. \[sec:dfstars\] and \[sec:dfhalo\]. We discuss the influence of regularisation on our results in Sec. \[sec:regula\]. The paper closes with a short discussion and summary in Sec. \[sec:sum\]. A detailed comparison of models and data for each galaxy can be found in App. \[sec:fits\].
Summary of observations {#sec:obs}
=======================
The Coma sample consists of seventeen early-type galaxies: two cD galaxies, nine ordinary giant ellipticals and six lenticulars or galaxies of intermediate type. They cover the luminosity interval $-22.56<M_B<-20.30$, typical for luminous giant ellipticals/cDs. One single fainter galaxy with $M_B=-18.8$
---
abstract: 'The new theoretical input to the analysis of the experimental data of the CCFR collaboration for the $F_3$ structure function of $\nu N$ deep inelastic scattering is considered. This input comes from the next-to-next-to-leading order corrections to the anomalous dimensions of the Mellin moments of the $F_3$ structure function and N$^3$LO corrections to the related coefficient functions. The QCD scale parameter $\Lambda_{\overline{MS}}^{(4)}$ is extracted from higher-twist independent fits. The results obtained demonstrate the minimization of the influence of perturbative QCD contributions to the value of $\Lambda_{\overline{MS}}^{(4)}$.'
---
[CERN-TH/2000-343]{}\
hep-ph/0012014
[**Application of new multiloop QCD input\
to the analysis of $xF_3$ data**]{}\
[**A.L. Kataev**]{}$^{(a)}$, [**G. Parente**]{}$^{(b,1)}$ and [**A.V. Sidorov**]{}$^{(c,2)}$\
(a) Theoretical Physics Division, CERN CH - 1211 Geneva 23 and\
Institute for Nuclear Research of the Academy of Sciences of Russia, 117312 Moscow, Russia\
(b) Department of Particle Physics, University of Santiago de Compostela,\
15706 Santiago de Compostela, Spain\
(c) Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia
[*Based on contributions to the Proceedings of the Quarks-2000 International Seminar, Pushkin, Russia, May 2000, and of the ACAT'2000 Workshop, Fermilab, USA, October 2000*]{}
$^{1}$ Supported by Xunta de Galicia (PGIDT00PX20615PR) and CICYT (AEN99-0589-C02-02)\
$^{2}$ Supported by RFBI (Grants N 99-01-00091, 00-02-17432) and by INTAS call 2000 (project N587)
CERN-TH/2000-343\
November 2000
Introduction
============
One of the most important current problems of symbolic perturbative QCD studies is the analytical evaluation of the next-to-next-to-leading order (NNLO) QCD corrections to the kernels of the DGLAP equations [@DGLAP] for different structure functions of the deep-inelastic scattering (DIS) process. In this note we apply the related information to fix definite uncertainties of the NNLO analysis [@KKPS; @KPS1] of experimental data for the $F_3$ structure function (SF) of $\nu N$ DIS, provided by the CCFR collaboration [@CCFR] at the Fermilab Tevatron, and present preliminary results of our improved fits, which will be described elsewhere [@KPS2].
Methods of analysis of DIS data
===============================
There are several methods of analysis of the experimental data of DIS in the high orders of perturbation theory. The traditional method is based on the solution of the DGLAP equation, which in the case of the $F_3$ SF has the following form: $$Q^2\frac{d}{dQ^2}F_3(x,Q^2)=\frac{1}{2}\int_x^{1}\frac{dy}{y}
\bigg[V_{F_3}(y,A_s)+\beta(A_s)\frac{\partial{\rm ln}C_{F_3}(y,A_s)}
{\partial A_s}\bigg]F_3\bigg(\frac{x}{y},Q^2\bigg)$$ where $A_s=\alpha_s/(4\pi)$, $\mu\partial A_s/\partial\mu=\beta(A_s)$ is the QCD $\beta$-function and $C_{F_3}(y,A_s)$ is the coefficient function, defined as $$C_{F_3}(y,A_s)=\sum_{n\geq 0} C_{F_3,n}(y)
\bigg(\frac{\alpha_s}{4\pi}\bigg)^{n}$$ and $V_{F_3}(z)$ is the DGLAP kernel, related to the non-singlet (NS) $F_3$ SF. The solution of Eq. (1) describes the violation, predicted by perturbative QCD, of the scaling [@Bj] or automodelling [@BVT] behaviour of the DIS SFs through logarithmically decreasing $\alpha_s$-corrections.
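The scale $\Lambda_{\overline{MS}}^{(4)}$ extracted from the fits enters through the running of $\alpha_s$. As a minimal illustration of the logarithmic decrease, the sketch below uses only the leading-order running coupling; the fits in the text of course use higher-order running, and the numerical $\Lambda$ value is a hypothetical placeholder.

```python
import math

def alpha_s_LO(Q2, Lambda2, nf=4):
    """Leading-order alpha_s(Q^2) = 4*pi / (beta0 * ln(Q^2 / Lambda^2))."""
    beta0 = 11.0 - 2.0 * nf / 3.0   # one-loop beta-function coefficient
    return 4.0 * math.pi / (beta0 * math.log(Q2 / Lambda2))

# Hypothetical Lambda_MSbar^(4) of ~330 MeV, squared, in GeV^2
Lam2 = 0.33 ** 2
# Asymptotic freedom: the coupling decreases logarithmically with Q^2
a10, a100 = alpha_s_LO(10.0, Lam2), alpha_s_LO(100.0, Lam2)
```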
The coefficient function we are interested in has been known at the NNLO for quite a long period. The term $C_{F_3,2}(y)$ was analytically calculated in Ref.[@VZ]. The results of these calculations were confirmed recently [@MV] using a different technique.
The kernel $V_{F_3}(z,\alpha_s)$ is analytically known only at the NLO. However, since there exists a method of symbolic evaluation of multiloop corrections to the renormalization group functions in the $\overline{MS}$-scheme [@T], together with its implementation in the FORM system, it became possible to calculate analytically the NNLO corrections to the $n=2,4,6,8,10$ Mellin moments of the NS kernel of the $F_2$ SF [@Larin]. They have the following expansion: $$-\int_0^{1} z^{n-1}V_{NS,F_2}(z,\alpha_s)dz
= \sum_{i\geq 0}\gamma_{NS,F_2}^{(i)}(n)
\bigg(\frac{\alpha_s}{4\pi}\bigg)^{i+1}$$ and are related to the anomalous dimension of NS renormalization group (RG) constants of $F_2$ SF[^1] : $$\mu\frac{\partial\ln Z_n^{NS,F_2}}{\partial\mu}
=\gamma_{NS,F_2}^{(n)}(\alpha_s)~~~~.$$
These results were used in the fits of Refs.[@KKPS; @KPS1] to the CCFR data for the $F_3$ SF with the help of the Jacobi polynomial method [@Jacobi]. This method allows the reconstruction of the SF $F_3$ from a [**finite**]{} number of Mellin moments $M_{j,F_3}(Q^2)$ of the $xF_3$ SF: $$F_3^{N_{max}}(x,Q^2)=w\sum_{n=0}^{N_{max}}
\Theta_n^{\alpha,\beta}(x)\sum_{j=0}^{n}c_j^{(n)}(\alpha,\beta)
M_{j+2,F_3}^{TMC}(Q^2)$$ where $w=w(\alpha,\beta)=x^{\alpha-1}(1-x)^{\beta}$, $\Theta_n^{\alpha,\beta}$ are the orthogonal Jacobi polynomials and $c_j^{(n)}(\alpha,\beta)$ is a combination of Euler $\Gamma$-functions, which increases factorially with $n$ and thus with $N_{max}$.
The expressions for $M_{j+2,F_3}^{TMC}(Q^2)$ include the information about Mellin moments of the coefficient function $$C_{n,F_3}(Q^2)=\int_0^{1}x^{n-1}C_{F_3}(x,\alpha_s)dx
=\sum_{i\geq 0}C^{(i)}(n)\bigg(\frac{\alpha_s}{4\pi}\bigg)^{i}$$ where $C^{(0)}(n)=1$. The target mass corrections, proportional to $(M_N^2/Q^2)M_{j+4,F_3}(Q^2)$, are also included into the fits. Therefore, the number of Jacobi polynomials $N_{max}=6$ corresponds to taking into account the information about the RG evolution of 10 moments, and $N_{max}=9$ means that the evolution of 13 Mellin moments is considered.
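The Jacobi reconstruction above can be illustrated with a schematic numerical sketch: build polynomials orthonormal with respect to the weight $w(\alpha,\beta)$, expand a toy valence-like function, and recover it from a finite set of expansion coefficients, each of which is a finite linear combination of Mellin moments. The weight parameters and the test function below are illustrative choices, not those of the actual fits:

```python
import numpy as np

# Illustrative weight parameters (alpha = 2 keeps every integrand polynomial,
# so plain Gauss-Legendre quadrature is exact here)
alpha, beta = 2.0, 3.0
w = lambda x: x**(alpha - 1.0) * (1.0 - x)**beta

# Gauss-Legendre nodes mapped from [-1, 1] to [0, 1]
xg, wq = np.polynomial.legendre.leggauss(200)
xg, wq = 0.5 * (xg + 1.0), 0.5 * wq

def integrate(f):
    return float(np.sum(wq * f(xg)))

def ortho_polys(N):
    # Gram-Schmidt on monomials: polynomials orthonormal w.r.t. the weight w
    polys = []
    for n in range(N + 1):
        p = np.polynomial.Polynomial.basis(n)
        for q in polys:
            p = p - integrate(lambda x: w(x) * p(x) * q(x)) * q
        polys.append(p / np.sqrt(integrate(lambda x: w(x) * p(x)**2)))
    return polys

# Toy valence-like "structure function"; f/w is a degree-1 polynomial,
# so the finite expansion below recovers f exactly
f = lambda x: x * (1.0 - x)**3 * (1.0 + 2.0 * x)

polys = ortho_polys(4)
# a_n = int_0^1 Theta_n(x) f(x) dx: a finite combination of Mellin moments of f
coeffs = [integrate(lambda x, p=p: p(x) * f(x)) for p in polys]
recon = lambda x: w(x) * sum(a * p(x) for a, p in zip(coeffs, polys))

xs = np.linspace(0.05, 0.95, 50)
err = float(np.max(np.abs(recon(xs) - f(xs))))
```

Because the toy $f/w$ is a polynomial, the truncated expansion is exact; for realistic SFs the truncation at $N_{max}$ introduces the reconstruction error discussed in the text.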
The procedure of reconstruction of $F_3(x,Q
---
abstract: 'A junction between two boundaries of a topological superconductor (TSC), mediated by localized edge modes of Majorana fermions, is investigated. The tunneling of fermions across the junction depends on the magnetic flux and breaks the time-reversal symmetry at the boundary of the sample. The persistent current is determined by the emergence of Majorana edge modes. The structure of the edge modes depends on the magnitude of the tunneling amplitude across the junction. It is shown that there are two different regimes, corresponding to strong and weak tunneling of Majorana fermions, which are distinguished by the behavior of the persistent current. In the strong tunneling regime, the fermion parity of the edge modes is not conserved and the persistent current is a $2\pi$-periodic function of the magnetic flux. When the tunneling is weak, the chiral Majorana states propagating along the edges have the same fermion parity. They form a $4\pi$-periodic persistent current along the boundaries. The regions in the space of parameters that correspond to the emergence of the $2\pi$- and $4\pi$-harmonics are determined numerically. The peculiarities of the persistent current behavior are studied.'
author:
- 'Igor N. Karnaukhov'
title: Persistent current in 2D topological superconductors
---
Introduction {#introduction .unnumbered}
============
The phase-coherent tunneling across a junction between two superconductors implies the presence of a $2\pi$-periodic persistent current, which is defined by the phase difference between the superconducting order parameters. The Josephson effect has been considered in Refs [@I1; @Er] in the framework of the well-known Kitaev chain model [@Kitaev]. Kitaev’s proposal that, in the case of fermion parity conservation, zero-energy Majorana fermion states localized at the ends of the superconducting wire trigger a $4\pi$-periodic persistent current explains the so-called ’topological’ (or fractional) Josephson effect. The $2\pi$- and $4\pi$-harmonics of a persistent current correspond to the respective ground states of the system with different fermion parity when the magnetic flux is greater than $\pi$. The work [@Kitaev] stimulated further research on new topological states realized at junctions between 1D TSCs and Luttinger liquids [@I2; @I3]. In the absence of fermion parity conservation (that is, in superconductors in which the total number of particles is not conserved), the system under consideration relaxes to the phase state with the lowest energy, which leads to the emergence of a $2\pi$-periodic persistent current.
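The fermion-parity argument behind the fractional Josephson effect can be made concrete with a minimal numerical sketch of the Kitaev chain: at the special point $\mu=0$, $t=\Delta$ an open chain hosts exactly two zero-energy Majorana end modes in its Bogoliubov-de Gennes (BdG) spectrum. This is a standard textbook check, not the model of the present paper:

```python
import numpy as np

def kitaev_bdg(L, t, mu, delta):
    # BdG matrix of an open Kitaev chain in the basis (c_1..c_L, c^+_1..c^+_L)
    h = -mu * np.eye(L)
    d = np.zeros((L, L))
    for j in range(L - 1):
        h[j, j + 1] = h[j + 1, j] = -t           # nearest-neighbor hopping
        d[j, j + 1], d[j + 1, j] = delta, -delta  # antisymmetric pairing block
    return np.block([[h, d], [d.T, -h]])

# Sweet spot mu = 0, t = delta: two exact Majorana zero modes at the chain ends,
# while all bulk quasiparticles sit at energy +-2t (the bands are flat there)
E = np.linalg.eigvalsh(kitaev_bdg(20, 1.0, 0.0, 1.0))
absE = np.sort(np.abs(E))
n_zero = int(np.sum(absE < 1e-10))
```

Away from the sweet spot the end modes acquire an exponentially small splitting in the chain length, which is the mechanism behind the parity-sensitive $4\pi$-harmonic.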
Below we discuss the persistent current in a 2D $(p+ip)$ TSC that has the spatial form of a hollow cylinder and is penetrated by a magnetic flux $Q$. We expect a nontrivial behavior of the persistent current depending on the magnitude of the applied magnetic flux. Due to their nontrivial topology [@TSC5; @TSC6; @TSC7; @TSC8], superconductors with $(d+id)$ and $(p+ip)$ order parameters exhibit exotic phenomena such as Majorana vortex bound states and gapless chiral edge modes. The 2D TSCs with $(p+ip)$-pairing of spinless fermions, which have chiral Majorana fermion states propagating along the edges, have been considered in [@TSC5a]. The behavior of topological states in the presence of disorder has been studied in Refs [@A1; @D1; @D3; @D4; @D5; @D6]. A nontraditional approach to the description of TSCs has been proposed in [@K1] (see also [@K2]). It was shown that spontaneous breaking of time-reversal symmetry is realized due to nontrivial stable phases of the superconducting order parameter (a new order parameter). In this context, the models of the TSC with $p$- and $(p+ip)$-wave superconducting pairing of spinless fermions are the simplest and most straightforward examples of relevant model systems.
In a finite system, the gapless chiral edge modes are localized at the boundaries. The tunneling of fermions across a junction leads to gapped edge modes due to the hybridization (through the weak link) of chiral edge modes localized at the different boundaries of the junction. Whereas in a 1D superconductor the fermion parity is associated with zero-energy Majorana edge states [@Kitaev; @I5; @I6; @I7; @D7], for a 2D TSC the persistent current is determined by the presence of the Majorana gapless edge modes localized at the boundaries of the junction. The ground-state fermion parity changes whenever the energy of a pair of Majorana fermions crosses zero. In the superconductor-topological insulator system the fermion parity of the ground state was associated with the Hopf index [@I4]. Fermion parity conservation is, as a rule, the result of the conservation of the total number of particles in the system, while the total number of particles is not conserved in the superconductors studied in the framework of the Bogoliubov-de Gennes formalism. Nevertheless, we show that fermion parity conservation is realized due to the conservation of the Chern number that determines the chiral current at the ends of the cylinder. The key point of the paper is that the unconventional behavior of the persistent current is determined by a chiral current along the boundaries of the TSC, while the behavior of the persistent current depends on the value of the tunneling amplitude of Majorana fermions across the junction. We thus expect the behavior of the persistent current to differ in the cases of strong and weak tunneling of Majorana fermions.
Model Hamiltonian, edge modes {#model-hamiltonian-edge-modes .unnumbered}
=============================
We consider a junction between two boundaries of the TSC. The lattice Hamiltonian for a $(p+ip)$-wave superconductor of spinless fermions consists of two terms: ${\cal H} = {\cal H}_{TSC} + {\cal H}_{tun}$. At that, the first term describes the TSC per se: $${\cal H}_{TSC}= - \sum_{<ij>}a^\dagger_{i}a_j - 2\mu \sum_{j} n_j+
(i\Delta \sum_{<ij> x-links} a^\dagger_{i}a^\dagger_{j}+\Delta\sum_{<ij> y-links} a^\dagger_{i}a^\dagger_{j}+h.c.) ,
\label{eq-H}$$ and the second term describes the tunneling of fermions between two boundaries of a TSC with a junction along the x-direction $${\cal H}_{tun}= - 2\tau e^{i\frac{Q}{2}} \sum_{x-links} a^\dagger_ {x,1} a_{x,L} +h.c.,
\label{eq-Htun}$$ where $a^\dagger_{j}$ and $a_{j}$ are the spinless fermion operators on a site $j = (x,y)$ obeying the usual anticommutation relations, and $n_j$ denotes the density operator. The first term in (1) describes hoppings of spinless fermions between nearest-neighbor lattice sites with magnitude equal to unity, and $\mu$ is the chemical potential (by choosing $0 < \mu < 1$ we do not restrict the generality of the study). The remaining terms describe pairing with the superconducting order parameter $\Delta > 0$, which is defined along the links. Links are divided into two types depending on their direction: real $\Delta$ along y-links and complex $i\Delta$ along x-links. In practice, $\Delta, |\mu| \ll 1$; we therefore consider low-energy excitations for $\Delta, |\mu| < 1$. The term ${\cal H}_{tun}$ contains the tunneling amplitude $0 < \tau < 1$ and takes into account the applied flux $Q$. The value of $Q$ is measured in units of the flux quantum $hc/(2e)$.
The energies $E$ of spinless fermions in the TSC described by the Hamiltonian (\[eq-H\]) are arranged symmetrically with respect to zero energy and are given by the following dispersion relation $$E=\pm[(\mu+\cos k_x + \cos k_y)^2 +\Delta^2 (\sin^2 k_x +\sin^2 k_y)]^{1/2},
\label{eq-3}$$ where the wave vector $\textbf{k}=\{k_x,k_y\}$. In a finite system, the one-particle spectrum of the Hamiltonian ${\cal H}$ (\[eq-H\]), (\[eq-Htun\]) is also symmetric, edge states included. The corresponding edge states are determined by the particle-hole states of Majorana fermions.
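A quick numerical check of the dispersion relation (\[eq-3\]) confirms that the bulk spectrum is gapped in the topologically nontrivial region and that the gap closes at $\mu=0,\pm 2$ (the parameter values below are illustrative):

```python
import numpy as np

def bulk_gap(mu, delta, n=401):
    # Minimum of |E(kx, ky)| from the dispersion relation over a BZ grid
    k = np.linspace(-np.pi, np.pi, n)   # grid includes the points 0 and +-pi
    kx, ky = np.meshgrid(k, k)
    E = np.sqrt((mu + np.cos(kx) + np.cos(ky))**2
                + delta**2 * (np.sin(kx)**2 + np.sin(ky)**2))
    return float(E.min())

gap_topo = bulk_gap(0.2, 1.0)   # inside 0 < |mu| < 2: gapped, gap = |mu| here
gap_crit = bulk_gap(2.0, 1.0)   # mu = 2: the gap closes at kx = ky = pi
```

The gap can only vanish where $\sin k_x=\sin k_y=0$, i.e. at $k_{x,y}\in\{0,\pi\}$, which gives the closing conditions $\mu=0,\pm2$ quoted in the text.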
![(Color online) Low energy spectra with edge modes of the one-dimensional strip along the *x*-direction as a function of the momentum directed along the edge. The energies are calculated at the Kitaev point $\Delta =1$ for $\mu=\frac{1}{5}$, $Q=\pi$ (left) and $Q=\frac{5}{4}\pi$ (right), and for different $\tau$. []{data-label="fig:3"}](3.eps){width="1\linewidth"}
We analyze the formation of Majorana modes at the edges of the TSC. The gapped spectrum of excitations (\[eq-3\]) is realized in the topologically nontrivial phase at $0<|\mu|<2$ (see the excitation spectra in Figs \[fig:1\](a) and (c)). The topological properties of a system are manifested in the existence of a
---
abstract: 'We study the clustering of galaxies as a function of luminosity and redshift in the range $0.35 < z < 1.25$ using data from the Advanced Large Homogeneous Area Medium Band Redshift Astronomical (ALHAMBRA) survey. The ALHAMBRA data used in this work cover $2.38 \deg^2$ in 7 independent fields, after applying a detailed angular selection mask, with accurate photometric redshifts, $\sigma_z \lesssim 0.014 (1+z)$, down to $I_{\rm AB} < 24$. Given the depth of the survey, we select samples in $B$-band luminosity down to $L^{\rm th} \simeq 0.16 L^{*}$ at $z = 0.9$. We measure the real-space clustering using the projected correlation function, accounting for photometric redshift uncertainties. We infer the galaxy bias, and study its evolution with luminosity. We study the effect of sample variance, and confirm earlier results that the COSMOS and ELAIS-N1 fields are dominated by the presence of large structures. For the intermediate and bright samples, $L^{\rm med} \gtrsim 0.6L^{*}$, we obtain a strong dependence of bias on luminosity, in agreement with previous results at similar redshift. We are able to extend this study to fainter luminosities, where we obtain an almost flat relation, similar to that observed at low redshift. Regarding the evolution of bias with redshift, our results suggest that the different galaxy populations studied reside in haloes covering a range in mass between $\log_{10}[M_{\rm h}/({\, h^{-1} \, \mathrm{M}_{\sun}})] \gtrsim 11.5$ for samples with $L^{\rm med} \simeq 0.3 L^{*}$ and $\log_{10}[M_{\rm h}/({\, h^{-1} \, \mathrm{M}_{\sun}})] \gtrsim 13.0$ for samples with $L^{\rm med} \simeq 2 L^{*}$, with typical occupation numbers in the range of $\sim 1 - 3$ galaxies per halo.'
author:
- |
P. Arnalte-Mur$^{1,\star}$, V. J. Martínez$^{2,3,4}$, P. Norberg$^1$, A. Fernández-Soto$^{4,5}$, B. Ascaso$^6$, A. I. Merson$^7$, J. A. L. Aguerri$^8$, F. J. Castander$^9$, L. Hurtado-Gil$^{2,5}$, C. López-Sanjuan$^{10}$, A. Molino$^6$, A. D. Montero-Dorta$^{11}$, M. Stefanon$^{12}$, E. Alfaro$^6$, T. Aparicio-Villegas$^{13}$, N. Benítez$^6$, T. Broadhurst$^{14}$, J. Cabrera-Caño$^{15}$, J. Cepa$^{8,16}$, M. Cerviño$^{6,8,16}$, D. Cristóbal-Hornillos$^{10}$, A. del Olmo$^6$, R. M. González Delgado$^6$, C. Husillos$^6$, L. Infante$^{17}$, I. Márquez$^6$, J. Masegosa$^6$, M. Moles$^{10}$, J. Perea$^6$, M. Pović$^6$, F. Prada$^6$, J. M. Quintana$^6$\
$^1$Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH1 3LE, UK\
$^2$Observatori Astronòmic, Universitat de València, C/ Catedràtic José Beltrán 2, E-46980, Paterna, Spain\
$^3$Departament d’Astronomia i Astrofísica, Universitat de València, E-46100, Burjassot, Spain\
$^4$Unidad Asociada Observatorio Astronómico (IFCA-UV), E-46980, Paterna, Spain\
$^5$Instituto de Física de Cantabria (CSIC-UC), E-39005 Santander, Spain\
$^6$IAA-CSIC, Glorieta de la Astronomía s/n, 18008 Granada, Spain\
$^7$Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK\
$^8$Instituto de Astrofísica de Canarias, Vía Láctea s/n, 38200 La Laguna, Tenerife, Spain\
$^9$Institut de Ciències de l’Espai (IEEC-CSIC), Facultat de Ciències, Campus UAB, 08193 Bellaterra, Spain\
$^{10}$Centro de Estudios de Física del Cosmos de Aragón, Plaza San Juan 1, 44001 Teruel, Spain\
$^{11}$Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112, USA\
$^{12}$Physics and Astronomy Department, University of Missouri, Columbia, MO 65211, USA\
$^{13}$Observatório Nacional-MCT, Rua José Cristino, 77. CEP 20921-400, Rio de Janeiro-RJ, Brazil\
$^{14}$Department of Theoretical Physics, University of the Basque Country UPV/EHU, 48080 Bilbao, Spain\
$^{15}$Departamento de Física Atómica, Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, 41012 Sevilla, Spain\
$^{16}$Departamento de Astrofísica, Facultad de Física, Universidad de La Laguna, 38206 La Laguna, Spain\
$^{17}$Departamento de Astronomía, Pontificia Universidad Católica. 782-0436 Santiago, Chile\
$^{\star}$ E-mail: pablo.arnalte-mur@durham.ac.uk
bibliography:
- 'ArnalteMur\_ALHclustering\_v2.bib'
date: 'Accepted by MNRAS, 2014 April 4.'
title: 'The ALHAMBRA survey: evolution of galaxy clustering since $z \sim 1$'
---
\[firstpage\]
methods: data analysis – methods: statistical – galaxies: distances and redshifts – cosmology: observations – large-scale structure of Universe
Introduction {#sec:intro}
============
The large-scale structure (LSS) of the Universe is one of the main observables that we can use to obtain information about the nature of dark matter and cosmic acceleration. The simplest way to probe the LSS is to study the spatial distribution of galaxies in surveys covering cosmologically significant volumes. Although the galaxy distribution is closely related to the global matter distribution, they are not equal. The relation between the two distributions is known as galaxy bias, and it depends on the processes of galaxy formation and evolution. In the simplest case, one can consider the galaxy density contrast to be proportional to the matter density contrast. Then, the bias is simply the constant of proportionality, which is independent of scale. Being able to understand and model this bias is crucial for the correct interpretation of the cosmological information that can be obtained from the analysis of galaxy clustering.
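As a toy illustration of scale-independent linear bias (not an analysis of the ALHAMBRA data), one can generate a random "matter" field, define a galaxy field $\delta_g = b\,\delta_m$, and recover $b$ from the ratio of two-point correlation functions, $\xi_{gg}/\xi_{mm} = b^2$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "matter" density contrast: smoothed white noise on a 1D grid
delta_m = np.convolve(rng.standard_normal(4096), np.ones(8) / 8.0, mode="same")
b_true = 2.0
delta_g = b_true * delta_m      # linear, scale-independent bias

def xi(d, lag):
    # simple two-point correlation estimator at a fixed lag
    return float(np.mean(d[:-lag] * d[lag:]))

# For a constant bias, b^2 = xi_gg / xi_mm at every separation
b_est = [np.sqrt(xi(delta_g, l) / xi(delta_m, l)) for l in (1, 2, 4)]
```

For a scale-dependent bias the estimated ratio would vary with the separation, which is how deviations from the simplest linear model are diagnosed in practice.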
As the bias encodes information about the galaxy formation and evolution process, it is logical to expect that it will be different for different galaxy populations. In other words, the clustering properties of galaxies should depend on some of their intrinsic properties, such as stellar mass, star formation rate or age, and should evolve with time. This phenomenon, known as galaxy segregation, is observed when studying the dependence of clustering on different observables such as luminosity, colour, or morphology. In general, it is observed that bright, red, elliptical galaxies are more strongly clustered (i.e., they have a larger bias) than faint, blue, spiral ones [see e.g. @dav76a; @ham88a; @mad03a; @ski08a; @mar10a; @zeh10a].
In this work, we focus on the dependence of the galaxy bias on luminosity, and the evolution of this relation with redshift. This dependence has been studied extensively in the local Universe using both the Two-degree Field Galaxy Redshift Survey [2dFGRS, @nor01a; @nor02b] and Sloan Digital Sky Survey [SDSS, @teg04b; @zeh05a; @zeh10a]. @Guo2013a also studied this relationship at $z \sim 0.5$ using data from the Baryon Oscillation Spectroscopic Survey (BOSS). The bias shows a weak dependence on luminosity $L$ for galaxies with $L < L^{*}$, where $L^{*}$ is the characteristic luminosity parameter of the Schechter function. For $L \gtrsim L^{*}$, however, this relation steepens, and the bias clearly increases with luminosity.
These studies
---
abstract: 'We have studied the crystal structure, magnetism and electric transport properties of the europium fulleride Eu$_6$C$_{60}$ and its Sr-substituted compounds, Eu$_{6-x}$Sr$_x$C$_{60}$. They have a $bcc$ structure, which is an isostructure of other $M_6$C$_{60}$ compounds ($M$ represents an alkali atom or an alkaline earth atom). Magnetic measurements revealed that the magnetic moment is ascribed to divalent europium with an $S$ = 7/2 spin, and a ferromagnetic transition was observed at $T_C$ = 10 - 14 K. In Eu$_6$C$_{60}$, we also confirmed the ferromagnetic transition by a heat capacity measurement. The striking feature of Eu$_{6-x}$Sr$_x$C$_{60}$ is the very large negative magnetoresistance at low temperatures; the resistivity ratio $\rho$($H$ = 9 T)/$\rho$($H$ = 0 T) reaches almost 10$^{-3}$ at 1 K in Eu$_6$C$_{60}$. Such a large magnetoresistance is a manifestation of the strong $\pi$-$f$ interaction between the conduction carriers on C$_{60}$ and the 4$f$ electrons of Eu.'
author:
- Kenji Ishii
- Akihiko Fujiwara
- Hiroyoshi Suematsu
- Yoshihiro Kubozono
bibliography:
- 'eu.bib'
title: 'Ferromagnetism and giant magnetoresistance in the rare earth fullerides Eu$_{6-x}$Sr$_x$C$_{60}$'
---
INTRODUCTION
============
Since the discovery of fullerenes, C$_{60}$ compounds have given us various opportunities for research in condensed matter physics and materials science. Much attention has been attracted by the superconductivity in $A_3$C$_{60}$ ($A$ is an alkali atom) [@Hebard1]. As for magnetism, TDAE-C$_{60}$ (TDAE is tetrakisdimethylaminoethylene) shows a ferromagnetic transition [@Allemand1], while an antiferromagnetic (or spin density wave) ground state was observed in polymeric $A_1$C$_{60}$ [@Chauvet1], Na$_2$Rb$_{0.3}$Cs$_{0.7}$C$_{60}$ [@Arcon1], and three-dimensional (NH$_3$)$A_3$C$_{60}$ [@Takenobu1]. In these compounds the magnetic moment is considered to be carried by an electron on the C$_{60}$ molecule. Because various atoms and molecules can be intercalated into the C$_{60}$ crystal, we can also expect magnetic C$_{60}$ compounds in which the magnetic moment is carried by the intercalants. From this viewpoint, rare earth metals are good candidates. Research on rare earth fullerides has been reported for Yb [@Oezdas1] and Sm [@Chen1] in relation to superconductivity, but little effort has been made to study their magnetic properties. The only magnetic study of rare earth fullerides so far concerns europium. Europium has a magnetic moment of 7$\mu_B$ ($S$ = 7/2, $L$ = 0, and $J$ = 7/2) in the divalent state, while it is non-magnetic ($S$ = 3, $L$ = 3, and $J$ = 0) in the trivalent state. A photoemission study of C$_{60}$ overlayered on Eu metal revealed the charge transfer from Eu to C$_{60}$ and the formation of a fulleride [@Yoshikawa1]. Ksari-Habiles [*et al*]{}. [@Ksari1; @Claves1] investigated the crystal structure and magnetic properties of Eu$_{\sim3}$C$_{60}$ and Eu$_6$C$_{60}$; they observed some magnetic anomalies in Eu$_6$C$_{60}$.
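The quoted moments of the two europium valence states follow from the standard Hund's-rule multiplet formulas; a minimal sketch using the Landé $g$-factor, with the effective moment $\mu_{\rm eff}=g_J\sqrt{J(J+1)}\,\mu_B$ and the saturation moment $g_J J\,\mu_B$, reproduces $\mu_{\rm eff}\simeq 7.94\,\mu_B$ and $7\,\mu_B$ for Eu$^{2+}$ and a nonmagnetic $J=0$ ground state for Eu$^{3+}$:

```python
from math import sqrt

def lande_g(S, L, J):
    # Lande g-factor of a |S, L, J> multiplet
    return 1.0 + (J*(J + 1) + S*(S + 1) - L*(L + 1)) / (2.0*J*(J + 1))

def mu_eff(S, L, J):
    # effective paramagnetic moment in units of mu_B; J = 0 is nonmagnetic
    return 0.0 if J == 0 else lande_g(S, L, J) * sqrt(J*(J + 1))

mu_eu2 = mu_eff(3.5, 0.0, 3.5)           # Eu2+ (4f^7): S=7/2, L=0, J=7/2
sat_eu2 = lande_g(3.5, 0.0, 3.5) * 3.5   # saturation moment g_J * J = 7 mu_B
mu_eu3 = mu_eff(3.0, 3.0, 0.0)           # Eu3+ (4f^6): S=3, L=3, J=0
```

For Eu$^{2+}$ the orbital moment vanishes ($L=0$), so $g_J=2$ and $\mu_{\rm eff}=\sqrt{63}\,\mu_B\simeq 7.94\,\mu_B$, the value compared against the Curie constant below.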
In this paper, we report the ferromagnetic transition of Eu$_6$C$_{60}$, which was observed at $T_C \sim$ 12 K in magnetic and heat capacity measurements. We also investigated the effect of substituting non-magnetic Sr for Eu, and the ferromagnetic transition temperature was found to change little with the Sr concentration. In the resistivity measurement we found a huge negative magnetoresistance below around $T_C$; the reduction ratio of the resistivity $\rho(H)/\rho(0)$ is almost 10$^{-3}$ at 1 K in Eu$_6$C$_{60}$. This ratio is comparable to those in perovskite manganese oxides, known for colossal magnetoresistance (CMR). However, Eu$_6$C$_{60}$ should be categorized as a new class of giant magnetoresistive compounds in the sense that (1) the magnitude of the magnetoresistance increases very steeply with decreasing temperature, rather than only in the vicinity of $T_C$, and (2) the compound consists of a molecule with a novel structure. These features open up further possibilities for finding new magnetic and magnetoresistive materials.
EXPERIMENTAL PROCEDURES
=======================
Polycrystalline samples of Eu$_{6-x}$Sr$_x$C$_{60}$ were synthesized by solid-state reaction. A stoichiometric mixture of Eu, Sr and C$_{60}$ powders, pressed into a pellet and sealed in a quartz tube in vacuum, was heat-treated at 600 $^{\circ}$C for about 10 days. In the course of the heat treatment the sample was ground to ensure complete reaction. Because the sample is very unstable in air, it was handled in a glove box under an inert atmosphere.
Powder x-ray diffraction experiments were carried out using synchrotron radiation x-rays at BL-1B of the Photon Factory, KEK, Tsukuba. The sample was put into a glass capillary of 0.3 mm diameter and an imaging plate was used for detection [@Fujiwara1]. Magnetic measurements were performed using a SQUID magnetometer. In the heat capacity measurement by the relaxation method, the sample was pressed into a pellet and sealed with grease to prevent exposure to air. Eu $L_{III}$-edge XANES (x-ray absorption near edge structure) was measured in the fluorescence mode at BL01B1 of SPring-8, Harima. The resistivity measurements were carried out by the 4-probe method. Four gold wires were attached to a pressed pellet of the polycrystalline sample with silver paste. The sample was put into a capsule and sealed in a He atmosphere.
RESULTS
=======
X-ray diffraction spectra of Eu$_{6-x}$Sr$_x$C$_{60}$ are shown in Fig. \[fig:structure\](a). The spectrum of Sr$_6$C$_{60}$ is also presented as a reference. The wavelength of the x-rays is 0.8057 Å for $x$ = 0, 3, 5, and 0.8011 Å for $x$ = 6. All spectra can be understood in terms of a $bcc$ structure, which is an isostructure of the other $M_6$C$_{60}$ compounds of alkali [@Zhou1], alkaline earth [@Kortan1] and rare earth (Sm) [@Chen2] fullerides. Rietveld refinements based on the space group $Im\overline{\it 3}$ were performed with the RIETAN program [@Izumi1; @Kim1]. In the refinements, only two atomic coordinates ($x$ for C1 and C3) are refined in the C$_{60}$ molecule, which corresponds to refining the lengths of the 6:6 bond (the bond between two hexagons) and the 5:6 bond (the bond between a hexagon and a pentagon). In the compounds with $x$ = 3 and 5, the sum of the metal concentrations is fixed to unity. The results of the refinement are presented in Table \[tab:structure\] and the obtained structure is shown in Fig. \[fig:structure\](b). This crystal structure of Eu$_6$C$_{60}$ is consistent with previous works [@Claves1; @Ootoshi1], but we observed little trace of a secondary phase in the present sample. In the Sr-substituted compounds, the values of the Eu concentration are in good agreement with the nominal ones.
As seen in Fig. \[fig:structure\](c), the obtained lattice constants change linearly with the nominal Eu concentration, which means they follow Vegard’s law and confirms the formation of a solid solution at $x$ = 3 and 5. This result is attributed to the fact that the ionic radii of Eu$^{2+}$ and Sr$^{2+}$ are quite similar, while the substitution of Ba for Eu results in phase separation.
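Vegard's-law behavior amounts to a linear relation between the lattice constant and the nominal concentration. A minimal sketch of such a linearity check is given below; the lattice constants are hypothetical, purely illustrative numbers, not the measured values:

```python
import numpy as np

# Hypothetical lattice constants a(x) (in angstrom) at nominal Sr content x;
# the endpoint values below are illustrative, not the measured ones
x = np.array([0.0, 3.0, 5.0, 6.0])
a_x0, a_x6 = 10.95, 11.00
a = a_x0 + (a_x6 - a_x0) * x / 6.0   # exact Vegard (linear) interpolation

# A straight-line fit with vanishing residuals confirms Vegard's law
slope, intercept = np.polyfit(x, a, 1)
max_resid = float(np.max(np.abs(intercept + slope * x - a)))
```

A measurable deviation of `max_resid` from zero would instead signal clustering of the two cations or phase separation, as found for the Ba substitution.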
Figure \[fig:magnetism\] shows the results of the magnetic measurements of Eu$_{6-x}$Sr$_x$C$_{60}$. Above 30 K, the magnetic susceptibility ($\chi$) follows the Curie-Weiss law, as shown in Figs.\[fig:magnetism\](a)-(c). The effective Bohr magneton estimated from the Curie constant and the Weiss temperature are summarized in Table \[tab:magnetism\] [@eumag]. The former agrees with the Eu$^{2+}$ state ($S$ = 7/2, $L$ = 0, and $J$ = 7/2). The field dependence of the magnetization at 2 K gives a saturation moment close to 7$\mu_{B}$, which is consistent with the magnetic moment of Eu$^{2+}$. Moreover, the Eu$^{2+}$ state has also been confirmed by Eu $L_{III}$-edge XANES experiments, as seen in Fig. \[fig:xanes\]. The spectra of EuS and
---
abstract: 'We investigate the supersymmetry (SUSY) structures of inductor-capacitor circuit networks on a simple regular graph and its line graph. We show that their eigenspectra must coincide (except, possibly, for the highest eigenfrequency) due to SUSY, which is derived from the topological nature of the circuits. To observe this spectral correspondence in the high frequency range, we study spoof plasmons on metallic hexagonal and kagome lattices. The band correspondence between them is predicted by a simulation. Using terahertz time-domain spectroscopy, we demonstrate the band correspondence of the fabricated metallic hexagonal and kagome lattices.'
author:
- Yosuke Nakata
- Yoshiro Urade
- Toshihiro Nakanishi
- Fumiaki Miyamaru
- Mitsuo Wada Takeda
- Masao Kitano
nocite: '[@*]'
title: ' Supersymmetric correspondence in spectra on a graph and its line graph: From circuit theory to spoof plasmons on metallic lattices '
---
Introduction
============
Supersymmetry (SUSY) is a conjectured symmetry between fermions and bosons. Although the concept of SUSY was introduced in high-energy physics and remains to be experimentally confirmed, the underlying algebra is also found in quantum mechanics. When the SUSY algebra is applied to the field of quantum mechanics it is called supersymmetric quantum mechanics (SUSYQM) [@Cooper1994]. The algebraic relations of SUSY link two systems that at first glance might seem to be very different. The linkage through SUSY can be utilized to construct exact solutions for various systems in quantum mechanics. Recently, SUSYQM has been applied to construct quantum systems enabling exotic quantum wave propagations: reflectionless or invisible defects in tight-binding models [@Longhi2010] and complex crystals [@Longhi2013a], transparent interface between two isospectral one-dimensional crystals [@Longhi2013], reflectionless bent waveguides for matter-waves [@Campo2014], and disordered systems with Bloch-like eigenstates and band gaps [@Yu2015].
The SUSY structure was also found in other physics fields besides quantum mechanics, e.g., statistical physics through the Fokker-Planck equations [@Bernstein1984]. Through the similarity between quantum-mechanical probability waves and electromagnetic waves, the SUSY structure can be formulated for electromagnetic systems. Electromagnetic SUSY structures have been found in one-dimensional refractive index distributions [@Chumakov1994; @Miri2013], coupled discrete waveguides [@Longhi2010; @Miri2013], weakly guiding optical fibers with cylindrical symmetry [@Miri2013], planar waveguides with varying permittivity and permeability [@Laba2014], and non-uniform grating structures [@Longhi2015]. Even a quantum optical deformed oscillator with $\mathrm{SU}(1,1)$ group symmetry and its SUSY partner were constructed as a classical electromagnetic system [@Zuniga-Segundo2014].
The SUSY transformation generates new optical systems whose spectra coincide with those of the original system (except possibly for the highest eigenvalue of the fundamental mode of original or generated systems). The SUSY transformations have been utilized to synthesize mode filters [@Miri2013] and distributed-feedback filters with any desired number of resonances at the target frequencies [@Longhi2015]. The scattering properties of the optical systems paired by the SUSY transformation are related to each other [@Longhi2010; @Miri2013]. It is possible to design an optical system family with identical reflection and transmission characteristics by using the SUSY transformations [@Miri2014]. A reflectionless potential derived from the trivial system by SUSY transformation was applied to design transparent optical intersections [@Longhi2015a]. Moreover, SUSY has also been intensively investigated in non-Hermitian optical systems. If a system is invariant under the simultaneous operations of the space and time inversions, it is called $\mathcal{PT}$-symmetric. The SUSY transformation for the $\mathcal{PT}$-symmetric system allows for arbitrarily removing bound states from the spectrum [@Miri2013a]. In addition, non-Hermitian optical couplers can be designed [@Principe2015]. By using double SUSY transformations, the bound states in the continuum were also formulated in tight-binding lattices [@Longhi2014; @Longhi2014a] and continuous systems [@Correa2015]. The SUSY transformation in the $\mathcal{PT}$-symmetric system can also reduce the undesired reflection of one-way-invisible optical crystals [@Midya2014].
From an experimental perspective, it is still challenging to extract the full potential of electromagnetic SUSY because of fabrication difficulties. However, using dielectric coupled waveguides, researchers have realized a reflectionless potential [@Szameit2011], interpreted as a transformed potential derived from the trivial one by a SUSY transformation [@Longhi2010], and SUSY mode converters [@Heinrich2014]. The SUSY scattering properties of dielectric coupled waveguides have also been observed [@Heinrich2014a].
As we have described so far, many studies have been devoted to electromagnetic SUSY, but their focus has mainly been limited to dielectric structures. Recent progress in plasmonics [@AlexanderMaier2007] and metamaterials [@Solymar2009] using metals in optics demands further studies of SUSY for metallic systems. To design and analyze the characteristics of metallic structures, intuitive electrical circuit models are very useful, because they extract the nature of the phenomena while reducing the degrees of freedom of the problem [@Nakata2012a]. Indeed, a circuit-theoretical design strategy called [*metactronics*]{} has been proposed even in the optical region [@Engheta2007], and a circuit theory for plasmons has also been developed [@Staffaroni2012]. If we can design circuit models enabling exotic phenomena, they open up new possibilities for application to higher frequency ranges due to the scale invariance of the Maxwell equations. Thus, in this paper we show how SUSY appears in inductor-capacitor circuit networks and demonstrate the SUSY correspondence in the high frequency region. In particular, we focus on the SUSY structure of inductor-capacitor circuit networks on a graph and its line graph.
This article is organized as follows. In Sec. \[sec:2\], we start by introducing the graph-theoretical concepts and formulate a general class of inductor-capacitor circuit network pairs related through SUSY, derived from the topological nature of the graphs representing the circuits. In Sec. \[sec:3\], we theoretically and experimentally demonstrate the SUSY eigenfrequency correspondence for paired metallic lattices in the terahertz frequency range. In Sec. \[sec:4\], we summarize and conclude the paper.
Theory \[sec:2\]
================
Eigenequation for inductor-capacitor circuit networks
------------------------------------------------------
We consider an inductor-capacitor circuit network on a simple directed graph $G=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ are the sets of vertices and directed edges, respectively. The modifier [*simple*]{} means that there are no multiple edges between any vertex pair and no edge (loop) that connects a vertex to itself. The number of edges connected to a vertex $v$ is called the degree of $v$, and a graph is called [*regular*]{} if every vertex has the same degree. We assume that $G$ is an $m$-[*regular*]{} graph, i.e. every vertex has degree $m$. The capacitors, all with the same capacitance $C$, are connected between each vertex $v\in \mathcal{V}$ and the ground. Coils, all with the same inductance $L$, are loaded along all $e \in \mathcal{E}$. An example of $G$ and the inductor-capacitor circuit network on it are shown in Fig. \[fig:lc\_ladder\](a) and (b).
![\[fig:lc\_ladder\] (a) Example of simple $3$-regular directed graph. (b) Inductor-capacitor circuit network on the graph. (c) Line graph of the graph shown in (a). (d) Inductor-capacitor circuit network on the line graph (c). ](lc_ladder.eps){width="86mm"}
For $v\in \mathcal{V}$ and $e\in \mathcal{E}$, the incidence matrix $\mathsf{X}=[X_{ve}]$ of a directed graph $G$ is defined as follows: $X_{ve}=-1$ ($e$ enters $v$), $X_{ve}=1$ ($e$ leaves $v$), otherwise $X_{ve}=0$.
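The sign convention can be checked in a few lines. The following sketch uses a hypothetical three-vertex directed cycle (not one of the lattices considered below): each column of $\mathsf{X}$ then contains exactly one $+1$ and one $-1$, so a current circulating around the cycle changes no stored charge.

```python
import numpy as np

# Hypothetical toy graph: a directed triangle with edges
# e0: v0 -> v1, e1: v1 -> v2, e2: v2 -> v0.
edges = [(0, 1), (1, 2), (2, 0)]  # (tail, head) pairs
n_v = 3

# X[v, e] = +1 if e leaves v, -1 if e enters v, 0 otherwise.
X = np.zeros((n_v, len(edges)), dtype=int)
for e, (tail, head) in enumerate(edges):
    X[tail, e] = 1   # e leaves its tail
    X[head, e] = -1  # e enters its head

# Every column carries one +1 and one -1, so the circulating current
# J = (1, 1, 1) changes no stored charge: X J = 0 at every vertex.
J = np.ones(len(edges), dtype=int)
print(X @ J)  # -> [0 0 0]
```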
Using vector notation, we represent the current distribution $J_e$ flowing along $e\in \mathcal{E}$ as a column vector $\vct{J}=[J_e]^\mathrm{T}$. The charge distribution is denoted by $\vct{q}=[q_v]^\mathrm{T}$ with a stored charge $q_v$ at $v\in \mathcal{V}$. The charge conservation law is given by $$\dot{\vct{q}}=-\mathsf{X} \vct{J}, \label{eq:1}$$ where the time derivative is represented by
---
abstract: 'The performance of Gallager’s error-correcting code is investigated via methods of statistical physics. In this approach, the transmitted codeword comprises products of the original message bits selected by two randomly-constructed sparse matrices; the number of non-zero row/column elements in these matrices constitutes a family of codes. We show that Shannon’s channel capacity is saturated for many of the codes while slightly lower performance is obtained for others which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach which is identical to the commonly used belief-propagation-based decoding.'
address: |
$^{1}$ Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502, Japan.\
$^{2}$The Neural Computing Research Group, Aston University, Birmingham B4 7ET, UK.
author:
- 'Yoshiyuki Kabashima$^{1}$, Tatsuto Murayama$^{1}$ and David Saad$^{2}$'
title: 'Typical Performance of Gallager-type Error-Correcting Codes'
---
The ever increasing information transmission in the modern world is based on communicating messages reliably through noisy transmission channels; these can be telephone lines, deep space, magnetic storage media, etc. Error-correcting codes play an important role in correcting errors incurred during transmission; this is carried out by encoding the message prior to transmission, and decoding the corrupted received codeword for retrieving the original message. In his ground-breaking papers, Shannon[@Shannon] analyzed the capacity of communication channels, setting an upper bound to the achievable noise-correction capability of codes, given their code (or symbol) rate. The latter represents the ratio between the number of bits in the original message and the transmitted codeword.
Shannon’s bound is non-constructive and does not provide explicit rules for devising optimal codes. The quest for more efficient codes, in the hope of saturating the bound set by Shannon, has been going on ever since, providing many useful but sub-optimal codes.
One family of codes, presented originally by Gallager[@Gallager], attracted significant interest recently as it has been shown to outperform most currently used techniques[@MacKay]. In fact, irregular versions of Gallager-type codes have recently been shown to get very close to saturating Shannon’s bound in the case of infinitely long messages[@Richardson]. Gallager-type codes are characterized by several parameters, the choice of which defines a particular member of this family of codes. Most studies of Gallager-type codes conducted so far have been carried out via numerical simulations. Some analytical results have been obtained via methods of information theory [@MacKay], setting bounds on the performance of certain code types, and by combinatorial/statistical methods [@Richardson]; no quantitative results have been obtained for their [*typical*]{} performance.
In this Letter we analyze the typical performance of Gallager-type codes for several parameter choices via methods of statistical mechanics. We then validate the analytical solution by comparing the results to those obtained by the TAP approach to diluted systems and via numerical methods.
In a general scenario, a message represented by an $N$ dimensional Boolean/binary vector ${\mbox{\boldmath{$\xi$}}}$ is encoded to the $M$ dimensional vector ${\mbox{\boldmath{$J^{0}$}}}$ which is then transmitted through a noisy channel with some flipping probability $p$ per bit (other noise types may also be considered but will not be examined here). The received message ${\mbox{\boldmath{$J$}}}$ is then decoded to retrieve the original message.
One can identify several slightly different versions of Gallager-type codes. The one used in this Letter, termed the MN code[@MacKay] is based on choosing two randomly-selected sparse matrices $A$ and $B$ of dimensionality $M\!\times \!N$ and $M\!\times\! M$ respectively; these are characterized by $K$ and $L$ non-zero unit elements per row and $C$ and $L$ per column respectively. The finite, usually small, numbers $K$, $C$ and $L$ define a particular code; both matrices are known to both sender and receiver. Encoding is carried out by constructing the modulo 2 inverse of $B$ and the matrix $B^{-1}A$ (modulo 2); the vector ${\mbox{\boldmath{$J^{0}$}}}\! =\! B^{-1}A \ {\mbox{\boldmath{$\xi$}}}$ (modulo 2, ${\mbox{\boldmath{$\xi$}}}$ in a Boolean representation) constitutes the codeword. Decoding is carried out by taking the product of the matrix $B$ and the received message ${\mbox{\boldmath{$J$}}}\! = \! {\mbox{\boldmath{$J^{0}$}}}\! +\! {\mbox{\boldmath{$\zeta$}}}$ (modulo 2), corrupted by the Boolean noise vector ${\mbox{\boldmath{$\zeta$}}}$, resulting in $A{\mbox{\boldmath{$\xi$}}}\! + \! B{\mbox{\boldmath{$\zeta$}}}$. The equation $$\label{eq:decoding}
A{\mbox{\boldmath{$\xi$}}}+ B{\mbox{\boldmath{$\zeta$}}}= A{\mbox{\boldmath{$S$}}}+ B{\mbox{\boldmath{$\tau$}}}$$ is solved via the iterative methods of Belief Propagation (BP)[@MacKay] to obtain the most probable Boolean vectors ${\mbox{\boldmath{$S$}}}$ and ${\mbox{\boldmath{$\tau$}}}$; BP methods in the context of error-correcting codes have recently been shown to be identical to a TAP[@tap] based solution of a similar physical system[@us_sourlas].
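The encoding step can be illustrated with a minimal sketch. The matrices $A$ and $B$ below are hypothetical toy choices (small, and without the regular row/column weights $K$, $C$, $L$ of the actual ensembles); the sketch builds the modulo-2 inverse of $B$ by Gaussian elimination and checks that, for a noiseless channel, multiplying the codeword by $B$ returns $A{\mbox{\boldmath{$\xi$}}}$ (mod 2).

```python
import numpy as np

def inv_gf2(B):
    """Invert a square 0/1 matrix over GF(2) by Gaussian elimination."""
    n = B.shape[0]
    aug = np.concatenate([B % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col] == 1)
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col] == 1:
                aug[r] = (aug[r] + aug[col]) % 2
    return aug[:, n:]

# Hypothetical toy matrices (M = 3, N = 2), not a regular MN ensemble.
A = np.array([[1, 0], [1, 1], [0, 1]])           # M x N
B = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])  # M x M, invertible mod 2
xi = np.array([1, 0])                            # message bits

J0 = (inv_gf2(B) @ A @ xi) % 2                   # codeword B^{-1} A xi (mod 2)

# With no channel noise, the receiver's product B J0 equals A xi (mod 2).
assert np.array_equal((B @ J0) % 2, (A @ xi) % 2)
print(J0)  # -> [0 1 0]
```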
The similarity between error-correcting codes of this type and Ising spin systems was first pointed out by Sourlas[@Sourlas], who formulated the mapping of a simpler code, somewhat similar to the one presented here, onto an Ising spin system Hamiltonian. We recently extended the work of Sourlas, that focused on extensively connected systems, to the finite connectivity case[@us_sourlas].
To facilitate the current investigation we first map the problem to that of an Ising model with finite connectivity. We employ the binary representation $(\pm1)$ of the dynamical variables ${\mbox{\boldmath{$S$}}}$ and ${\mbox{\boldmath{$\tau$}}}$ and of the vectors ${\mbox{\boldmath{$J$}}}$ and ${\mbox{\boldmath{$J^{0}$}}}$ rather than the Boolean $(0,1)$ one; the vector ${\mbox{\boldmath{$J^{0}$}}}$ is generated by taking products of the relevant binary message bits $J^{0}_{\left\langle i_{1}, i_{2} \ldots
\right\rangle} \! = \! \xi_{i_{1}} \xi_{i_{2}} \ldots $, where the indices $i_{1},i_{2}\ldots $ correspond to the non-zero elements of $B^{-1}A$, producing a binary version of ${\mbox{\boldmath{$J^{0}$}}}$. As we use statistical mechanics techniques, we consider the message and codeword dimensionality ($N$ and $M$ respectively) to be infinite, keeping the ratio between them $R \!=\! N/M$, which constitutes the code rate, finite. Using the thermodynamic limit is quite natural as Gallager-type codes are usually used for transmitting long ($10^{4}\!-\!10^{5}$) messages, where finite size corrections are likely to be negligible. To explore the system’s capabilities we examine the Hamiltonian $$\begin{aligned}
\label{eq:Hamiltonian}
{\cal H} &=& \!\!\!\!\! \sum_{<i_1,..,i_K;j_1,..,j_L>}
\mbox{\hspace*{-5mm}} {{\cal D}}_{<i_1,..,i_K;j_1,..,j_L>} \ \delta
\biggl[-1 \ ; \ {{\cal J}}_{<i_1,..,i_K;j_1,..,j_L>} \nonumber \\
&\cdot & S_{i_1}\ldots S_{i_K} \tau_{j_1}\ldots\tau_{j_L}
\biggr] - \frac{F_s}{\beta} \sum_{i=1}^{N} S_i -
\frac{F_{\tau}}{\beta} \sum_{j=1}^{M} \tau_j \ .\end{aligned}$$ The tensor product ${{\cal D}}_{<i_1,..,i_K;j_1,..,j_L>}
{{\cal J}}_{<i_1,..,i_K;j_1,..,j_L>}$, where ${{\cal J}}_{<i_1,..,j_L>} \! = \!
\xi_{i_{1}} \xi_{i_{2}} \ldots \xi_{i_{K}} \zeta_{j_{1}} \zeta_{j_{2}}
\ldots \zeta_{j_{L}}$, is the binary equivalent of $A{\mbox{\boldmath{$\xi$}}}\! + \!
B{\mbox{\boldmath{$\zeta$}}}$, treating both signal (${\mbox{\boldmath{$S$}}}$ and index $i$) and noise (${\mbox{\boldmath{$\tau$}}}$ and index $j$) simultaneously. Elements of the sparse connectivity tensor ${{\cal D}}_{<i_1,..,j_L>}$ take the value 1 if the corresponding indices of both signal and noise are chosen (i.e., if all corresponding indices of the matrices $A$ and $B$ are 1)
---
abstract: 'In this paper we develop Algebraic Morse Theory for the case where a group acts on a free chain complex. Algebraic Morse Theory is an adaptation of Discrete Morse Theory to free chain complexes.'
address: 'Fachbereich Mathematik, Universität Bremen, Bibliothekstraße 1, 28359 Bremen, Germany'
author:
- Ralf Donau
title: Equivariant Algebraic Morse Theory
---
Discrete Morse Theory, Acyclic matching, Algebraic Morse Theory
Introduction
============
There exists an equivariant version of the Main Theorem of Discrete Morse Theory, see [@freij]. In this paper I present an equivariant version of Theorem 11.24 in [@buch Chapter 11.3]. We use the same notion of an equivariant acyclic matching as in Equivariant Discrete Morse Theory. An example for working with equivariant acyclic matchings can be found in [@donau3].
Equivariant acyclic matchings
=============================
The definition of an acyclic matching on a poset can be found in [@clmap; @buch].
Let $P$ be a poset and let $G$ be a group acting on $P$. Let $M$ be an acyclic matching on $P$. We call $M$ a *$G$-equivariant acyclic matching* if $(a,b)\in M$ implies $(ga,gb)\in M$ for all $g\in G$ and $a,b\in P$.
There exists a characterization of acyclic matchings by means of order-preserving maps with small fibers, see Definition 11.3 and Theorem 11.4 in [@buch Chapter 11]. In a similar way we can also characterize $G$-equivariant acyclic matchings by means of order-preserving $G$-maps with small fibers.
For an order-preserving map with small fibers $\varphi$ let $M(\varphi)$ denote its associated acyclic matching which consists of all fibers of cardinality $2$, see [@buch Chapter 11].
\[smallfibers\] Let $G$ be a group acting on a finite poset $P$. For any order-preserving $G$-map $\varphi:P\longrightarrow Q$ with small fibers, the acyclic matching $M(\varphi)$ is $G$-equivariant. On the other hand, any $G$-equivariant acyclic matching $M$ on $P$ can be represented as $M=M(\varphi)$, where $\varphi:P\longrightarrow Q$ is an order-preserving $G$-map with small fibers.
The proof of Proposition \[smallfibers\] can be found in [@donau3].
The main result
===============
We consider chain complexes of modules over some fixed commutative ring $R$ with unit. Furthermore we consider group actions on such chain complexes. Let $G$ be a group and let $C_*=(\dots\overset{\partial_{n+2}}\longrightarrow C_{n+1}\overset{\partial_{n+1}}\longrightarrow C_n\overset{\partial_n}\longrightarrow\dots)$ be a finitely generated free chain complex with an action of the group $G$ and let $\Omega=(\Omega_n)_n$ be a $G$-basis of $C_*$, i.e. each $\Omega_n$ is closed under the action of $G$. For $b\in\Omega_n$ let $k_b:C_n\longrightarrow R$, $\sum_{b'\in\Omega_n}\lambda_{b'}b'\longmapsto\lambda_b$ denote the linear function which maps any $x\in C_n$ to the coefficient of $b$ inside the linear representation of $x$.
Let $P(C_*,\Omega):=\bigcup_n\Omega_n$. We define an order relation on $P(C_*,\Omega)$ as follows. For $a\in\Omega_n$ and $b\in\Omega_{n+1}$ we denote the *weight of the covering relation* by $w(b\succ a):=k_a(\partial_n b)$, we set $a\leq b$ if $w(b\succ a)\not=0$. Furthermore for $a\in\Omega_n$ and $b\in\Omega_m$ with $n\leq m$, we set $a\leq b$ if there exists a sequence $(c_i)_{n\leq i\leq m}$ with $c_i\leq c_{i+1}$ for $n\leq i<m$ such that $a=c_n$ and $b=c_m$. This defines a partial order relation on $P(C_*,\Omega)$, which can be easily verified. Notice that $P(C_*,\Omega)$ is a $G$-poset since $\partial_n$ is a $G$-map.
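For concreteness, the weights $w(b\succ a)$ and the covering relations of $P(C_*,\Omega)$ can be read off directly from a boundary operator. The following sketch does this for a hypothetical one-dimensional complex with trivial $G$-action (the chain complex of an interval, $\partial e = v_1 - v_0$), which is not an example from the text.

```python
# Hypothetical chain complex of an interval over R = Z:
# Omega_1 = {e}, Omega_0 = {v0, v1}, with boundary partial(e) = v1 - v0.
boundary = {'e': {'v1': 1, 'v0': -1}}

def w(b, a):
    """Weight of the covering relation, w(b > a) = k_a(partial b)."""
    return boundary.get(b, {}).get(a, 0)

# For a in Omega_n and b in Omega_{n+1}, a <= b exactly when w(b > a) != 0.
covers = sorted((a, b) for b in boundary for a in boundary[b] if w(b, a) != 0)
print(covers)  # -> [('v0', 'e'), ('v1', 'e')]

# The pair (v1, e) forms an acyclic matching whose weight w(e > v1) = 1
# is invertible in Z, as the hypotheses in the text require.
assert w('e', 'v1') == 1
```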
Let $M$ be a $G$-equivariant acyclic matching on $P(C_*,\Omega)$ such that $w(b\succ a)$ is invertible for any $(a,b)\in M$. Let $\varphi$ denote the order-preserving $G$-map with small fibers with $M(\varphi)=M$, which exists by Proposition \[smallfibers\].
For $b\in C_n$ we define the $G$-subcomplex ${\cal A}(Gb)$ as follows. $$\dots\longrightarrow 0\longrightarrow\span(Gb)\overset{\partial^{Gb}_n}\longrightarrow\span(G\partial_n b)\longrightarrow 0\longrightarrow\dots$$ $\partial^{Gb}_n$ denotes the restriction of $\partial_n:C_n\longrightarrow C_{n-1}$ to $\span(Gb)$, i.e. $\partial^{Gb}_n(x):=\partial_n(x)$ for $x\in\span(Gb)$.
Notice that $\partial^{Gb}_n$ is surjective by construction.
\[poset\_orbit\] Let $G$ be a finite group acting on a finite poset $Q$. Let $q\in Q$ and $g\in G$. Then $gq\leq q$ implies $gq=q$. In other words, the elements inside an orbit are not comparable to each other.
Let $(a,b)\in M$ be a matching pair. Then for any $g\in G$, $a\prec gb$ implies $\varphi(b)=\varphi(a)\leq\varphi(gb)=g\varphi(b)$ which implies $\varphi(b)=\varphi(gb)$ by Remark \[poset\_orbit\]. Hence $b=gb$ since $\varphi$ has small fibers. In particular $k_a(\partial gb)=0$ for $gb\not=b$.
\[iso\] Let $(a,b)\in M$ be a matching pair, assume $b\in C_n$. Then $\partial^{Gb}_n$ is an isomorphism and ${\cal A}(Gb)$ is $G$-homotopy equivalent to the zero complex.
We have to show that $\partial^{Gb}_n$ is injective. Assume $\partial_n(\sum_{b'\in Gb}\lambda_{b'}b')=0$. Then $0=k_a(\sum_{b'\in Gb}\lambda_{b'}\partial_nb')=\lambda_{b}k_a(\partial_nb)$, which implies $\lambda_b=0$, since $k_a(\partial_nb)$ is invertible. This implies $\lambda_{b'}=0$ for all $b'\in Gb$, since $\partial_n$ is a $G$-map.
The composition ${\cal A}(Gb)\longrightarrow0\longrightarrow{\cal A}(Gb)$ is homotopic to $\id:{\cal A}(Gb)\longrightarrow{\cal A}(Gb)$ via the $G$-chain homotopy $P=(P_i)$, where
- $P_{n-1}:=(\partial^{Gb}_n)^{-1}$
- $P_i:\equiv 0$ for $i\not=n-1$
On the other hand $0\longrightarrow{\cal A}(Gb)\longrightarrow 0$ equals $\id:0\longrightarrow 0$. Hence ${\cal A}(Gb)$ is $G$-homotopy equivalent to the zero complex.
\[eqthm\] Let $M$ be a $G$-equivariant acyclic matching on $P(C_*,\Omega)$ such that $w(b\succ a)$ is invertible for any $(a,b)\in M$. Then there exists a $G$-chain complex $C_*^M$ such that $C_*\cong_G C_*^M\oplus T_*$ where $T_*=\bigoplus_{G(a,b)\in M/G}{\cal A}(Gb)$.
This is a $G$-equivariant version of Theorem 11.24 in [@buch Chapter 11.3]. Notice that $C_*^M$ is unique up to $G$-isomorphism.
By induction on the number $m=|M/G|$ of orbits in $M$. For $m=0$, we set $C_*^M:=C_*$. Now assume $m>0$. By Proposition \[smallfibers\], there exists a finite poset $Q$ and an order-preserving $G$-map with small fibers $\varphi:P
---
abstract: |
We discuss an approach to obtaining black hole quasinormal modes (QNMs) using the asymptotic iteration method (AIM), initially developed to solve second-order ordinary differential equations. We introduce the standard version of this method and present an improvement more suitable for numerical implementation. We demonstrate that the AIM can be used to find radial QNMs for Schwarzschild, Reissner-Nordström (RN) and Kerr black holes in a unified way. An advantage of the AIM over the standard continued fraction method (CFM) is that for differential equations with more than three regular singular points Gaussian eliminations are not required. However, the convergence of the AIM depends on the location of the radial or angular position; choosing the best such position in general remains an open problem. This review presents for the first time the spin $0, 1/2$ & $2$ QNMs of a Kerr black hole and the gravitational and electromagnetic QNMs of the RN black hole calculated via the AIM, and confirms results previously obtained using the CFM. We also present some new results comparing the AIM to the WKB method. Finally we emphasize that the AIM is well suited to higher-dimensional generalizations and we give an example of doubly rotating black holes.\
author:
- 'H. T. Cho'
- 'A. S. Cornell'
- Jason Doukas
- 'T. -R. Huang'
- Wade Naylor
date: '18$^{th}$ November, 2011'
title: |
A New Approach to Black Hole Quasinormal Modes:\
A Review of the Asymptotic Iteration Method
---
Introduction {#sec:1}
============
The study of quasinormal modes (QNMs) of black holes is an old and well established subject, where the various frequencies are indicative of both the parameters of the black hole and the type of emissions possible. Initially the calculation of these frequencies was done in a purely numerical way, which requires selecting a value for the complex frequency, integrating the differential equation, and checking whether the boundary conditions are satisfied. Note that in the following we shall use the definition that QNMs are defined as solutions of the perturbed field equations with boundary conditions: $$\psi(x) \to \left\{
\begin{array}{cl}
e^{-i \omega x} &\qquad x\to -\infty \\
e^{i \omega x} &\qquad x\to \infty
\end{array}
\right. \; , \label{QNMdef}$$ for an $e^{-i \omega t}$ time dependence (which corresponds to ingoing waves at the horizon and outgoing waves at infinity). Also note the boundary condition as $x\to\infty$ does not apply to asymptotically anti-de Sitter spacetimes, where instead something like a Dirichlet boundary condition is imposed, for example see Ref. [@Moss:2001ga]. Since those conditions are not satisfied in general, the complex frequency plane must be surveyed for discrete values that lead to QNMs. This technique is time consuming and cumbersome, making it difficult to systematically survey the QNMs for a wide range of parameter values. Following early work by Vishveshwara [@Vishveshwara:1970zz], Chandrasekhar and Detweiler [@Chandrasekhar:1975zza] pioneered this method for studying QNMs.
In order to improve on this, a few semi-analytic analyses were also attempted. In one approach, employed by Mashoon [*et al.*]{} [@Ferrari:1984zz], the potential barrier in the effective one-dimensional Schrödinger equation is replaced by a parameterized analytic potential barrier function for which simple exact solutions are known. The overall shape approximates that of the true black hole barrier, and the parameters of the barrier function are adjusted to fit the height and curvature of the true barrier at the peak. The resulting estimates for the QNM frequencies have been applied to the Schwarzschild, Reissner-Nordström and Kerr black holes, with agreement within a few percent with the numerical results of Chandrasekhar and Detweiler in the Schwarzschild case [@Chandrasekhar:1975zza], and with Gunter [@Gunter:1980] in the Reissner-Nordström case. However, as this method relies upon a specialized barrier function, there is no systematic way to estimate the errors or to improve the accuracy.
The method by Leaver [@Leaver1985], which is a hybrid of the analytic and the numerical, successfully generates QNM frequencies by making use of an analytic infinite-series representation of the solutions, together with a numerical solution of an equation for the QNM frequencies which involves, typically by applying a Frobenius series solution approach, the use of continued fractions. This technique is known as the continued fraction method (CFM).
Historically, another commonly applied technique is the WKB approximation [@Seidel:1989bp]. Even though it is based on an approximation, this approach is powerful as the WKB approximation is known in many cases to be more accurate, and can be carried to higher orders, either as a means to improve accuracy or as a means to estimate the errors explicitly. Also it allows a more systematic study of QNMs than has been possible using outright numerical methods. The WKB approximation has since been extended to sixth-order [@Konoplya:2003ii].
However, all of these approaches have their limitations, where in recent years a new method has been developed which can be more efficient in some cases, called the asymptotic iteration method (AIM). Previously this method was used to solve eigenvalue problems [@Ciftci:2005xn] as a semi-analytic technique for solving second-order homogeneous linear differential equations. It has also been successfully shown by some of the current authors that the AIM is an efficient and accurate technique for calculating QNMs [@Cho:2009cj].
As such, we will review the AIM as applied to a variety of black hole spacetimes, making (where possible) comparisons with the results calculated by the WKB method and the CFM à la Leaver [@Leaver1985]. Therefore, the structure of this paper shall be: In Sec. \[sec:2\] we shall review the AIM and the improved method of Ciftci [*et al.*]{} [@Ciftci:2005xn] (also see Ref. [@Barakat:2006ki]), along with a discussion of how the QNM boundary conditions are ensured. Applications to simple concrete examples, such as the harmonic oscillator and the Pöschl-Teller potential, are also provided. In Sec. \[sec:3\] the case of Schwarzschild (A)dS black holes shall be discussed, developing the integer and half-spin equations. In Sec. \[sec:4\] a review of the QNMs of Reissner-Nordström black holes shall be made, with several frequencies calculated in the AIM and compared with previous results. Sec. \[sec:5\] will review the application of the AIM to Kerr black holes for spin $0, 1/2, 2$ fields. Sec. \[sec:6\] will discuss the spin-zero QNMs for doubly rotating black holes. We then summarize and conclude in Sec. \[sec:7\].
The Asymptotic Iteration Method {#sec:2}
===============================
The Method
----------
To begin we shall now review the idea behind the AIM, where we first consider the homogeneous linear second-order differential equation for the function $\chi (x)$, $$\chi'' = \lambda_{0} ( x ) \chi' + s_{0} ( x ) \chi \; , \label{eq:Chapter 4 Equation 14}$$ where $\lambda_{0} ( x )$ and $s_{0} ( x )$ are functions in $C_{\infty} ( a , b )$. In order to find a general solution to this equation, we rely on the symmetric structure of the right-hand side of Eq. (\[eq:Chapter 4 Equation 14\]) [@Ciftci:2005xn]. If we differentiate Eq. (\[eq:Chapter 4 Equation 14\]) with respect to $x$, we find that $$\chi''' = \lambda_{1} ( x ) \chi' + s_{1} ( x ) \chi \; ,$$ where $$\lambda_{1} = \lambda'_{0} + s_{0} + ( \lambda_{0})^{2} \; \mathrm{and} \; s_{1} = s'_{0} + s_{0} \lambda_{0} \; .$$ Taking the second derivative of Eq. (\[eq:Chapter 4 Equation 14\]) we get $$\chi'''' = \lambda_{2} ( x ) \chi' + s_{2} ( x ) \chi \; ,$$ where $$\lambda_{2} = \lambda'_{1} + s_{1} + \lambda_{0} \lambda_{1} \hspace{1cm} \mathrm{and} \hspace{1cm} s_{2} = s'_{1} + s_{0} \lambda_{1} \; .$$ Iteratively, for the $( n + 1 )^{\mathrm{th}}$ and the $( n + 2 )^{\mathrm{th}}$ derivatives, $n = 1 , 2 , \ldots$, we have $$\chi^{ ( n + 1 ) } = \lambda_{n - 1} ( x ) \chi' + s_{n - 1} ( x ) \chi \; , \label{eq:Chapter 4 Equation 15}$$ which brings us to the crucial observation of the AIM: differentiating the above equation $n$ times with respect to $x$ leaves a symmetric
---
abstract: 'We compute the number of coverings of ${{\mathbb{C}}}P^1\setminus\{0, 1, \infty\}$ with a given monodromy type over $\infty$ and given numbers of preimages of 0 and 1. We show that the generating function for these numbers enjoys several remarkable integrability properties: it obeys the Virasoro constraints, an evolution equation, the KP (Kadomtsev-Petviashvili) hierarchy and satisfies a topological recursion in the sense of Eynard-Orantin.'
address:
- |
Steklov Mathematical Institute\
8 Gubkin St.\
Moscow 119991 Russia
- |
St.Petersburg Department of the Steklov Mathematical Institute\
Fontanka 27\
St. Petersburg 191023, and Chebyshev Laboratory of St. Petersburg State University\
14th Line V.O. 29B\
St.Petersburg 199178 Russia
author:
- 'M. Kazarian, P. Zograf'
title: 'Virasoro constraints and topological recursion for Grothendieck’s dessin counting'
---
Introduction and preliminaries
==============================
Enumerative problems arising in various fields of mathematics, from combinatorics and representation theory to algebraic geometry and low-dimensional topology, often bear much in common. In many cases the generating functions associated with these problems exhibit similar behavior – in particular, they may satisfy
- Virasoro constraints,
- Evolution equations of the “cut-and-join” type,
- Integrable hierarchy (such as Kadomtsev-Petviashvili (KP), Korteweg-de Vries (KdV) or Toda equations),
- Topological recursion (also known as Eynard-Orantin recursion).
Simple Hurwitz numbers provide one of the best studied examples of such an enumerative problem – indeed, their generating function satisfies the celebrated cut-and-join equation [@GJ1], the Virasoro constraints (via the ELSV theorem [@ELSV] and the famous Mumford’s Grothendieck-Riemann-Roch formula [@M] it reduces to the Witten-Kontsevich potential), the KP hierarchy [@O], [@KL] or [@K], and the topological recursion [@EMS]. Other examples include the Witten-Kontsevich theory, Mirzakhani’s Weil-Petersson volumes, Gromov-Witten invariants of the complex projective line, invariants of knots, etc. (see [@EO1], [@EO2] for a review).
These remarkable integrability properties of generating functions usually result from matrix model reformulations of the corresponding counting problems. However, in this paper we show that for the enumeration of Grothendieck’s [*dessins d’enfants*]{} all these properties follow from pure combinatorics in a rather straightforward way.
The origin of Grothendieck’s theory of dessins d’enfants [@G] lies in the famous result by Belyi:
[(Belyi, [@B])]{} A smooth complex algebraic curve $C$ is defined over the field of algebraic numbers ${\overline{\mathbb{Q}}}$ if and only if there exists a non-constant meromorphic function $f$ on $C$ (or a holomorphic branched cover $f:C\to{\mathbb{C}P^1}$) that is ramified only over the points $0,1,\infty\in{\mathbb{C}P^1}$.
We call $(C,f)$, where $C$ is a smooth complex algebraic curve and $f$ is a meromorphic function on $C$ unramified over ${\mathbb{C}P^1}\setminus\{0,1,\infty\}$, a [*Belyi pair*]{}. For a Belyi pair $(C,f)$ denote by $g$ the genus of $C$ and by $d$ the degree of $f$. Consider the inverse image $f^{-1}([0,1])\subset C$ of the real line segment $[0,1]\subset{\mathbb{C}P^1}$. This is a connected bicolored graph with $d$ edges, whose vertices of two colors are the preimages of 0 and 1 respectively, and the ribbon graph structure is induced by the embedding $f^{-1}([0,1])\hookrightarrow C$. (Recall that a ribbon graph structure is given by prescribing a cyclic order of half-edges at each vertex of the graph.) The following is straightforward (cf. also [@LZ]):
\[Gr\][(Grothendieck, [@G])]{} There is a one-to-one correspondence between the isomorphism classes of Belyi pairs and connected bicolored ribbon graphs.
A connected bicolored ribbon graph representing a Belyi pair is called Grothendieck’s [*dessin d’enfant*]{}.[^1]
Let $(C,f)$ be a Belyi pair of genus $g$ and degree $d$, and let ${\Gamma}=f^{-1}([0,1])\hookrightarrow C$ be the corresponding dessin. Put $k=|f^{-1}(0)|,\;l=|f^{-1}(1)|$ and $m=|f^{-1}(\infty)|$, then we have $2g-2=d-(k+l+m)$. We assume that the poles of $f$ are labeled and denote the set of their orders by $\mu=(\mu_1,\ldots,\mu_m)$, so that $d=\sum_{i\geq 1}\mu_i$. The triple $(k,l,\mu)$ will be called here the [*type*]{} of the dessin ${\Gamma}$, and the set of all dessins of type $(k,l,\mu)$ will be denoted by ${\mathcal{D}}_{k,l;\mu}$.
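The relation $2g-2=d-(k+l+m)$ already determines the genus of a dessin from its type. A minimal sketch of this count, sanity-checked on the standard Belyi pair $f(z)=z^d$ on ${\mathbb{C}P^1}$ (one preimage of 0, $d$ simple preimages of 1, a single pole of order $d$ at infinity), reads:

```python
def dessin_genus(k, l, mu):
    """Genus of a dessin of type (k, l, mu) via 2g - 2 = d - (k + l + m)."""
    d, m = sum(mu), len(mu)
    two_g_minus_two = d - (k + l + m)
    assert two_g_minus_two % 2 == 0 and two_g_minus_two >= -2
    return two_g_minus_two // 2 + 1

# f(z) = z^5 on CP^1: k = 1, l = 5, mu = (5), giving genus 0 as expected.
print(dessin_genus(1, 5, [5]))  # -> 0
```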
Actually, instead of the dessin ${\Gamma}=f^{-1}([0,1])$ corresponding to a Belyi pair $(C,f)$ it is more convenient to consider the graph ${\Gamma}^*=\overline{f^{-1}(1/2+\sqrt{-1}{\mathbb{R}})}$ dual to $\Gamma$ (where the bar denotes the closure in $C$), see Fig. \[dual\]. The graph ${\Gamma}^*$ is connected, has $m$ ordered vertices of even degrees $2\mu_1,\ldots,2\mu_m$ at the poles of $f$ and inherits a natural ribbon graph structure. Moreover, the boundary components (faces) of ${\Gamma}^*$ are naturally colored: a face is colored in white (resp. in gray) if it contains a preimage of 0 (resp. 1), and every edge of ${\Gamma}^*$ belongs to precisely two boundary components of different color.
![Decomposition of ${\mathbb{C}P^1}$ into two 1-gons.[]{data-label="dual"}](dual){width="5cm"}
In this paper we are interested in the weighted count of labeled dessins d’enfants of a given type. Namely, define $$\begin{aligned}
N_{k,l}(\mu)=N_{k,l}(\mu_1,\ldots,\mu_m)=\sum_{{\Gamma}\in{\mathcal{D}}_{k,l,\mu}}\frac{1}{|{\rm Aut}_b {\Gamma}|}\;,\end{aligned}$$ where ${\rm Aut}_b {\Gamma}$ denotes the group of automorphisms of ${\Gamma}$ that preserve the boundary componentwise.[^2] Consider the total generating function $$\begin{aligned}
\label{gf}
F(s,u,v,p_1,p_2,\dots) = \sum_{k,l,m\geq 1}\frac{1}{m!}\sum_{\mu\in{\mathbb{Z}}_+^m} N_{k,l}(\mu) s^{d} u^k v^l\, p_{\mu_1}\ldots p_{\mu_m}\;,\end{aligned}$$ where the second sum is taken over all ordered sets $\mu=(\mu_1,\ldots,\mu_m)$ of positive integers, and $d=\sum_{i=1}^m \mu_i$.
The objective of this paper is to show that the generating function $F$ satisfies all four integrability properties listed at the beginning of this section – namely, Virasoro constraints, an evolution equation, the KP (Kadomtsev-Petviashvili) hierarchy and a topological recursion. We prove the Virasoro constraints by a bijective combinatorial argument and derive from them all other properties of $F$.[^3] As a result, we obtain a simpler version of the topological recursion in terms of homogeneous components of $F$. We also revisit the problem of enumeration of the ribbon graphs with a prescribed boundary type. Topological recursion for this problem was first established in [@EO2] (cf. also [@DMSS]). In this paper we give a different, more streamlined proof of it based on the Virasoro constraints and show that the corresponding generating function satisfies an evolution equation and the KP hierarchy as well. These (and other) examples convincingly demonstrate that Virasoro constraints imply topological recursion and are in fact equivalent to it.
Additionally, we show how our results can be applied to effectively enumerate orient
---
author:
- 'S. Barceló Forteza'
- 'T. Roca Cortés'
- 'A. García Hernández'
- 'R.A. García'
bibliography:
- 'tcb.bib'
date: 'Received 8 April 2016; Accepted 24 February 2017'
title: 'Evidence of chaotic modes in the analysis of four [[$\delta$ Scuti ]{}]{}stars'
---
Introduction {#s:intro}
============
The launch of space telescopes such as the MOST, CoRoT, and Kepler satellites [@Walker2003; @Baglin2006; @Borucki2010] marked the beginning of the precise study of stellar oscillations in stars other than the Sun. Since then, the high quality of the light curves has allowed the precise characterization of the mode parameters of different kinds of stars, as well as the study of their variation with time and their connection with the stellar structure.\
Although the power-spectral structure of the stars with solar-type oscillations is well known, this is not the case for [[$\delta$ Scuti ]{}]{}stars. The power spectrum of these stars shows a complex structure with dominant peaks of moderate amplitudes and many hundreds of lower amplitude peaks that form a flat plateau [e.g. @Poretti2009], the so-called “grass”. After observation of the “grass”, a long-standing debate about its origin started, including the possibility of it arising from spurious signals produced during the analysis of the data [@Balona2014a].\
A huge theoretical effort has been made to find a possible physical phenomenon behind this power-spectral structure. Some of these arguments are:\
1. Less effective disc-disc averaging of the flux owing to the geometry of the [[$\delta$ Scuti ]{}]{}star. Therefore, it is possible to find modes with degrees higher than in the spherically symmetric case ($l > 4$) [@Balona1999]. Although @Balona2011 find that most [[$\delta$ Scuti ]{}]{}stars do not seem to have a high enough density of peaks to support this possibility, several stars seem to show modes with high degrees, up to $l = 20$ [@Kennelly1998; @Poretti2009].\
2. A granulation background signal due to the effect of a thin outer convective layer [@Kallinger2010]. This effect is found to be more important in cool [[$\delta$ Scuti ]{}]{}stars [@Balona2011a].\
3. Variations with time that produce sidelobes of the main peak of the spectra. @Balona2011 find that around $\sim$45% of the spectra of Kepler [[$\delta$ Scuti ]{}]{}stars have a one-sided sidelobe. They discard effects such as binarity because these yield amplitude-symmetric equal-spaced multiplets [@ShK2012]. However, there are other causes that produce variations with non-symmetric amplitude multiplets, such as resonant mode coupling [RMC; see @BarceloForteza2015 and references therein].\
4. A magnetic field in a rotating star splits each peak of the rotational multiplet into $(2l+1)$, meaning that one mode is split into $(2l+1)^{2}$ peaks [@Goode1992]. Magnetic fields have been detected in the surface of $\sim$7% of main sequence and pre-main sequence intermediate-mass and massive stars [@Mathis2015]. However, [[$\delta$ Scuti ]{}]{}stars with measurable magnetic fields are not common because only one [[$\delta$ Scuti ]{}]{}star shows a magnetic field [@Neiner2015] and another is suggested to be magnetic from its chemical abundance [@Escorza2016].\
5. The oblateness of the star produced by high rotation rates is the cause of the appearance of a significant number of chaotic modes [@Lignieres2009].\
The determination of the fundamental structural parameters of these stars, such as mass, inclination, rotation rate, and convective efficiency, can help us unveil which of these mechanisms are responsible for this kind of power-spectral structure. The four [[$\delta$ Scuti ]{}]{}stars observed by the CoRoT and Kepler satellites that are characterized in this paper are CID 546, CID 3619, CID 8669, and KIC 5892969. The differences in their power spectra help us in this aim. In Sect. \[s:dScu\] we describe the main characteristics of this kind of star. The way in which their oscillations are analysed to obtain the parameters of the modes is presented in Sect. \[s:dSBF\]. Results for each target star are discussed in Sect. \[s:4dScu\]. In Sect. \[s:regular\] we estimate their structural parameters. The power-spectral structure is studied in depth and discussed in Sect. \[s:grass\]. In the last section we present our conclusions.
[[$\delta$ Scuti ]{}]{}type stars {#s:dScu}
=================================
[[$\delta$ Scuti ]{}]{}stars are classical pulsators with oscillation frequencies between $\sim$60 and $\sim$900 $\mu$Hz [e.g., @Zwintz2013]. These stars are located on or slightly off the main sequence, with spectral types between A2 and F5 [@Breger2000]. They are intermediate-mass stars that show fast rotation rates, as is common in stars within their mass domain or of higher mass [@Royer2007]. In fact, one of the reasons that [[$\delta$ Scuti ]{}]{}stars can be separated from RR Lyrae stars is their higher rotational velocity, $v \mathrm{sin}i > 10$ km/s [see @Peterson1996]. Other typical characteristics of [[$\delta$ Scuti ]{}]{}stars are detailed in Table \[t:dScuchar\].\
Characteristic From To
---------------------------------- ------ ------ --
Spectral-type F5 A2
Luminosity class III V
$M$ ($M_{\odot}$) 1.5 2.5
$T_{\mathrm{eff}}$ (K) 6300 8600
$\mathrm{log} ~\textit{g}$ (cgs) 3.2 4.3
$v \mathrm{sin}i$ (km/s) 10 250
$\nu$ ($\mu$Hz) 60 930
A (mag) 0.3
: Typical values of the stellar characteristics of [[$\delta$ Scuti ]{}]{}stars by [@Breger2000], [@Aerts2010], and [@Uytterhoeven2011][]{data-label="t:dScuchar"}
Hybrid stars {#ss:subgroups}
------------
Several subgroups can be distinguished from the main class of [[$\delta$ Scuti ]{}]{}stars pulsating with nonradial p-modes, such as High Amplitude [[$\delta$ Scuti ]{}]{}stars (HADS), SX Phe variables, or $\delta$ Scu/$\gamma$ Dor hybrid stars [@Breger2000]. This last group stems from the observation of g-modes in [[$\delta$ Scuti ]{}]{}type stars with frequencies typical of [[$\gamma$ Doradus ]{}]{}stars, meaning $\nu \sim \left[6-60 \right] \mu$Hz. @Uytterhoeven2011 point out that a star can be classified as hybrid when all three of the following conditions are fulfilled:\
1) Typical frequencies of both kinds of stars are detected.\
2) The amplitudes of both domains are comparable, within a factor $\lesssim 5$.\
3) There are two independent frequencies in both domains with amplitudes higher than 100 parts per million (ppm).\
If the star is hybrid it would be a $\delta$ Scu/$\gamma$ Dor or a $\gamma$ Dor/$\delta$ Scu star depending on which part is the dominant one [@Grigahcene2010]. In this way, a large fraction of [[$\delta$ Scuti ]{}]{}and [[$\gamma$ Doradus ]{}]{}stars, $\sim$36%, are found to be hybrids. Other studies suggest that all [[$\delta$ Scuti ]{}]{}stars are hybrids [@Balona2014].\
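As a rough illustration only, the three conditions above can be encoded as a filter over a list of extracted peaks. The domain boundaries, function names, and the synthetic peak list below are assumptions made for this sketch, not values taken from @Uytterhoeven2011:

```python
# Hypothetical sketch of the three hybrid-classification conditions.
# Peaks are (frequency_in_uHz, amplitude_in_ppm) pairs; the domain
# split and all names here are illustrative assumptions.

GAMMA_DOR_BAND = (6.0, 60.0)    # assumed g-mode domain (uHz)
DELTA_SCU_BAND = (60.0, 900.0)  # assumed p-mode domain (uHz)

def in_band(nu, band):
    return band[0] <= nu < band[1]

def is_hybrid(peaks):
    """Return True only if all three hybrid conditions hold."""
    g = [a for nu, a in peaks if in_band(nu, GAMMA_DOR_BAND)]
    p = [a for nu, a in peaks if in_band(nu, DELTA_SCU_BAND)]
    # 1) frequencies detected in both domains
    if not g or not p:
        return False
    # 2) dominant amplitudes comparable within a factor of ~5
    if max(max(g), max(p)) > 5.0 * min(max(g), max(p)):
        return False
    # 3) at least two independent peaks above 100 ppm in each domain
    return sum(a > 100.0 for a in g) >= 2 and sum(a > 100.0 for a in p) >= 2

# Example: two strong peaks in each domain with comparable amplitudes
peaks = [(20.0, 400.0), (35.0, 250.0), (150.0, 900.0), (300.0, 320.0)]
print(is_hybrid(peaks))  # True for this synthetic peak list
```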
However, other quantities that help us differentiate [[$\delta$ Scuti ]{}]{}from [[$\gamma$ Doradus ]{}]{}stars are the convective efficiency ($\Gamma$), $$\Gamma \sim \left(T_{eff}^{3} \mathrm{log} ~\textit{g} \right)^{-\frac{2}{3}}\, ,
\label{e:conveff}$$ where $g$ is the surface gravity, and the kinetic energy of the waves ($E_{kin}$), $$E_{kin} \sim \left(A_{0} \nu_{0} \right)^{2}\, ,
\label{e:kine}$$ where $\nu_{0}$ and $A_{0}$ are the frequency and amplitude of the mode with maximum power, respectively. These magnitudes have a dominant value of $\mathrm{log}~\Gamma<-8.1$ and $\mathrm{log}~E_{kin}>10.1$ for [[$\delta$ Scuti ]{}]{}stars when the amplitude is measured in ppm and the frequency in $\mu$Hz [see @Uytterhoeven2011]. Both quantities are related to the convective zone of the star which is more efficient in [[$\gamma$ Doradus
---
author:
- 'Charalampos <span style="font-variant:small-caps;">Skokos</span>$^{1,2,}$[^1], Chris <span style="font-variant:small-caps;">Antonopoulos</span>$^{1,}$[^2], Tassos C.<span style="font-variant:small-caps;">Bountis</span>$^{1,}$[^3] and Michael N. <span style="font-variant:small-caps;">Vrahatis</span>$^{3,}$[^4]'
title: 'How does the Smaller Alignment Index (SALI) distinguish order from chaos? '
---
Introduction
============
The evaluation of the [**Smaller Alignment Index (SALI)**]{} is an efficient and simple method to determine the ordered or chaotic nature of orbits in dynamical systems. The SALI was proposed in Ref. and has been successfully applied to distinguish between ordered and chaotic motion both in symplectic maps [@Sk01] and in Hamiltonian flows.[@GRACM]
In order to compute the SALI for a given orbit one has to follow the time evolution of the orbit itself and two deviation vectors which initially point in two different directions. The evolution of these vectors is given by the variational equations for a flow and by the tangent map for a discrete–time system. At every time step the two vectors $\overrightarrow{v_1}(t)$, $\overrightarrow{v_2}(t)$ are normalized and the SALI is computed as: $$SALI(t)= \min \left\{ \left\|
\frac{\overrightarrow{v_1}(t)}{\|\overrightarrow{v_1}(t)\|}+
\frac{\overrightarrow{v_2}(t)}{\|\overrightarrow{v_2}(t)\|} \right\|
,\left\| \frac{\overrightarrow{v_1}(t)}{\|\overrightarrow{v_1}(t)\|}
-\frac{\overrightarrow{v_2}(t)}{\|\overrightarrow{v_2}(t)\|}
\right\| \right\}, \label{eq:SALI}$$ where $t$ is the continuous or the discrete time and $\|\cdot\|$ denotes the Euclidean norm.
The time evolution of the SALI clearly distinguishes between ordered and chaotic motion as follows: In the case of Hamiltonian flows or $N$–dimensional symplectic maps with $N\geqslant 2$ the SALI fluctuates around a non-zero value for ordered orbits, while it tends to zero for chaotic orbits.[@Sk01; @GRACM] In the case of 2D maps the SALI tends to zero for both ordered and chaotic orbits, following, however, completely different time rates, which again allows us to distinguish between the two cases.[@Sk01]
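To make these contrasting behaviors concrete, here is a minimal sketch (our illustration, not code from the references) that iterates the 2D standard map together with its tangent map and evaluates the SALI of Eq. (\[eq:SALI\]); the map parameters and initial conditions are illustrative choices:

```python
import numpy as np

def sali_standard_map(x, y, K, n_iter):
    """Iterate the 2D standard map y' = y + K sin x, x' = x + y'
    (mod 2*pi) and its tangent map, renormalizing the two deviation
    vectors at every step, and return the SALI after n_iter steps."""
    v1 = np.array([1.0, 0.0])
    v2 = np.array([0.0, 1.0])
    for _ in range(n_iter):
        # Jacobian of the map evaluated at the current point
        jac = np.array([[1.0 + K * np.cos(x), 1.0],
                        [K * np.cos(x), 1.0]])
        y = (y + K * np.sin(x)) % (2.0 * np.pi)
        x = (x + y) % (2.0 * np.pi)
        v1, v2 = jac @ v1, jac @ v2
        v1 /= np.linalg.norm(v1)
        v2 /= np.linalg.norm(v2)
    return min(np.linalg.norm(v1 + v2), np.linalg.norm(v1 - v2))

# Ordered orbit (inside the stable island around (pi, 0)) versus a
# strongly chaotic one: for a 2D map the SALI goes to zero in both
# cases, but at very different rates.
sali_ord = sali_standard_map(np.pi + 0.1, 0.0, K=0.5, n_iter=1000)
sali_cha = sali_standard_map(1.0, 1.0, K=5.0, n_iter=1000)
print(sali_ord, sali_cha)  # sali_cha is many orders of magnitude smaller
```

The per-step renormalization keeps both vectors of unit length, so for a chaotic orbit the SALI decays roughly like the difference between the directions of the two vectors, which collapses exponentially onto the most unstable direction.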
We have recently begun to understand the different behaviors of the SALI in regions of order and chaos. In the latter case, we have been able to connect SALI’s rapid convergence to zero to the influence of the two largest positive Lyapunov exponents of the motion.[@prep] In the present paper we shall study the behavior of the SALI in the case of ordered orbits.
The behavior of the SALI for ordered motion
===========================================
Let us try to understand why the SALI does not become zero in the case of ordered motion, by studying in detail the behavior of the deviation vectors. A suitable way to do this for conservative systems is to consider a non-trivial integrable Hamiltonian model whose orbits are bounded and lie on “nested” tori, which foliate all of the available phase space.[@LL]
Such an integrable Hamiltonian system of 2 degrees of freedom possesses, besides the Hamiltonian $H$, a second independent integral $F$ in involution with $H$: $$\{ H,F \}=0, \label{eq:HF}$$ where $\{ \cdot , \cdot \} $ denotes the usual Poisson bracket. In such systems, the motion lies in the intersection of both manifolds $$H=\widetilde{h}, \,\,\, F=\widetilde{f}, \label{eq:man}$$ where $\widetilde{h}$, $\widetilde{f}$ are the constant values of the two integrals. Thus, the orbits in the 4–dimensional phase space move instantaneously on a 2–dimensional “tangent” subspace, which is ‘perpendicular’ to the vectors $$\overrightarrow{\nabla H}= (H_x,H_y,H_{p_x}, H_{p_y}), \,\,\,
\overrightarrow{\nabla F}= (F_x,F_y,F_{p_x}, F_{p_y}),
\label{eq:grads}$$ $x$, $y$ being the generalized coordinates of the system and $p_x$, $p_y$ their conjugate momenta, while subscripts denote partial derivatives (e. g. $H_x \equiv \frac{\partial
H}{\partial x}$). In fact, the motion may be thought of as governed by either one of the Hamiltonian vector fields $$\overrightarrow{f_H}= (H_{p_x}, H_{p_y},-H_x,-H_y), \,\,\,
\overrightarrow{f_F}= (F_{p_x}, F_{p_y},-F_x,-F_y).
\label{eq:flow}$$ The vectors $\overrightarrow{\nabla H}$, $\overrightarrow{\nabla
F}$ (and hence also $\overrightarrow{f_H}$, $\overrightarrow{f_F}$) are linearly independent due to the functional independence of the two integrals at almost all points in phase space. So the corresponding unit vectors $$\widehat{f_H} =
\frac{\overrightarrow{f_H}}{\|\overrightarrow{f_H}\|} \bot
\widehat{\nabla H}, \,\,\, \widehat{f_F} =
\frac{\overrightarrow{f_F}}{\|\overrightarrow{f_F}\|} \bot
\widehat{\nabla F}, \,\,\, \mbox{ with } \,\,\, \widehat{\nabla H}
= \frac{\overrightarrow{\nabla H}}{\|\overrightarrow{\nabla H}\|},
\,\,\, \widehat{\nabla F} = \frac{\overrightarrow{\nabla
F}}{\|\overrightarrow{\nabla F}\|} \label{eq:base}$$ can be used as a basis for the 4–dimensional space where the deviation vectors evolve. This basis is in general not orthogonal as $$\langle \widehat{\nabla H}, \widehat{\nabla F} \rangle = \langle
\widehat{f_H}, \widehat{f_F} \rangle = \frac{H_x F_x + H_y F_y
+H_{p_x} F_{p_x} + H_{p_y} F_{p_y}} {\|\overrightarrow{\nabla H}\|
\,\|\overrightarrow{\nabla F}\|} \label{eq:dots1}$$ is not necessarily zero. We note that $\|\overrightarrow{\nabla H}\|
=\|\overrightarrow{f_H}\|$, $\|\overrightarrow{\nabla F}\|
=\|\overrightarrow{f_F}\|$ and $\langle \cdot , \cdot \rangle$ denotes the usual inner product. Note also that from definitions (\[eq:grads\]) and (\[eq:flow\]) we get $\langle
\widehat{\nabla H}, \widehat{f_H} \rangle = \langle
\widehat{\nabla F}, \widehat{f_F} \rangle = 0$, while (\[eq:HF\]) yields $\langle \widehat{\nabla H}, \widehat{f_F}
\rangle = \langle \widehat{\nabla F}, \widehat{f_H} \rangle = 0.$
So, using vectors (\[eq:base\]) as a basis for studying the evolution of a deviation vector $\overrightarrow{v_1}$, we can write it as $$\overrightarrow{v_1} =
a_1 \widehat{f_H} + a_2 \widehat{f_F} +
a_3 \widehat{\nabla H} + a_4 \widehat{\nabla F}
\label{eq:vector}$$ with $a_1, \, a_2, \, a_3, \, a_4 \in \mathbb{R}$. The values of the coefficients $a_i$, $i=1,2,3,4$, at different times, give us a clear picture for the evolution of $\overrightarrow{v_1}$. In the case of the 2D standard map for example, where ordered orbits lie on an invariant curve (1D torus), it has been shown both numerically and analytically[@Voz] that any deviation vector (considered as a linear combination of the vectors $\widehat{f_H}$, $\widehat{\nabla H}$ using our notation), eventually becomes tangent to the invariant curve, tending to the tangential direction as $n^{-1}$, with $n$ being the number of iterations.
Similarly, in the case of an integrable 2D Hamiltonian the deviation vector $\overrightarrow{v_1}$ tends to fall on the “tangent space” of the torus, spanned at each point by $\widehat{f_H}$, $\widehat{f_F} $, meaning that in Eq. (\[eq:vector\]) $a_3 \rightarrow 0$, $a_4 \rightarrow 0$, while $a_1$ and $a_2$ are, in general, different from zero. This is analogous to what has been found for the 2D standard map in Ref. . As a model for studying this behavior let us consider the 2D Van der Waals Hamiltonian [@kn:1] $$H(x,y,p_{x},p_y)=
\frac{1}{2}(p_{x}^{
---
abstract: 'We study explosive percolation (EP) on the Erdös-Rényi network for the product rule (PR) and sum rule (SR). Initially it was claimed that EP describes a discontinuous phase transition; it is now well accepted as a probabilistic model for a thermal continuous phase transition (CPT). However, no model for CPT is complete unless we know how to relate its observable quantities with those of thermal CPT. To this end, we define entropy, specific heat, re-define susceptibility and show that they behave exactly like their thermal counterparts. We obtain the critical exponents $\nu, \alpha, \beta$ and $\gamma$ numerically and find that both PR and SR belong to the same universality class and that they obey the Rushbrooke inequality.'
author:
- 'M. K. Hassan and M. M. H. Sabbir'
title: 'Product-Sum universality and Rushbrooke inequality in explosive percolation '
---
The notion of percolation is omnipresent in many seemingly disparate natural and man-made systems [@ref.Stauffer]. Examples include the spread of forest fires, the flow of fluid through porous media, the spread of biological and computer viruses etc. [@ref.saberi; @ref.Newman_virus; @ref.Moore_virus]. Besides such direct applications, percolation is best known as a paradigmatic model for phase transition. One of the simplest models for percolation is the classical random percolation (RP) on the Erdös-Rényi (ER) network, in which one starts with $N$ labeled nodes that are initially all isolated [@ref.erdos]. Then at each step a link, say $e_{ij}$, is picked at random from all possible links and occupied to connect nodes $i$ and $j$. As the number of occupied links $n=tN$ increases from zero, clusters, i.e. sets of contiguous nodes connected by occupied links, form and on average grow. In the process, the largest cluster $s_{{\rm max}}$ undergoes a transition across $t_c=0.5$ from minuscule size ($s_{{\rm max}}\sim \log N$) to giant size ($s_{{\rm max}} \sim N$). The emergence of such a threshold value $t_c$ is accompanied by a sudden change in the order parameter $P$, the ratio of the largest cluster to the network size, such that $P=0$ at $t\leq t_c$ and $P>0$ at $t>t_c$ in the limit $N\rightarrow \infty$. This is reminiscent of a second order or continuous phase transition (CPT).
In 2009, Achlioptas [*et al.*]{} proposed a class of percolation models in which two links are picked randomly instead of one at each step [@ref.Achlioptas]. However, ultimately only the link that results in the smaller cluster is occupied, and the other is discarded for future picking. One of the key features of this rule, which is now known as the Achlioptas process (AP), is that it discourages the growth of the larger clusters and encourages the smaller ones, which inevitably delays the transition. Eventually, when the system reaches near the critical point, it is so unstable that occupation of one or two links triggers an explosion of growth. This leads to the emergence of a giant cluster with a bang, and hence it is called “explosive percolation" (EP). Indeed, the corresponding $P$, in contrast to its classical counterpart, undergoes such an abrupt transition that it was at first mistaken for a discontinuity and suggested to exhibit a first order or discontinuous transition. These results jolted the scientific community through a series of claims, retractions and counter-claims [@ref.Friedman; @ref.ziff_1; @ref.radicchi_1; @ref.Costa_2; @ref.souza; @ref.cho_1; @ref.ara; @ref.da_Costa; @ref.Grassberger; @ref.Bastas]. It is now well settled that the explosive percolation transition is actually continuous but with first-order-like finite-size effects [@ref.Grassberger; @ref.Bastas; @ref.Riordan; @ref.bastas_review; @ref.Choi].
In general, scientists use theoretical models, just like architects use geometric models before building large expensive structures, because they provide useful insights into the real-world systems. The real systems that percolation represents are complex, as they often involve quantum and many-particle interaction effects. However, modeling is only useful if we know how to relate its various observable quantities to those of the real-world systems. To this end, Kasteleyn and Fortuin used the mapping of the percolation problem onto the $q$-state Potts model in order to relate its observables to the thermal quantities of the Potts model [@ref.Kasteleyn]. Owing to that mapping we know that $P$ is the order parameter, the mean cluster size $\langle s\rangle$ is the susceptibility etc., but there is no equivalent counterpart of the entropy. In thermal CPT, the entropy $S$ and the order parameter (OP) complement each other: $S$, which measures the degree of disorder, is maximum where the OP is zero, and the OP, which measures the extent of order, is maximum where $S$ is zero. A similar behaviour is also expected in percolation in order to elucidate whether it too is an order-disorder transition. Universality is another aspect that thermal CPT and random percolation have in common. In the case of EP, we are yet to find universality of any kind. Another interesting aspect of thermal CPT is that its critical exponents $\alpha, \beta$ and $\gamma$ obey the Rushbrooke inequality $\alpha+2\beta+\gamma\geq 2$, which reduces to an equality under the static scaling hypothesis [@ref.Stanley]. Whether it holds in explosive percolation or not is also an interesting issue.
In this article, we investigate EP on the ER networks for the product rule (PR) and sum rule (SR) and find their critical exponents numerically. First, we define the susceptibility $\chi$ as the ratio of the successive jump $\Delta P$ of $P$ to the magnitude of the successive interval $\Delta t$, instead of using the mean cluster size $\langle s\rangle$ as the susceptibility. Then we obtain the critical exponents $\nu$ of the correlation length, $\gamma$ of $\chi$, and $\beta$ of $P$. Note that $\langle s\rangle$ exhibits the expected divergence only if the largest cluster size is excluded from it, and even then it gives too large a value of $\gamma$. Realizing these drawbacks, many researchers are already considering alternative definitions [@ref.radicchi_1; @ref.ziff_3; @ref.qian]. Second, we define an entropy $H$ for EP and find that it is continuous across the whole spectrum of the control parameter $t$, which clearly reveals that the EP transition is indeed continuous in nature. We then define the specific heat as $C=q{{dH}\over{dq}}$, where $q=(1-t)$, and find that it diverges with a positive critical exponent $\alpha$. The most intriguing and unexpected finding of this work is that PR and SR belong to the same universality class. Besides, we find that the elusive Rushbrooke inequality holds in EP. Recently, using the same definitions for entropy, specific heat and susceptibility, we have shown that the Rushbrooke inequality holds in the random percolation too [@ref.hassan_didar]. Finding that the RI also holds in EP on a random network provides a clear testament to the robustness of our results.
Percolation is all about clusters, as every observable quantity is related, one way or another, to the clusters by definition. Initially, all the labeled nodes are considered isolated so that every node is a cluster of its own size. The process starts by picking two distinct links, say $e_{ij}$ and $e_{kl}$, randomly at each step. To apply the PR, we then calculate the products, $\Pi_{ij}=s_i\times s_j$ and $\Pi_{kl}= s_k \times s_l$, of the sizes of the clusters that the two nodes on either side of each link belong to. The link with the smaller of the products $\Pi_{ij}$ and $\Pi_{kl}$ is occupied. On the other hand, if we find $\Pi_{ij}=\Pi_{kl}$ then we occupy one of the two links at random with equal probability. In the case of SR, we take the sums $\Sigma_{ij}=s_i+s_j$ and $\Sigma_{kl}=s_k+s_l$ instead of the products and do the rest exactly as we did for PR. Each time we occupy a link, either the size of an existing cluster grows due to occupation of an inter-cluster link or the cluster size remains the same due to addition of an intra-cluster link. In either case, the growth of large clusters is always disfavoured, which is in sharp contrast to its RP counterpart. Thus, the emergence of a giant cluster is considerably slowed down, but eventually, when it happens, it happens abruptly yet without discontinuity.
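A bare-bones sketch of this selection rule, using a standard union-find structure, is given below; the system size, seed, and helper names are illustrative assumptions, not the code used for the statistics reported here:

```python
import random

def find(parent, i):
    # Root lookup with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def explosive_percolation_pr(N, steps, seed=0):
    """Achlioptas process with the product rule on N nodes.
    Returns the order parameter P = s_max / N after `steps` links.
    For brevity, self-links are allowed; they act like intra-cluster
    links and leave the cluster sizes unchanged."""
    rng = random.Random(seed)
    parent = list(range(N))
    size = [1] * N
    s_max = 1
    for _ in range(steps):
        # pick two candidate links at random
        i, j = rng.randrange(N), rng.randrange(N)
        k, l = rng.randrange(N), rng.randrange(N)
        ri, rj = find(parent, i), find(parent, j)
        rk, rl = find(parent, k), find(parent, l)
        # occupy the link with the smaller product of cluster sizes
        if size[ri] * size[rj] <= size[rk] * size[rl]:
            a, b = ri, rj
        else:
            a, b = rk, rl
        if a != b:  # inter-cluster link: merge the two clusters
            size[a] += size[b]
            parent[b] = a
            s_max = max(s_max, size[a])
    return s_max / N

N = 2000
print(explosive_percolation_pr(N, N // 2))  # t = 0.5: still minuscule
print(explosive_percolation_pr(N, N))       # t = 1.0: giant cluster
```

Swapping the product `size[ri] * size[rj]` for the sum `size[ri] + size[rj]` turns this into the SR variant.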
We first investigate not the $P$ itself but its successive jump $\Delta P$ within successive interval $\Delta t=1/N$. The idea of successive jump size $\Delta P$ was first introduced by Manna [@ref.manna]. We use it to define susceptibility as $$\chi(t)={{\Delta P}\over
---
abstract: 'We have modelled X-ray burst oscillations observed with the Rossi X-ray Timing Explorer (RXTE) from two low mass X-ray binaries (LMXB): 4U 1636-53 with a frequency of 580 Hz, and 4U 1728-34 at a frequency of 363 Hz. We have computed least squares fits to the oscillations observed during the rising phase of bursts using a model which includes emission from either a single circular hot spot or a pair of circular antipodal hot spots on the surface of a neutron star. We model the spreading of the thermonuclear hot spots by assuming that the hot spot angular size grows linearly with time. We calculate the flux as a function of rotational phase from the hot spots and take into account photon deflection in the relativistic gravitational field of the neutron star assuming the exterior space-time is the Schwarzschild metric. We find acceptable fits with our model in a $\chi^2$ sense, and we use these to place constraints on the compactness of the neutron stars in these sources. For 4U 1636-53, in which detection of a 290 Hz sub-harmonic supports the two spot model, we find that the compactness (i.e., mass/radius ratio) is constrained to be $M/R < 0.163$ at 90 % confidence ($G = c = 1$). This requires a relatively stiff equation of state (EOS) for the stellar interior. For example, if the neutron star has a mass of $1.4 M_{\odot}$ then its radius must be $> 12.8$ km. Fits using a single hot spot model are not as highly constraining. We discuss the implications of our findings for recent efforts to calculate the EOS of dense nucleon matter and the structure of neutron stars.'
author:
- 'Nitya R. Nath, Tod E. Strohmayer & Jean H. Swank'
title: 'Bounds on Compactness for LMXB Neutron Stars from X-ray Burst Oscillations'
---
Introduction
============
X-ray brightness oscillations with frequencies in the 300 - 600 Hz range have now been observed during thermonuclear X-ray bursts from 10 LMXB systems (see Strohmayer 2001 for a recent review). Substantial evidence suggests that rotational modulation of a localized hot spot or a pair of antipodal spots is responsible for the observed oscillations, especially during the rising phase (see for example Strohmayer, Zhang & Swank 1997; Heise 2000). As the mass to radius ratio, $M/R$ or “compactness”, of a neutron star increases, the deflection of photons by its relativistic gravitational field becomes stronger and consequently a greater fraction of the stellar surface is visible to an observer at any given time. This effect weakens the spin modulation pulsations produced by a rotating hot spot on the neutron star surface. Because of this effect, Strohmayer et al. (1997) suggested that modelling of the burst oscillation amplitude could in principle provide a constraint on the neutron star compactness. Strohmayer, Zhang & Swank (1997) investigated the temporal evolution of the amplitude of burst oscillations from 4U 1728-34 and showed that a simple model of an expanding hot spot on a neutron star was in qualitative agreement with the data. Miller & Lamb (1998) performed a study of the dependence of the oscillation amplitude from a point-like hot spot on the stellar compactness, the surface rotational velocity, and the spectrum of the surface emission, and showed that if two antipodal spots are present, the resulting limits on the compactness can be highly constraining. Weinberg, Miller, & Lamb (2000) have recently performed similar calculations but allow for hot spots of finite size. Psaltis, Ozel, & DeDeo (2001) have also recently investigated the effects of relativistic photon deflection on the inferred properties of thermally emitting neutron stars.
Miller (1999) reported the detection of a 290 Hz sub-harmonic of the stronger 580 Hz oscillation frequency in a study of 5 bursts from 4U 1636-53. This led him to suggest that the neutron star spin frequency is actually 290 Hz in this source and that two antipodal hot spots produce the 580 Hz modulation. The observation of a pair of high frequency quasi-periodic oscillations (QPO) with a frequency separation of $\sim 251$ Hz in this source (Mendez, van der Klis, & van Paradijs 1998), has also been interpreted, in the context of a beat frequency model for the high frequency QPO, as evidence for a neutron star spin frequency of $\sim 290$ Hz rather than 580 Hz (see Miller, Lamb & Psaltis 1998). We note, however, that recent efforts to confirm the sub-harmonic detection in subsequent bursts from 4U 1636-53 have not been successful (Strohmayer 2001).
Strohmayer et al. (1998a) reported very large amplitude oscillations at 580 Hz during the rising phase of some bursts from 4U 1636-53. This combination of large measured amplitudes near burst onset and the evidence that two hot spots may produce the modulation, make 4U 1636-53 perhaps the best source currently known in which to constrain the neutron star mass and radius based on the properties of burst oscillations. Here we report on our efforts to do this by detailed modelling of the burst oscillations observed during the rising phase of bursts. We focus on 4U 1636-53 because if the two hot spot conjecture is correct for this object then our results place strong constraints on the neutron star compactness. However, we also summarize our results for 4U 1728-34, a source which has also shown strong oscillations during the rising phase of bursts. The plan of this paper is as follows. In §2 we discuss the basic features and assumptions of our model. In §3 we outline the method of calculation. In §4 we describe our model fitting procedures and our results for both single and antipodal hot spot models. We also summarize the results of fits to data from 4U 1636-53 and 4U 1728-34. In §5 we summarize our results and discuss them in the context of recent efforts to constrain the EOS of neutron star matter. We also discuss future steps we will take to improve the hot spot model.
Model Assumptions
=================
Both spectral and temporal evidence indicate that the X-ray emission near the onset of at least some thermonuclear bursts is localized to a “hot spot” which spreads in some fashion until eventually encompassing all of the neutron star surface (see for example Strohmayer, Zhang & Swank 1997). This likelihood was also recognized early on in theoretical studies of thermonuclear bursts (Joss 1978). Motivated by this we model the burst rise by assuming that all the burst emission comes from either one or a pair of circular hot spots which expand linearly in angular size with time. The rest of the neutron star surface is assumed dark. Photon trajectories are computed assuming the Schwarzschild metric describes the space-time exterior to the star. This is a reasonable approximation since the influence of the neutron star’s rotation on the space-time only affects the oscillation amplitude to second order (Miller & Lamb 1996). For the present work we shall only investigate bolometric modulations across the full $\sim 2 - 90$ keV bandpass of the RXTE Proportional Counter Array (PCA). We shall also ignore Doppler shifts and relativistic aberration produced by the rotational motion of the hot spot (see for example Miller 1999; Chen & Shaham 1989). We discuss later the likely influence on our results of this approximation.
Our model is uniquely characterized by seven parameters: (1) an overall source intensity or normalization, $S$, which can be thought of as the flux leaving unit surface area of the neutron star; (2) the neutron star compactness, $\beta =
M/R$, where $M$ and $R$ are the stellar mass and radius, respectively; (3) the initial angular size of the spot (half of the subtended angle), $\alpha_0$; (4) the angular growth rate of the hot spot, $\dot\alpha$; (5) the initial rotational phase, $\delta_0$; (6) the latitude of the spot center, $\theta_s$, measured from the rotational equator; and (7) the latitude of the observer's line of sight, $\theta_{obs}$, also measured from the rotational equator. One of our primary goals is to determine an upper bound on the compactness. To do this within the context of our model we set the hot spot latitude and observation latitude to zero. That is, both the hot spots and the line of sight to the observer are centered on the rotational equator. This geometry produces the largest possible modulation amplitude. Since any observed modulation must be equal to or less than this limit, and since the modulation amplitude decreases with increasing compactness, the upper limit follows. For completeness, we also investigate the influence of moving the hot spot and the line of sight off the rotational equator. The geometry of our model is illustrated in Figure 1. Related hot spot models have been worked out by Pechenick, Ftaclas, & Cohen (1983) and Strohmayer (1992).
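As a quick numerical illustration of why the modulation amplitude constrains the compactness, one can replace exact Schwarzschild ray tracing with Beloborodov's approximate light-bending formula $\cos\alpha \approx u + (1-u)\cos\psi$, where $u = 2M/R = 2\beta$ (an approximation we introduce here for the sketch only; it is not the method used in this paper). For a point-like equatorial hot spot viewed from the equator, the bolometric flux is then proportional to $\max(0,\cos\alpha)$, and the pulsed fraction falls as the star becomes more compact:

```python
import numpy as np

def pulsed_fraction(beta, n_phase=1000):
    """Pulsed fraction (Fmax - Fmin)/(Fmax + Fmin) of a point-like
    equatorial hot spot seen by an equatorial observer, using the
    approximate bending law cos(alpha) ~ u + (1 - u) cos(psi) with
    u = 2*beta (an assumption of this sketch)."""
    u = 2.0 * beta
    psi = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    # flux proportional to the projection factor, clamped at occultation
    flux = np.maximum(0.0, u + (1.0 - u) * np.cos(psi))
    return (flux.max() - flux.min()) / (flux.max() + flux.min())

# Weaker modulation for more compact stars:
for beta in (0.10, 0.20, 0.30):
    print(beta, pulsed_fraction(beta))
```

For $u \le 0.5$ the spot is occulted during part of the rotation and the pulsed fraction is unity; beyond that the spot is always visible and the pulsed fraction decreases as $(1-u)/u$, which is the qualitative effect exploited here to bound $M/R$.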
Method of Calculation
=====================
The geometry of a photon trajectory in relation to the observer's line of sight $\vec r_{obs}$ is shown in Figure 1. The figure is drawn with $\theta_s =
\theta_{obs} = 0$. For any single point on the hot spot with radius vector $\vec r$, the path of a photon reaching the observer lies in the plane of $\vec r$ and $\vec r_{obs}$, and is asymptotically parallel to $\vec r_{obs}$ with impact parameter $b$. The two angles, $\phi$ (between $\vec r$ and $\vec r_{obs}$) and $\psi$ (the emission angle with respect to the surface normal),
---
abstract: 'In this paper we study the topic of signal restoration using complexity regularization, quantifying the compression bit-cost of the signal estimate. While complexity-regularized restoration is an established concept, solid practical methods were suggested only for the Gaussian denoising task, leaving more complicated restoration problems without a generally constructive approach. Here we present practical methods for complexity-regularized restoration of signals, accommodating deteriorations caused by a known linear degradation operator of an arbitrary form. Our iterative procedure, obtained using the alternating direction method of multipliers (ADMM) approach, addresses the restoration task as a sequence of simpler problems involving $ \ell _2$-regularized estimations and rate-distortion optimizations (considering the squared-error criterion). Further, we replace the rate-distortion optimizations with an arbitrary standardized compression technique and thereby restore the signal by leveraging underlying models designed for compression. Additionally, we propose a shift-invariant complexity regularizer, measuring the bit-cost of all the shifted forms of the estimate, extending our method to use averaging of decompressed outputs gathered from compression of shifted signals. On the theoretical side, we present an analysis of complexity-regularized restoration of a cyclo-stationary Gaussian signal from deterioration by a linear shift-invariant operator and an additive white Gaussian noise. The theory shows that optimal complexity-regularized restoration relies on an elementary restoration filter and compression spreading reconstruction quality unevenly based on the energy distribution of the degradation filter. Nicely, these ideas are realized also in the proposed practical methods. Finally, we present experiments showing good results for image deblurring and inpainting using the JPEG2000 and HEVC compression standards.'
author:
- |
Yehuda Dar, Michael Elad, and Alfred M. Bruckstein\
[^1]
bibliography:
- 'IEEEabrv.bib'
- 'complexity\_regularized\_inverse\_problems\_\_refs.bib'
title: Restoration by Compression
---
[ ]{}
\
Complexity regularization, rate-distortion optimization, signal restoration, image deblurring, alternating direction method of multipliers (ADMM).
Introduction
============
Restoration methods are often posed as inverse problems using regularization terms. While many solutions can explain a given degraded signal, regularization provides signal estimates based on prior assumptions about signals. One interesting regularization type measures the complexity of the candidate solution in terms of its compression bit-cost. Indeed, encoders (which yield the bit-cost) rely on signal models and allocate shorter representations to more likely signal instances. This approach of complexity-regularized restoration is an attractive meeting point of signal restoration and compression, two fundamental signal-processing problems.
Numerous works [@saito1994simultaneous; @natarajan1995filtering; @chang1997image; @mihcak1999low; @rissanen2000mdl; @chang2000adaptive; @liu2001complexity] considered the task of denoising a signal corrupted by an additive white Gaussian noise using complexity regularization. In [@natarajan1995filtering; @liu2001complexity], this idea is translated to practically estimating the clean signal by employing a standard lossy compression of its noisy version. However, more complex restoration problems (e.g., deblurring, super resolution, inpainting), involving non-trivial degradation operators, do not lend themselves to a straightforward treatment by compression techniques designed for the squared-error distortion measure. Moulin and Liu [@moulin2000statistical] studied the complexity regularization idea for general restoration problems, presenting a thorough theoretical treatment together with a limited practical demonstration of Poisson denoising based on a suitably designed compression method. Indeed, a general method for complexity-regularized restoration remained as an open question for a long while until our recent preliminary publication [@dar2016image], where we presented a generic and practical approach flexible in both the degradation model addressed and the compression technique utilized.
Our strategy for complexity-regularized signal restoration relies on the alternating direction method of multipliers (ADMM) approach [@boyd2011distributed], decomposing the difficult optimization problem into a sequence of easier tasks including $ \ell_2 $-regularized inverse problems and standard rate-distortion optimizations (with respect to a squared-error distortion metric). A main part of our methodology is to replace the rate-distortion optimization with standardized compression techniques enabling an indirect utilization of signal models used for efficient compression designs. Moreover, our method relates to various contemporary concepts in signal and image processing. The recent frameworks of Plug-and-Play Priors [@venkatakrishnan2013plug; @sreehari2016plug] and Regularization-by-Denoising [@romano2017little] suggest leveraging a Gaussian denoiser for more complicated restoration tasks, achieving impressive results (see, e.g., [@venkatakrishnan2013plug; @sreehari2016plug; @romano2017little; @dar2016postprocessing; @dar2016reducing; @rond2016poisson]). Essentially, our approach is the compression-based counterpart for denoising-based restoration concepts from [@venkatakrishnan2013plug; @sreehari2016plug; @romano2017little].
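The splitting just described can be sketched in a few lines. The toy DCT quantizer below is only a stand-in for a standardized codec (the paper uses JPEG2000/HEVC); the operator, step size, and test signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import circulant

def toy_compress(v, step=0.1):
    """Stand-in compression-decompression: uniform quantization of DCT
    coefficients. A real system would invoke JPEG2000/HEVC here."""
    c = dct(v, norm='ortho')
    return idct(step * np.round(c / step), norm='ortho')

def admm_restore(y, H, rho=1.0, n_iter=50, step=0.1):
    """Complexity-regularized restoration via ADMM splitting:
    x-step: an l2-regularized inverse problem (closed form),
    z-step: the rate-distortion-like proximal step, realized by the codec."""
    n = H.shape[1]
    z = np.zeros(n); u = np.zeros(n)
    A = H.T @ H + rho * np.eye(n)
    Hty = H.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(A, Hty + rho * (z - u))  # l2-regularized inversion
        z = toy_compress(x + u, step)                # "compression" as prox
        u += x - z                                   # scaled dual update
    return z

# degrade a smooth signal with a circulant blur and restore it
n = 64
t = np.arange(n)
x_true = np.cos(np.pi * 12 * (t + 0.5) / n)          # a single DCT basis direction
kernel = np.zeros(n); kernel[[0, 1, -1]] = [0.6, 0.2, 0.2]
H = circulant(kernel)                                # symmetric circulant blur
y = H @ x_true
x_hat = admm_restore(y, H)
```

Here `x_hat` recovers the content attenuated by the blur; swapping `toy_compress` for a true codec call yields the method described above.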
Commonly, compression methods process the given signal based on its decomposition into non-overlapping blocks, yielding block-level rate-distortion optimizations based on block bit-costs. The corresponding complexity measure sums the bit-costs of all the non-overlapping blocks; note, however, that this evaluation is shift sensitive. This fact motivates us to propose a shift-invariant complexity regularizer, quantifying the bit-costs of all the overlapping blocks of the signal estimate. This improved regularizer calls for our restoration procedure to use averaging of decompressed signals obtained from compressions of shifted signals. Our shift-invariant approach conforms with the Expected Patch Log-Likelihood (EPLL) idea [@zoran2011learning], where a full-signal regularizer is formed based on a block-level prior in a way leading to averaging MAP estimates of shifted signal versions. Our extended method also recalls the cycle spinning concept, presented in [@coifman1995translation] for wavelet-based denoising. Additional resemblance is to the compression postprocessing techniques in [@nosratinia2001enhancement; @nosratinia2003postprocessing], which enhance a given decompressed image by averaging supplementary compression-decompression results of shifted versions of that image; our method generalizes this approach to any restoration problem with an appropriate consideration of the degradation operator. Very recent works [@beygi2017compressed; @beygi2017efficient] suggested the use of compression techniques for compressive sensing of signals and images, but our approach examines other perspectives and settings referring to restoration problems, as will be explained below.
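The shift-averaging idea can be demonstrated concretely. The block-DCT quantizer below is a toy stand-in for a block-based codec (an assumption, not the codecs used in the paper); averaging compress-decompress results over all cyclic block offsets — cycle spinning — yields an operator that commutes with cyclic shifts, unlike the plain block codec.

```python
import numpy as np
from scipy.fft import dct, idct

def block_compress(v, block=8, step=0.5):
    """Toy block-based codec: quantize the DCT of each non-overlapping
    block. Like real block codecs, this operator is shift-sensitive."""
    out = np.empty_like(v, dtype=float)
    for b in range(0, len(v), block):
        c = dct(v[b:b + block], norm='ortho')
        out[b:b + block] = idct(step * np.round(c / step), norm='ortho')
    return out

def shift_invariant_compress(v, block=8, step=0.5):
    """Average compress-decompress results over all cyclic block offsets
    (cycle spinning); the averaged operator commutes with cyclic shifts."""
    acc = np.zeros(len(v))
    for s in range(block):
        acc += np.roll(block_compress(np.roll(v, s), block, step), -s)
    return acc / block
```

Applying `shift_invariant_compress` to a cyclically shifted signal returns the shifted output, which is exactly the equivariance that the shift-invariant regularizer restores.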
In this paper we extend our previous conference publication [@dar2016image] with improved algorithms and new theoretical and experimental results. In [@dar2016image] we implemented our concepts in procedures relying on the half quadratic splitting optimization technique; in contrast, here we present improved algorithms designed based on the ADMM approach. The new ADMM-based methods introduce the following benefits (with respect to using half quadratic splitting as in [@dar2016image]): significant gains in the restoration quality, a reduction in the required number of iterations, and an easier parameter setting. In addition, in this paper we provide an extensive experimental section. While in [@dar2016image] we experimentally examined only the inpainting problem, here we present new results demonstrating the practical complexity-regularized restoration approach for image deblurring. While deblurring is a challenging restoration task, we present compelling results obtained using the JPEG2000 method and the image compression profile of the HEVC standard [@RefWorks:112]. An objective comparison to other deblurring techniques showed that the proposed HEVC-based implementation provides good deblurring results. Moreover, we also extend the evaluation given in [@dar2016image] for image inpainting, where here we use the JPEG2000 and HEVC compression standards in our ADMM-based approach to restore images from a severe degradation of 80% missing pixels. Interestingly, our compression-based image inpainting approach can be perceived as the dual concept of inpainting-based compression of images and videos suggested in, e.g., [@galic2008image; @schmaltz2009beating; @andris2016proof] and discussed also in [@adam2017denoising].
Another prominent contribution of this paper is the new theoretical study of the problem of complexity-regularized restoration, considering the estimation of a cyclo-stationary Gaussian signal from a degradation procedure consisting of a linear shift-invariant operator and additive white Gaussian noise. We gradually establish a few equivalent optimization forms, emphasizing two main concepts for complexity-regularized restoration: the degraded signal should go through a simple inverse filtering procedure, and then be compressed so that the decompression components have a varying quality distribution determined by the energy distribution of the degradation filter. We explain how these ideas materialize in the practical approach we propose, thereby establishing a theoretical reasoning for feasible complexity-regularized restoration.
This paper is organized as follows. In section \[sec:Complexity-Regularized Restoration\] we overview the settings of the complexity-regularized restoration problem. In section \[sec:Proposed Methods\] we present the proposed practical methods for complexity-regularized restoration. In section \[sec:Rate-Distortion Theoretic Analysis for the Gaussian Case\] we theoretically analyze particular problem settings where the signal is a cyclo-stationary Gaussian process. In section \[sec:Experimental Results\] we provide experimental results for image deblurring and inpainting. Section \[sec:Conclusion\] concludes this paper.
---
abstract: 'The Hamiltonian Mean Field (HMF) model has a low-energy phase where $N$ particles are trapped inside a cluster. Here, we investigate some properties of the trapping/untrapping mechanism of a single particle into/outside the cluster. Since the single particle dynamics of the HMF model resembles the one of a simple pendulum, each particle can be identified as a high-energy particle (HEP) or a low-energy particle (LEP), depending on whether its energy is above or below the separatrix energy. We then define the trapping ratio as the ratio of the number of LEP to the total number of particles and the “fully-clustered” and “excited” dynamical states as having either no HEP or at least one HEP. We analytically compute the phase-space average of the trapping ratio by using the Boltzmann-Gibbs stable stationary solution of the Vlasov equation associated with the $N \to \infty$ limit of the HMF model. The same quantity, obtained numerically as a time average, is shown to be in very good agreement with the analytical calculation. Another important feature of the dynamical behavior of the system is that the dynamical state changes transitionally: the “fully-clustered” and “excited” states appear in turn. We find that the distribution of the lifetime of the “fully-clustered” state obeys a power law. This means that clusters die hard, and that the excitation of a particle from the cluster is not a Poisson process and might be controlled by some type of collective motion with long memory. Such behavior should not be specific of the HMF model and appear also in systems where [*itinerancy*]{} among different “quasi-stationary” states has been observed. It is also possible that it could mimick the behavior of transient motion in molecular clusters or some observed deterministic features of chemical reactions.'
author:
- 'Hiroko Koyama[^1]'
- 'Tetsuro Konishi[^2]'
- 'Stefano Ruffo[^3]'
title: 'Clusters die hard: Time-correlated excitation in the Hamiltonian Mean Field model'
---
Introduction
============
In systems with long-range interactions [@yellowbook] it is quite common that particle dynamics leads to the formation of clusters. This happens for instance in self-gravitating systems [@binney], where massive particles interacting with a Newtonian potential, initially put in a homogeneous state, can create patterns made of many clusters. This phenomenon can be observed in simplified models, like the one-dimensional self-gravitating systems (sheet models) [@sheetmodel]. For this model an itinerant behavior [@itinerancy] between “quasi-equilibria” and “transient” states has been observed in the long-time evolution [@TGK]. In the “quasi-equilibrium” states particles are clustered, as at equilibrium [@rybicki], but with different energy distributions. In the “transient” states one particle emitted from the cluster bears the highest energy throughout the lifetime of the state. The authors of Ref. [@TGK] also claimed that averaging over a sufficiently long time, which includes many quasi-equilibrium and transient states, should give approximately thermal equilibrium. Motion over several quasi-stationary states is observed also in other Hamiltonian systems, like globally coupled symplectic map systems [@KK], or even in realistic systems of anisotropically interacting molecules [@oomine]. This shows that thermal equilibrium is not the only possible asymptotic behavior of Hamiltonian dynamics. For such cases, approaches other than standard statistical mechanics would be needed. Coming back to one-dimensional self-gravitating systems, the generation of high-energy particles plays an important role in the dynamical evolution. However, a difficulty of the model is that the definition of a high-energy particle is ambiguous, which is an obstacle to precisely defining the “quasi-stationary” and “transient” states.
A time continuous Hamiltonian model for which particle clustering has been studied both from the statistical and the dynamical point of view is the Hamiltonian Mean Field Model (HMF) [@IK; @AR], which describes the motion of fully coupled particles on a circle with attractive/repulsive cosine potential. Recent reviews discussing this model can be found in Refs. [@hmf-review-2002; @chavanis]. This model has a second order phase transition and, in the ordered low energy phase, particles are clustered. However, when the number of particles is finite, some particles can leave the cluster and acquire a high energy. Hence, the “fully-clustered” state has a finite lifetime and an “excited” state appears where at least one particle does not belong to the cluster [@AR; @nakagawa-kaneko-2000-jpsj]. Therefore, below the critical energy, we can observe a similar itinerant behavior as for one-dimensional self-gravitating systems, between a “fully-clustered” state and an “excited” state.
In this paper we investigate and characterize the intermittent transitions between these states during the long-time evolution of the HMF model. The main advantage of studying this phenomenon for the HMF model is that the ambiguity in defining the dynamical states can be resolved. In fact, the equations of motion of each HMF particle can be represented as those of a perturbed pendulum. An ordinary simple pendulum shows two types of motion: libration and rotation. It shows libration when the phase-point is inside the separatrix, and rotation when it is outside the separatrix. We then define High-Energy Particles (HEP) of the HMF model as those particles which are outside the separatrix, and Low-Energy Particles (LEP) as those which are inside the separatrix. This allows us to define a “trapping ratio” which takes the value $1$ for the “fully-clustered” state and is strictly smaller than $1$ in the “excited” state. Contrary to an ordinary simple pendulum, the value of the separatrix energy is not constant in time and hence the trapping ratio can fluctuate in time. Here, we show that the numerically computed time-averaged trapping ratio agrees with that obtained by a statistical average performed for the Boltzmann-Gibbs stable stationary solution of the Vlasov equation associated to the HMF model [@IK; @AR]. However, we find numerically that the probability distribution of the lifetime of the “fully-clustered” state is not exponential but follows instead a power law. Therefore, although an average trapping ratio exists, there appears to be no typical trapping ratio in the probabilistic sense.
This paper is organized as follows. In Sec. \[sec:model\], we review the HMF model and define the dynamical states of the system. In Sec. \[sec:scs\] we estimate analytically the trapping ratio, using a Vlasov equation approach and compare it with the value obtained from numerical simulations. In Sec. \[sec:lifetime\] we numerically compute the probability distribution of the lifetime of the “fully-clustered” state in order to show that it obeys a power law. The final section is devoted to summary and discussion.
Model and definition of dynamical states {#sec:model}
========================================
In this section we introduce the HMF model and define the dynamical states of the system. The Hamiltonian of the HMF model [@AR] is $$\label{eq:hamiltonian}
H=K+V=\sum_{i=1}^{N}\frac{p_i^2}{2}+\frac{\varepsilon}{2N}\sum_{i,j=1}^N[1-\cos(\theta_i-\theta_j)].$$ The model describes a system of $N$ particles moving on a circle, each characterized by an angle $\theta_i$ and possessing momentum $p_i$. The interaction force between each pair of particles is attractive or repulsive, for $\varepsilon>0$ or $\varepsilon<0$, respectively. In the following we will consider only the attractive case, with $\varepsilon=1$. In this case, the model displays a second order phase transition at the energy density $U=H/N=3/4$ from a “clustered” phase at low energy (where particles are clumped) to a “gas” phase at high energy (where particles are homogeneously distributed on the circle). The HMF model is a globally coupled pendulum system, and the equations of motion can be expressed as those of a perturbed pendulum, $$\label{pendulum}
\ddot{\theta}_i=-M\sin(\theta_i-\phi),$$ where $M$ (the order parameter of the phase transition) and the phase $\phi$ are defined as $$\begin{aligned}
\label{eq:m}
M&\equiv&\sqrt{M_x^2+M_y^2},\nonumber\\
\tan\phi&\equiv&\frac{M_y}{M_x},\nonumber\\
(M_x,M_y)&\equiv&
\frac{1}{N}\left(\sum_{j=1}^{N}\cos\theta_j,\sum_{j=1}^{N}\sin\theta_j\right).\end{aligned}$$ The single particle energy is $$\label{eq:ei}
e_i=\frac{p_i^2}{2}+[1-M\cos(\theta_i-\phi)].$$ Then, the separatrix energy $E_{sep}$ is $$\label{eq:sep}
E_{sep}=1+M,$$ and the resonance width is $2\sqrt{M}$.
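These definitions translate directly into a short numerical check. The sketch below is a minimal symplectic (leapfrog) integrator for the HMF equations of motion with $\varepsilon = 1$; the particle number, time step, and fully-clustered initial condition are illustrative choices, not those of the paper's simulations.

```python
import numpy as np

def mean_field(theta):
    """Order-parameter components (M_x, M_y) of the HMF model."""
    return np.mean(np.cos(theta)), np.mean(np.sin(theta))

def force(theta):
    """Per-particle force -M sin(theta - phi) = -M_x sin(theta) + M_y cos(theta)."""
    Mx, My = mean_field(theta)
    return -Mx * np.sin(theta) + My * np.cos(theta)

def leapfrog(theta, p, dt, n_steps):
    """Symplectic integration of the HMF equations of motion."""
    for _ in range(n_steps):
        p_half = p + 0.5 * dt * force(theta)
        theta = theta + dt * p_half
        p = p_half + 0.5 * dt * force(theta)
    return theta, p

def trapping_ratio(theta, p):
    """Fraction of LEPs: particles with single-particle energy
    e_i = p_i^2/2 + 1 - M cos(theta_i - phi) below E_sep = 1 + M."""
    Mx, My = mean_field(theta)
    M, phi = np.hypot(Mx, My), np.arctan2(My, Mx)
    e = 0.5 * p**2 + 1.0 - M * np.cos(theta - phi)
    return np.mean(e < 1.0 + M)

def energy_density(theta, p):
    """U = H/N = <p^2>/2 + (1 - M^2)/2."""
    Mx, My = mean_field(theta)
    return 0.5 * np.mean(p**2) + 0.5 * (1.0 - (Mx**2 + My**2))
```

Starting from a fully-clustered state (all angles equal, small momenta) the trapping ratio is $1$, and the leapfrog scheme keeps the energy density essentially constant, so the ratio can be monitored along a long trajectory.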
An ordinary simple pendulum shows two types of motion: libration and rotation. It shows libration when the phase point is inside the
---
abstract: 'With growing consumer adoption of online grocery shopping through platforms such as Amazon Fresh, Instacart, and Walmart Grocery, there is a pressing business need to provide relevant recommendations throughout the customer journey. In this paper, we introduce a production within-basket grocery recommendation system, RTT2Vec, which generates real-time personalized product recommendations to supplement the user’s current grocery basket. We conduct extensive offline evaluation of our system and demonstrate a 9.4% uplift in prediction metrics over baseline state-of-the-art within-basket recommendation models. We also propose an approximate inference technique 11.6x faster than exact inference approaches. In production, our system has resulted in an increase in average basket size, improved product discovery, and enabled faster user check-out.'
address: 'Walmart Labs, Sunnyvale, California, USA'
bibliography:
- 'strings.bib'
- 'refs.bib'
title: 'A Large-Scale Deep Architecture for Personalized Grocery Basket Recommendations'
---
Recommender System, Personalization, Representation Learning
Introduction {#sec:intro}
============
A critical component of a modern day e-commerce platform is a user-personalized system for serving recommendations. While there has been extensive academic research for recommendations in the general e-commerce setting, user personalization in the online groceries domain is still nascent. An important characteristic of online grocery shopping is that it is highly personal. Customers show both regularity in purchase types and purchase frequency, as well as exhibit specific preferences for product characteristics, such as brand affinity for milk or price sensitivity for wine.
One important type of grocery recommender system is a within-basket recommender, which suggests grocery items that go well with the items in a customer’s shopping basket, such as milk with cereals or pasta with pasta sauce. In practice, customers often purchase groceries with a particular intent, such as for preparing a recipe or stocking up for daily necessities. Therefore, a within-basket recommendation engine needs to consider both item-to-item compatibility within a shopping basket as well as user-to-item affinity, to generate efficient product recommendations that are truly user-personalized.
In this paper, we introduce Real-Time Triple2Vec, **RTT2Vec**, a real-time inference architecture for serving within-basket recommendations. Specifically, we develop a representation learning model for personalized within-basket recommendation task, and then convert this model into an approximate nearest neighbour (ANN) retrieval task for real-time inference. Further, we also discuss some of the scalability trade-offs and engineering challenges when designing a large-scale, deep personalization system for a low-latency production application.
For evaluation, we conducted exhaustive offline experiments on two grocery shopping datasets and observed that our system has superior performance when compared to current state-of-the-art models. Our main contributions can be summarized as follows:
- We introduce an approximate inference method which transforms the inference phase of a within-basket recommendation system into an Approximate Nearest Neighbour (ANN) embedding retrieval.
- We describe a production real-time recommendation system which serves millions of online customers, while maintaining high throughput, low latency, and low memory requirements.
Related Work {#sec:relatedwork}
============
Collaborative Filtering (CF) based techniques have been widely adopted in academia and industry for both user-item [@hu2008collaborative] and item-item recommendations [@linden2003amazon]. Recently, this approach has been extended to the within-basket recommendation task. The factorization-based models, **BFM** and **CBFM** [@le2017basket], consider multiple associations between the user, the target item, and the current user-basket to generate within-basket recommendations. Even though these approaches directly optimize for task-specific metrics, they fail to capture non-linear user-item and item-item interactions.
Due to the success of using latent representation of words (such as the **skip-gram** technique [@mikolov2013distributed; @mikolov2013efficient]) in various NLP applications, representation learning models have been developed across other domains. The **word2vec** inspired **CoFactor** [@liang2016factorization] model utilizes both Matrix Factorization (MF) and item embeddings jointly to generate recommendations. **Item2vec** [@barkan2016item2vec] was developed to generate item embeddings on itemsets. Using these, item-item associations can be modeled within the same itemset (basket). **Prod2vec** and **bagged-prod2vec** [@grbovic2015commerce] utilize the user purchase history to generate product ads recommendations by learning distributed product representations. Another representation learning framework, **metapath2vec** [@dong2017metapath2vec], uses meta-path-based random walks to generate node embeddings for heterogenous networks, and can be adapted to learn latent representations on a user-item interaction graph. By leveraging both basket and browsing data jointly, **BB2vec** [@trofimov2018inferring] learns dual vector representations for complementary recommendations. Even though the above skip-gram based approaches are used in wide areas of applications such as digital advertising and recommendation systems, they fail to jointly optimize for user-item and item-item compatibility.
There has also been significant research to infer functionally complementary relations for item-item recommendation tasks. These models focus on learning compatibility [@veit2015learning], complementarity [@zhang2018quality; @kang2019complete; @DBLP:journals/corr/abs-1904-12574], and complementary-similarity [@mcauley2015inferring; @mane2019complementary] relations across items and categories from co-occurrence of items in user interactions.
Method {#sec:method}
======
In this section, we explain the modeling and engineering aspects of a production within-basket recommendations system. First, we briefly introduce the state-of-the-art representation learning method for within-basket recommendation tasks, triple2vec. Then, we introduce our Real-Time Triple2Vec (RTT2Vec) system inference formulation, production algorithm, and system architecture.
**Problem Definition**: Consider $m$ users $\mathfrak{U}$ = $\{u_1, u_2, \ldots, u_m\}$ and $n$ items $\mathfrak{I}$ = $\{i_1, i_2, \ldots, i_n\}$ in the dataset. Let ${\mathfrak{B}}_u$ denote a basket corresponding to user $u \in \mathfrak{U}$, where a basket refers to a set of items $\{i^{'} | i^{'} \in \mathfrak{I}\}$. The goal of the within-basket recommendation task is, given ($u$, ${\mathfrak{B}}_u$), to generate top-$k$ recommendations $\{i^{*} | i^{*} \in \mathfrak{I}\setminus {\mathfrak{B}}_u\}$, where $i^*$ is complementary to the items in ${\mathfrak{B}}_u$ and compatible with user $u$.
Triple2vec model {#ssec:T2V}
----------------
We utilize the triple2vec [@wan2018representing] model for generating personalized recommendations. The model employs (user $u$, item $i$, item $j$) triples, denoting two items ($i$, $j$) bought by the user $u$ in the same basket, and learns representation $h_u$ for the user $u$ and a dual set of embeddings ($p_i, q_j$) for the item pair ($i$, $j$).
$$\label{cohesion_score}
\begin{split}
s_{i,j,u} = p_i^T q_j + p_i^T h_u + q_j^T h_u
\end{split}$$
The cohesion score for a triple ($u,i,j$) is defined by Eq. \[cohesion\_score\]. It captures both user-item compatibility ($p_i^T h_u $, $q_j^T h_u$) as well as item-item complementarity ($p_i^T q_j$). The embeddings are learned by maximizing the co-occurrence log-likelihood of each triple as:
$$\label{likelihood}
\begin{split}
L = \sum_{\forall (i,j,u)}\log{P(i|j,u)}+\log{P(j|i,u)}+\log{P(u|i,j)}
\end{split}$$
where $P(i|j,u)=\frac{\exp(s_{i,j,u})}{\sum_{i^{'}} \exp(s_{i^{'},j,u})}$. Similarly, $P(j|i,u)$ and $P(u|i,j)$ are obtained by interchanging the roles of $i$ and $j$, and of $i$ and $u$, respectively.
In accordance with most skip-gram models with negative sampling, the softmax function in Eq. \[likelihood\] is approximated by the Noise Contrastive Estimation (NCE) loss function, using TensorFlow [@abadi2016tensorflow]. A log-uniform (Zipf) distribution is used to sample negative examples.
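For serving, observe that in the cohesion score of Eq. \[cohesion\_score\] only the terms $p_i^T q_j + p_i^T h_u$ depend on the candidate item $i$; averaging them over the basket collapses scoring into a single inner product with a query vector, which is what lets the retrieval be handed to an ANN index. The sketch below is a brute-force illustration of this reduction with hypothetical embedding matrices, not the production RTT2Vec serving code.

```python
import numpy as np

def within_basket_topk(P, Q, h_u, basket_ids, k=5):
    """Rank candidate items for user u given the current basket.
    The candidate-dependent part of the cohesion score, p_i^T (q_j + h_u),
    averaged over basket items j, equals p_i^T (h_u + mean_j q_j), so top-k
    scoring becomes a maximum inner-product search over the p-embeddings
    (servable with an ANN index in place of the dense product below)."""
    query = h_u + Q[basket_ids].mean(axis=0)   # single query vector
    scores = P @ query                         # inner-product scores for all items
    scores[basket_ids] = -np.inf               # exclude items already in the basket
    return np.argsort(-scores)[:k]
```

Because the reduction is exact for ranking, the top-$k$ list agrees with directly averaging the candidate-dependent cohesion terms over the basket.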
RTT2Vec: Real-Time Model Inference {#ssec:RTI}
---
abstract: 'This paper provides a sample of a LaTeX document which conforms, somewhat loosely, to the formatting guidelines for ACM SIG Proceedings.[^1]'
author:
- Ben Trovato
- 'G.K.M. Tobin'
- 'Lars Th[ø]{}rv[ä]{}ld'
- 'Lawrence P. Leipuner'
- Sean Fogarty
- Charles Palmer
- John Smith
- 'Julius P. Kumquat'
bibliography:
- 'sample-bibliography.bib'
subtitle: Extended Abstract
title: SIG Proceedings Paper in LaTeX Format
---
<ccs2012> <concept> <concept\_id>10010520.10010553.10010562</concept\_id> <concept\_desc>Computer systems organization Embedded systems</concept\_desc> <concept\_significance>500</concept\_significance> </concept> <concept> <concept\_id>10010520.10010575.10010755</concept\_id> <concept\_desc>Computer systems organization Redundancy</concept\_desc> <concept\_significance>300</concept\_significance> </concept> <concept> <concept\_id>10010520.10010553.10010554</concept\_id> <concept\_desc>Computer systems organization Robotics</concept\_desc> <concept\_significance>100</concept\_significance> </concept> <concept> <concept\_id>10003033.10003083.10003095</concept\_id> <concept\_desc>Networks Network reliability</concept\_desc> <concept\_significance>100</concept\_significance> </concept> </ccs2012>
[^1]: This is an abstract footnote
---
abstract: 'A Hermitian symplectic manifold is a complex manifold endowed with a symplectic form $\omega$ for which the bilinear form $\omega(I\cdot,\cdot)$ is positive definite. In this work we prove the ${dd^c}$-lemma for 1- and (1,1)-forms on compact Hermitian symplectic manifolds of dimension 3. This shows that the Albanese map for such manifolds is well-defined, and allows one to prove Kählerness when the dimension of the Albanese image of a manifold is maximal.'
author:
- Grigory Papayanov
date: 2015
title: Cohomological properties of Hermitian symplectic threefolds
---
[**Cohomological properties of Hermitian\
symplectic threefolds** ]{}\
Grigory Papayanov\
Introduction {#introduction .unnumbered}
============
A Hermitian symplectic manifold is a complex manifold $(M,I)$ together with a symplectic form $\omega$, for which the bilinear form $\omega(I\cdot,\cdot)$ is positive definite (that is, $\omega(IX,X)>0$ for any vector field $X$ on $M$). Any Kähler manifold is obviously Hermitian symplectic, and it is an open problem whether there exist other examples of Hermitian symplectic manifolds. Hermitian symplectic manifolds were studied by Streets and Tian in [@Streets_Tian:pluriclosed] and [@Streets_Tian:flow]; they constructed an appropriate Ricci flow on Hermitian symplectic manifolds and studied its convergence properties. Since then, many people have searched for non-trivial examples of Hermitian symplectic manifolds.
The search for non-Kähler examples of Hermitian symplectic manifolds was vigorous, but ultimately unsuccessful. All common sources of examples of non-Kähler manifolds were tapped at some point.
For complex dimension 2, Hermitian symplectic structures are all Kähler. This was shown by Streets and Tian in [@Streets_Tian:pluriclosed]. Another proof could be obtained from the Lamari ([@Lamari]) result about existence of positive, exact $(1,1)$-current on any non-Kähler complex surface.
In [@Peternell], it was shown that any non-Kähler Moishezon manifold admits an exact, positive $(n-1,n-1)$-current; therefore, Moishezon manifolds which are Hermitian symplectic are also Kähler.
In [@Enrietti_Fino_Vezzoni] it was shown that no complex nilmanifold can admit a Hermitian symplectic structure, and in [@Fino_Kasuya_Vezzoni] this result was extended to all complex solvmanifolds and Oeljeklaus-Toma manifolds.
The existence of a Kähler metric imposes some restrictions on the cohomology of a manifold: for example, the Frölicher spectral sequence of a Kähler manifold always degenerates at the first page. Results of Cavalcanti ([@Cavalcanti:SKT]) show that the Frölicher spectral sequence of a Hermitian symplectic manifold also degenerates at the first page.
In this work we define some Laplacian-like operators, the kernels of which are conjecturally isomorphic to the spaces of cohomology, and, with the help of these operators, prove the ${dd^c}$-lemma for (1,1)-forms on Hermitian symplectic threefolds. An argument of Gauduchon ([@Gauduchon]) shows that the ${dd^c}$-lemma for (1,1)-forms is equivalent to the equality $b^1=2h^{0,1}$. It follows that the Albanese map is well-defined and, if its image is not a point, the generic fiber of ${\operatorname{Alb}}$ is Kähler. The question of the existence of special (e.g. Kähler or balanced) metrics on total spaces of maps with Kähler base and fibers is studied, for example, in [@HL] and [@Michelsohn]. Using the Albanese map, we are able to prove that if a Hermitian symplectic threefold $M$ has ${\operatorname{dim}}{\operatorname{Alb}}(M)=3$, then it admits a Kähler metric, and if ${\operatorname{dim}}{\operatorname{Alb}}(M)=1$, then $M$ is balanced. If the $dd^c$-lemma held for $(2,2)$-forms, then by [@HL] ${\operatorname{dim}}{\operatorname{Alb}}(M)=2$ would imply that $M$ is Kähler; unfortunately, we have not yet proven the ${dd^c}$-lemma in full generality.
[**Acknowledgements.**]{} The author would like to thank M.Verbitsky for many extremely helpful discussions. Work on sections 1–3 was supported by RSCF, grant number 14-21-00053, within the Laboratory of Algebraic Geometry. Work on section 4 was supported by RFBR 15-01-09242.
Preliminaries
=============
[ ]{}Let $M$ be a smooth manifold of dimension 2n, $I:TM {{\:\longrightarrow\:}}TM$ an integrable complex structure, ${\mathcal{A}}^{p,q}$ the corresponding Hodge decomposition on the bundle of differential forms: ${\mathcal{A}}^k\otimes {{\Bbb C}}=\bigoplus_{k=p+q}{\mathcal{A}}^{p,q}$, ${\omega^{1,1}}$ a form in ${\mathcal{A}}^{1,1}$. We will say that ${\omega^{1,1}}$ is [*Hermitian*]{} if the tensor $h(\cdot,\cdot):={\omega^{1,1}}(\cdot,I\cdot)$ is a Riemannian metric on $M$, and we will say that ${\omega^{1,1}}$ is [*Hermitian symplectic*]{} if there exists a symplectic form $\omega$ such that ${\omega^{1,1}}$ is the (1,1)-component in the Hodge decomposition of $\omega$. If $M$ is endowed with such $I$ and ${\omega^{1,1}}$, we will call it a Hermitian symplectic manifold.
For a Hermitian symplectic manifold $(M,I,\omega)$, let $d: {\mathcal{A}}^\bullet{{\:\longrightarrow\:}}{\mathcal{A}}^{\bullet+1}$ be the usual de Rham differential acting on forms, $d^c:=IdI^{-1}: {\mathcal{A}}^{\bullet}{{\:\longrightarrow\:}}{\mathcal{A}}^{\bullet+1}$ the twisted differential, $L: A^\bullet{{\:\longrightarrow\:}}A^{\bullet+2}$ the operator of (left) multiplication by $\omega$, $L(\eta):= \omega\wedge \eta$, $\Lambda: {\mathcal{A}}^{\bullet}{{\:\longrightarrow\:}}{\mathcal{A}}^{\bullet-2}$ the adjoint operator ([@Yau_Tseng]). In the local Darboux coordinates $p_i, q_i$ where $\omega=\sum dp_i\wedge dq_i$, operator $\Lambda$ looks like $\sum i_{\!\frac{{\partial}}{{\partial}p_i}}i_{\!\frac{{\partial}}{{\partial}q_i}}$. We will denote by $L^{1,1}$ the operator of multiplication by the hermitian form ${\omega^{1,1}}$, and by $\Lambda^{1,1}$ the adjoint operator to $L^{1,1}$.
[ ]{}\[SKT\] The form ${\omega^{1,1}}$ is the SKT form, that is, ${\partial}{\overline}{\partial}{\omega^{1,1}}=0$.
[**Proof:**]{} Let $\omega={\omega^{1,1}}+\alpha$, where $\alpha$ lies in ${\mathcal{A}}^{2,0}\oplus{\mathcal{A}}^{0,2}$. Since $d\omega=0$, ${\partial}{\omega^{1,1}}=-{\overline}{\partial}\alpha$ and ${\partial}{\overline}{\partial}{\omega^{1,1}}={\overline}{\partial}^2\alpha=0$.
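For completeness, the bidegree bookkeeping behind this proof can be spelled out. Writing $\alpha=\alpha^{2,0}+\alpha^{0,2}$, the equation $d\omega=0$ splits into components:

```latex
% d\omega = 0 for \omega = \alpha^{2,0} + \omega^{1,1} + \alpha^{0,2},
% decomposed by bidegree:
\begin{aligned}
(3,0):\quad & \partial\alpha^{2,0} = 0,\\
(2,1):\quad & \partial\omega^{1,1} + \bar\partial\alpha^{2,0} = 0,\\
(1,2):\quad & \bar\partial\omega^{1,1} + \partial\alpha^{0,2} = 0,\\
(0,3):\quad & \bar\partial\alpha^{0,2} = 0.
\end{aligned}
```

Applying $\bar\partial$ to the $(2,1)$-component gives $\bar\partial\partial\omega^{1,1}=-\bar\partial^2\alpha^{2,0}=0$, which is exactly the SKT condition ${\partial}{\overline}{\partial}{\omega^{1,1}}=0$.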
[ ]{}Let $\alpha$ be a differential form on $M$. We will say that $\alpha$ is [*primitive with respect to $\omega$*]{} if $\Lambda\alpha=0$, and that $\alpha$ is primitive with respect to ${\omega^{1,1}}$ if $\Lambda^{1,1}\alpha=0$.
[ ]{}(The Weil identities). Let $B^{p,q}$ be a $(p,q)$-form, primitive with respect to ${\omega^{1,1}}$, with $p+q=r$. Then the following formula holds ([@Voisin Proposition 6.29]):
$$*B^{p,q}=(-1)^{\frac{r(r+1)}{2}}(\sqrt{-1})^{p-q}\frac{1}{(n-r)!}({\omega^{1,1}})^{n-r}\wedge B^{p,q}.$$
[ ]{}An operator $\Delta$ defined as double graded commutator, $\Delta:=\{d,\{d^c,\Lambda^{1,1}\}\}$ is called [*the Hermitian symplectic*]{} Laplacian.
[ ]{}$\Delta$ is not the Laplacian associated to the Riemannian metric $h$. Nevertheless, they differ by a first-order differential operator (see e.g. [@Liu_Yang] for the exact formula); therefore they have equal symbols, and so $\Delta$ is elliptic.
Recall the graded Jacobi identity for the graded commutator: $$\{a,\{b,c\}\}=\{\{a,b\},c\}+(-1)^{deg(a)deg(b)}\{b,\{a,c\}\}.$$
[ ]{}\[commutators\] $\Delta=\{d^c,\{d,\Lambda^{1,
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We prove that almost every finite collection of matrices in $GL_d( \mathbb{R} )$ and $SL_d({{\mathbb R}})$ with positive entries is Diophantine. Next we restrict ourselves to the case $d=2$. A finite set of $SL_2({{\mathbb R}})$ matrices induces a (generalized) iterated function system on the projective line $\RP^1$. Assuming uniform hyperbolicity and the Diophantine property, we show that the dimension of the attractor equals the minimum of 1 and the critical exponent.'
address:
- 'Yuki Takahashi, Department of Mathematics, Bar-Ilan University, Ramat Gan, Israel'
- 'Boris Solomyak, Department of Mathematics, Bar-Ilan University, Ramat Gan, Israel'
author:
- BORIS SOLOMYAK
- YUKI TAKAHASHI
bibliography:
- 'bib.bib'
title: 'Diophantine property of matrices and attractors of projective iterated function systems in $\RP^1$'
---
Introduction and main results
=============================
Diophantine property of matrices
--------------------------------
Recently there has been interest in Diophantine properties in non-Abelian groups. The following is a variant of [@GJS1999 Definition 4.2].
Let $\Ak = \{A_i\}_{i\in \Lam}$ be a finite subset of a topological group $G$ equipped with a metric $\varrho$. Write $A_\bi = A_{i_1}\cdots A_{i_n}$ for $\bi = i_1\ldots i_n$. We say that the set $\Ak$ is [*Diophantine*]{} if there exists a constant $c>0$ such that for every $n\in \N$, we have $$\label{Dioph1}
\bi,\bj\in \Lam^n,\ A_\bi\ne A_\bj \implies \varrho(A_\bi,A_\bj) > c^n.$$ The set $\Ak$ is [*strongly Diophantine*]{} if there exists $c>0$ such that for all $n\in \N$, $$\label{Dioph2}
\bi,\bj\in \Lam^n,\ \bi\ne \bj \implies \varrho(A_\bi,A_\bj) > c^n.$$
Clearly, $\Ak$ is strongly Diophantine if and only if it is Diophantine and generates a free semigroup. Gamburd, Jacobson, and Sarnak [@GJS1999 Definition 4.2] gave a definition of a Diophantine set, which is equivalent to ours, except that they always consider symmetric sets (that is, $g\in \Ak\ \Rightarrow\ g^{-1}\in \Ak$). Diophantine-type questions in groups arise in connection with spectral gap estimates, see [@GJS1999; @Bourgain2014].
See [@ABRS2015; @ABRS2018] for a recent discussion of Diophantine properties in groups and related problems. In [@ABRS2018] a Lie group $G$ is called Diophantine, if almost every $k$ elements of $G$, chosen independently at random according to the Haar measure, together with their inverses, form a Diophantine set in $G$. Gamburd et al. [@GJS1999] conjectured that $SU_2({{\mathbb R}})$ is Diophantine. More generally, it is conjectured that semi-simple Lie groups are Diophantine. Kaloshin and Rodnianski [@KR2001] proved a weaker Diophantine-type property: for a.e. $(A,B) \in SO_3({{\mathbb R}})\times SO_3({{\mathbb R}})$, there exists $c>0$ such that for any $n{\geqslant}1$ and any two distinct words $W_1, W_2$ over the set $\Ak=\{A,B,A^{-1},B^{-1}\}$ of length $n$, $$\|W_1-W_2\|{\geqslant}c^{n^2}.$$ It is mentioned in [@KR2001] that their method is general, and applies to $SU_2({{\mathbb R}})$ as well, and also to $m$-tuples of matrices for any $m{\geqslant}2$.
Next we state our first result. For any collection of linearly independent vectors $v_1,\ldots,v_{d}$ in ${{\mathbb R}}^{d}$ consider the simplicial cone $$\label{cone}
\Sig=\Sig_{v_1,\ldots,v_{d}} = \{x_1 v_1 + \cdots + x_{d} v_{d}:\ x_1,\ldots,x_{d}{\geqslant}0\}.$$ If a matrix $A\in GL_{d}({{\mathbb R}})$ satisfies $$A({\Sig}{\smallsetminus}\{0\}) \subset \Sig^\circ,$$ we say that $\Sig$ is [*strictly invariant*]{} for $A$. Given a cone $\Sig=\Sig_{v_1,\ldots,v_{d}}$, denote by $\Xk_{\Sig,m}$ (respectively, $\Yk_{\Sig,m}$) the set of all $GL_{d}({{\mathbb R}})$ (respectively, $SL_{d}({{\mathbb R}})$) $m$-tuples of matrices for which $\Sig$ is strictly invariant. We consider $\Xk_{\Sig,m}$ as an open subset of ${{\mathbb R}}^{d^2m}$ and $\Yk_{\Sig,m}$ as a $(d^2-1)m$-dimensional manifold.
\[main\_thm\] Let $\Sig=\Sig_{v_1,\ldots,v_{d}}$ be a simplicial cone in ${{\mathbb R}}^{d}$ and $m{\geqslant}2$.
[(i)]{} For a.e. $\mathcal{A} \in \mathcal{X}_{\Sig, m}$, the $m$-tuple $\mathcal{A}$ is strongly Diophantine. In particular, a.e. $m$-tuple of positive $GL_{d}({{\mathbb R}})$ matrices is strongly Diophantine.
[(ii)]{} For a.e. $\mathcal{A} \in \mathcal{Y}_{\Sig, m}$, the $m$-tuple $\mathcal{A}$ is strongly Diophantine. In particular, a.e. $m$-tuple of positive $SL_{d}({{\mathbb R}})$ matrices is strongly Diophantine.
*1. Unfortunately, our results do not cover any example of a symmetric set, since the strict invariance property cannot hold for a matrix $A$ and $A^{-1}$ simultaneously.*
2\. Every $m$-tuple of matrices with algebraic entries is Diophantine (but not necessarily strongly Diophantine), see, e.g., [@GJS1999 Prop.4.3].
3\. It is well-known that Diophantine numbers in ${{\mathbb R}}$ form a set of full measure, which is, however, meagre in Baire category sense (its complement contains a dense $G_\delta$ set). Baire category genericity of non-Diophantine $m$-tuples in $SU_2({{\mathbb R}})$ has been pointed out in [@GJS1999]. In $G=SL_d({{\mathbb R}})$ the situation is different, since there are, for example, open sets of $m$-tuples in $G\times G$ which satisfy (\[Dioph2\]). For instance, if ${{\mathbb R}}^d_+$ is mapped by $A,B$ into closed cones that are disjoint, except at the origin, then (\[Dioph2\]) holds for $\{A,B\}$. On the other hand, there are open sets in $(SL_d({{\mathbb R}}))^m$ in which non-Diophantine pairs are dense. For instance, the set of elliptic matrices in $SL_2({{\mathbb R}})$ is open, and a standard argument shows that a generic $m$-tuple that contains an elliptic matrix is not Diophantine.
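The disjoint-cone mechanism in the last remark can be illustrated numerically. The following sketch (illustrative code, not from the paper) uses the positive matrices $A=ST$ and $B=TS$, where $S=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ and $T=\begin{pmatrix}1&0\\1&1\end{pmatrix}$ generate a free monoid; distinct words in $A,B$ therefore give distinct integer matrices, so any two length-$n$ products are separated by at least 1 and (\[Dioph2\]) holds with any $c<1$.

```python
from itertools import product

def matmul(X, Y):
    # 2x2 integer matrix product
    return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
            (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

def word_product(word, gens):
    P = ((1, 0), (0, 1))
    for i in word:
        P = matmul(P, gens[i])
    return P

def min_separation(gens, n):
    # products of all length-n words, and the smallest entrywise distance
    # between any two distinct products
    prods = [word_product(w, gens) for w in product(range(len(gens)), repeat=n)]
    distinct = len(set(prods))
    dmin = min(max(abs(a - b) for ra, rb in zip(P, Q) for a, b in zip(ra, rb))
               for i, P in enumerate(prods) for Q in prods[i+1:] if P != Q)
    return distinct, dmin

# A = ST, B = TS: positive entries, and the semigroup they generate is free
A = ((2, 1), (1, 1))
B = ((1, 1), (1, 2))

distinct, dmin = min_separation((A, B), 6)
print(distinct, dmin)  # 64 distinct products; integer entries force dmin >= 1
```

Running with larger $n$ shows the separation growing, consistent with the exponential lower bound in the definition.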
The scheme of the proof of Theorem \[main\_thm\] is as follows. We consider the induced action of the matrices on the projective space, and show that, given a non-degenerate family of $m$-tuples strictly preserving an open set, depending on a parameter real-analytically, for all parameters outside an exceptional set of zero Hausdorff dimension, the induced iterated function system (IFS) satisfies a version of the “exponential separation condition”. This property implies the strong Diophantine condition for the matrices. We then locally foliate the space of $m$-tuples of matrices and apply Fubini’s Theorem. The result on the zero-Hausdorff dimensional set of exceptions uses the notion of [*order-$k$ transversality*]{}, which is a modified version of that which appeared in the work of Hochman [@Hochman2014; @Hochman2015]. The strict open set preservation property is needed to ensure that the induced IFS is contracting (uniformly hyperbolic).
Projective IFS and linear cocycles
----------------------------------
Let $\Ak = \{A_i\}_{i\in \Lam}$ be a finite collection of $SL_d({{\mathbb R}})$ matrices. The linear action of $SL_d({{\mathbb R}})$ on ${{\mathbb R}}^{d}$ induces an action on the projective space $\RP^{d-1}$, and thus $\Ak$ defines an IFS $\Phi_\Ak=\{\varphi_A\}_{A\in \Ak}$ on $\RP^{d-1}$, called a [*(real) projective IFS*]{}. Such IFS
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Public observatory projects are playing an increasingly important role in science popularization, education, and scientific research, and many amateur astronomers have also begun to build their own observatories in remote areas. Owing to the limited technical conditions and construction funds available to amateur astronomers, their systems often break down, so a stable remote unattended operation system becomes critical. Hardware connection and control is the basic and core part of observatory design. Here we propose a conception of engineering hardware design for public observatory operation, serving as a bridge between observatory equipment and observation software. It can not only satisfy the requirements of multiple observation modes, but also save cost.'
author:
- 'Jun Han,$^1$ Dongwei Fan,$^1$ Chenzhou Cui,$^1$ Chuanzhong Wang,$^2$ Shanshan Li,$^1$ Linying Mi,$^1$ Zheng Li,$^1$ Yunfei Xu,$^1$ Boliang He,$^1$ Changhua Li,$^1$ Yihan Tao,$^1$ and Sisi Yang$^1$'
bibliography:
- 'P1-4.bib'
title: A Conception of Engineering Design for Remote Unattended Operation Public Observatory
---
Introduction
============
Since the first microprocessor emerged, people have tried to make telescope operation smarter and have made numerous engineering attempts. By the mid-1960s, some systems had begun operating stably; the 8” reflector telescope at the University of Wisconsin is one of the earliest examples (@1992ASPC...34....3C). It could carry out astronomical photometry according to a scheduled list, hence the name Automated Scheduled Telescope, and it marks the beginning of the robotic telescope. The Remotely Operated Telescope was then proposed to meet observation needs: it could be controlled by remote users and observe astronomical objects automatically. Today observatories are highly integrated, more complex, and more advanced. A telescope observatory not only needs automatic unattended operation, but must also be reachable remotely. It can run without human help and can adjust itself according to weather, equipment status, and so on. This is the Robotic Autonomous Observatory, and some observatories have achieved this goal.
Robotic operation has two main advantages: autonomy and remote access. Autonomy means better use of telescope time through the telescope’s real-time follow-up, manpower saved in unattended operation mode, fewer operation mistakes, higher observation efficiency, and a focus on science rather than on operation logic. Remote access means that users can be located anywhere to save time, can share the same telescope at different times to save cost, and can build observatories at high altitude and in distant locations to obtain the best observing environment. Robotic observatories have had a significant effect in student education, for example the Bradford Robotic telescope, the original Micro-Observatory telescopes, and so on (e.g. @2017AstRv..13...28G, @1996ASPC..101..380D). These learning and observation experiences can motivate students to continue to higher-level courses and scientific careers. Some high school students have even continued into deeper research and produced papers (@2011PASA...28...83F). As a result of the robotic telescope’s huge advantages, more amateur astronomers have begun to build their own observatories and have produced many important scientific outputs, for example Xingming Observatory, built in 2007 by an amateur astronomer. It is located in Xinjiang, China, and releases its figures through the Popular Supernova Project.[^1] This project platform is managed by the Chinese Virtual Observatory, and anyone can participate. To date, 17 supernova and nova candidates have been reported, and 12 of them have been confirmed by optical spectra. Public observatories and amateur astronomers, whether in education or scientific research, have become an important force in astronomy.
With economic progress, light pollution has become worse and worse,[^2] so building observatories in remote areas has become inevitable. There are solutions using the modern robotic observatory mode, but they are usually too expensive and complex, and not necessary for amateur astronomers, who usually assemble their own systems themselves, especially the hardware connections and related control. Owing to the limited technical conditions and construction funds of amateur astronomers, their systems usually consist of multiple sub-systems made by different people. This kind of combination is rough and poorly compatible, and therefore breaks down often. A stable remote unattended operation system suitable for amateur astronomers’ observatories thus becomes critical. In fact, the remote unattended operation observatory is not a new conception; it is essentially the robotic telescope described above, with small differences, mainly addressing the requirements of amateur astronomers’ observatories. Here we propose a conception of engineering design for public observatories. It can not only satisfy observation requirements, but also save cost.
The Conception of Engineering Design
====================================
Nowadays a public observatory is not a single telescope but an integrated system with a telescope, various sensors, and so on. When designing a common observatory hardware system, there are two key problems: the connection to, and the intelligent control of, the equipment. We propose a hardware system as a bridge between observatory equipment and the software system, and some criteria should be followed.
- Connect and control every piece of equipment easily, without any dependence on the operating system or software platform.
- Multiple connection modes to serve various users.
- The hardware system itself should be smart enough to open or close resources according to system status.
Based on these, we design a Remote Observatory System (ROS for short) to meet the criteria above and give it the ability to connect to and control the equipment in the observatory. The framework for the public observatory is shown in Figure \[P1-4\_f1\]. We define this system as a closed-loop system, with the capability to evaluate its own operation through redundant inputs to detect errors. The system is made of multiple single chips, tiny internet chips, and logic circuits, and can be reprogrammed. The ROS system consists of three control modules and five functional modules.
As a result of light pollution and actual demand, an observatory could be located anywhere. Different control modules are designed for different users; they are mainly used to transfer and analyse control commands. The three control modules are listed next.
- Local Control Module - This module is the most basic part. Users can operate all the resources in the observatory, and it has the highest control priority.
- Network Control Module - It has the lowest control priority and is mainly used by remote users. The network protocol is independent of platform, so it can be accessed from any network device, for example a computer, a pad, and so on.
- Phone Control Module - This module is a special part, most useful for emergency control, for example during a network interruption. It connects through the phone tower, and control is by message or voice.
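The priority ordering among these modules can be made concrete with a small arbitration sketch. All names and the exact priority values below are illustrative assumptions, not from the paper; the paper only fixes Local as the highest priority and Network as the lowest, so Phone is assumed to sit in between.

```python
# Illustrative sketch of control-priority arbitration among the three
# control modules. Priority values are assumptions: the paper states only
# that Local is highest and Network is lowest.
PRIORITY = {"local": 0, "phone": 1, "network": 2}  # lower value = higher priority

def select_command(pending):
    """Return the pending command from the highest-priority source."""
    if not pending:
        return None
    return min(pending, key=lambda cmd: PRIORITY[cmd["source"]])

pending = [
    {"source": "network", "action": "open_dome"},
    {"source": "local",   "action": "close_dome"},  # the local operator wins
]
print(select_command(pending)["action"])  # close_dome
```

In a real controller this arbitration would run inside the command-analysis step shared by the three modules.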
The following are the five functional modules. The communication interface to the observation equipment is mainly internet or a serial bus. They are used to connect and control the observation equipment in the observatory, for example the telescope, dome, all-sky camera, sky brightness monitor, weather station, security camera, other auxiliary equipment, etc.
- ARM/PC Module - This module is used to deploy the related observation software for computing, data transfer, backup, and telescope control, including the equatorial mount, filter, focus, and so on.
- Dome Module - Dome opening, follow-up, and closing.
- Power Module - The power supply and control logic for every piece of equipment.
- Network Module - The network entrance and exit. All network equipment should connect through it.
- Monitor Module - It not only monitors observatory equipment and its status, but also pushes status codes to users and adjusts resource operation by predefined algorithms.
Summary
=======
We propose a closed-loop hardware system as a bridge between the observatory and its users. It supports multiple control modules to serve different users and provides internet and serial bus interfaces for connecting observation equipment. The interface can also be extended according to actual requirements. It is a kind of open-source hardware platform: people can define the control and transfer logic themselves and then reprogram it. Based on this hardware platform, we will develop a software driver environment so as to access RTS2 and ASCOM easily, and will also build our own observation control software system, specialized for public observatories, in the future.
This work is supported by National Natural Science Foundation of China (NSFC)(11503051, 61402325) and the Joint Research Fund in Astronomy (U1531111, U1531115, U1531246, U1731125, U1731243) under cooperative agreement between the NSFC and Chinese Academy of Sciences (CAS) and the Young Researcher Grant of National Astronomical Observatories, Chinese Academy of Sciences. We would like to thank the National R&D Infrastructure and Facility Development Program of China, “Earth System Science Data Sharing Platform” and “Fundamental Science Data Sharing Platform” (DKA2017-12-02-XX). Data resources are supported by Chinese Astronomical Data Center (CAsDC) and Chinese Virtual Observatory (China-VO).
[^1]: Popular Supernova Project reference web link <http://psp.china-vo.org/>.
[^2]: The light pollution map reference web link <https://www.lightpollutionmap.info>.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'This paper presents a compilation procedure which determines internal and external indices for signs in a unification based grammar to be used in improving the computational efficiency of lexicalist chart generation. The procedure takes as input a grammar and a set of feature paths indicating the position of semantic indices in a sign, and calculates the fixed-point of a set of equations derived from the grammar. The result is a set of independent constraints stating which indices in a sign can be bound to other signs within a complete sentence. Based on these constraints, two tests are formulated which reduce the search space during generation.'
author:
- Arturo Trujillo
bibliography:
- 'ref.bib'
title: Determining Internal and External Indices for Chart Generation
---
Introduction
============
One problem with the classical transfer approach to machine translation (MT) is that it involves complex transformations of syntactic and semantic structures from the source to the target language. These transformations can have intricate interactions with each other, making transfer modules difficult to reverse, debug and maintain. They can also make monolingual components more heavily dependent on the language pair at hand. Much of the complexity in transfer stems from the recursive nature of the syntactic and semantic frameworks normally used. However, recent work in formal semantics has found it expedient to minimise the recursive structure of semantic representations to efficiently encode certain types of ambiguity [@reyle95]. Naturally, flat semantics mitigate many structural differences between natural languages and their application to MT readily follows (see [@copestakeetal95b]). Unfortunately, simplicity in the transfer component comes at the cost of generation complexity for such representations since their lack of structure increases the non-determinism of most generation algorithms, just as lexical-only transfer increases the complexity of bag generation in Shake-and-Bake MT [@whitelock94]. For this reason, several researchers have investigated the efficiency of generators whose input has a flat structure, be this in the form of lists of semantic predicates or of lexical elements. Such generators, of the chart, bag and lexicalist varieties, differ in many ways but the source of their complexity is the same: the search space grows factorially on the size of the input for many algorithms, since they are based on a modified chart parser which essentially attempts all permutations of the input. This is the issue addressed by the paper, taking chart generation as an instance of the problem.
Chart Generation
================
A chart generator [@kay96] takes as input a flat semantic representation and, using a chart data structure, outputs the string corresponding to it. The unordered character of the semantic input permits such generators to be viewed as parsers for languages with completely free word order: an active edge combines with an inactive edge only if the two edges have no semantic predicates in common; no other restrictions apply. This regime leads to the combinatorial explosion mentioned above.
Example
-------
Consider the following flat semantic representation corresponding to the string [*John ran fast*]{}:
> r : run(r), past(r), fast(r), arg1(r,j), name(j,John)
Here, $r$ is the distinguished index for the expression. These predicates will unify with the semantic component of suitably defined lexical entries resulting in the agenda entries shown below:
Word   Cat       Semantics
------ --------- --------------------------------
John   np(j)     j : name(j,John)
ran    vp(r,j)   r : run(r), arg1(r,j), past(r)
fast   adv(r)    r : fast(r)
Items are then moved into the chart and their interactions considered. Moving [*John*]{} results in no interactions, since the chart is empty. Moving [*ran*]{} results in [*John ran*]{} assuming the rule:
> s(x) $\rightarrow$ np(y), vp(x,y)
This is a complete sentence, but it does not subsume all the semantic material from the input; it therefore remains in the chart but cannot constitute an output sentence. Next, [*fast*]{} is moved, adding in [*ran fast*]{} to the agenda and then to the chart, at which point [*John ran fast*]{} is built. Generation thus terminates. One of the main sources of inefficiency in chart generation is that a multitude of edges are constructed which either do not subsume the entire semantics of the input or which can never be part of the solution because they omit semantic material which only they could have subsumed. In the example, [*John runs*]{} is one such edge. The problem is that these edges, if left in the chart, will interact with other edges to form yet further edges which can never be part of the final result, but which cause the search space to explode.
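The walk-through above can be reproduced with a toy chart generator. The sketch below hard-codes the three lexical edges and the two combination rules of the example in simplified Python representations invented for illustration (not the notation of any actual system); note that it also builds the useless [*John ran*]{} edge, which is exactly the kind of inefficiency just described.

```python
from collections import namedtuple

Edge = namedtuple("Edge", "cat args sem words")

# Lexical edges for the input  r : run(r), past(r), fast(r), arg1(r,j), name(j,John)
lexicon = [
    Edge("np",  ("j",),     frozenset({"name(j,John)"}),                   ("John",)),
    Edge("vp",  ("r", "j"), frozenset({"run(r)", "arg1(r,j)", "past(r)"}), ("ran",)),
    Edge("adv", ("r",),     frozenset({"fast(r)"}),                        ("fast",)),
]

def combine(a, b):
    """Two toy rules: s -> np vp (the subject index must match), and vp
    modification by an adverb.  Edges combine only with disjoint semantics."""
    if a.sem & b.sem:
        return None
    if a.cat == "np" and b.cat == "vp" and b.args[1] == a.args[0]:
        return Edge("s", (b.args[0],), a.sem | b.sem, a.words + b.words)
    if a.cat == "vp" and b.cat == "adv" and a.args[0] == b.args[0]:
        return Edge("vp", a.args, a.sem | b.sem, a.words + b.words)
    return None

def generate(goal_sem):
    agenda, chart, results = list(lexicon), [], []
    while agenda:
        e = agenda.pop()
        for other in chart:
            for new in (combine(e, other), combine(other, e)):
                if new is not None:
                    agenda.append(new)
        chart.append(e)
        # only sentences subsuming the entire input are output
        if e.cat == "s" and e.sem == goal_sem:
            results.append(" ".join(e.words))
    return results

goal = frozenset({"run(r)", "past(r)", "fast(r)", "arg1(r,j)", "name(j,John)"})
print(generate(goal))  # → ['John ran fast']
```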
Internal and External Indices {#inn-out-sec}
-----------------------------
To overcome this inefficiency, it is necessary to discard edges which would make it impossible to incorporate all the input into an output sentence. Achieving this involves exploiting the fact that after an edge is constructed, only certain indices in its semantic predicates are accessible by other rules in the grammar. For example, treatments of English VPs (e.g. [*chased the cat*]{}) typically disallow modification of the object NP once the VP has been analysed; thus if [*cat*]{} received index [*c*]{}, it is not possible to bind into this index. Intuitively, this means that modifying the VP cannot lead to modification of the object NP. Following Kay, indices not available outside a category (i.e. outside an inactive edge) are called [*internal*]{} indices, while those which are accessible are called [*external*]{} indices. When an inactive edge is constructed, all indices in predicates not subsumed by the edge must i) be different from the indices the edge subsumes, or ii) be external to it. This ensures that inactive edges subsume all predicates indexed by their internal indices. The objective of this paper is to present a general algorithm for determining which indices are internal and external to a category without requiring the explicit identification of such information by the grammar writer.
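As a minimal sketch of such a test (all representations invented for illustration, not from an actual system), an inactive edge can be rejected as soon as some input predicate outside its semantics mentions one of its internal indices:

```python
# Illustrative pruning test: an inactive edge may be discarded if some
# input predicate it does not subsume mentions one of its internal indices.
def can_be_extended(edge_sem, internal, all_preds, indices_of):
    """True iff no unsubsumed predicate uses an internal index of the edge."""
    return all(not (indices_of(p) & internal) for p in all_preds - edge_sem)

# crude index extractor for predicates written like "arg1(r,j)"
def indices_of(pred):
    inside = pred[pred.index("(") + 1:pred.rindex(")")]
    return {t for t in inside.split(",") if t.islower() and len(t) == 1}

goal = {"run(r)", "past(r)", "fast(r)", "arg1(r,j)", "name(j,John)"}
john_ran = {"run(r)", "past(r)", "arg1(r,j)", "name(j,John)"}
# if r and j are internal to a finished sentence, "John ran" can be
# discarded: the unsubsumed predicate fast(r) could never be attached later
print(can_be_extended(john_ran, {"r", "j"}, goal, indices_of))  # False
```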
Overview of the Algorithm
=========================
Ideally, internal indices should be determined directly from the rules of a grammar. However, different grammar writers adopt different index binding strategies, making such identification by automatic on-line inspection of rules very difficult at best. The algorithm proposed here therefore automatically extracts information from a grammar off-line and uses it to determine whether an index is internal or external to a category. Based on this information, it is possible to identify those edges which are incomplete with respect to the input and which may consequently be eliminated from further consideration. The algorithm has been implemented and tested on a lexicalist generator operating on a small unification-based grammar; a description of the test and further discussion of the issues involved is given in [@trujilloetal96]. The algorithm takes as input a unification-based phrase structure (PS) grammar and a set of paths and outputs a set of constraints on pairs of signs indicating which indices in the two signs can be bound for some possible derivation tree. Principal among the techniques used are those for predictive parser compilation [@ahoetal86] adapted to unification based grammars [@trujillo94]. In addition, following standard practice in data flow analysis [@kennedy81], a data structure is maintained tracing how variables (or in this case, indices) are modified (or in this case, bound) in a valid derivation.
Inner and Outer Domains
-----------------------
Two main phases, themselves analogous to the calculation of FIRST and FOLLOW sets for predictive parsers, constitute the bulk of the algorithm. The first phase determines the indices at the root of a tree which are bound to items at the leaves; this phase will be called the calculation of [*inner domains*]{}. The second phase uses inner domains to calculate [*outer domains*]{}, which indicate the indices in a sign which are bound to the indices of signs outside the sign’s subtree. Thus, inner domains express the relationship between phrases with related semantic material within subtrees for which they are roots, while outer domains express the relationship between a phrase and outside phrases with which the phrase shares semantic material. Calculating both inner and outer domains requires the computation of the fixed-point of a set of equations derived from the grammar. The fixed-point of a function is the value of [**X**]{} which satisfies\
f([**X**]{}) = [**X**]{}
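Fixed-points of this kind are computed by plain iteration. The sketch below is a minimal illustration; the transitive-closure example is invented for this sketch and is not part of the algorithm itself.

```python
def fixed_point(f, x):
    """Iterate f until f(X) == X; terminates when f is inflationary and
    monotone on a finite domain, as for equations derived from a grammar."""
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# toy instance: transitive closure of a binary relation
step = lambda r: r | {(x, w) for (x, y) in r for (z, w) in r if y == z}
closure = fixed_point(step, {("a", "b"), ("b", "c")})
print(sorted(closure))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```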
Grammar
-------
We adopt the following definition of a unification grammar:
A grammar is a tuple (N,T,P,S), where P is a set of productions $\alpha \Rightarrow \beta$, where $\alpha$ is a sign, $\beta$ is a list of signs, N is the set of all $\alpha$, T is the set of all signs appearing in $\beta$ such that they unify with lexical entries, and S is the start sign.
The grammar must generate sequences of coherent predicates (i.e. the graph with arcs for predicates sharing indices is connected).
The Triple Data Structure
-------------------------
A basic data structure in the algorithm will be triples of the form [*(Left Sign, Right Sign, Bindings)*]{}, where [*Bindings*]{} is a set of pairs consisting of a path in [*Left Sign*]{} and a path in [ *Right Sign*]{} such that the values at the end of each path are assumed to be token identical; [*Left Sign*]{} and [*Right Sign*]{} are phrasal or lexical signs. The following triple for example represents part of the inner domain of an NP:
> \(1) (NP\[sem:arg1:X\],Det\[sem:arg1:Y\], {$<$sem
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The family of left-to-right algorithms reduces input numbers by repeatedly subtracting the smaller number, or a multiple of the smaller number, from the larger number. This paper describes how to extend any such algorithm to compute the Jacobi symbol, using a single table lookup per reduction. For both quadratic time algorithms (Euclid, Lehmer) and subquadratic algorithms (Knuth, Schönhage, Möller), the additional cost is linear, roughly one table lookup per quotient in the quotient sequence. This method was used for the 2010 rewrite of the Jacobi symbol computation in GMP.'
author:
- Niels Möller
bibliography:
- 'ref.bib'
date: 2019
title: Efficient computation of the Jacobi symbol
---
Introduction
============
The Legendre symbol and its generalizations, the Jacobi symbol and the Kronecker symbol, are important functions in number theory. For simplicity, in this paper we focus on computation of the Jacobi symbol, since the Kronecker symbol can be computed by the same function with a little preprocessing of the inputs.
Jacobi and GCD
--------------
Two quadratic algorithms for computing the Kronecker symbol (and hence also the Jacobi symbol) are described as Algorithms 1.4.10 and 1.4.12 in [@cohen]. These algorithms run in quadratic time and consist of a series of reduction steps, related to Euclid’s algorithm and the binary algorithm, respectively. Both Kronecker algorithms share one property with the binary algorithm: the reduction steps examine the current pair of numbers at both ends. They examine the least significant end to cast out powers of two, and they examine the most significant end to determine a quotient (as in Euclid’s algorithm) or to determine which number is larger (as in the binary algorithm).
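For reference, here is a minimal Python sketch of such a quadratic both-ends computation. This is the standard textbook algorithm (our transcription, not the paper's left-to-right method): powers of two are cast out at the least significant end, and quadratic reciprocity governs the swap at the most significant end.

```python
def jacobi(a, b):
    """Jacobi symbol (a|b) for odd b > 0; quadratic time.

    Examines the least significant end (casting out factors of two) and
    the most significant end (reciprocity swap), as described above."""
    assert b > 0 and b % 2 == 1
    a %= b
    sign = 1
    while a != 0:
        # Least significant end: (2|b) = -1 iff b = 3 or 5 (mod 8).
        while a % 2 == 0:
            a //= 2
            if b % 8 in (3, 5):
                sign = -sign
        # Most significant end: swap the operands, using quadratic reciprocity.
        a, b = b, a
        if a % 4 == 3 and b % 4 == 3:
            sign = -sign
        a %= b
    return sign if b == 1 else 0
```

For prime `b` this agrees with the Legendre symbol computed by Euler's criterion, which gives a convenient sanity check.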
Fast, subquadratic algorithms work by divide-and-conquer, where a substantial part of the work is done by examining only one half of the input numbers. Fast left-to-right GCD computation is related to fast algorithms for computing the continued fraction expansion [@schoenhage:1971; @knuth:algorithms]. These are left-to-right algorithms, in that they process the input from the most significant end. The binary recursive algorithm [@stehle] is a right-to-left algorithm, in that it processes inputs from the least significant end. The asymptotic running times of these algorithms are $O(M(n) \log n)$, where $M(n)$ denotes the time needed to multiply two $n$-bit numbers. The algorithm used in recent versions of the GMP library [@gmp] is a variant of Schönhage’s algorithm [@moller-sgcd].
It is possible to compute the Jacobi symbol in subquadratic time, with the same asymptotic complexity as the GCD. One algorithm is described in [@bach-shallit] (solution to exercise 5.52), which says:
> This complexity bound is part of the “folklore” and has apparently never appeared in print. The basic idea can be found in Gauss \[1876\]. Our presentation is based on that in Bachmann \[1902\]. H. W. Lenstra, Jr. also informed us of this idea; he attributes it to A. Schönhage.
Since the quadratic algorithms for the Jacobi symbol examine the data at both ends, some reorganization is necessary to construct a divide-and-conquer algorithm that processes data from one end. The binary algorithm has the same problem. In the binary recursive algorithm, this is handled by using a slightly different reduction step based on 2-adic division.
Recently, the binary recursive algorithm has been extended to compute the Jacobi symbol [@brent:jacobi]. The main difference from the corresponding GCD algorithm is that it needs the intermediate reduced values to be non-negative; to ensure this, the binary quotients must be chosen in the range $1 \leq q < 2^{k+1}$ rather than $|q| < 2^k$. As a result, the Jacobi algorithm is slower than the GCD algorithm by a small constant factor.
Main contribution
-----------------
This paper describes a fairly simple extension to a wide class of left-to-right algorithms, including Lehmer’s algorithm and the subquadratic algorithm in [@moller-sgcd], which computes the Jacobi symbol using only $O(n)$ extra time and $O(1)$ extra space[^1]. This indicates that also for the fastest algorithms for large inputs, the cost is essentially the same for computing the GCD and for computing the Jacobi symbol.[^2]
Like the algorithm described in [@bach-shallit], the computation is related to the quotient sequence. The updates of the Jacobi symbol are somewhat different, instead following an unpublished algorithm by Schönhage [@schoenhage-brent-communication] for computing the Jacobi symbol from the quotient sequence modulo four. In the GCD algorithms in GMP, the quotients are not always applied in a single step; instead, there is a series of reductions of the form $a {\leftarrow}a - m b$, where $m$ is a positive number equal to or less than the correct quotient ${\lfloor a/b \rfloor}$. In the corresponding Jacobi algorithms, the Jacobi sign is updated for each such partial quotient. Most of the partial quotients are determined from truncated inputs where the least significant parts of the numbers are ignored. The least significant two bits, needed for the Jacobi computation, must therefore be maintained separately.
Notation
--------
The time needed to multiply two $n$-bit numbers is denoted $M(n)$, where $M(n) = O(n \log n)$ for the fastest known algorithms. [^3]
The Jacobi symbol is denoted $(a | b)$. We use the convention that $[\text{condition}]$ means the function that is one when the condition is true, otherwise 0, e.g., $(0 | b) = [b = 1]$.
Left-to-right GCD
=================
In this paper, we will not describe the details of fast algorithms. Instead we will consider Algorithm \[alg:gcd\], which is a generic left-to-right algorithm, with a basic reduction step where a multiple of the smaller number is subtracted from the larger number. We also describe the main idea of fast instantiations of this algorithm.
In: $a, b > 0$
repeat:
    if $a \geq b$: $a {\leftarrow}a - m b$, with $1 \leq m \leq {\lfloor a/b \rfloor}$; if $a = 0$, return $b$
    else: $b {\leftarrow}b - m a$, with $1 \leq m \leq {\lfloor b/a \rfloor}$; if $b = 0$, return $a$
This algorithm terminates after a finite number of steps, since each iteration strictly reduces $\max(a,b)$ until one of the numbers reaches zero. It returns the correct value, since $\GCD(a,b)$ is unchanged by each reduction step.
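A direct Python transcription of Algorithm \[alg:gcd\] may make this concrete. The `choose_m` parameter (our name, for illustration) selects the partial quotient; any choice in the valid range yields the GCD, which is the point of the genericity:

```python
import math
import random

def generic_gcd(a, b, choose_m=None):
    """Generic left-to-right gcd: repeatedly subtract a multiple m of the
    smaller number from the larger, with 1 <= m <= floor(larger/smaller)."""
    assert a > 0 and b > 0
    if choose_m is None:
        choose_m = lambda hi, lo: hi // lo  # Euclid: the full quotient
    while True:
        if a >= b:
            a -= choose_m(a, b) * b
            if a == 0:
                return b
        else:
            b -= choose_m(b, a) * a
            if b == 0:
                return a

print(generic_gcd(987, 610))  # consecutive Fibonacci numbers: gcd 1
```

With `choose_m = lambda hi, lo: 1` one gets the slow subtractive variant; the default recovers Euclid's algorithm.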
The running time of an instantiation of this algorithm depends on the choice of $m$ in each step, and on the amount of computation done in each step. E.g., if $m = 1$, the worst-case number of iterations is exponential in the bit size of the inputs. Euclid’s algorithm is the special case where, in each step, $m$ is the correct quotient of the current numbers.
The faster algorithms implement an iteration that depends only on some of the most significant bits of $a$ and $b$: these bits determine which of $a$ and $b$ is largest, and they also suffice for computing an $m$ which is close to the quotient ${\lfloor a/b \rfloor}$ or ${\lfloor b/a \rfloor}$. Furthermore, one can compute an initial part of the sequence of reductions based on the most significant parts of $a$ and $b$, collect the reductions into a transformation matrix, and apply all the reductions at once to the least significant parts of $a$ and $b$ later on. This saves a lot of time, since it avoids computing all the intermediate $a$ and $b$ to full precision. If one repeatedly chops off one or two of the most significant words, one gets Lehmer’s algorithm, and by chopping numbers in half, one can construct a divide-and-conquer algorithm with subquadratic complexity.
We will extend this generic algorithm to also compute the Jacobi symbol. To do that, we need to investigate how the basic reduction $a - m b$ affects the Jacobi symbol. When we have sorted this out, in the next section, the result is easily applied to all variants of Algorithm \[alg:gcd\].
Left-to-right Jacobi
====================
In this section, we summarize the properties of the Jacobi symbol that we use and derive the update rules needed for our left-to-right algorithm. Finally, we give the resulting algorithm and prove its correctness.
Jacobi symbol properties
------------------------
The Jacobi symbol $(a | b)$ is defined for $b$ odd and positive, and arbitrary $a$. We work primarily with non-negative $a$, and make use of the following properties of the Jacobi symbol.
Assume that $a$ is positive and that $b$ is odd and positive. Then
(i) \[it:zero\] $(0 | b) = [b = 1]$.
(ii) \[it:negation\] $(a | b) = (-1)^{(b-1)/2}
---
abstract: 'It is shown that the radial Schroedinger equation for a power law potential and a particular angular momentum may be transformed, using a change of variable, into another Schroedinger equation for a different power law potential and a different angular momentum. This leads to a mapping between the spectra of the two related power law potentials, and a similar correspondence exists between the classical orbits in the two related potentials. The well known correspondence of the Coulomb and oscillator spectra is a special case of a more general correspondence between power law potentials.'
author:
- |
C. V. Sukumar\
[*Wadham College,*]{}\
[*University of Oxford, Oxford OX1 3PN, U.K.* ]{}
title: '**Equivalent power law potentials**'
---
Introduction
============
In this study we investigate the circumstances under which Classical dynamics and Quantum Mechanics induce relationships between classical orbits, eigenvalue spectra and phaseshifts of different dynamical systems. We study in particular power law potentials, which belong to a special category in the sense that they permit a unique scaling of length and energy leading to dimensionless equations in Classical and Quantum Mechanics. The dimensionless equations allow certain transformations which reveal a connection between the orbits and spectra of two related power law potentials. It has been noted in earlier literature that the solutions to the Schroedinger equation for a Simple Harmonic Oscillator (SHO) may be related to the bound-state solutions in a Coulomb potential, a relationship that is also evident in the connection between Hermite polynomials and Laguerre polynomials in the mathematical literature. In this report we examine the possibility of more general connections between power law potentials. In section 2 of this paper we study the relation between the Schroedinger equations of two related power law potentials. This issue was discussed by Quigg and Rosner in Physics Reports [**56**]{} (1979) in their study of quarkonium states using non-relativistic Quantum Mechanics. We follow their method of analysis and extract additional features which could prove useful. In sections 3 and 4 we study the same issue from the point of view of semi-classical and Classical Mechanics and attempt to establish which of the features that appear in Classical Mechanics are preserved in the passage to Quantum Mechanics.
Power law potentials in Quantum Mechanics
=========================================
We start from the radial Schroedinger equation for a power law potential with power exponent $\nu_1$ and angular momentum $l_1$ $$\frac{\hbar^2}{2\mu_1} \ \frac{\partial^2u_1}{\partial r^2}\ +\ \Big(E_1\ -\ \lambda_1\ r^{\nu_1}\ -\ \frac{\hbar^2\, l_1(l_1+1)}{2\mu_1 r^2}\ \Big) \ u_1\ =\ 0$$ For power law potentials a scaling length and a scaled energy may be identified $$a_1\ =\ \Big(\frac{\hbar^2}{2\mu_1|\lambda_1|}\Big)^{\frac{1}{\nu_1+2}}\ ,\ r\ =\ a_1\ \rho_1\ ,\ E_1\ =\ \frac{\hbar^2}{2\mu_1 a_1^2}\ \epsilon_1$$ in terms of which the Schroedinger equation in dimensionless form becomes $$\frac{\partial^2 u_1}{\partial\rho_1^2}\ +\ \Big(\epsilon_1\ -\ \rho_1^{\nu_1}\ -\ \frac{l_1(l_1+1)}{\rho_1^2} \Big)\ u_1\ =\ 0 \label{eq:I0}$$ We now introduce a new variable and a new function through the relations $$\rho_1^{\nu_1}\ =\ z^{-\nu_2}\ ,\ u_1(\rho_1)\ =\ z^{-\frac{\nu_1+\nu_2}{2\nu_1}}\ v(z)\label{eq:I1}$$ to transform the radial equation to the form $$\Big(\ \Big(\frac{\nu_1}{\nu_2}\Big)^2 z^{2\big(1+\frac{\nu_1}{\nu_2}\big)} \Big[\frac{\partial^2v}{\partial z^2} + \Big(1 - \frac{\nu_2^2}{\nu_1^2}\Big) \frac{v}{4z^2}\Big] + \Big[\epsilon_1 - z^{-\nu_2} - \frac{l_1(l_1+1)}{z^2} z^{2\big(1+\frac{\nu_1}{\nu_2}\big)}\Big] v\Big)\ z^{-\frac{\nu_1+\nu_2}{2\nu_1}} \ =\ 0$$ If we now impose the conditions $$\begin{aligned}
&2\Big(1 +\ \frac{\nu_2}{\nu_1}\Big)\ +\ \nu_2\ =\ 0\ \rightarrow\ \frac{1}{\nu_1}\ +\ \frac{1}{\nu_2}\ +\ \frac{1}{2}\ =\ 0 \label{eq:I2}\\
&l_1(l_1+1) + \frac{1}{4}\Big(1 - \frac{\nu_1^2}{\nu_2^2}\Big) = l_2( l_2+1)\ \frac{\nu_1^2}{\nu_2^2}\ \rightarrow\ \Big(l_1 + \frac{1}{2}\Big)^2 \nu_2^2\ =\ \Big(l_2 + \frac{1}{2}\Big)^2 \nu_1^2 \label{eq:I3}\end{aligned}$$ the resulting equation is $$\frac{\partial^2v}{\partial z^2}\ +\ \Big(-\frac{\nu_2^2}{\nu_1^2}\ +\ \epsilon_1 \frac{\nu_2^2}{\nu_1^2}\ z^{\nu_2}\ -\ \frac{l_2(l_2+1)}{z^2}\Big) v\ =\ 0$$ A new scaling length and scaled energy defined by $$v(z) = u_2( \rho_2)\ ,\ z = a_2\ \rho_2\ \ ,\ a_2^{\nu_2+2}\ \epsilon_1 \Big(\frac{\nu_2}{\nu_1}\Big)^2\ =\ 1\ ,\ \epsilon_2 = - a_2^2\ \frac{\nu_2^2}{\nu_1^2}\ =\ - \epsilon_1^{\frac{\nu_1}{\nu_2}}\Big(-\frac{\nu_1}{\nu_2}\Big)^{\nu_1} \label{eq:I4}$$ may now be identified which leads to the transformed radial equation $$\frac{\partial^2 u_2}{\partial \rho_2^2}\ +\ \Big( \epsilon_2\ +\ \rho_2^{\nu_2}\ -\ \frac{l_2(l_2+1)}{\rho_2^2} \Big)\ u_2\ =\ 0 \label{eq:I5}$$
Comparison of (\[eq:I0\]) and (\[eq:I5\]) shows that the radial equation for a confining potential with $0\le \nu_1 \le \infty$ and $\epsilon_1$ positive can be transformed to a new radial equation for an attractive singular potential with $-2 \le \nu_2\le 0$ and $\epsilon_2$ negative; the eigenvalue spectra of the confining potential and the singular potential are then related by (\[eq:I4\]).
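As a consistency check (this worked example is ours, not in the original), the special case $\nu_1 = 2$, $\nu_2 = -1$ recovers the familiar oscillator-Coulomb correspondence:

```latex
% Special case: oscillator (\nu_1 = 2) and Coulomb (\nu_2 = -1)
\frac{1}{\nu_1}+\frac{1}{\nu_2}+\frac{1}{2} \;=\; \frac{1}{2}-1+\frac{1}{2} \;=\; 0 ,
\qquad
\Big(l_1+\tfrac{1}{2}\Big)\nu_2 \;=\; -\Big(l_2+\tfrac{1}{2}\Big)\nu_1
\;\Rightarrow\; l_1 \;=\; 2l_2+\tfrac{1}{2} ,
\qquad
\epsilon_2 \;=\; -\,\epsilon_1^{\nu_1/\nu_2}\Big(-\frac{\nu_1}{\nu_2}\Big)^{\nu_1}
\;=\; -\,\frac{4}{\epsilon_1^{2}} .
```

Thus an integer Coulomb angular momentum $l_2$ corresponds to a half-odd-integer oscillator angular momentum $l_1$, and the Coulomb energies scale as $-1/\epsilon_1^2$, reproducing the $-1/n^2$-type spectrum from the linearly spaced oscillator one.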
If the exponents are restricted to lie in the range $[-2,\infty]$ (\[eq:I2\]) guarantees that $\nu_1$ and $\nu_2$ are always opposite in sign and the choice $$\Big(l_1 + \frac {1}{2}\Big)\ \nu_2\ =\ -\Big(l_2 + \frac{1}{2}\Big)\ \nu_1 \label{eq:I6}$$ which is consistent with (\[eq:I3\]) guarantees that a positive value of $l_1$ leads to a positive value of $l_2$ and ensures that the solution of the transformed equation vanishes as $\rho_2\rightarrow 0$. Up to this point our derivation parallels the derivation given by Quigg and Rosner in Physics Reports [**56**]{} 1979, pages 191-192. It is possible to take this discussion further, which we now proceed to do.
If in addition (\[eq:I6\]) transforms an integer value of $l_1$ to an integer value of $l_2$ then the mapping we have discussed relates the eigenvalue spectrum of a potential representing a possible physical system to the eigenvalue spectrum of another potential representing another possible physical system. If $l_1$ and $l_2$ are positive integers with $l_1 > l_2$ then the spectrum of a confining power law potential with positive exponent $\nu_1$ and angular momentum $l_1$ is related to the spectrum of an attractive potential with negative exponent $\nu_2$ and angular momentum $l_2$ if the exponents are such that $$\nu_1 = \frac{4(l_1-l_2)}{2l_2+1}\ ,\ \nu_2 = -\frac{4(l_1-l_2)}{2l_1+1}$$
---
abstract: 'Recently we have studied in great detail a model of Hybrid Natural Inflation (HNI) by constructing two simple effective field theories. These two versions of the model allow inflationary energy scales as small as the electroweak scale in one of them or as large as the Grand Unification scale in the other, therefore covering the whole range of possible energy scales. In any case the inflationary sector of the model is of the form $V(\phi)=V_0 \left(1+a \cos(\phi/f)\right)$ where $0\leq a<1$ and the end of inflation is triggered by an independent waterfall field. One interesting characteristic of this model is that the slow-roll parameter $\epsilon(\phi)$ is a non-monotonic function of $\phi$ presenting a [*maximum*]{} close to the inflection point of the potential. Because the scalar spectrum $\mathcal{P}_s(k)$ of density fluctuations, when written in terms of the potential, is inversely proportional to $\epsilon(\phi)$, we find that $\mathcal{P}_s(k)$ presents a [*minimum*]{} at $\phi_{min}$. The origin of the HNI potential can be traced to a symmetry breaking phenomenon occurring at some energy scale $f$ which gives rise to a (massless) Goldstone boson. Non-perturbative physics might occur at some temperature $T<f$ that provides a potential (and a small mass) to the originally massless boson, which thus becomes the inflaton (a pseudo-Nambu-Goldstone boson). The inflaton energy scale $\Delta$ is therefore bounded by the symmetry breaking scale, $\Delta\equiv V_H^{1/4} <f.$ Such a well defined origin and hierarchy of scales is not common in inflationary models. We use this property of HNI to determine bounds for the inflationary energy scale $\Delta$ and for the tensor-to-scalar ratio $r$.'
author:
- |
Gabriel Germán$^{a}
\footnote{Corresponding author: gabriel@fis.unam.mx}$, Alfredo Herrera-Aguilar$^{b,c}$, Juan Carlos Hidalgo$^{a}$,\
Roberto A. Sussman$^{d}$, José Tapia$^{a,e}$\
\
[*$^a$Instituto de Ciencias Físicas,* ]{}\
[*Universidad Nacional Autónoma de México,*]{}\
[*Apdo. Postal 48-3, C.P. 62251 Cuernavaca, Morelos, México.*]{}\
\
[*$^b$Instituto de Física,* ]{}\
[*Benemérita Universidad Autónoma de Puebla,*]{}\
[*Apdo. Postal J-48, C.P. 72570 Puebla, Puebla, México.*]{}\
\
[*$^c$Instituto de Física y Matemáticas,*]{}\
[*Universidad Michoacana de San Nicolás de Hidalgo,*]{}\
[*Edificio C–3, Ciudad Universitaria, C.P. 58040 Morelia, Michoacán, México.*]{}\
\
[*$^d$Instituto de Ciencias Nucleares,* ]{}\
[*Universidad Nacional Autónoma de México,*]{}\
[*Apdo. Postal 70-543, 04510 México D. F., México.*]{}\
\
[*$^e$Centro de Investigación en Ciencias,* ]{}\
[*Universidad Autónoma del Estado de Morelos,*]{}\
[*Avenida Universidad 1001, Cuernavaca, Morelos 62209, México.*]{}
title: General bounds in Hybrid Natural Inflation
---
Introduction {#Intro}
============
In a recent article [@Ross:2016hyb] a model of inflation [@Guth:1980zm], [@Linde:1981mu], [@Albrecht:1982wi], [@Lyth:1998xn] of the hybrid type [@Linde:1994] has been studied in great detail. To show that it is possible within Hybrid Natural Inflation (HNI) [@Ross:2016hyb] to account for inflationary energy scales as small as the electroweak scale, or as large as the Grand Unification scale, two versions of the model have been constructed based on simple effective field theories. The resulting inflationary sector in any case is described by the following potential for the inflaton field $\phi$ $$V(\phi) = V_0\left(1+a\cos \left(\frac{\phi}{f} \right) \right),
\label{pot}$$ where $a$ is a positive constant less than one and $f$ is the scale of (Nambu-Goldstone) symmetry breaking. Here the end of inflation is triggered by an independent sector waterfall field in a rapid phase transition. The potential in Eq.(\[pot\]) is reminiscent of Natural Inflation [@Freese:1990rb], [@Adams:1992bn], [@Freese:2014nla] where $a=1$ sets a vanishing cosmological constant. Here, however, $a$ can take any positive value less than one and as a result the scale $f$ can be sub-Planckian. Once the waterfall field triggers the end of inflation the inflaton fast rolls to a global minimum with vanishing energy.
Hybrid Natural Inflation also has the interesting property that the slow-roll parameter $\epsilon(\phi)$ turns out to be a non-monotonic function of the inflaton [@German:2015qjq]. As a consequence the scalar spectrum of density perturbations develops a [*minimum*]{} (Fig.\[Espectro\]) for a value $\phi_{min}$ close to the inflection point of the potential. We know that inflation in HNI should start before $\phi$ reaches the minimum of the spectrum at $\phi_{min}$ because the spectrum has been observed to be decreasing during at least 8 e-folds of observable inflation. Thus, there should be at least 8 e-folds of inflation from $\phi_H$, at which observable perturbations are produced[^1], to $\phi_{min}$. This minimum amount of inflation with decreasing spectrum should give an upper bound for the tensor-to-scalar ratio $r$ and for the scale of inflation $\Delta$. The remaining $42-52$ e-folds of inflation would occur with a steepening spectrum, so care should be taken not to over-produce primordial black holes (PBH) [@Kohri:2007qn], [@Josan:2009qn], [@Carr:2009jm]. Also, the fact that the inflationary energy scale $\Delta$ is bounded by the symmetry breaking scale $f$ imposes [*lower*]{} bounds on these quantities, whenever the minimum of the spectrum is reached after $N_{min}\leq 60$ e-folds of inflation. If all of inflation occurs without $\phi$ reaching $\phi_{min}$, no lower bounds are found.
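The non-monotonicity of $\epsilon(\phi)$ is easy to verify numerically. The following sketch is ours, with illustrative parameter values ($M = V_0 = f = 1$, $a = 0.5$, all hypothetical); it evaluates $\epsilon = (M^2/2)(V'/V)^2$ for the HNI potential and locates its interior maximum:

```python
import math

# Illustrative (hypothetical) parameter values, in units with M = 1:
M, V0, a, f = 1.0, 1.0, 0.5, 1.0

def epsilon(phi):
    """Slow-roll parameter epsilon = (M^2/2) (V'/V)^2 for V = V0 (1 + a cos(phi/f))."""
    V = V0 * (1.0 + a * math.cos(phi / f))
    dV = -(V0 * a / f) * math.sin(phi / f)
    return 0.5 * M ** 2 * (dV / V) ** 2

# Scan the interval (0, pi f): epsilon vanishes at both ends and peaks in
# between; a short calculation shows the maximum sits at cos(phi/f) = -a.
grid = [0.01 + i * (math.pi - 0.02) / 100000 for i in range(100001)]
phi_max = max(grid, key=epsilon)
print(phi_max, f * math.acos(-a))
```

For $a = 0.5$ the two printed values agree near $\phi \approx 2.094$, confirming an interior maximum rather than a monotonic $\epsilon(\phi)$.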
Our paper is organised as follows: in Section \[slow\] we briefly recall expressions for the slow-roll parameters and observables. We also give an effective field theory derivation of the model we study, and the hierarchy of energy scales is discussed. As a warm-up exercise we initially study in Section \[NI\] this hierarchy of scales in Natural Inflation (where $a=1$), and Section \[ENI\] deals with “extended” Natural Inflation (ENI), where $a$ is not set to unity from the beginning. This allows us to study the fine tuning of $a$ (to have a vanishing cosmological constant) in terms of the parameters of the model. From here we proceed in Section \[restricted\] to HNI, where the hierarchy of energy scales, together with the observation that the scalar spectrum is decreasing during $8<N_{min}<60$ e-folds of observable inflation, determines bounds for the inflationary energy scale $\Delta$ and for the tensor-to-scalar ratio $r$; this we call the restricted case. In Section \[general\] we obtain general bounds in HNI by dropping the previous requirement that $N_{min}$ e-folds of inflation occur with decreasing spectrum. We are able to find general bounds for all the parameters (and observables) of the model and to understand clearly how the scale of inflation in HNI is able to sweep the whole range of values, from vanishingly small to GUT scales. A brief discussion of constraints coming from Primordial Black Hole (PBH) abundances and considerations regarding low scales of inflation can be found in Section \[PBH\]. Finally, Section \[conclusions\] contains our conclusions and a discussion of the main results.
Slow-roll parameters, observables and model construction {#slow}
=========================================================
In slow-roll inflation, the spectral indices are given in terms of the slow-roll parameters of the model, which involve the potential $V(\phi)$ and its derivatives (see e.g. [@Liddle:94], [@Liddle:2000cg]) $$\epsilon \equiv \frac{M^{2}}{2}\left( \frac{V^{\prime }}{V }\right) ^{2},\quad
\eta \equiv M^{2}\frac{V^{\prime \prime }}{V}, \quad
\xi_2 \equiv M^{4}\frac{V^{\prime }V^{\prime \prime \prime }}{V^{2}},\quad
\xi_3 \equiv M^{6}\frac{V^{\prime 2 }V^{\prime \prime \prime \prime }}{V^{3}},
\label{Slowparameters}$$primes denote
---
abstract: 'A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell’s equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range $10^{10} \, \mathrm{W/cm^{2}}$ to $2 \times 10^{15} \, \mathrm{W/cm^{2}}$ based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of $\alpha$-quartz bulk, half-wavelength thin-film and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with the previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.'
author:
- 'Kyung-Min Lee'
- Chul Min Kim
- 'Shunsuke A. Sato'
- Tomohito Otobe
- Yasushi Shinohara
- Kazuhiro Yabana
- Tae Moon Jeong
title: 'First-principles simulation of the optical response of bulk and thin-film $\alpha$-quartz irradiated with an ultrashort intense laser pulse'
---
\[sec:intro\] Introduction
==========================
The advances made in femtosecond (fs) high-power laser technology in the last decade have made it possible to achieve laser intensities as high as $10^{22} \, \mathrm{W/cm^{2}}$ [@Bahk:2004]. With such a wide range of laser intensities available for investigations, the optical response of a material can be expected to show fairly different characteristics with varying intensity. For example, at very low intensities below $10^{10} \, \mathrm{W/cm^{2}}$, the optical properties of a medium follow a linear response to laser intensity variation [@Hecht:2001; @Born:1999], but start showing a nonlinear response as the laser intensity increases beyond a certain level [@Boyd:2008]. However, at still higher laser intensities of greater than $10^{14} \, \mathrm{W/cm^{2}}$, the optical medium suddenly starts behaving like a plasma medium, and its optical properties follow the properties of a plasma medium [@Kruer:1989]. In the intermediate intensity range ($10^{11} \, \mathrm{W/cm^{2}}$ to $10^{14} \, \mathrm{W/cm^{2}}$), the physical behavior of an optical medium is very complicated and many interesting phenomena, e.g., generation and heating of conduction band (CB) electrons and energy transfer to the lattice, followed by melting, boiling and ablation of the material, can be observed. These behaviors are related to the transition mechanism from solid to plasma and have been intensively studied in previous reports. A theoretical understanding of the laser–matter interactions in the intermediate intensity range is, therefore, of great interest. In addition, it can also provide important insights into laser-induced damage and ablation of optical materials in general.
Studies on laser-induced damage date back to as far as the late 1960s. The dependence of the damage on laser characteristics such as the wavelength, pulse duration and energy fluence, as well as on material type, was investigated by Wood using nanosecond (ns) laser pulses [@Wood:2003]. Later, investigations of laser-induced damage in the picosecond (ps) and fs regime gained significance when the advent of the chirped-pulse amplification (CPA) technique [@Strickland:1985] made it feasible to develop fs and petawatt-class laser systems [@Sung:2010; @Yu:2012]. In particular, laser ablation occurring on the fs time scale became critical because a laser pulse duration of a few tens of fs is much shorter than the time scale for electron energy transfer to the lattice and subsequent lattice heating. In 1995, Stuart $\mathit{et \, al.}$ investigated the laser-induced damage threshold at $1053 \, \mathrm{nm}$ and $526 \, \mathrm{nm}$ for pulse durations ranging from $270 \, \mathrm{fs}$ to $1 \, \mathrm{ns}$, through a theoretical model based on CB electron production via multiphoton ionization, Joule heating and collisional ionization [@Stuart:1995]. Subsequent studies by other groups were conducted for a more accurate analysis of the damage and ablation threshold by including the energy dependence of the CB electrons [@Rethfeld:2002] and the nonlinear pulse propagation effect in a medium [@Penano:2005; @Petrov:2008; @Gulley:2010; @Apalkov:2012] in the fs regime. However, all these studies were based on theoretical models that used experimental and/or empirical values of material parameters such as the ionization rate, refractive index, relaxation rate, and band structure. Hence, the need for developing a method that uses non-empirical values of the material parameters grew continuously in the search for a comprehensive and reliable method of investigating laser–matter interactions in the intermediate laser intensity range.
In this paper, we employ an alternative method to compute the optical response and the ablation threshold of an optical medium. In contrast to the previous studies, our method is based on first-principles simulations computed from fundamental equations. A multiscale approach using the wave equation and the time-dependent density functional theory (TDDFT) is applied to calculate directly the density of the CB electrons generated in the optical medium. The first report on the use of such a multiscale approach for investigation of the interaction between a laser pulse and an optical medium was made for crystalline silicon, where it was said to yield reliable results [@Yabana:2012]. In this approach, no empirical parameters and approximations were used except for information on the crystal structure and on the exchange–correlation potential. As far as these parameters and approximations are valid for a given set of conditions, our first-principles simulations can produce the most reliable and comprehensive results.
We applied the method to calculate the reflectance, the CB electron density and the absorbed energy for investigating the changes in the optical properties of bulk and thin-film $\alpha$-quartz (having different thicknesses) on being irradiated by fs laser pulses in the intensity range of $10^{10} \, \mathrm{W/cm^{2}}$ to $2 \times 10^{15} \, \mathrm{W/cm^{2}}$. By comparing the absorbed energy based on some criterion for laser-induced ablation, the ablation threshold can be computationally determined without the help of empirical values. The proposed approach can be easily applied to other optical materials and structures to design high-performance optical coatings, such as a high-damage-threshold broadband optical coating. The organization of the paper is as follows. Section \[sec:method\] describes in brief the theoretical methods and the simulation details. The calculated results and discussion are presented in Section \[sec:results\]. Finally, the conclusion of the paper is given in Section \[sec:conclusion\].
\[sec:method\] Theoretical methods
==================================
\[ssec:multiscale\] Multiscale description of laser-matter interaction
----------------------------------------------------------------------
We employ a theoretical method and a computational code developed by some of the present authors [@Yabana:2012]. In the following, we briefly describe the formalism. The interaction between a laser pulse and matter involves two characteristic lengths: the wavelength of the laser pulse and the electronic structure size of the atoms constituting the matter. In the case of fs laser pulses, the former lies on the macroscopic scale comprising the $\mu\mathrm{m}$ range, while the latter lies on the microscopic scale comprising the $\mathrm{nm}$ range. Any first-principles description of the interaction should incorporate these two different scales simultaneously. Let **R** denote the macroscopic scale in which the laser pulse evolves and **r** the microscopic scale in which the electrons move. To describe the dynamics of electrons in a unit cell under an external electromagnetic field, the time-dependent Kohn–Sham (TDKS) equation is used [@Runge:1984]: $$\begin{aligned}
\label{eq:tdks}
\mathrm{i}\hbar\frac{\partial}{\partial t}\psi_{i,\mathbf{R}}(\vec{r},t)&=&\biggl\{ \frac{1}{2m_{\mathrm{e}}}\left(-\mathrm{i}\hbar\nabla_{\mathbf{r}}+\frac{e}{c}\vec{A}_{\mathbf{R}}(t)\right)^{2}+V_{\mathrm{ion},\mathbf{R}}(\vec{r}) \nonumber \\
&&+V_{\mathrm{h},\mathbf{R}}(\vec{r},t)+V_{\mathrm{xc},\mathbf{R}}(\vec{r},t)\biggr\}\psi_{i,\mathbf{R}}(\vec{r},t),\end{aligned}$$ where $\psi_{i,\mathbf{R}}$ is the $i$th Kohn–Sham (KS) orbital, $\vec{A}_{\mathbf{R}}$ the vector potential of the laser pulse in the Coulomb gauge, $V_{\mathrm{ion},\mathbf{R}}$ the ionic potential, $V_{\mathrm{h},\mathbf{R}}$ the Hartree potential and $V_{\mathrm{xc},\mathbf{R}}$ the exchange–correlation potential.
Since the laser pulse we considered slowly varies over the electronic length scale, it can be assumed that $\vec{A}_{\
---
abstract: 'The perimeter and area generating functions of exactly solvable polygon models satisfy $q$-functional equations, where $q$ is the area variable. The behaviour in the vicinity of the point where the perimeter generating function diverges can often be described by a scaling function. We develop the method of $q$-linear approximants in order to extract the approximate scaling behaviour of polygon models when an exact solution is not known. We test the validity of our method by approximating exactly solvable $q$-linear polygon models. This leads to scaling functions for a number of $q$-linear polygon models, notably generalized rectangles, Ferrers diagrams, and stacks.'
author:
- |
C. Richard and A. J. Guttmann\
Department of Mathematics and Statistics\
The University of Melbourne, Parkville, Victoria 3052, Australia
date:
title: |
**$q$-linear approximants:\
Scaling functions for polygon models**
---
Introduction
============
Models of polygons and related combinatorial objects have received considerable attention in recent years (for a recent monograph, see [@J00]). They are of interest in physics as models of vesicles or polymer molecules in solution. The interplay between bulk energy and surface energy in these models gives rise to a phase transition from an extended phase to a compact, ball-shaped phase [@BOP93]. There have been many studies of combinatorial aspects of these models, including a general method for deriving the perimeter and area generating function of column-convex models [@B96]. Less is known about analytic aspects of the solutions, which are needed to understand the phase transitions of these models. Scaling functions which describe the crossover behaviour at critical points have been computed for a number of polygon models, mostly by indirect methods such as from a semi-continuous version of the models [@PO95; @PB95]. The only direct derivation of scaling behaviour has been for staircase polygons [@P94] by methods of uniform asymptotic expansions. There is, however, no known general method to obtain scaling functions directly from functional equations.
This paper presents such a method in the simplest case of a $q$-linear functional equation in the perimeter variable. This class of functional equations is satisfied by rectangles, Ferrers diagrams, and stacks [@PO95]. As a step towards the analysis of more complicated classes, we introduce $q$-linear approximants of first order so as to analyze models which do not obey a first-order $q$-linear equation, but which can be well approximated by one. We will test our method by approximating exactly solvable $q$-linear polygon models of generalized rectangles, Ferrers diagrams and stacks. In particular, we will analyze the model of Ferrers diagrams with a hole and obtain a differential equation for the scaling function by analysis of the approximants. We discuss the connection between this new type of approximant and the method of partial differential approximants [@FC82; @SF82; @RF88; @S90]. Finally we indicate how our methods can be extended to more general classes of polygon models.
In a subsequent publication we will consider $q$-quadratic and other non-linear approximants, which we expect will give good approximations to the scaling function of as yet unsolved models, such as self-avoiding polygons.
Phase diagrams and scaling functions
====================================
Let us briefly review phase diagrams of $q$-linear polygon models in order to fix our notation. (We follow [@PO95; @PB95; @Ow00]). The perimeter and area generating function of a polygon model is given by $$f(x,y,q) = \sum_{r,s,n=1}^{\infty} f_{r,s,n}\, x^{r} y^{s} q^{n} = \sum_{n=1}^{\infty} f_n(x,y)\, q^{n},$$ where $f_{r,s,n}$ denotes the number of configurations of area $n$, horizontal perimeter $r$ and vertical perimeter $s$. We introduce the area activity $q$, the horizontal perimeter activity $x$, and the vertical perimeter activity $y$. The perimeter generating function of the polygon model is given by $f(x,y,1)$. A phase diagram is the graph of the radius of convergence of $f(x,y,q)$ in the parameter space $x,y,q$. Let us consider the isotropic version $f(t,q) := f(t,t,q)$ of the model, with $t$ denoting the total perimeter activity, so that the phase diagram is two-dimensional. For a typical $q$-linear polygon model such as Ferrers diagrams or stacks, defined below, the phase diagram is as depicted in Figure \[fig:stacphase\].
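To make the bookkeeping concrete, the coefficient polynomials $f_n(x,y)$ can be enumerated directly for the simplest model, rectangles; a sketch in Python (the weight $x^{w} y^{h} q^{wh}$ for a $w \times h$ rectangle is a convention chosen here for illustration only):

```python
from collections import defaultdict

N = 12  # truncate at area N

# Rectangles: a w-by-h rectangle contributes x^w y^h q^(w*h).
# Collect, for each area n <= N, the polynomial f_n(x, y) as a
# dict {(r, s): count} of perimeter pairs (illustrative convention).
f = defaultdict(lambda: defaultdict(int))
for w in range(1, N + 1):
    for h in range(1, N + 1):
        if w * h <= N:
            f[w * h][(w, h)] += 1

# f_4(x, y) = x y^4 + x^2 y^2 + x^4 y  (the 1x4, 2x2 and 4x1 rectangles)
print(dict(f[4]))   # {(1, 4): 1, (2, 2): 1, (4, 1): 1}
```

The number of terms in $f_n$ is the number of divisors of $n$, which is a quick consistency check on the enumeration.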
Let us interpret the phase diagram using the grand-canonical ensemble in which we count, for fixed area, all polygons by perimeter. The curve $q_c(t)$ where the polygon generating function diverges is related to the [*free energy*]{} per unit area of the ensemble, $$-\log q_c(t) = \lim_{n \to \infty} \frac{1}{n} \log f_n(t).$$ The phase $q_c(t)<1$ consists of [*inflated*]{} polygons, whose perimeter grows like their area. The phase $q_c(t)=1$ consists of [*ball-shaped*]{} polygons, their perimeter growing as the square root of their area. This results in a vanishing free energy for the ensemble. This behaviour is characteristic of a first-order phase transition at the point $t_f$ where both phases meet. In the ball-shaped phase $q_c(t)=1$, there are contributions to the [*boundary*]{} free energy, however[^1]. Let us denote by $t_c$ the point where the perimeter generating function diverges. At this point a phase transition in the boundary free energy occurs: For $t<t_c$, the contributions to the boundary free energy are given by polygons of finite size, whereas for $t>t_c$ the contributions to the boundary free energy derive from polygons of infinite size.
In the remainder of this paper we will concentrate on the critical behaviour about the point $t_c$ where the perimeter generating function diverges. This point is the natural one to look at from the perspective of power series approximations, which we employ. Moreover, for rectangles and more complicated models such as self-avoiding polygons, the distinction between the two phase transitions is irrelevant since $t_c$ and $t_f$ coincide for these models.
To describe the singular behaviour about $t_c$ in more detail, consider $f(t,q)$ for $t$ fixed, as $q$ approaches unity. For $q$-linear polygon models, $q=1$ is a point of an essential singularity in the generating function: For $t<t_c$, $f$ converges to a finite limit. If $t=t_c$, $f$ has a power-law divergence with an exponent generally different from that of the perimeter generating function. If $t>t_c$, $f$ diverges with an essential singularity. In many cases, the crossover between these types of critical behaviour can be described by a scaling function $\bar{P}(\bar{s})$ of combined argument $\bar{s}= (t_c-t)(1-q)^{-\phi}$, $$f(t,q) \sim (1-q)^{\theta}\, \bar{P}\left( \frac{t_c-t}{(1-q)^{\phi}} \right) \qquad \left( (t,q) \to (t_c^-,1^-) \right).$$ \[eqn:scalingfctn\] The asymptotic behaviour of the scaling function at infinity is related to the behaviour of $f(t,q)$ for $t<t_c$. To see this, assume that $f(t,q)$ admits an asymptotic expansion of the form $$f(t,q) = \sum_{n=0}^{\infty} f_n(t)\, (1-q)^n \qquad (t<t_c)$$ about $q=1$, where the leading contributions of the coefficients $f_n(t)$ are given by $$f_n(t) = \frac{p_n}{(t_c-t)^{\gamma_n}} + {\mathcal O}\left( (t_c - t)^{-\gamma_n+1} \right),$$ as $t$ approaches $t_c$[^2]. For $q$-linear polygon models, the asymptotic expansion can be computed recursively from the defining functional equation. It can be inferred from (\[eqn:scalingfctn\]) that the existence of a scaling function implies the restriction $$\gamma_n = \gamma_0 + \frac{n}{\phi}$$ \[eq:gamn\] on the exponents $\gamma_n$. Moreover, it can be seen that the numbers $p_n$ are the coefficients in the asymptotic expansion of the scaling function $$\bar{P}(\bar{s}) = \sum_{n=0}^{\infty} p_n\, \bar{s}^{-\gamma_n}.$$ We assume that the scaling function $\bar{P}(\bar{s})$ is regular at the origin. (This assumption is not always fulfilled. The simplest counterexample is the model of rectangles in its isotropic version.) In this situation, the behaviour of $f$ at $t=t_c$ is given by $$f(t_c,q) \sim \bar{P}(0)\, (1-q)^{\theta} \qquad (q \to 1^-).$$ The exponents $\theta$ and $\gamma_0$ are called [*critical exponents*]{} of the model.
$\theta$ describes the behaviour of $f(t_c,q)$ about $q=1$, whereas $\gamma_0$ describes the power-law behaviour of the perimeter generating function about $t_c$. The exponent $\phi$ is called the [*crossover exponent*]{} and relates the two critical exponents, see (\[eq:gamn\]).
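The restriction on the exponents $\gamma_n$ follows from matching powers of $(1-q)$ term by term; a short derivation sketch, under the assumption that the scaling form carries a prefactor $(1-q)^{\theta}$:

```latex
% Insert the asymptotic expansion of \bar{P} into the scaling form:
f(t,q) \sim (1-q)^{\theta}\,\bar{P}(\bar{s})
       = \sum_{n=0}^{\infty} p_n\,(t_c-t)^{-\gamma_n}\,(1-q)^{\theta+\phi\gamma_n},
\qquad \bar{s} = (t_c-t)(1-q)^{-\phi}.
% Comparing with the expansion f(t,q) = \sum_n f_n(t)(1-q)^n,
% where f_n(t) \sim p_n (t_c-t)^{-\gamma_n}, forces
\theta + \phi\gamma_n = n
\quad\Longrightarrow\quad
\gamma_n = \gamma_0 + \frac{n}{\phi},
\qquad \gamma_0 = -\frac{\theta}{\phi}.
```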
$q$-linear polygon models
=========================
We call a polygon model $q$-linear of $N$
---
abstract: 'In this paper, we develop a fully discrete Galerkin method for solving initial-value fractional integro-differential equations (FIDEs). We consider Generalized Jacobi polynomials (GJPs), with indexes corresponding to the number of homogeneous initial conditions, as natural basis functions for the approximate solution. The fractional derivatives are used in the Caputo sense. The numerical solvability of the algebraic system obtained from the implementation of the proposed method for a special case of FIDEs is investigated. We also provide a convergence analysis for the approximate solutions under a general regularity assumption on the exact solution.'
author:
- |
P. Mokhtary\
\
\
title: 'Discrete Galerkin Method for Fractional Integro-Differential Equations'
---
[**Subject Classification:**]{}[34A08; 65L60]{}
[**Keywords:**]{} Fractional integro-differential equation (FIDE), Galerkin method, Generalized Jacobi Polynomials (GJPs), Caputo derivative.
Introduction
============
In this paper, we provide a convergent numerical scheme for solving FIDE $$\label{1}
\left\{\begin{array}{l}
\mathcal D^q u(x)=p(x) u(x)+f(x)+\lambda \int\limits_0^x{K(x,t) u(t) dt},~~~ x \in \Omega=[0,1],\\
\\
u(0)=0,
\end{array}\right.$$ where $q\in \mathbb R^+ \bigcap (0,1)$. The symbol $\mathbb R^+$ is the collection of all positive real numbers. $p(x)$ and $f(x)$ are given continuous functions, $K(x,t)$ is a given sufficiently smooth kernel function, and $u(x)$ is the unknown function.
Note that the condition $u(0)=0$ is not restrictive, due to the fact that (\[1\]) with nonhomogeneous initial condition $u(0)=d,~~d
\neq 0$ can be converted to the following homogeneous FIDE $$\left\{\begin{array}{l}
\mathcal D^q \tilde u(x)=p(x) \tilde u(x)+\tilde f(x)+\lambda \int\limits_0^x{K(x,t) \tilde u(t) dt},~~~ x \in \Omega=[0,1],\\
\\
\tilde u(0)=0,
\end{array}\right.$$ by the simple transformation $\tilde u(x)=u(x)-d$, where $\tilde
f(x)=f(x)+d\bigg(p(x)+\lambda \int_{0}^{x}{K(x,t)dt}\bigg)$.
Equations of this kind arise in the mathematical modeling of various physical phenomena, such as heat conduction, materials with memory, and combined conduction, convection, and radiation problems ([@r2], [@r5], [@r20], [@r21]).
$\mathcal D^q u(x)$ denotes the fractional Caputo differential operator of order $q$ and is defined as ([@r8], [@r13], [@r22]) $$\label{2} \mathcal D^q u(x) = \mathcal I^{1-q} u'(x),$$ where $$\label{3}
\mathcal I^\mu u(x)=\frac{1}{\Gamma{(\mu)}}
\int\limits_0^x{(x-s)^{\mu-1} u(s) ds},$$ is the fractional integral operator of order $\mu$, and $\Gamma(\mu)$ is the well-known Gamma function. The following relation holds [@r8] $$\label{20}\mathcal I^q(\mathcal D^q
u(x))=u(x)-u(0).$$
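Relation (\[20\]) can be verified numerically in a simple case. The sketch below is written for $q = 1/2$ only, with the substitution $s = x - t^2$ removing the kernel singularity; the test function $u(x) = x$ and the quadrature resolution are illustrative choices:

```python
import math

def half_integral(f, x, n=500):
    # I^(1/2) f(x) = (1 / Gamma(1/2)) * int_0^x (x - s)^(-1/2) f(s) ds.
    # The substitution s = x - t^2 removes the endpoint singularity:
    # the integral equals 2 * int_0^sqrt(x) f(x - t^2) dt.
    T = math.sqrt(x)
    if T == 0.0:
        return 0.0
    h = T / n
    total = 0.5 * (f(x) + f(0.0))        # composite trapezoidal rule
    for k in range(1, n):
        t = k * h
        total += f(x - t * t)
    return 2.0 * h * total / math.gamma(0.5)

# Test function u(x) = x, so u'(x) = 1, u(0) = 0, and q = 1/2.
caputo_u = lambda y: half_integral(lambda s: 1.0, y)  # D^(1/2) u = I^(1/2) u'

x = 0.8
recovered = half_integral(caputo_u, x)  # I^(1/2) (D^(1/2) u)(x)
print(recovered)  # ~ u(x) - u(0) = 0.8
```

Analytically, $\mathcal D^{1/2} x = x^{1/2}/\Gamma(3/2)$ and applying $\mathcal I^{1/2}$ recovers $x$ exactly, so the printed value should agree with $u(x)-u(0)$ up to quadrature error.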
From the relation above, it is easy to check that (\[1\]) is equivalent to the following weakly singular Volterra integral equation $$\label{5}
u(x)=g(x)+\lambda \int\limits_0^x{{\bar K}(x,t) u(t) dt}.$$
Here $g(x)=\mathcal I^q f(x)$ and ${\bar
K}(x,t)=\frac{(x-t)^{q-1}}{\Gamma{(q)}}p(t)+\int\limits_t^x{\frac{(x-s)^{q-1}}{\Gamma{(q)}}
K(s,t)ds}.$ From the well-known existence and uniqueness theorems ([@r3], [@r7]), it can be concluded that if the following conditions are fulfilled
- $f(x) \in C^l(\Omega),~~l \ge 1$
- $p(x) \in C^l(\Omega),~~l \ge 1$
- $K(x,t) \in C^l(D),~~ D=\{(x,t);0 \le t \le x\le
1\},~~l \ge1$
- $K(x,x)\neq 0$,
the regularity of the unique solution $u(x)$ of (\[5\]) and also (\[1\]) is described by $$\label{6}
u(x)=\sum\limits_{(j,k)}{\gamma_{j,k} x^{j+kq}}+U_l(x;q) \in
C^l(0,1]\bigcap C(\Omega),\hspace{.5 cm} \text{with} \hspace{.5 cm}
|u'(x)| \le C_q x^{q-1},$$ where the coefficients $\gamma_{j,k}$ are some constants, $U_l(.;q)
\in C^l(\Omega)$ and $(j,k):=\{(j,k):~~j,k \in \mathbb
N_0,~j+kq<l\}$. Here $\mathbb N_0=\mathbb N \bigcup \{0\}$, where the symbol $\mathbb N$ denotes the collection of all natural numbers. Thus, we must expect the first derivative of the solution to have a discontinuity at the origin. More precisely, if the given functions $g(x), p(x)$ and $K(x,t)$ are real analytic in their domains, then there is a function $U=U(z_1,z_2)$, real and analytic at $(0,0)$, such that the solutions of (\[5\]) and also (\[1\]) can be written as $u(x)=U(x,x^q)$ ([@r3], [@r7]).
Recently, several methods for the numerical solution of FIDEs have been proposed. In [@r19], a fractional differential transform method was developed to solve FIDEs with nonlocal boundary conditions. In [@r23], Rawashdeh studied the numerical solution of FIDEs by polynomial spline functions. In [@r1], an analytical solution for a class of FIDEs was proposed. An Adomian decomposition method to solve nonlinear FIDEs was proposed in [@r17]. In [@r25], the authors solved fractional nonlinear Volterra integro-differential equations using second-kind Chebyshev wavelets. In [@r11], a Taylor expansion approach was presented for solving a class of linear FIDEs, including those of Fredholm and Volterra types. In [@r16], the authors solved FIDEs by applying a hybrid collocation method to an equivalent integral equation of convolution type. In [@r12], a Chebyshev pseudospectral method was implemented to solve linear and nonlinear systems of FIDEs. In [@r15], the authors proposed and analyzed a spectral Jacobi collocation method for the numerical solution of general linear FIDEs. In [@r9], the authors applied a collocation method to solve nonlinear FIDEs. In [@r18], Mokhtary and Ghoreishi proved the $L^2$ convergence of a Legendre Tau method for the numerical solution of nonlinear FIDEs.
Many of the techniques mentioned above either lack a proper convergence analysis or, where one is given, assume very restrictive conditions, including smoothness of the exact solution. In this paper we will consider non-smooth solutions of (\[1\]). In this case, although the discrete Galerkin method can be implemented directly, it leads to very poor numerical results. It is therefore necessary to introduce a regularization procedure that improves the smoothness of the given functions, so that the solution can be approximated with a satisfactory order of convergence. To this end, we propose a regularization process in which the original equation (\[1\]) is changed, by a suitable coordinate transformation, into a new equation with better regularity properties. Our choice of transformation is based upon the formal asymptotic expansion of the exact solution in (\[6\]). Consider (\[1\]); using the variable transformation $$\label{6xx}
x=v^{\frac{1}{q}},\;\; v=x^{q},\;\; t=w^{\frac{1}{q}},\;\; w=t^q,$$ we can change (\[1\]) to the following equation $$\label{6x}
\mathcal M^q \bar u(v)=\bar p(v) \bar u(v)+\bar
f(v)+\lambda\int\limits_0^v{\tilde{K}(v,w) \bar{u}(w)dw},$$ where $$\begin{aligned}
\label{rv4}
\nonumber\bar p(v)&=&p(v^{\frac{1}{q}}),\;\; \bar
f(v)=f(
---
abstract: 'We study the convexity of mutual information along the evolution of the heat equation. We prove that if the initial distribution is log-concave, then mutual information is always a convex function of time. We also prove that if the initial distribution is either bounded, or has finite fourth moment and Fisher information, then mutual information is eventually convex, i.e., convex for all large time. Finally, we provide counterexamples to show that mutual information can be nonconvex at small time.'
author:
-
bibliography:
- 'mi\_arxiv\_v2.bib'
title: |
Convexity of mutual information\
along the heat flow
---
Introduction
============
The heat equation plays a fundamental role in many fields. In thermodynamics, it describes the diffusion of heat in a body due to temperature differences. In probability theory, it describes the evolution of the Brownian motion. In information theory, it describes the additive white Gaussian noise channel, which is one of the most important communication channels. In general, the heat equation can be used to model the transport of any quantity in a medium via a diffusion process. It also forms the basis for more general stochastic processes, such as the Ornstein-Uhlenbeck process or the Fokker-Planck process. Therefore, the heat equation has found applications in diverse scientific disciplines—from explaining the evolution of zebra stripes [@Tur52] to modeling stock prices via the Black-Scholes formula [@BlaSch73]. We are interested in the heat flow, which is the flow of the heat equation in the space of random variables.
The properties of the heat flow are closely linked to entropy. Indeed, one important interpretation of the heat flow is as the flow that increases entropy as fast as possible. More precisely, heat flow is the gradient flow (i.e., the steepest descent flow) of negative entropy in the space of probability distributions with the Wasserstein metric structure [@JKO98]. In this paper we will not need this result, but only use a certain key identity in our calculation. Nevertheless, this relation suggests an intricate connection between entropy and the heat flow.
The behavior of entropy along the heat flow has been long studied. The gradient flow interpretation above shows that entropy is increasing along the heat flow. In particular, De Bruijn’s identity [@Sta59] states that the time derivative of entropy along the heat flow is given by the Fisher information, which is always positive. Moreover, entropy is a concave function of time along the heat flow. This is because the second time derivative of entropy along the heat flow is the negative of the second-order Fisher information [@McKean66; @Tos99; @Vil00]; the latter identity also implies the concavity of entropy power along the heat flow [@Cos85; @Dembo89; @Dembo91]. It is further conjectured that the higher derivatives of entropy along the heat flow have alternating signs [@McKean66; @Vil02; @Che15]. In one dimension, this has been verified up to the fourth derivative [@Che15]; in higher dimensions, this is true for the third derivative when the initial distribution is log-concave [@Tos15].
On the other hand, the behavior of mutual information along the heat flow has been less explored. Clearly mutual information is decreasing along the heat flow by the data processing inequality, since the heat flow is a Markov chain. De Bruijn’s identity implies that the time derivative of mutual information along the heat flow is the negative of the mutual Fisher information; the latter is proportional to the minimum mean square error (mmse) of estimating the initial from the final distribution, thus recovering the I-MMSE relation for the additive Gaussian channel [@GuoEtAl05]. Similarly, the second time derivative of mutual information along the heat flow is the mutual version of the second-order Fisher information; unfortunately, it does not always have a definite sign.
In this paper we study the convexity of mutual information along the heat flow. This amounts to determining when the mutual second-order Fisher information is positive along the heat flow. We show that in general, the mutual second-order Fisher information is positive whenever the final distribution is log-concave. Since the heat flow preserves log-concavity, this implies our first main result: If the initial distribution is log-concave, then mutual information is always convex along the heat flow. In some cases, for example when the initial distribution is bounded, the heat flow implies eventual log-concavity, which means the final distribution eventually becomes log-concave; this implies mutual information is eventually convex along the heat flow for these cases. Furthermore, we prove that in general, regardless of log-concavity, mutual information is eventually convex along the heat flow whenever the initial distribution has finite fourth moment and Fisher information.
Unlike entropy, however, we show that mutual information can be nonconvex along the heat flow. We provide explicit counterexamples, namely mixtures of point masses and mixtures of Gaussians, for which mutual information along the heat flow is nonconvex at small time; furthermore, by scaling we can arrange the region of nonconvexity to engulf any finite time. We elaborate on these results below.
Background and problem setup
============================
The heat flow
-------------
The heat equation in ${\mathbb{R}}^n$ is the partial differential equation: $${\frac{\partial \rho}{\partial t}} = \frac{1}{2} \Delta \rho$$ where $\rho = \rho(x,t)$ for $x \in {\mathbb{R}}^n$, $t \ge 0$, and $\Delta = \sum_{i=1}^n {\frac{\partial ^2}{\partial x_i^2}}$ is the Laplacian operator. This equation conserves mass, so if $\rho_0 = \rho(\cdot,0)$ is a probability distribution, then so is $\rho_t = \rho(\cdot,t)$ for all $t > 0$. The heat equation admits a closed-form solution via convolution: $$\rho_t = \rho_0 \ast \gamma_t$$ where $\gamma_t(x) = (2\pi t)^{-\frac{n}{2}} e^{-\frac{\|x\|^2}{2t}}$ is the heat kernel at time $t$. Probabilistically, if $X_0 \sim \rho_0$ is a random variable in ${\mathbb{R}}^n$, then $X_t \sim \rho_t$ that evolves following the heat equation is given by $$X_t = X_0 + \sqrt{t} Z$$ where $Z \sim {\mathcal{N}}(0,I)$ is the standard Gaussian random variable in ${\mathbb{R}}^n$ independent of $X_0$. We call this the heat flow. (Note that the true solution to the heat equation is the Brownian motion, but at each time $t$ it has the same distribution as $X_t$ above.) Observe that even when $X_0 \sim \rho_0$ has a singular density, $X_t \sim \rho_t$ has a smooth positive density for all $t > 0$.
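The representation $X_t = X_0 + \sqrt{t}\,Z$ is easy to probe by sampling; a minimal one-dimensional sketch (the two-point initial law and the time $t$ are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)
t, n = 0.7, 100_000

# X_0 drawn from a two-point mixture (a singular initial law);
# X_t = X_0 + sqrt(t) * Z with Z ~ N(0, 1) independent of X_0.
x0 = [random.choice([-1.0, 2.0]) for _ in range(n)]
xt = [x + math.sqrt(t) * random.gauss(0.0, 1.0) for x in x0]

mean = sum(xt) / n
var = sum((v - mean) ** 2 for v in xt) / (n - 1)

# The heat flow preserves the mean and adds variance t:
print(mean)  # ~ E[X_0] = 0.5
print(var)   # ~ Var(X_0) + t = 2.25 + 0.7 = 2.95
```

Even though $X_0$ here has no density, the sampled $X_t$ has a smooth density (a Gaussian mixture), matching the point-mass example below.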
If $X_0 \sim \delta_{a}$ is a point mass at some $a \in {\mathbb{R}}^n$, then $X_t \sim {\mathcal{N}}(a, tI)$ is Gaussian with mean $a$ and covariance $tI$.
If $X_0 \sim {\mathcal{N}}(\mu,\Sigma)$ is Gaussian, then $X_t \sim {\mathcal{N}}(\mu,\Sigma+tI)$ is also Gaussian with the same mean and increasing covariance.
If $X_0 \sim \sum_{i=1}^k p_i \delta_{a_i}$ is a mixture of point masses, then $X_t \sim \sum_{i=1}^k p_i {\mathcal{N}}(a_i,tI)$ is a mixture of Gaussians with the same covariance $tI$.
If $X_0 \sim \sum_{i=1}^k p_i {\mathcal{N}}(a_i, \Sigma_i)$ is a mixture of Gaussians, then $X_t \sim \sum_{i=1}^k p_i {\mathcal{N}}(a_i,\Sigma_i+tI)$ is also a mixture of Gaussians with the same means and increasing covariance.
Entropy and Fisher information
------------------------------
Let $X$ be a random variable in ${\mathbb{R}}^n$ with a smooth positive density $\rho$.
The (differential) [*entropy*]{} of $X \sim \rho$ is $$H(X) = -\int_{{\mathbb{R}}^n} \rho(x) \log \rho(x) \, dx.$$
The [*Fisher information*]{} of $X \sim \rho$ is $$J(X) = \int_{{\mathbb{R}}^n} \rho(x) \|\nabla \log \rho(x)\|^2 \, dx.$$
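For a one-dimensional Gaussian law both quantities are explicit, which gives a quick numerical check of De Bruijn's identity $\frac{d}{dt} H(X_t) = \frac{1}{2} J(X_t)$ mentioned above; a minimal sketch (the variance value is an arbitrary illustrative choice):

```python
import math

sigma2 = 1.5  # Var(X_0) for a 1-D Gaussian initial law (illustrative)

# If X_0 ~ N(mu, sigma2), then X_t ~ N(mu, sigma2 + t), so
H = lambda t: 0.5 * math.log(2 * math.pi * math.e * (sigma2 + t))  # entropy
J = lambda t: 1.0 / (sigma2 + t)                                   # Fisher information

t, h = 0.7, 1e-5
dHdt = (H(t + h) - H(t - h)) / (2 * h)   # numerical time derivative
print(dHdt, 0.5 * J(t))                  # the two values agree
```

Both sides equal $1/(2(\sigma^2+t))$, so the central difference matches $\frac12 J$ up to discretization error.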
The [*second-order Fisher information*]{} of $X \sim \rho$ is $$K(X) = \int_{{\mathbb{R}}^n} \rho(x) \|\nabla^2 \log \rho(x)\|_{{\mathrm{HS}}}^2 \, dx.$$ Here $\|A\|_{{\mathrm{HS}}}^2 = \sum_{i,j=1}^n A_{ij}^2 = \sum_{i=1}^n \lambda_i(A)^2$ is the
---
abstract: 'This paper studies Bloch oscillations of ultracold atoms in an optical lattice, in the presence of atom-atom interactions. A new, interaction-induced Bloch period is identified. Analytical results are corroborated by realistic numerical calculations.'
author:
- 'Andrey R. Kolovsky'
title: New Bloch period for interacting cold atoms in 1D optical lattices
---
The response of a quantum system to a static field has been a longstanding problem since the early days of quantum mechanics. A topic of particular interest in this wide field is the dynamics of a quantum particle in a periodic potential induced by a static force (modelling a crystal electron in an electric field). In this system, the effect of the field manifests in a very unintuitive way. Indeed, as already emphasised by Bloch [@Bloc28] and Zener [@Zene34], according to the predictions of wave mechanics, the motion of electrons in a perfect crystal should be oscillatory rather than uniform. This phenomenon, nowadays known as Bloch oscillations (BO), has recently received renewed interest which was stimulated by experiments on cold atoms in optical lattices [@Daha96; @Wilk96; @Raiz97; @Ande98]. This system (which mimics a solid state system – with the electrons and the crystal lattice substituted by the neutral atoms and the optical potential, respectively) offers unique possibilities for the experimental study of BO and of related phenomena. In turn, these fundamentally new experiments have stimulated considerable progress in theory (see review [@PR], and references therein), and it can be safely stated that BO in diluted quasi one-dimensional gases is well understood today. Other directions of research focus on BO in the presence of relaxation processes (spontaneous emission) [@PRA2], BO in 2D optical lattices [@PRL3], and BO in the presence of atom-atom interactions (‘BEC-regime’) [@Berg98; @Choi99; @Chio00; @Mors01]. The present Letter deals with the third problem, which is approached here by an ‘ab initio’ analysis of the dynamics of a system of many atoms. This distinguishes this work from previous studies of BO in the BEC regime [@Berg98; @Choi99; @Chio00], which were based on a mean-field approach using a nonlinear Schrödinger equation.
A new effect, so far unaddressed by these earlier studies, is predicted: besides the usual Bloch dynamics, the atomic oscillations may exhibit another fundamental period, entirely defined by the strength of the atom-atom interactions.
Let us first recall some results on BO in the single-particle case. Using the tight-binding approximation [@Fuku73], the Hamiltonian of a single atom in an optical lattice has the form $$H = E_0\sum_l |l\rangle\langle l|
-\frac{J}{2}\left(\sum_l |l+1\rangle\langle l|+h.c.\right)$$ $$\label{1}
+dF\sum_l l |l\rangle\langle l| \;.$$ In Eq. (\[1\]), $|l\rangle$ denotes the $l$th Wannier state $\phi_l(x)$ corresponding to the energy level $E_0$ [@remark1], $J$ is the hopping matrix element between neighbouring Wannier states, $d$ is the lattice period, and $F$ is the magnitude of the static force. The Hamiltonian (\[1\]) can be easily diagonalised, which yields the spectrum $E_l=E_0+dFl$ (the so-called Wannier-Stark ladder) and the eigenstates (Wannier-Stark states) $$\label{1a}
|\psi_l\rangle=\sum_m {\cal J}_{m-l}(J/dF)|m\rangle \;,\quad
\langle x|m\rangle=\phi_m(x) \;,$$ (here ${\cal J}_m(z)$ are the Bessel functions). As a direct consequence of the equidistant spectrum, the evolution of an arbitrary initial wave function is periodic in time, with the Bloch period $T_B=2\pi\hbar/dF$. In particular, we shall be interested in the time evolution of the Bloch states $|\psi_\kappa\rangle=\sum_l \exp(id\kappa l)|l\rangle$. Using the explicit expression for the Wannier-Stark states (\[1a\]), it is easy to show that $|\psi_\kappa(t)\rangle=\exp\{-i(J/dF)\sin(d\kappa(t))\}
|\psi_{\kappa(t)}\rangle$, where $\kappa(t)=\kappa+Ft/\hbar$ (from now on $E_0=0$ for simplicity). Note that the exponential pre-factor in the last equation contains the same parameter $J/dF$ as the argument of the Bessel function in Eq. (\[1a\]). Depending on the value of this parameter, the regimes of weak ($dF\ll J$) or strong ($dF\gg J$) static fields can be distinguished. In this Letter, we shall restrict ourselves to the strong field case, which, in some sense, is easier to treat than the weak field regime. Indeed, for $J/dF\ll1$, the Wannier-Stark states practically coincide with Wannier states, and $|\psi_\kappa(t)\rangle\approx|\psi_{\kappa(t)}\rangle$.
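The equidistant Wannier-Stark ladder $E_l = E_0 + dFl$ can be reproduced by diagonalizing the tight-binding Hamiltonian (\[1\]) on a finite chain; a sketch with open boundary conditions (the parameter values are illustrative, chosen in the strong-field regime $dF \gg J$, so the ladder holds away from the chain edges):

```python
import numpy as np

# Tight-binding Wannier-Stark Hamiltonian on a finite open chain
# (E_0 = 0; J, d, F illustrative, with dF >> J).
J, d, F, L = 0.038, 1.0, 0.5, 101

H = np.diag(d * F * np.arange(L, dtype=float))
for l in range(L - 1):
    H[l, l + 1] = H[l + 1, l] = -J / 2   # nearest-neighbour hopping

E = np.linalg.eigvalsh(H)
gaps = np.diff(E)

# Away from the edges, consecutive level spacings equal d*F:
print(gaps[40:60])  # all ~ 0.5
```

In the bulk the second-order shifts from the hopping cancel between the two neighbours, so the numerical spacings reproduce $dF$ to high accuracy; deviations appear only near the chain ends.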
A remark concerning the characteristic values of the parameters is in order here: In the numerical simulations below, we use scaled variables, where $\hbar=1$, $d=2\pi$, and the energy is measured in units of the photon recoil energy. In typical experiments with cold atoms in an optical lattice, the amplitude $v$ of the optical potential equals a few recoil energies. Then, for example, for $v$ equal to 10 recoil energies, the value of the dimensionless hopping matrix element is $J=0.0384$. The strength of the static field is restricted from below by the condition $dF> J$, and from above by the condition that Landau-Zener tunnelling events can be neglected. Since the probability of Landau-Zener tunnelling is proportional to $\exp(-\pi\delta^2/8dFJ)$ ($\delta$ is the energy gap separating the lowest Bloch band from the remaining part of the spectrum) [@Zene34; @PR], we have $F<30$ for $v=10$.
![Momentum distribution of the atoms in the optical lattice, for different amplitudes $v$ of the optical potential. (The amplitude $v$ is measured in units of the recoil energy, the momentum $k$ in units of $2\pi\hbar/d$.) The figure illustrates the transition from the SF-phase to the MI-phase as $v$ is varied ($F=0$, $L=N=7$).[]{data-label="fig1"}](fig1.eps){width="8cm"}
We proceed with the multi-particle case. A natural extension of the tight-binding model (\[1\]), which accounts for the repulsive interaction of the atoms, is given by the Bose-Hubbard model [@Fish89], $$H=-\frac{J}{2}\left(\sum_{l=1}^L \hat{a}^\dag_{l+1}\hat{a}_l
+h.c.\right)
+\frac{W}{2}\sum_{l=1}^L \hat{n}_l(\hat{n_l}-1)$$ $$\label{2}
+2\pi F\sum_{l=1}^L l\hat{n}_l \;.$$ In Eq. (\[2\]), $\hat{a}_l^\dag$ and $\hat{a}_l$ are the bosonic creation and annihilation operators, $\hat{n}_l= \hat{a}_l^\dag\hat{a}_l$ is the occupation number operator of the $l$th lattice site, and the parameter $W$ is proportional to the integral over the Wannier function raised to the fourth power. Since the Bose-Hubbard Hamiltonian conserves the total number of atoms $N$, the wave function of the system can be represented in the form $|\Psi\rangle=\sum_{\bf n} c_{\bf n}|{\bf n}\rangle$, where the vector ${\bf n}$, consisting of $L$ integer numbers $n_l$ ($\sum_l n_l=N$), labels the $N$-particle bosonic wave function constructed from $N$ Wannier functions. (In what follows, if not stated otherwise, $|\Psi\rangle$ refers to the ground state of the system.) As known, in the thermodynamic limit, and for $F=0$, the system (\[2\]) shows a quantum phase transition from a superfluid (SF) to a Mott insulator (MI) phase as the ratio $J/W$ is varied (see [@Sach01] and references therein). It is interesting to note that an indication of this transition can already be observed in a system of few atoms [@Jaks98]. As an example, Fig. \[fig1\] shows the diagonal elements of the one-particle density matrix, $$\label{4}
\rho(k,k')=\langle\Psi|\hat{\Phi}^\dag(k)\hat{\Phi}(k')
|\Psi\rangle \;,\quad
\hat{\Phi}(k)=\sum_{l=1}^L \hat{a}_l\phi_l(k) \;,$$ for $N=L
---
abstract: |
In the past years, analyzers have been introduced to detect classes of non-terminating queries for definite logic programs. Although these non-termination analyzers have been shown to be rather precise, their applicability to real-life Prolog programs is limited because most Prolog programs use non-logical features. As a first step towards the analysis of Prolog programs, this paper presents a non-termination condition for logic programs containing integer arithmetics. The analyzer is based on our non-termination analyzer presented at ICLP 2009. The analysis starts from a class of queries and infers a subclass of non-terminating ones. In a first phase, we ignore the outcome (success or failure) of the arithmetic operations, assuming success of all arithmetic calls. In a second phase, we characterize the successful arithmetic calls as a constraint problem, the solution of which determines the non-terminating queries.
Keywords: non-termination analysis, numerical computation, constraint-based approach
author:
- |
Dean Voets[^1] $~~~~~$ Danny De Schreye\
Department of Computer Science, K.U.Leuven, Belgium\
Celestijnenlaan 200A, 3001 Heverlee\
{Dean.Voets, Danny.DeSchreye }@cs.kuleuven.be
bibliography:
- 'prolog.bib'
title: 'Non-termination Analysis of Logic Programs with integer arithmetics'
---
**Note:** This article has been published in *Theory and Practice of Logic Programming, volume 11, issue 4-5, pages 521-536, 2011*.
Introduction
============
The problem of proving termination has been studied extensively in Logic Programming. Since the early works on termination analysis in Logic Programming, see e.g. [@DBLP:journals/jlp/SchreyeD94], there has been a continued interest from the community in the topic. Many in-language and transformational tools have been developed, e.g. [@Giesl06aprove1.2] and [@DBLP:journals/corr/abs-0912-4360], and since 2004, there is an annual Termination Competition[^2] to compare the current analyzers on the basis of an extensive database of logic programs. In contrast with termination analysis, the dual problem, to detect non-terminating classes of queries, is a fairly new topic. The development of the first and most well-known non-termination analyzer, $NTI$ [@nti_06], was motivated by difficulties in obtaining precision results for termination analyzers. Since the halting problem is undecidable, one way of demonstrating the precision of a termination analyzer is with a non-termination analyzer. For $NTI$ it was already shown that for many examples one can partition queries into terminating and non-terminating classes. $NTI$ compares the consecutive calls in the program using binary unfoldings and proves non-termination by comparing the head and body of these binary clauses with a special [*more general*]{} relation.
Recently, in joint work with Yi-Dong Shen, we integrated loop checking into termination analysis, yielding a very accurate technique to predict the termination behavior for classes of queries described using modes [@term_prediction]. Classes of queries are represented as *moded queries*. A moded query consists of a query and a label, input or output, for each variable in the query. These moded queries are then evaluated with a *moded SLD-tree*, obtained by applying clauses to the partially instantiated query and propagating the labels. To guarantee a finite analysis, this moded SLD-tree is constructed using a complete loop check. After evaluating the moded query, the analysis predicts the termination behavior of the program for the considered queries based on the labels and substitutions in the moded SLD-tree.
Motivated by the elegance of this approach and the accuracy of the predictions, our research focused on defining a non-termination condition based on these moded queries. In [@DBLP:conf/iclp/VoetsS09], we introduced a non-termination condition identifying paths in a moded SLD-tree that can be repeated infinitely often. This approach was implemented in a system called $P2P$, which proved more accurate than $NTI$ on the benchmark of the termination competition. An evaluation of the classes of queries not handled by current approaches led to considerable improvements in our non-termination analysis. These improvements were presented in [@VDS10] and implemented in the analyzer $pTNT$. Both termination and non-termination analyzers have been rather successful in analyzing the termination behavior of definite logic programs, but only a few termination analyzers, e.g. [@DBLP:conf/lpar/SerebrenikS01a], and none of the non-termination analyzers handle non-logical features such as arithmetics or cuts, which are typically used in practical Prolog programs. In this paper, we introduce a technique for proving non-termination of logic programs containing a subset of the built-in predicates for integer arithmetic, commonly found in Prolog implementations.
Given a program, containing integer arithmetics, and a class of queries, described using modes, we infer a subset of these queries for which we prove existential non-termination (i.e. the derivation tree for these queries contains an infinite path). The inference and proof are done in two phases. In the first phase, non-termination of the logic part of the program is proven by assuming that all comparisons between integer expressions succeed. We will show that only a minor adaptation of our technique presented in [@DBLP:conf/iclp/VoetsS09] is needed to achieve this. In the second phase, given the moded query, integer arguments are identified and constraints over these arguments are formulated, such that solutions for these constraints correspond to non-terminating queries.
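To make the two phases concrete, consider the hypothetical clause `loop(X) :- X > 0, Y is X + 1, loop(Y).` (this example is ours, not taken from the paper). Phase one ignores the comparison and finds the repeatable path `loop/1 -> loop/1`; phase two then asks for which integers the guard remains true under the update. The following Python sketch performs only a bounded, sample-based check of that invariance condition, purely for illustration:

```python
def guard_invariant(guard, step, samples):
    """Bounded check: whenever the guard holds on a sample, it still holds
    after one loop iteration. True on all samples is evidence (not a proof)
    that the guard is invariant, i.e. that the recursion never exits."""
    return all(guard(step(x)) for x in samples if guard(x))

# loop(X) :- X > 0, Y is X + 1, loop(Y).    (non-terminating for X > 0)
nonterm = guard_invariant(lambda x: x > 0, lambda x: x + 1, range(-5, 1000))

# count(X) :- X < 10, Y is X + 1, count(Y).  (terminating: guard fails at X = 10)
term = guard_invariant(lambda x: x < 10, lambda x: x + 1, range(-5, 1000))
```

Here `nonterm` is `True` while `term` is `False`; the actual analysis replaces this sampling by solving the constraint problem symbolically.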
The paper is structured as follows. In the next section, we introduce some preliminaries concerning logic programs and integer arithmetics, and we present the symbolic derivation trees used to abstract the computation. In Section 3, we introduce our non-termination condition for programs containing integer arithmetics. In Section 4, we describe our prototype analyzer and some results. Finally, we conclude in Section 5.
Preliminaries
=============
Logic Programming
-----------------
We assume the reader is familiar with standard terminology of logic programs, in particular with SLD-resolution as described in [@Lloyd_foundations]. Variables are denoted by strings beginning with a capital letter. Predicates, functions and constant symbols are denoted by strings beginning with a lower case letter. We denote the set of terms constructible from a program $P$ by $Term_P$. Two atoms are called *variants* if they are equal up to variable renaming. An atom $A$ is *more general* than an atom $B$ and $B$ is an *instance* of $A$ if there exists a substitution $\theta$ such that $A\theta = B$.
We restrict our attention to definite logic programs. A logic program $P$ is a finite set of clauses of the form $H\leftarrow A_1,..., A_n$, where $H$ and each $A_i$ are atoms. A goal $G_i$ is a headless clause $\leftarrow A_1,..., A_n$. A top goal is also called the query. Without loss of generality, we assume that a query contains only one atom.
Let $P$ be a logic program and $G_0$ a goal. $G_0$ is evaluated by building a *generalized SLD-tree* as defined in [@term_prediction], in which each node is represented by $N_i:G_i$ where $N_i$ is the name of the node and $G_i$ is a goal attached to the node. Throughout the paper, we choose to use the best-known *depth-first, left-most* control strategy, as is used in Prolog, to select goals and atoms. So by the *selected atom* in each node $N_i:\leftarrow A_1,..., A_n$, we refer to the left-most atom $A_1$. For any node $N_i:G_i$, we use $A_i^1$ to refer to the selected atom in $G_i$. Let $A_i^1$ and $A_j^1$ be the selected atoms at two nodes $N_i$ and $N_j$, respectively. $A_i^1$ is an *ancestor* of $A_j^1$ if the proof of $A_i^1$ goes through the proof of $A_j^1$.
A derivation step is denoted by $N_i:G_i\Longrightarrow_{C} N_{i+1}:G_{i+1}$, meaning that applying a clause $C$ to $G_i$ produces $N_{i+1}:G_{i+1}$. Any path of such derivation steps starting at the root node $N_0:G_0$ is called a *generalized SLD-derivation*.
Integer arithmetics
-------------------
Prolog implementations contain special purpose predicates for handling integer arithmetics. Examples are $is/2, \geq/2, =:=/2,\ldots$
\[integer\_expressions\] An expression $Expr$ is an *integer expression* if it can be constructed by the following recursive definition.
- $Expr = z \in
---
abstract: 'We report on results of [*BeppoSAX*]{} Target Of Opportunity (TOO) observations of the source MXB 1730-335, also called the Rapid Burster (RB), made during its outburst of February–March 1998. We monitored the evolution of the spectral properties of the RB from the outburst decay to quiescence. During the first TOO, the X–ray light curve of the RB showed many Type II bursts and its broadband (1-100 keV) spectrum was acceptably fit with a two blackbody plus power law model. Moreover, to our knowledge, this is the first time that this source has been detected beyond 30 keV.'
author:
- 'F. Frontera$^{1,2}$, N. Masetti$^1$, M. Orlandini$^1$, L. Amati$^1$, E. Palazzi$^1$, D. Dal Fiume$^1$, S. Del Sordo$^3$, G. Cusumano$^3$, A.N. Parmar$^4$, G. Pareschi$^5$, I. Lapidus$^6$ and L. Stella$^7$'
title: |
Discovery of hard X-ray emission from\
Type II bursts of the Rapid Burster
---
Observations
============
Four Target Of Opportunity (TOO) observations were performed with [*BeppoSAX*]{} (Boella et al. 1997a) on the Rapid Burster (=MXB 1730–335; hereafter RB) during the activity state which started on January 28, 1998 (Fox et al. 1998). These TOOs spanned about one month (from February 18 to March 18) and caught the object in four different snapshots, from the post–maximum decay to the quiescent state. Figure 1, left panel, shows the ASM light curve of the [*Rossi-XTE*]{} satellite with the times of the four [*BeppoSAX*]{} observations superimposed. Here we report on RB data from three of the four instruments mounted on [*BeppoSAX*]{}: LECS (0.1-10 keV; Parmar et al. 1997), MECS (1.5-10 keV; Boella et al. 1997b) and PDS (15-300 keV; Frontera et al. 1997). For the PDS the default rocking collimator law was modified by offsetting the RB by 40$^{'}$ from the center of the field of view in order to reduce as much as possible the contamination from a nearby variable X–ray source, GX 354-0 (=4U 1728-34), located at about 30$^{'}$ from the RB. Unfortunately, due to failure of the rocking law setup program, the collimator did not move as requested during TOO2 and TOO3; so, we have only LECS and MECS data for these two observations.
In this paper we report on preliminary results of these observations. Definitive results along with their implications will be the subject of another paper (Masetti et al. 2000). In the following, for the luminosity estimates we will assume that the RB lies at a distance $d$ = 8 kpc (Ortolani et al. 1996).
Spectral analysis and temporal evolution of the RB
==================================================
During TOO1, the RB was in a strong state of bursting activity. The 2-10 keV light curve obtained with MECS (see Fig. 1, right panel, part [*a*]{}) showed 113 Type II X–ray bursts during 9457 seconds of good observational data. Evidence of Type II bursts was also observed in the 0.1-2 keV data obtained with LECS. We divided the MECS TOO1 data into two subsets: persistent emission (PE; below 5 counts s$^{-1}$) and bursting emission (BE; above 5 counts s$^{-1}$).
The MECS PE and BE spectra could be well fit with a photoelectrically absorbed two-component blackbody (2BB); these BB components may originate from the neutron star (NS) surface, a boundary layer between the NS and the inner edge of the accretion disk, or the inner region of the disk itself. The same model was used for the RB by Guerriero et al. (1999), who found values consistent with ours. In Table 1 we report the best-fit parameters along with their 90% confidence errors. The temperature values of the two BB components were slightly higher during the PE than during the BE, while their luminosities were much higher (by a factor 20 for the cooler BB and 60 for the hotter BB) during the BE than during the PE. We also remark that during the BE the hotter BB component was brighter (by more than a factor 3) than the cooler BB, while during the PE they had similar luminosities. This implies that the BE affects the higher-temperature component more than the cooler one. If the BE is due to spasmodic accretion onto the compact object, the higher-temperature component should be the one coming from the NS surface.
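As a purely numerical illustration of why the hotter component dominates the high-energy end (the temperatures and normalizations below are hypothetical, not the fitted values from Table 1), the photon spectrum of each blackbody component has the familiar shape $N(E)\propto E^2/(\exp(E/kT)-1)$:

```python
import math

def bbody_photons(E, kT):
    """Blackbody photon-spectrum shape, ~ E^2 / (exp(E/kT) - 1)."""
    return E**2 / math.expm1(E / kT)

def two_bb(E, kT_cool, n_cool, kT_hot, n_hot):
    """Sum of two blackbody components (normalizations are arbitrary here)."""
    return n_cool * bbody_photons(E, kT_cool) + n_hot * bbody_photons(E, kT_hot)

# With equal normalizations, the hotter component wins at high energy:
cool = bbody_photons(8.0, 1.0)   # kT = 1 keV component evaluated at E = 8 keV
hot  = bbody_photons(8.0, 2.0)   # kT = 2 keV component evaluated at E = 8 keV
```

At 8 keV the 2 keV component exceeds the 1 keV one by over an order of magnitude, which is why the BE/PE temperature and luminosity differences are easiest to read off at the hard end of the band.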
The source was also visible in the hard X–ray (15-100 keV) energy range. However, the statistics of the PDS light curve were much poorer and did not allow the Type II bursts to be distinguished. In order to construct the BE spectrum we used the time intervals in which the bursts were observed with MECS. Also, we could not derive the correct 15-100 keV flux and spectrum of both BE and PE, given the residual source contamination by GX 354-0. Thus, in order to overcome this problem, we used as background level for the 1-100 keV BE spectrum the total count rate level measured during the PE time intervals. The combined LECS+MECS+PDS PE-subtracted bursting spectrum, shown in Fig. 2, could no longer be fit with a 2BB model alone. By adding a power law component we obtained an acceptable fit (see Table 1). The further addition of a Fe K emission line at 6.5 keV slightly improved the fit, with parameter values for this line in general agreement with the findings by Stella et al. (1988) for Type II bursts.
During TOO2 the RB drastically reduced its bursting activity, and the bursts were concentrated at the beginning of this TOO (Fig. 1, right panel, part [*b*]{}). Also, the emission intensity level decreased. The best–fit model was a photoelectrically absorbed 2BB model (Table 1). No evidence of a Fe emission line was present.
During TOO3 the object further reduced its bursting activity, and no Type II bursts were seen throughout the observation. The best–fit model spectrum was still an absorbed 2BB (Table 1). As in the case of TOO2, no iron emission line at 6.5 keV was found.
The RB was no longer visible in MECS/LECS images during TOO4. Stray light from GX 354-0 prevented us from obtaining a deep observation of the source. The 3$\sigma$ upper limit to the RB X–ray emission was 1.5$\times$10$^{-12}$ erg cm$^{-2}$ s$^{-1}$ in the 2-10 keV energy band.
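Assuming the distance $d$ = 8 kpc adopted above, this flux upper limit translates into a luminosity upper limit through $L = 4\pi d^2 F$; a quick numerical check:

```python
import math

KPC_CM = 3.086e21          # centimeters per kiloparsec
d = 8.0 * KPC_CM           # adopted distance to the RB, in cm
flux = 1.5e-12             # 3-sigma upper limit, erg cm^-2 s^-1 (2-10 keV)

lum = 4.0 * math.pi * d**2 * flux   # isotropic luminosity upper limit, erg s^-1
# lum is about 1.1e34 erg/s
```

This corresponds to a 2-10 keV luminosity upper limit of roughly $10^{34}$ erg s$^{-1}$ for the quiescent state.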
References
==========

Boella G., Butler R.C., Perola G.C. et al., 1997a, A&AS 122, 299\
Boella G., Chiappetti L., Conti G. et al., 1997b, A&AS 122, 327\
Fox D., Guerriero R., Lewin W.H.G., 1998, ATEL n. 9\
Frontera F. Costa E., Dal Fiume D. et al., 1997, A&AS 122, 357\
Guerriero R., Fox D., Kommers J. et al., 1999, MNRAS 307, 179\
Masetti N., Frontera F., et al., 2000, A&A, submitted\
Ortolani S., Bica E., Barbuy B., 1996, A&A 306, 134\
Parmar A., Martin D.D.E., Bavdaz M. et al., 1997, A&AS 122, 309\
Stella L., Haberl F., Lewin W.H.G. et al., 1988, ApJ 324, 379
---
abstract: |
A closed form expression for multiplicity-free quantum 6-j symbols (MFS) was proposed in [@MFS] for symmetric representations of $U_q(sl_N)$, which are the simplest class of multiplicity-free representations. In this paper we rewrite this expression in terms of the q-hypergeometric series ${}_4\Phi_3$. We claim that it is possible to express any MFS through the 6-j symbol for $U_q(sl_2)$ with a certain factor. This gives us a universal tool for extending various properties of the quantum 6-j symbols for $U_q(sl_2)$ to the MFS. We demonstrate this idea by deriving the asymptotics of the MFS in terms of the associated tetrahedron for the classical algebra $U(sl_N)$.
Next we study MFS symmetries using known hypergeometric identities such as argument permutations and Sears’ transformation. We describe the symmetry groups of MFS. As a result, we obtain new symmetries, which generalize the tetrahedral symmetries and the Regge symmetries known for $N=2$.
author:
- '[**Victor Alekseev$^{a,b,c}$[^1], Andrey Morozov$^{a,b,c}$[^2], Alexey Sleptsov$^{a,b,c}$[^3]**]{}'
bibliography:
- 'bib.bib'
date:
title: |
[**Multiplicity-free $U_q(sl_N)$ 6-j symbols:\
relations, asymptotics, symmetries**]{}
---
ITEP-TH-25/19\
IITP-TH-17/19\
MIPT-TH-15/19
$^a$\
$^b$\
$^c$\
Introduction
============
Racah-Wigner coefficients or 6-j symbols play an important role in mathematics and theoretical physics, because they appear in many different problems. From a mathematical point of view, they describe the associativity data, which are still unknown for $U_q(sl_N)$. The main difficulty is the appearance of the so-called multiplicities, which happens when the algebra rank $N$ is greater than 2. However, even for multiplicity-free representations, analytical formulas for 6-j symbols are known only for a small class of representations, namely symmetric representations.
The algebra $U_q(sl_N)$ is also very important in theoretical physics, especially in quantum physics. Here is an incomplete list of topics in which 6-j symbols of the quantum Lie algebra $U_q(sl_N)$, or of its classical version $U(sl_N)$, appear:\
$\bullet$ quantum mechanics [@LL] and quantum computing [@qcomp],\
$\bullet$ quantum $\mathcal{R}$-matrices and integrable systems [@Rint],\
$\bullet$ WZW conformal field theory and 3d Chern-Simons theory [@WZW1; @WZW2],\
$\bullet$ lattice gauge theory [@lattice],\
$\bullet$ 3-d quantum gravity [@qgrav],\
$\bullet$ quantum $sl_N$ invariants of knots [@RT],\
$\bullet$ Turaev-Viro invariants of 3-manifolds and topological field theory [@TV1; @TV2],\
$\bullet$ Drinfeld associator and Kontsevich integral [@DA; @KI],\
$\bullet$ orthogonal polynomials [@ortpol; @ortpol2; @ortpol3].
One can see that 6-j symbols are widely used in both classical and modern works. Note that in many situations, e.g. in quantum gravity or in statistical models, one considers partition functions which contain a sum over all possible 6-j symbols of the given gauge group. In such problems it would be very useful to use symmetries between different 6-j symbols in order to reduce the sum and simplify the computation.
Quantum 6-j symbols have a lot of symmetries, most of which are still unknown. The situation nowadays differs between $U_q(sl_2)$ and the more general $U_q(sl_N)$ 6-j symbols. All symmetries of $U_q(sl_2)$ 6-j symbols are well known and well studied, and many interesting and surprising results have been obtained, see e.g. [@Roberts; @Boalch; @brehamet2015regge; @sleptsov_new_sym]. In the present paper we are interested in the so-called *linear* symmetries. *Non-linear* symmetries (e.g. the pentagon relation), which are more complicated, are out of the scope of this paper. Linear symmetries of $U_q(sl_2)$ Racah coefficients include the Regge symmetries, the tetrahedral symmetries and the transformation $q \leftrightarrow q^{-1}$ [@klimyk]. [*Known*]{} symmetries of $U_q(sl_N)$ include complex conjugation, $q\leftrightarrow {q^{-1}}$ and the tetrahedral symmetries [@WZW2].
Some symmetries may be obtained with the help of the eigenvalue hypothesis [@NewSymsFromEvHyp; @Mironov:2016; @cabling; @Dhara:2017ukv; @Alekseev:2019], including a generalization of the Regge symmetries. It states that the Racah matrices are uniquely defined by the eigenvalues of the $\hat{\mathcal{R}}$-matrices. All studied examples suggest that it is true, and this hypothesis has become a useful tool for deriving symmetries. Moreover, there is an exact expression for the Racah matrices through the $\hat{\mathcal{R}}$-matrix eigenvalues for matrices of size up to $5\times 5$ [@Ev_Hyp] and $6\times 6$ [@Universality].
Calculating 6-j symbols is a hard problem for $U_q(sl_N)$ representations: there are few calculation methods, and each of them is extremely tedious. Unlike the $U_q(sl_2)$ case, where the answer is known in a closed form for each representation [@KR], an analytical expression for arbitrary representations is still unknown. However, for the special case of symmetric and conjugated symmetric $U_q(sl_N)$ representations, an analytical expression was proposed recently [@MFS; @Mironov:2014]. The result raises plenty of new questions, in particular which properties of the expression are special to $U_q(sl_2)$ and which can be generalized to more complex cases. For instance, in this context it was found [@racah_pol] that 6-j symbols for symmetric representations of $U_q(sl_N)$ can be expressed in terms of orthogonal q-Racah polynomials, as can their counterparts for $U_q(sl_2)$. Also note that 6-j symbols of $U_q(sl_N)$ for non-symmetric representations were studied in [@Morozov:2019haw; @Morozov:2019jqp; @Morozov:2019kgx].
In this paper we study the analytical expression from [@MFS] in order to find new symmetries. In section \[S2\] we start by introducing Racah coefficients and 6-j symbols for $U_q(sl_N)$. In this paper we consider 6-j symbols that contain only symmetric and conjugate to symmetric representations. All these 6-j symbols may be transformed via tetrahedral symmetries into either type I or type II [@WZW2]. For type I the only conjugate to symmetric representation is the second one, for type II the third one. Each type can be considered as a natural generalization of $U_q(sl_2)$ 6-j symbols, because each tensor product decomposition in this case has no multiplicities and can be enumerated by an integer number rather than a whole Young diagram. We consider the expression for both types as an analytic function and study its special properties to obtain new symmetries. In section \[S3\] we simplify the expression. Firstly, we prove that the expression may be reduced, so that the series becomes much more similar to the $U_q(sl_2)$ series. This was done for both types independently, and, as it turns out, they can be represented by one universal expression. Then we express it in terms of the q-hypergeometric function $_4\Phi_3$ with some factor. It is also proven that this expression does not have any inequality restrictions on its arguments, as was proposed in the original article. As a result, the expression becomes more convenient for studying symmetries.
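For reference, the basic hypergeometric series $_4\Phi_3$ used throughout is the standard one (this is the textbook definition, not a result of the paper):
$${}_{4}\Phi_{3}\!\left[\begin{matrix} a_1,\; a_2,\; a_3,\; a_4 \\ b_1,\; b_2,\; b_3 \end{matrix};\, q,\, z \right]
  = \sum_{n=0}^{\infty}
    \frac{(a_1;q)_n\,(a_2;q)_n\,(a_3;q)_n\,(a_4;q)_n}
         {(q;q)_n\,(b_1;q)_n\,(b_2;q)_n\,(b_3;q)_n}\; z^n,
\qquad (a;q)_n = \prod_{k=0}^{n-1}\bigl(1 - a q^k\bigr).$$
The symmetries discussed below act on the parameters $a_i$ and $b_j$ of this series.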
In section \[S4\] we analyze the hypergeometric expression of the multiplicity-free 6-j symbol. We find the transformation between the multiplicity-free $U_q(sl_N)$ 6-j symbol and its $U_q(sl_2)$ counterpart. This result creates a lot of possibilities to generalize well-known $U_q(sl_2)$ 6-j symbol properties to the considered case. As an immediate application of this relation, in section \[S5\] we derive the classical ($q=1$) 6-j symbol asymptotics, using known results for $U(sl_2)$. Originally it was written in terms of the associated tetrahedron [@Ponzano_Regge; @Roberts]. The $U(sl_N)$ generalization modifies the expression so that the tetrahedron now depends on $N$ and deforms differently for the two types of 6-j symbols.
In section \[S6\] the resulting 6-j symbol expression has been studied for symmetries. Obtained $_4\Phi_3$ series has two known symmetries: permutations of arguments in each row and the Sears’ transformation [@gaspar
---
abstract: |
Recognizing how objects interact with each other is a crucial task in visual recognition. If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a context-independent interaction classifier. To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context, and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction.
The proposed method still builds one classifier for one interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalizations in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction, even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs since our model parameters do not increase with the number of interactions. Thus our method avoids the limitation of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets.
author:
- |
[ Bohan Zhuang, Lingqiao Liu, Chunhua Shen, Ian Reid]{}\
School of Computer Science, University of Adelaide, Australia
bibliography:
- 'CSRef.bib'
title: 'Towards Context-aware Interaction Recognition[^1] '
---
Introduction
============
Object interaction recognition is a fundamental problem in computer vision and it can serve as a critical component for solving many visual recognition problems such as action recognition [@mallya2016learning; @ramanathan2015learning; @wang2015action; @bilen2016dynamic; @Zhang_2016_CVPR], visual phrase recognition [@hu2016modeling; @rohrbach2016grounding; @li2017vip], sentence to image retrieval [@ma2015multimodal; @karpathy2015deep] and visual question answering [@wu2016ask; @lu2016knowing; @wu2016value]. Unlike object recognition in which the object appearance and its class label have a clear association, the interaction patterns, e.g. “eating”, “playing”, “stand on”, usually have a vague connection to visual appearance. This phenomenon is largely caused by the same interaction being involved with different objects as its context, i.e. the subject and object of an interaction type. For example, “cow eating grass” and “people eating bread” can be visually dissimilar although both of them have the same interaction type “eating”. Thus the subject and object associated with the interaction – also known as the *context* of the interaction – could play an important role in interaction recognition.
In existing literature, there are two ways to model the interaction and its context. The first one treats the combination of interaction and its context as a single class. For example, in this approach, two classifiers will be built to classify “cow eating grass” and “people eating bread”. To recognize the interaction “eating”, images that are classified as either “cow eating grass” or “people eating bread” will be considered as having the interaction “eating”. This treatment has been widely used in defining action (interaction) classes in many action (interaction) recognition benchmarks [@mallya2016learning; @ramanathan2015learning; @wang2015action; @bilen2016dynamic; @Zhang_2016_CVPR]. This approach, however, suffers from poor scalability and generalization ability. The number of possible combinations of the interaction and its context can be huge, and thus it is very inefficient to collect training images for each combination. Also, this method fails to generalize to an unseen combination even if both its interaction type and context are seen in the training set.
To handle these drawbacks, another way is to model the interaction and the context separately [@lu2016visual; @desai2011discriminative; @gupta2008beyond; @sadeghi2015viske]. In this case, the interaction is classified independently of its context, which can lead to poor recognition performance due to the difficulty of associating the interaction with certain visual appearance in the absence of context information. To overcome the imperfection of interaction classification, some recent works employ techniques such as language priors [@lu2016visual] or structural learning [@li2017vip; @Liang2017VRD] to avoid generating an unreasonable combination of interaction and context. However, the context-independent interaction classifier is still used as a building block, and this prevents the system from gaining more accurate recognition from visual cues.
The solution proposed in this paper aims to overcome the drawbacks of both methods. To avoid the explosion of the number of classes, we still separate the classification of the interaction and the context into two stages. However, different to the second method, the interaction classifier in our method is designed to be adaptive to its context. In other words, for the same interaction, different contexts will result in different classifiers and our method will encourage interactions with similar contexts to have similar classifiers. By doing so, we can achieve context-aware interaction classification while avoiding treating each combination of context and interaction as a single class. Based on this framework, we investigate various feature representations to characterize the interaction pattern. We show that our framework can lead to performance improvements for all the investigated feature representations. Moreover, we augment the proposed framework with an attention mechanism, which leads to further improvements and yields our best performing recognition model. Through extensive experiments, we demonstrate that the proposed methods achieve superior performance over competing methods.
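The core idea of a context-adaptive classifier can be sketched in a few lines (a toy illustration with random weights and made-up dimensions, not the authors' trained model): the context encoding, e.g. the word2vec vectors of subject and object, is mapped to the weights of the interaction classifier, so that semantically similar contexts produce similar classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
d_ctx, d_feat = 8, 16        # hypothetical embedding and visual-feature sizes

# W maps a context encoding to interaction-classifier weights;
# its size does not grow with the number of context combinations.
W = rng.standard_normal((d_feat, 2 * d_ctx))

def context_adaptive_score(phi_x, e_subj, e_obj):
    """Score one interaction using weights generated from its context."""
    g = np.concatenate([e_subj, e_obj])   # context encoding (e.g. word2vec)
    w = W @ g                             # context-dependent classifier weights
    return float(w @ phi_x)

phi_x = rng.standard_normal(d_feat)       # visual feature of the interaction
e_cow, e_grass = rng.standard_normal(d_ctx), rng.standard_normal(d_ctx)

s1 = context_adaptive_score(phi_x, e_cow, e_grass)
s2 = context_adaptive_score(phi_x, e_cow, e_grass + 1e-8)  # near-identical context
```

Because the weights depend continuously on the context embedding, nearby contexts (here, a tiny perturbation of the object vector) yield nearly identical scores, which is what enables zero-shot generalization to unseen but semantically similar subject-object pairs.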
Related work
============
*Action recognition:* Action is one of the most important interaction patterns and action recognition in images/videos has been widely studied [@mallya2016learning; @ramanathan2015learning; @wang2015action; @bilen2016dynamic; @Zhang_2016_CVPR]. Various action recognition datasets such as Stanford 40 actions [@yao2011human], UCF-101 [@soomro2012ucf101] and HICO [@chao2015hico] have been proposed, but most of them focus on actions (interactions) with a limited number of contexts. For example, in the relatively large HICO [@chao2015hico] dataset, there are only 600 categories of human-object interactions. Thus the interplay of the interaction and its context has not been explored in works in this direction.
*Visual relationships:* Some recent works focus on the detection of visual relationships. A visual relationship is composed of an interaction and its context, i.e. subject and object. Thus this direction is most relevant to this paper. In fact, interaction recognition can be viewed as the most challenging part of visual relationship detection. Some recent works in visual relationship detection have made progress in improving the detection performance and the detection scalability. The work in [@lu2016visual] leveraged language priors to produce relationship detections that make sense to human beings. The latest approaches [@Liang2017VRD; @li2017vip; @zhang2017visual] attempt to learn the visual relationship detector in an end-to-end manner and explicitly reason about the interdependency among relationship components at the visual feature level.
*Language-guided visual recognition:* Our method uses language information to guide the visual recognition. This corresponds to the recent trend of utilizing language information to benefit visual recognition. For example, language information has also been incorporated in phrase grounding [@plummer2016phrase; @hu2016modeling; @rohrbach2016grounding] tasks. In [@hu2016modeling; @rohrbach2016grounding], an attention model is employed to extract linguistic cues from phrases. Language-guided attention has also been widely used in visual question answering [@donahue2015long; @karpathy2015deep; @malinowski2015ask; @ren2015image] and has recently been applied to one-shot learning [@vinyals2016matching].
Methods
=======
Context-aware interaction classification framework
--------------------------------------------------
In general, an interaction and its context can be expressed as a triplet $\left\langle \emph{O1-P-O2} \right\rangle$, where $P$ denotes the interaction, and $O1$ and $O2$ denote its subject and object respectively. In our study, we assume the interaction context (*O1,O2*) has been detected by a detector (i.e. we are given bounding boxes and labels for both subject $O1$ and object $O2$) and the task we are addressing is to classify their interaction type $P$. To recognize the interaction, existing works take two extremes in designing the classifier. One is to directly build a classifier for each $P$ and assume that the same classifier applies to $P$ with different context. Another takes the combination of $\left\langle \emph{O1-P-O2} \right\rangle$ as a single class and builds a classifier for each combination. As discussed in the introduction section, the former does not fully leverage the contextual information for interaction recognition while the latter suffers from the scalability and generalization issues. Our proposed method lies between those two extremes. Specifically, we still allocate one classifier for each interaction type, however we make the classifier parameters adaptive to the context of the interaction. In other words, the classifier is a function of the
---
abstract: |
We review the current status of the study of the rotation curve (RC) of the Milky Way, and present a unified RC from the Galactic Center to a galacto-centric distance of about 100 kpc. The RC is used to directly calculate the distribution of the surface mass density (SMD). We then propose a method to derive the distribution of dark matter (DM) density in the Milky Way using the SMD distribution. The best-fit dark halo profile yielded a local DM density of . We also review the estimations of the local DM density in the last decade, and show that the value is converging to $\rho_\odot=0.39\pm 0.09$ .\
[**Key words**]{} galaxies: DM—galaxies: individual (Milky Way)—galaxies: rotation curve\
([*Invited review accepted for Galaxies to appear in special issue on “Debate on the Physics of Galactic Rotation and the Existence of Dark Matter”*]{})
author:
- |
Yoshiaki Sofue\
Institute of Astronomy, The University of Tokyo, Mitaka, Tokyo 181-0015, Japan\
E-mail: sofue@ioa.s.u-tokyo.ac.jp
title: Rotation Curve of the Milky Way and the Dark Matter Density
---
0[ V\_0 ]{}
Introduction
============
The rotation curve (RC) of the Milky Way has been obtained from observations of galactic objects in the non-MOND (MOdified Newtonian Dynamics) framework. The existence of the dark halo (DH) has been confirmed by the analysis of the observed RCs, assuming that Newtonian dynamics applies uniformly to the observations. In this article, current RC observations are briefly reviewed, and a new estimation of the local dark matter (DM) density is presented in the framework of Newtonian dynamics.
An RC is defined as the mean circular velocity $\Vrot$ around the nucleus plotted as a function of the galacto-centric radius $R$. Non-circular streaming motion due to the triaxial mass distribution in a bar is crucial for the kinematics of the innermost region, though it does not much affect the mass determination in the disk and halo. Spiral arms are another source of local streaming; they affect the disk mass determination by several percent, but have little influence on the mass determination of the dark halo.
There are several reviews on RCs and mass determination of galaxies \[[@SofueRubin2001; @Sofue2017; @Salucci2019]\]. In this review, we revisit recent RC studies and determination of the local DM density in our Milky Way. In Section \[Section2\], we briefly review the current status of the RC determinations along with the methods. We adopt the galactic constants: $(\rzero,\vzero)$=(8.0 kpc, 238 ) \[[@Honma+2012; @Honma+2015]\], where $\rzero$ is the distance of the Sun from the galactic center (GC) and $\vzero$ is the circular velocity of the local standard of rest (LSR) at the Sun \[[@Fich+1991]\].
Rotation Curve of the Milky Way {#Section2}
===============================
Progress in the Last Decades
----------------------------
The galactic RC depends on the galactic constants. Accordingly, the uncertainty and error in the RC include the uncertainties of these constants. Currently recommended, determined, or measured values are summarized in Table \[tabGal\], where $R_0$ appears to be converging to around $8.0$–$8.3$ kpc. In this paper, we adopt $R_0= 8.0$ kpc and $V_0 =238$ from the recent measurements with VERA (VLBI Experiments for Radio Astrometry) \[[@Honma+2012; @Honma+2015]\].
**Authors (Year)** **(kpc)** **()**
---------------------------------------------------------------- ----------------- -------------
IAU recommended (1982) 8.2 220
Review before 1993 (Reid 1993) \[[@Reid1993]\] $8.0 \pm 0.5$
Olling and Dehnen 2003 \[[@Olling+2003]\] $7.1\pm 0.4$ $184\pm 8$
VLBI Sgr A$^*$ (Ghez et al. 2008) \[[@Ghez+2008]\] $8.4 \pm 0.4$
ibid (Gillessen et al. 2009) \[[@Gillessen+2009]\] $8.33 \pm 0.35$
Maser astrometry (Reid et al. 2009) \[[@Reid+2009]\] $8.4\pm 0.6$ $254\pm 16$
Cepheids (Matsunaga et al. 2009) \[[@Matsunaga+2009]\] $8.24 \pm 0.42$
VERA (Honma et al. 2012, 2015) \[[@Honma+2012; @Honma+2015]\]. $8.05\pm 0.45$ $238\pm 14$
Adopted in this paper 8.0 238
: Galactic constants ($R_0,V_0$). []{data-label="tabGal"}
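As a quick consistency check on the adopted constants, the LSR angular velocity and the corresponding solar orbital period follow directly from $R_0$ and $V_0$. This is a back-of-the-envelope sketch (not from the paper); the unit conversion factors are standard values.

```python
import math

R0 = 8.0     # kpc, adopted in this paper
V0 = 238.0   # km/s, adopted in this paper

KPC_KM = 3.0857e16   # kilometres per kiloparsec
MYR_S = 3.1557e13    # seconds per megayear

omega0 = V0 / R0                                      # LSR angular velocity, km/s/kpc
period_myr = 2 * math.pi * R0 * KPC_KM / V0 / MYR_S   # solar orbital period, ~207 Myr
```

With the adopted constants the Sun completes one galactic orbit in roughly 200 Myr, a useful sanity check when comparing RC determinations based on different $(R_0, V_0)$.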
The RC of the galaxy has been obtained by various methods as described in the next subsection, and many authors presented their results based on different galactic constants (Table \[tabrcmw\]).
**Authors (Year)** **Radii (kpc)** **Method**
----------------------------------------------------------------------------------------------------------- ----------------- -------------------------
Burton and Gordon (1978)\[[@Burton+1978]\] 0–8 HI tangent
Blitz et al. (1979) \[[@Blitz+1979]\] 8–18 OB-CO assoc.
  Clemens (1985)\[[@Clemens1985]\]                                                                            0–18              CO/compil.
Dehnen and Binney (1998)\[[@Dehnen+1998]\] 8–20 compil. + model
Genzel et al. (1994–), Ghez et al. (1998–)\[[@Genzel+2010; @Ghez+2008]\] 0–0.0001 GC IR spectr.
Battinelli, et al. (2013)\[[@Battinelli+2013]\] 9–24 C stars
Bhattacharjee et al.(2014)\[[@Bhattacharjee+2014]\] 0–200 Non-disk objects
Lopez-Corredoira (2014)\[[@Lopez2014]\] 5–16 Red-clump giants $\mu$
  Bovy et al. (2012)\[[@Bovy+2012b]\]                                                                         4-14              NIR spectroscopy
Bobylev (2013); — & Bajkova (2015)\[[@Bobylev2013; @Bobylev+2015]\] 5–12 Masers/OB stars
Reid et al. (2014)\[[@Reid+2014]\] 4-16 Masers SF regions, VLBI
Honma et al. (2012, 2015)\[[@Honma+2012; @Honma+2015]\] 3–20 Masers,VLBI
Iocco et al. (2015, 2016); Pato & Iocco (2017a,b)\[[@Iocco+2015; @Iocco+2016; @Pato+2017a; @Pato+2017b]\] 1–25 kpc CO/HI/opt/maser/compil.
Huang et al. (2016)\[[@Huang+2016]\] 4.5–100 HI/opt/red giants
Kre[ł]{}owski et al (2018)\[[@Krelowski+2018]\] 8–12 GAIA
Lin and Li (2019)\[[@Lin+2019]\] 4–100 compil.
Eilers et al (2019)\[[@Eilers+2019]\] 5–25 Wise, 2Mass, GAIA
Mróz et al. (2019)\[[@Mroz+2019]\] 4–20 Classical cepheids
Sofue et al. (2009); Sofue (2013, 2015, this work)\[[@Sofue+2009; @Sofue2013; @Sofue2015]\] 0.01–1000 CO/HI/maser/opt/compil.
: Rotation curves (RCs) of the Milky Way galaxy.[]{data-label="tabrcmw"}
In the 1970–1980s, the inner RC was extensively measured using the terminal-velocities of HI (neutral hydrogen) and CO (carbon monoxide) gases \[[@Burton+1978; @Clemens1985; @Fich+1989]\]. In the
---
abstract: 'Polarization-resolved magneto-luminescence, together with simultaneous magneto-transport measurements, has been performed on a two-dimensional electron gas (2DEG) confined in a CdTe quantum well in order to determine the spin splitting of fully occupied electronic Landau levels, as a function of the magnetic field (arbitrary Landau level filling factors) and temperature. The spin splitting, extracted from the energy separation of the $ \sigma^+$ and $\sigma^-$ transitions, is composed of the ordinary Zeeman term and a many-body contribution which is shown to be driven by the spin polarization of the 2DEG. It is argued that both these contributions result in a simple, rigid shift of Landau level ladders with opposite spins.'
author:
- 'J.'
- 'K.'
- 'F. J.'
- 'P.'
- 'B. A.'
- 'D. K.'
- 'M.'
- 'V.'
- 'G.'
- 'T.'
title: 'Enhancement of the spin-gap in fully occupied two-dimensional Landau levels'
---
A number of experiments on two-dimensional electron gases (2DEGs) [@Nicholas88; @Usher90; @Leadley98; @Maude98] clearly show that the thermal activation of carriers across the Fermi energy, located between the spin split Landau levels at odd integer filling factors ($\nu$), is governed by a gap which can significantly surpass the single particle Zeeman energy included in band structure models. This phenomenon, referred to in the literature as $g$ factor or spin gap enhancement, [@Fang68; @Janak69; @Ando74] is thought to be driven by the spin polarization of a 2DEG and is a primary manifestation of the interactions between two-dimensional electrons in the integer quantum Hall effect (QHE) regime. It is a result of the specific character of the spin-excitation spectra of a 2DEG at odd integer $\nu$-QHE states [@Bychkov81; @Kallin84]. It can be seen as arising from the contribution of Coulomb interactions (including exchange terms) to the energy which is required to remove, or inject, an electron from, or to, a given spin resolved Landau level (LL).
To date, the study of the spin-gap enhancement has generally been limited to experiments [@Nicholas88; @Usher90; @Maude98; @Dolgopolov97; @Wang92; @Wiegers97] which probe the spin splitting at the Fermi level, for QHE states at exactly odd filling factors. This limitation has been thought to be overcome with spectroscopic methods such as, for example, interband optics [@Kukushkin93; @Kukushkin96; @Potemski98] or tunneling experiments [@Dial07], which, within their simplest description, allow one to investigate the processes of removing/adding an electron from/to a 2DEG at arbitrary energy, filling factor, and temperature. Among the different spectroscopic methods, magneto-luminescence measurements have been widely invoked to investigate electron-electron correlations in the QHE regime; however, measurements probing the spin-gap enhancement are rather scarce [@Kukushkin93; @Dial07].
Here, we report on magneto-photoluminescence studies of a 2DEG confined in a high quality CdTe quantum well, and, show that the enhancement of the spin splitting is not only a property of spin excitations at the Fermi level, but that it is also relevant for fully occupied spin Landau levels, located well below the Fermi energy. We have measured the many body contribution to the spin gap for fully populated spin Landau levels over a wide range of filling factors and temperatures, and show that it is driven by Coulomb interaction, apparent via the spin polarization of the investigated 2DEG with its relatively large bare Zeeman splitting.
The increasingly high quality of GaAs/GaAlAs structures has been driving advances in the physics of interacting 2D electrons. Notably, 2D electrons in a GaAs matrix are characterized by a relatively small bare $g$ factor (-0.44) and therefore by a small value of the interaction parameter $\eta=E_z/\mathcal{D}$, where $E_z=g\mu_BB$, $\mathcal{D}=e^2/ \epsilon l_B$, and $l_B=\sqrt{\hbar/eB}$ is the magnetic length. The small value of $\eta$ is responsible for the rich physics exhibited by interacting 2D electrons in the QHE regime, for example the occurrence of competing spin-polarized/unpolarized many-body ground states [@Clark89] or Skyrmion-type spin texture excitations [@Sondhi93; @Schmeller95; @Maude96]. However, this complex physics often masks the appearance of simpler and more basic many-body effects, which should emerge more clearly when $\eta$ is sufficiently large. Disorder is an additional source of complications in ascertaining the spin polarization in systems with small $g$ factors. While high electron mobilities are obviously advantageous, GaAs-based structures are also rather fragile, displaying, for example, metastable effects upon illumination, with an associated decrease in mobility and homogeneity, which frequently prevents the simultaneous basic characterization of such structures using magneto-optics and magneto-transport. A 2DEG in a CdTe matrix [@Karczewski98], used in our experiments, is characterized by a relatively large (bare) $g$ factor (-1.6), and the $\eta$ parameter in this system exceeds its value in GaAs structures by a factor of $\approx 3$ (the dielectric screening, $\epsilon=10$, is slightly less efficient in CdTe). CdTe, which has a conduction band as simple as that of GaAs, appears to be an almost ideal model system for studying the QHE physics of the primary spin-polarized states. The significant progress in the crystal growth of CdTe quantum wells now makes it possible to attain a 2DEG with reasonably high mobility. As shown in Fig. 
\[Fig1\], the sample studied here shows a well pronounced fractional QHE and permits a trouble-free, simultaneous measurement of high quality magneto-photoluminescence and magneto-transport.
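The quoted factor of $\approx 3$ between the $\eta$ parameters of CdTe and GaAs can be checked directly: since $\eta = E_z/\mathcal{D}$ with $E_z = |g|\mu_B B$ and $\mathcal{D} = e^2/\epsilon l_B$, at a fixed field $\eta \propto |g|\,\epsilon$. A sketch of the arithmetic, where the GaAs dielectric constant $\epsilon \approx 12.9$ is an assumed textbook value not stated in the text:

```python
# eta = E_z / D, with E_z = |g| mu_B B and D = e^2 / (eps l_B);
# at a fixed field B, eta is therefore proportional to |g| * eps.
g_GaAs, eps_GaAs = 0.44, 12.9   # |g| from the text; eps is an assumed textbook value
g_CdTe, eps_CdTe = 1.6, 10.0    # |g| and eps as quoted in the text

ratio = (g_CdTe * eps_CdTe) / (g_GaAs * eps_GaAs)   # ≈ 2.8, i.e. the factor ~3
```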
The active part of the investigated structure consists of a $20$ nm-wide CdTe quantum well (QW), modulation doped on one side with iodine, and embedded between Cd$_{0.74}$Mg$_{0.26}$Te barriers. The sample, in the form of a 1.5$\times$6 mm rectangle, was equipped with electrical contacts in a Hall bar configuration to permit simultaneous optical and electrical measurements. Experiments have been carried out using either a $^3$He/$^4$He dilution refrigerator or a variable temperature $^4$He cryostat, in magnetic fields supplied by resistive (28 T) or superconducting (11 T) magnets. A standard, low frequency ($\approx 10$ Hz) lock-in technique has been applied for the resistance measurements. Polarization resolved, $\sigma^{+}$ and $\sigma^{-}$ photoluminescence (PL) spectra have been measured using a single 600 $\mu$m-diameter optical fiber to transmit the excitation beam (514 nm-line of an Ar$^+$ laser) and to collect the photoluminescence signal for the spectrometer (spectral resolution $\approx 100~\mu$eV) equipped with a CCD camera. An appropriate linear polarizer and $\lambda$/4-plate were placed directly between the end of the fiber and the sample. The $\sigma^{+}$ and $\sigma^{-}$ PL components were measured by reversing the polarity of the magnetic field. Special attention has been paid to ensure a low level of laser excitation ($\approx50$ $\mu$W/cm$^2$), to precisely calibrate the magnetic field, and to measure the spectra at small intervals (down to 5 mT) of the magnetic field. Under our experimental conditions (continuous laser illumination), the 2DEG density of $\approx4.5\times 10^{11}$ cm$^{-2}$ and mobility of $\mu=2.6 \times 10^{5}$ cm$^2$/Vs were well reproduced in different experimental runs.
The representative results of simultaneous magneto-PL and magneto-resistivity measurements of our sample are shown in Fig. \[Fig1\]. As can be seen in Fig. \[Fig1\](b), the investigated 2DEG shows all the typical attributes of the QHE in a system with fairly high mobility and relatively high electron concentration: well developed integer QHE states and the appearance of the $5/3$, $4/3$ and $2/3$ fractional states (which will be discussed elsewhere). From the fields at which the Shubnikov-de Haas (SdH) oscillations ($B_{1}\approx94$ mT) and the spin splitting ($B_{2}\approx0.51$ T) appear, we obtain a first estimate of the enhanced $g$ factor, $g^*\approx3.7$, using the condition $\hbar eB_1/m^{*} \approx g^{*}\mu_{B}B_2$, where the electron effective mass $m^{*}=0.1m_{e}$ was derived from cyclotron resonance absorption measured on a parent sample. A Dingle analysis of the SdH oscillations gives a quantum lifetime $\tau_q=\hbar/2\Gamma=(3.0\pm0.3)$ ps (broadening of Lorentzian Landau levels $\Gamma\approx110~\mu$eV) as compared to the transport lifetime $\tau_{\tau}\approx15$ ps (derived from the measured mobility).
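Both numerical estimates quoted above follow from the stated inputs. In the $g$-factor condition $\hbar e B_1/m^{*} \approx g^{*}\mu_B B_2$ with $m^{*}=0.1\,m_e$ and $\mu_B = e\hbar/2m_e$, the constants cancel, leaving $g^{*} = 2B_1/(0.1\,B_2)$; the quantum lifetime is $\tau_q = \hbar/2\Gamma$. A sketch of the arithmetic:

```python
hbar_eVs = 6.582119569e-16   # reduced Planck constant, eV s

# Enhanced g factor from hbar e B1 / m* = g* mu_B B2 with m* = 0.1 m_e.
# Since mu_B = e hbar / (2 m_e), all constants cancel: g* = 2 B1 / (0.1 B2).
B1 = 0.094   # T, onset of SdH oscillations
B2 = 0.51    # T, onset of spin splitting
g_star = 2.0 * B1 / (0.1 * B2)       # ≈ 3.7, as quoted

# Quantum lifetime from the Landau-level broadening Gamma ≈ 110 μeV.
Gamma_eV = 110e-6
tau_q = hbar_eVs / (2.0 * Gamma_eV)  # ≈ 3.0e-12 s = 3.0 ps, as quoted
```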
---
abstract: 'Let $\left(a_{n}\right)_{n}$ be a strictly increasing sequence of positive integers, denote by $A_{N}=\left\{ a_{n}:\,n\leq N\right\} $ its truncations, and let $\alpha\in\left[0,1\right]$. We prove that if the additive energy $E\left(A_{N}\right)$ of $A_{N}$ is in $\Omega\left(N^{3}\right)$, then the sequence $\left(\left\langle \alpha a_{n}\right\rangle \right)_{n}$ of fractional parts of $\alpha a_{n}$ does not have Poissonian pair correlations (PPC) for almost every $\alpha$ in the sense of Lebesgue measure. Conversely, it is known that $E\left(A_{N}\right)=\mathcal{O}\left(N^{3-\varepsilon}\right)$, for some fixed $\varepsilon>0$, implies that $\left(\left\langle \alpha a_{n}\right\rangle \right)_{n}$ has PPC for almost every $\alpha$. This note makes a contribution to investigating the energy threshold for $E\left(A_{N}\right)$ to imply this metric distribution property. We establish, in particular, that there exist sequences $\left(a_{n}\right)_{n}$ with $$E\left(A_{N}\right)=\Theta\left(\frac{N^{3}}{\log\left(N\right)\log\left(\log N\right)}\right)$$ such that the set of $\alpha$ for which $\left(\alpha a_{n}\right)_{n}$ does not have PPC is of full Lebesgue measure. Moreover, we show that for any fixed $\varepsilon>0$ there are sequences $\left(a_{n}\right)_{n}$ with $E\left(A_{N}\right)=\Theta\left(\frac{N^{3}}{\log\left(N\right)\left(\log\log N\right)^{1+\varepsilon}}\right)$ satisfying that the set of $\alpha$ for which the sequence $\left(\bigl\langle\alpha a_{n}\bigr\rangle\right)_{n}$ does not have PPC is of full Hausdorff dimension.'
author:
- 'Thomas Lachmann[^1], and Niclas Technau[^2]'
title: On Exceptional Sets in the Metric Poissonian Pair Correlations problem
---
Introduction
============
The theory of uniform distribution modulo $1$ dates back, at least, to the seminal paper of Weyl [@Weyl:; @=0000DCber; @die; @Gleichverteilung; @von; @Zahlen; @mod.; @Eins]. Weyl showed, inter alia, that for any fixed irrational $\alpha\in\mathbb{R}$ and integer $d\geq1$ the sequences $\left(\bigl\langle\alpha n^{d}\bigr\rangle\right)_{n}$ are uniformly distributed modulo $1$. However, in recent years various authors have been investigating a more subtle distribution property of such sequences - namely, whether the asymptotic distribution of the pair correlations has a property which is called Poissonian, and defined as follows:
Let $\left\Vert \cdot\right\Vert $ denote the distance to the nearest integer. A sequence $\left(\theta_{n}\right)_{n}$ in $\left[0,1\right]$ is said to have (asymptotically) Poissonian pair correlations, if for each $s\geq0$ the pair correlation function[^3] $$R_{2}\left(\left[-s,s\right],\left(\theta_{n}\right)_{n},N\right)\coloneqq\frac{1}{N}\#\left\{ 1\leq i\neq j\leq N:\,\left\Vert \theta_{i}-\theta_{j}\right\Vert \leq\frac{s}{N}\right\} \label{eq: definition of the Pair Correlation Counting function}$$ tends to $2s$ as $N\rightarrow\infty$. Moreover, let $\left(a_{n}\right)_{n}$ denote a strictly increasing sequence of positive integers. If no confusion can arise, we write $$R\left(\left[-s,s\right],\alpha,N\right)\coloneqq R_{2}\left(\left[-s,s\right],\left(\alpha a_{n}\right)_{n},N\right)$$ and say that a sequence $\left(a_{n}\right)_{n}$ has metric Poissonian pair correlations if $\left(\alpha a_{n}\right)_{n}$ has Poissonian pair correlations for almost all $\alpha\in\left[0,1\right]$ in the sense of Lebesgue measure.
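For a sequence of i.i.d. uniform points on $[0,1]$, the pair correlation function defined above tends to $2s$. A brute-force sketch illustrating the definition (not taken from the paper):

```python
import random

def pair_correlation(theta, s):
    """R_2([-s,s], (theta_n), N): normalized count of ordered pairs i != j
    whose distance to the nearest integer is at most s/N."""
    N = len(theta)
    count = 0
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = abs(theta[i] - theta[j])
            d = min(d, 1.0 - d)          # toroidal (nearest-integer) distance
            if d <= s / N:
                count += 1
    return count / N

random.seed(1)
theta = [random.random() for _ in range(1000)]
r2 = pair_correlation(theta, 1.0)        # close to 2s = 2 for Poissonian behavior
```

By contrast, evaluating the same function on $\theta_n = \langle \alpha n \rangle$ would expose the rigid gap structure of the linear sequence, which is why $(n)_n$ fails to have PPC for any $\alpha$.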
It is known that if a sequence $\left(\theta_{n}\right)_{n}$ has Poissonian pair correlations, then it is uniformly distributed modulo $1$, cf. [@Aistleitner; @Lachmann; @Pausinger:; @Pair; @correlations; @and; @equidistribution; @Larcher; @Grepstad:; @On; @pair; @correlation; @and; @discrepancy]. Yet, the sequences $\left(\left\langle \alpha n^{d}\right\rangle \right)_{n}$ do *not* have Poissonian pair correlations for *any* $\alpha\in\mathbb{R}$ if $d=1$. For $d\geq2$, Rudnick and Sarnak [@Rudnick; @Sarnak:; @The; @pair; @correlation; @function; @of; @fractional; @parts; @of; @polynomials] proved that $\left(n^{d}\right)_{n}$ has metric Poissonian pair correlations (metric PPC). For alternative proofs, we refer the reader to Heath-Brown and the work of Marklof and Strömbergsson [@Marklof; @Str=0000F6mbergsson:; @Equidistribution; @of; @Kronecker; @sequences; @along; @closed; @horocycles].[^4] Given these results, it is natural to investigate which properties of a sequence of integers $\left(a_{n}\right)_{n}$ implies the metric PPC of $\left(a_{n}\right)_{n}$. Partial answers are known, e.g. it follows from work of Boca and Zaharescu [@Boca; @Zaharescu:; @Pair; @correlation; @of; @values; @of; @rational; @functions; @(mod; @p)] that $\left(P\left(n\right)\right)_{n}$ has metric PPC if $P$ is any polynomial with integer coefficients of degree at least two. An interesting general result in this direction is due to Aistleitner, Larcher, and Lewko [@Aistleitner; @Larcher; @Lewko:; @Additive; @Energy; @and; @the; @Hausdorff; @Dimension; @of; @the; @Exceptional; @Set; @in; @Metric; @Pair; @Correlation; @Problems] who used a Fourier analytic approach combined with a bound on GCD sums of Bondarenko and Seip [@Bondarenko; @Seip:; @GCD; @sums; @and; @complete; @sets; @of; @square-free; @numbers] to relate the metric PPC of $\left(a_{n}\right)_{n}$ with its combinatoric properties. 
For stating it, let $\left(a_{n}\right)_{n}$ denote henceforth a strictly increasing sequence of positive integers and denote the set of the first $N$ elements of $\left(a_{n}\right)_{n}$ by $A_{N}$. Moreover, define the additive energy $E\left(I\right)$ of a finite set of integers $I$ via $$E\left(I\right)\coloneqq\sum_{\underset{a+b=c+d}{a,b,c,d\in I}}1.$$ In the following, let $\mathcal{O}$ and $o$ denote the standard Landau symbols/O-notation.\
\
A main finding of [@Aistleitner; @Larcher; @Lewko:; @Additive; @Energy; @and; @the; @Hausdorff; @Dimension; @of; @the; @Exceptional; @Set; @in; @Metric; @Pair; @Correlation; @Problems] is the implication that if the truncations $A_{N}$ satisfy $$E\left(A_{N}\right)=\mathcal{O}\left(N^{3-\varepsilon}\right)\label{eq: Aistleitner bound}$$ for some fixed $\varepsilon>0$, then $\left(a_{n}\right)_{n}$ has metric PPC. Note that $\left(\#I\right)^{2}\leq E\left(I\right)\leq\left(\#I\right)^{3}$ where $\#I$ denotes the cardinality of $I\subset\mathbb{Z}$. Roughly speaking, a set $I$ has large additive energy if and only if it contains a “large” arithmetic progression like structure. Indeed, if $\left(a_{n}\right)_{n}$ is a geometric progression or of the form $\left(n^{d}\right)_{n}$ for $d\geq2,$ then (\[eq: Aistleitner bound\]) is satisfied. Furthermore,
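The additive energy and the bounds $(\#I)^{2}\leq E(I)\leq(\#I)^{3}$ can be checked by brute force. For the progression $\{1,\dots,N\}$ one has $E=(2N^{3}+N)/3$, of maximal order $N^{3}$, while for a geometric progression only the trivial quadruples $\{a,b\}=\{c,d\}$ contribute, giving $E=2N^{2}-N$. A small verification sketch:

```python
def additive_energy(I):
    """E(I): number of quadruples (a, b, c, d) in I^4 with a + b = c + d,
    computed via the sumset multiplicities r(s): E = sum_s r(s)^2."""
    r = {}
    for a in I:
        for b in I:
            r[a + b] = r.get(a + b, 0) + 1
    return sum(v * v for v in r.values())

N = 8
AP = list(range(1, N + 1))          # arithmetic progression: maximal-order energy
GP = [2 ** k for k in range(N)]     # geometric progression: minimal-order energy

E_ap = additive_energy(AP)          # (2*N**3 + N) // 3 = 344 for N = 8
E_gp = additive_energy(GP)          # 2*N**2 - N = 120 for N = 8
```

This contrast is exactly the dichotomy in the text: $(n^{d})_{n}$ and geometric progressions satisfy the bound (\[eq: Aistleitner bound\]), whereas sets with near-maximal energy contain large arithmetic-progression-like structure.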
---
author:
- Maria Silvia Garelli
- Feodor V Kusmartsev
date: 'Received: date / Revised version: date'
title: 'Buckyball Quantum Computer: Realization of a Quantum Gate'
---
Introduction
============
In recent years there has been strong progress in modeling physical realizations of a quantum computer. Many quantum physical systems have been investigated for the realization of quantum gates. The most remarkable studies concern ion traps in quantum optics, quantum electrodynamics in optical cavities, and nuclear magnetic resonance. All these experiments aim at realizing a quantum gate. The first type of experiment is based on trapping ions in electromagnetic traps, where the ions, which encode the qubit in the charge degrees of freedom, are subjected to the mutual electrostatic interaction and to a state-selective displacement generated by an external state-dependent force [@Cirac; @Steane; @Sasura; @Calarco]. Cavity quantum electrodynamics (QED) techniques are based on the coherent interaction of a qubit, generally represented by an atom or a semiconductor dot system, with a single mode or a few modes of the electromagnetic field inside a cavity. Depending on the particular system, the qubit can be represented by the polarization states of a single photon or by two excited states of an atom. Although cavity QED experiments are very promising, they have been accomplished only for a few qubits [@Pellizzari; @van; @Rauschenbeutel; @Duan]. In the third type of experiment, nuclear spins represent the qubits. These spins can be manipulated using nuclear magnetic resonance techniques, and quantum operations are realized through the study of the quantum behavior of the spins. However, the number of spins which can be collected in a system is very limited, and this prevents the construction of a scalable quantum computer [@Gershenfeld; @Schmidt; @Leibfried; @Nielsen]. From the study of such systems, we learn that decoherence is the main issue preventing the realization of quantum gates. Here we will focus on a physical system which is able to produce a realistic quantum gate. 
The basic elements of our system are fullerene molecules with encapsulated atoms or ions, which are called *buckyballs* or *endohedral fullerenes*. Each of the trapped atoms carries a spin. This spin, associated with electronic degrees of freedom, encodes the qubit. It has been shown [@Greer] that these endohedral systems provide a long lifetime for the trapped spins and that the fullerene molecules represent a good sheltering environment for the very sensitive spins trapped inside. These endohedral systems are typically characterized by two relaxation times. The first is $T_1$, which is due to the interactions between a spin and the surrounding environment. The second is $T_2$, which is due to the dipolar interaction between the qubit-encoding spin and the surrounding endohedral spins randomly distributed in the sample. While $T_1$ is dependent on temperature, $T_2$ is practically independent of it. Experimental measurements of the two relaxation times show that $T_1$ increases with decreasing temperature, from about $100\mu s$ at $T=300 K$ to several seconds below $T=5K$, while the other relaxation time remains constant at $T_2\simeq 20\mu s$ [@Knorr1; @Knorr2]. In comparison with $T_2$ the value of $T_1$ is very large; therefore the system decoherence is determined by the spin-spin relaxation processes. The value of $T_2$ could presumably be increased if a careful experimental architecture can be designed which screens the interaction of the spins with the surrounding magnetic moments. The decoherence caused by the random spin-spin interactions should be reduced in a system composed of arrays of endohedrals encapsulated in a nanotube [@Khlobystov], also called a *peapod*, or in buckyballs embedded on a substrate. These should be reliable systems for the realization of quantum gates. 
In such architectures the decoherence time for each encapsulated spin should be longer.
Quantum computing through the study of doped fullerene systems has been investigated in many works [@Harneit; @Harneit1; @Feng; @Suter; @Twam]. Although we have followed many ideas suggested in these previous papers, we consider a different approach for the realization of quantum gates.
Our study focuses on a system composed of two buckyballs. Our aim is the realization of a quantum *$\pi$-gate*, which is a generalization of the *phase gate*; this is treated in Sec. \[phasegate\]. To perform the $\pi$-gate, we need to know the time evolution of the coefficients of the standard computational basis states over which we expand the wave function of our system. The two-particle phases are evaluated through the numerical solution of the Schr[ö]{}dinger equation, see Secs. \[ind\]-\[dip\]. We have used two approaches: a time-independent Hamiltonian, see Sec. \[ind\], and a time-dependent one, see Sec. \[dip\]. The main result of our study is the gate time, that is, the time required by the system to perform the $\pi$-gate. The values obtained are around $\tau\simeq1\times10^{-8}$ s, which is a few orders of magnitude shorter than the shortest relaxation time, $T_2$. From the comparison of the gate time $\tau$ to the relaxation time $T_2$, we conclude that it is theoretically possible to realize some thousands of basic gate operations before the system decoheres. We have also checked the reliability of our gate through the analysis of the *concurrence* of the two-qubit state, see Sec. \[concurr\]. The best value of the concurrence is obtained in the case of a time-dependent Hamiltonian, while the gate time is nearly the same in both cases.
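The quoted budget of "some thousands" of operations is just the ratio of the two time scales stated above:

```python
tau_gate = 1e-8   # s, computed pi-gate time
T2 = 20e-6        # s, spin-spin relaxation time

n_ops = T2 / tau_gate   # ≈ 2000 basic gate operations before decoherence
```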
Physical Features of the System
===============================
The system under consideration is composed of two interacting buckyballs. Several experimental and theoretical studies on buckyballs [@Greer; @Harneit; @Heath; @Shinohara; @Saunders; @Weid] show that many different types of atoms can be encased in fullerene molecules. However, in most of the studied endohedral fullerenes there is a charge transfer from the encapsulated atom to the fullerene cage, with a resulting considerable alteration of the electronic properties of the cage. This is not the case for group V encased atoms. These atoms reside exactly at the center of the fullerene molecule; therefore there is no hybridization of the electron cloud of the encased atom and no Coulomb interaction with the fullerene cage. In particular, the most promising endohedral molecule should be $N@C_{60}$, which is characterized by many interesting chemical-physical properties. Following Refs. [@Greer; @Harneit; @Weid], experiments and theoretical calculations suggest that there is a repulsive exchange interaction between the fullerene and the electronic cloud of the encapsulated atom. The electrons in the cloud of the encased nitrogen are more tightly bound than in a free nitrogen atom, which makes the encased nitrogen less reactive even at room temperature. These results, together with the central location of the nitrogen atom, suggest that in $N@C_{60}$ the nitrogen can be considered as an independent particle, with all the properties of the free atom. Since any charge interaction is screened, the fullerene cage does not take part in the interaction process and can be considered just as a trap for the nitrogen atom. Therefore, the only physical quantity of interest is the spin of the trapped particle. A nitrogen atom can be effectively described as a spin-$\frac{3}{2}$ particle. This spin is associated with the electronic degrees of freedom. Taking into account also the nuclear spin, which is $\frac{1}{2}$ for $N@C_{60}$, does not increase the number of relevant degrees of freedom [@Meher]. We will consider a simpler model, assuming that the encased atoms are described as spin-$\frac{1}{2}$ particles. 
In the absence of any mutual interaction and without any applied magnetic field, the energy levels associated with these spin particles are degenerate. If we apply a static magnetic field, this degeneracy is lifted. As a result, due to the Zeeman effect, a two-level system arises for each spin-$\frac{1}{2}$ particle. These two levels encode the qubit. The spin-up component, $m_s=+\frac{1}{2}$, encodes the computational basis state $\mid1\rangle$, and the spin-down component, $m_s=-\frac{1}{2}$, represents the state $\mid0\rangle$.
Gate Operation: The Phase Gate {#phasegate}
==============================
Quantum computers operate with the use of *Quantum Gates*. Quantum gates are the fundamental quantum computational operations: unitary transformations acting on the quantum states that describe the qubits. A quantum computer must therefore operate with the use of many quantum gates. The simplest gates are the single-qubit gates. Since our system is composed of two qubits, we will consider a two-qubit quantum gate. One of the most important quantum gates is the *Universal Two-Qubit Quantum Gate* [@Nielsen], called the CNOT gate. The CNOT operation is defined by the following four-by-four unitary matrix $$\label{cnot}
U_{C
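A numerical sketch (not part of the original text) of the CNOT matrix in the standard computational basis, together with the standard relation between CNOT and the $\pi$-gate (the controlled-phase gate with phase $\pi$): conjugating the $\pi$-gate by Hadamards on the target qubit yields CNOT, so a $\pi$-gate plus single-qubit rotations suffices for the universal two-qubit gate.

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit is the control)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The pi-gate: a controlled-phase gate imprinting the phase e^{i pi} = -1
# on the |11> component only.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Hadamard on the target qubit
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
IH = np.kron(np.eye(2), H)

# Standard identity: (I ⊗ H) CZ (I ⊗ H) = CNOT
equivalent = IH @ CZ @ IH
```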
---
abstract: 'We are interested in finite groups acting orientation-preservingly on $3$–manifolds (arbitrary actions, ie not necessarily free actions). In particular we consider finite groups which contain an involution with nonempty connected fixed point set. This condition is satisfied by the isometry group of any hyperbolic cyclic branched covering of a strongly invertible knot as well as by the isometry group of any hyperbolic $2$–fold branched covering of a knot in $S^3$. In the paper we give a characterization of nonsolvable groups of this type. Then we consider some possible applications to the study of cyclic branched coverings of knots and of hyperelliptic diffeomorphisms of $3$–manifolds. In particular we analyze the basic case of two distinct knots with the same cyclic branched covering.'
address: |
Università degli Studi di Trieste\
Dipartimento di Matematica e Informatica\
34100 Trieste\
Italy
author:
- Mattia Mecchia
bibliography:
- 'link.bib'
title: 'Finite groups acting on $3$–manifolds and cyclic branched coverings of knots'
---
Introduction
============
The following problem has been widely studied in the literature: which finite groups admit an action on a homology $3$–sphere? The choice of the coefficients of the homology completely changes the situation.
If a finite group $G$ acts freely on an integer homology $3$–sphere (and in particular on the standard $3$–sphere $S^3$), the group $G$ has periodic cohomology of period four. Milnor [@Mn] gave a list of groups which are candidates for free actions on integer homology $3$–spheres. This list consists of the finite subgroups of ${\rm SO}(4)$ and the Milnor groups $Q(8n,k,l)$. The recent results of Perelman imply that no group of type $Q(8n,k,l)$ acts on $S^3$ [@P1; @P2]. In contrast, some Milnor groups do admit an action on an integer homology $3$–sphere [@Mg].
If we admit arbitrary actions, the list of candidates is again comparable with the list of finite subgroups of ${\rm SO}(4)$. For example Reni and Zimmermann (see Zimmermann [@Z] and Mecchia and Zimmermann [@MZ1]) characterized the nonsolvable groups acting on integer homology $3$–spheres; the unique simple group that admits an action on an integer homology $3$–sphere is $\A_5$ (and it cannot act freely). For the standard $3$–sphere, Thurston’s orbifold geometrization theorem [@BLPo] implies that the finite groups with nonfree actions are exactly the subgroups of ${\rm SO}(4)$.
On the other hand, Cooper and Long [@CL] proved that every finite group admits an action on a rational homology $3$–sphere (and even a free action).
The class of $\Bbb{Z}_2$–homology $3$–spheres is intermediate between these two cases. This class is also interesting because $\Bbb{Z}_2$–homology $3$–spheres appear more frequently than integer homology $3$–spheres; for example $2$–fold branched coverings of knots in $S^3$ are $\Bbb{Z}_2$–homology $3$–spheres. Dotzel and Hamrick [@DH] proved that every finite $2$–group acting on a $\Bbb{Z}_2$–homology $3$–sphere acts orthogonally on $S^3$. This property does not hold in general for solvable groups (already for integer homology $3$–spheres). In [@MZ1] a list of nonsolvable groups which are candidates for actions on $\Bbb{Z}_2$–homology $3$–spheres was given; in this case the only simple groups that occur are the projective special linear groups ${\rm PSL}(2,q)$.
In the present paper we consider finite groups acting orientation-preservingly on $3$–manifolds which contain an involution with nonempty connected fixed point set. We recall that any involution acting on a $\Bbb{Z}_2$–homology $3$–sphere has connected (possibly empty) fixed point set, so that setting is related to ours. For example the $2$–fold branched coverings of knots satisfy both assumptions, but in general the two conditions give different classes of $3$–manifolds.
In fact not all $\Bbb{Z}_2$–homology $3$–spheres admit the action of an involution with nonempty fixed point set. For example if $K$ is a hyperbolic knot in $S^3$ without symmetries, then for sufficiently large coefficients Dehn surgery along the knot gives a hyperbolic manifold with trivial isometry group (by Thurston’s hyperbolic surgery theorem [@T]); moreover for $p$ odd a $p/q$–surgery gives a $\Bbb{Z}_2$–homology $3$–sphere.
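The last claim follows from a standard homology computation, sketched here for completeness (our addition; the step is not spelled out in the text):

```latex
H_1\bigl(S^3_{p/q}(K);\mathbb{Z}\bigr)\cong\mathbb{Z}/p\mathbb{Z}
\quad\Longrightarrow\quad
H_1\bigl(S^3_{p/q}(K);\mathbb{Z}_2\bigr)\cong
(\mathbb{Z}/p\mathbb{Z})\otimes\mathbb{Z}_2 = 0
\quad\text{for } p \text{ odd,}
```

by the universal coefficient theorem (the $\mathrm{Tor}$ term vanishes since $H_0$ is free), so the surgered manifold is indeed a $\Bbb{Z}_2$–homology $3$–sphere.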
On the other hand all the $3$–manifolds that are the $n$–fold cyclic branched covering of a strongly invertible knot admit the action of an involution with nonempty and connected fixed point set; it is easy to find examples of $n$–fold cyclic branched coverings of strongly invertible knots that have nontrivial first $\Bbb{Z}_2$–homology group (some computations of first homology groups can be found in [@Go]). The possibility of studying the $n$–fold cyclic branched coverings of strongly invertible knots is one of the motivations of this paper. Another example of a $3$–manifold admitting an involution with nonempty connected fixed point set can be obtained by a $3$–component link $L$ admitting a symmetry $t$ with nonempty fixed point set which acts as a reflection on one component while exchanging the remaining two (eg the Borromean rings); the $2$–fold branched covering $M$ of $L$ has nontrivial first $\Bbb{Z}_2$–homology group (see Sakuma [@Sa Sublemma 15.4]) and the lift of $t$ is an involution with the desired property.
When we consider finite groups acting on $3$–manifolds, the two different assumptions imply different analyses. In fact for $\Bbb{Z}_2$–homology $3$–spheres we have some global information about $2$–groups which admit an action. In our case we can control directly only the centralizer of the involution with nonempty connected fixed point set, thus it is more difficult to pass to a global description of the group, even in the case of $2$–groups.
A first step in this direction was obtained by Reni and Zimmermann.
\[thm0\][[@RZ1]]{}Let $G$ be a finite group of orientation-preserving diffeomorphisms of a closed orientable $3$–manifold; if $G$ contains an involution with nonempty connected fixed point set, then $G$ has sectional $2$–rank at most four (ie every $2$–subgroup is generated by at most four elements).
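As a concrete consequence (our illustration, not from the paper), the theorem excludes any elementary abelian $2$–group of rank five, since such a group cannot be generated by four elements:

```latex
(\mathbb{Z}_2)^5 \le G
\ \Longrightarrow\ \text{sectional } 2\text{–rank of } G \ \ge\ 5,
\qquad\text{as } (\mathbb{Z}_2)^n \text{ requires exactly } n \text{ generators.}
```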
In this paper we try to analyze the whole group. We describe the structure of the group “up to solvable sections”. The interest in nonsolvable groups is also motivated by geometry. For example, if two knots have the same hyperbolic cyclic branched covering $M$ and the isometry group of $M$ is solvable, then it is possible to describe the relation between the two knots [@RZ1]. The problem is not completely solved if the isometry group is not solvable.
We summarize part of the description in the following theorem; we recall that a group $E$ is *semisimple* if it is perfect and the factor group of $E$ by its center is a direct product of nonabelian simple groups (see Suzuki [@S2 Chapter 6.6] or Gorenstein, Lyons and Solomon [@GLS1 p16]).
\[thm1\] Let $G$ be a finite group of orientation-preserving diffeomorphisms of a closed orientable $3$–manifold;
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Assurance cases are often required as a means to certify a critical system. Use of formal methods in assurance can improve automation, and overcome problems with ambiguity, faulty reasoning, and inadequate evidentiary support. However, assurance cases can rarely be fully formalised, as the use of formal methods is contingent on models validated by informal processes. Consequently, we need assurance techniques that support both formal and informal artifacts, with explicated inferential links and assumptions that can be checked by evaluation. Our contribution is a mechanical framework for developing assurance cases with integrated formal methods based in the Isabelle system. We demonstrate an embedding of the Structured Assurance Case Meta-model (SACM) using Isabelle/DOF, and show how this can be linked to formal analysis techniques originating from our verification framework, Isabelle/UTP. We validate our approach by mechanising a fragment of the Tokeneer security case, with evidence supplied by formal verification.'
author:
- Yakoub Nemouchi
- Simon Foster
- Mario Gleirscher
- Tim Kelly
bibliography:
- 'FM2019.bib'
title: |
Mechanised Assurance Cases with\
Integrated Formal Methods in Isabelle
---
Introduction {#sec:intro}
============
Preliminaries {#sec:prelim}
=============
Running Example: Tokeneer {#sec:tokeneer}
=========================
{#subsec:isacm}
Modelling and Verification of Tokeneer {#sec:model}
======================================
Mechanising the Tokeneer Assurance Case {#sec:tokassure}
=======================================
Related Work {#subsec:relatedWork}
============
Conclusion {#sec:conclusion}
==========
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'A theory of short-range correlations in two-nucleon removal due to elastic breakup (diffraction dissociation) on a light target is developed. Fingerprints of these correlations will appear in momentum distributions of back-to-back emission of the nucleon pair. Expressions for the momentum distributions are derived and calculations for reactions involving stable and unstable nuclear species are performed. The signature of short-range correlations in other reaction processes is also studied.'
author:
- 'C.A. Bertulani'
title: 'Short-range correlations in two-nucleon knockout reactions'
---
Introduction
============
A primary goal of nucleus-nucleus scattering has been to learn about nuclear structure. This has become even more critical in recent years, when many groups became very active in the investigation of the physics of nuclei far from stability, mainly using nucleus-nucleus scattering processes at intermediate energies ($E_{lab}\simeq100$ MeV/nucleon). The theoretical complexity of such collisions has given rise to the use of a number of different approximations. The adequate theoretical tool for this purpose is Glauber’s multiple-scattering theory [@Gl59]. It has long been known for both its simplicity and its amazing predictive power. One can find copious examples in the literature where the Glauber theory allows for a simple physical interpretation of experimental results as well as their quantitative analysis [@FH80; @HN81; @BHM02]. In fact, fragmentation reactions of the type discussed here have already been successfully analyzed in the framework of Glauber’s theory: in one-nucleon-removal reactions, the momentum distribution of the outgoing fragment has been shown to reflect the momentum distribution of the nucleon which is removed from the surface of the projectile nucleus [@HN81]. However, because of complications involving multiple scattering processes in nucleus-nucleus collisions, a full Glauber multiple scattering expansion is impracticable. Fortunately, the study of many direct nuclear processes, e.g. nucleon knockout (stripping), elastic breakup (diffraction dissociation), etc., is possible using the optical limit of the Glauber theory, in which the nuclear ground-state densities and the nucleon-nucleon total cross sections are the main input. In fact, this method has become one of the main tools in the study of nuclei far from stability [@HT04]. When departures from the optical limit are observed, multiple nucleon-nucleon collisions, in-medium effects of the nucleon-nucleon interaction, and nucleon-nucleon correlations become relevant.
Very peripheral collisions, with impact parameters just around the sum of the nuclear radii (grazing collisions), or larger, are well established tools for studying nuclear properties with intermediate energies and relativistic heavy ion collisions [@BB88; @Gl98; @BP99]. These collisions lead to excitation of giant resonances through both electromagnetic and strong interactions. At intermediate energy collisions ($E_{lab}\simeq100$ MeV/nucleon), or higher, the collision time is short and the action of the short-range nuclear interaction can excite the surface region of the colliding nuclei. This excitation can equilibrate forming a compound nucleus, and/or give rise to pre-equilibrium emission or other fast dissipation processes.
An interesting reaction mechanism in high-energy peripheral nucleus-nucleus collisions was suggested by Feshbach and Zabek [@FZ77; @Fes80]. This mechanism has been applied in refs. [@BD78; @Kin86; @TD81; @TDM84; @NDT85; @DNT86] to the calculation of pion production in heavy ion collisions from subthreshold to relativistic energies. It is assumed that pions are produced in peripheral processes through the excitation of the projectiles to a $\Delta$-isobar giant resonance. The results of these calculations were compared to inclusive pion production data for incident energies from 50 MeV to 2 GeV per nucleon. As emphasized by those authors, this comparison is not very meaningful at high energy where peripheral processes are expected to contribute very little to the total pion production. However, at subthreshold energies, coherent pion production should dominate the cross section. This mechanism is known as the nuclear Weizsaecker-Williams method. It works as follows.
The uncertainty relation associated with the variation of the time-dependent nuclear field on a scale $\Delta z$ leads to a relation between the energy transfer, $\Delta E$, and the momentum transfer, $\Delta p$:
$$\Delta E\simeq\frac{\hbar}{\Delta t}=\frac{\hbar\mathrm{v}}{\Delta z},\qquad\Delta p\simeq\frac{\hbar}{\Delta z}\qquad\Longrightarrow\qquad\Delta E=\mathrm{v}\,\Delta p.$$
The last equation on the right is the dispersion relation of a phonon. For typical situations, $\Delta z$ is a few fermis and the nuclear interaction pulse carries several hundred MeV. This relation can also be obtained directly from the collision kinematics. Let $\left(E_{i},\mathbf{P}_{i}\right)$ be the initial energy-momentum of the projectile and $\left(\Delta E,\Delta\mathbf{p}\right)$ the energy-momentum transfer in the reaction. One has
$$\mathbf{P}_{f}=\mathbf{P}_{i}-\Delta\mathbf{p},\qquad E_{f}=E_{i}-\Delta E.$$
From these relations one finds
$$\frac{\mathbf{P}_{i}\cdot\Delta\mathbf{p}}{E_{i}}-\Delta E=\frac{-\left(\Delta E\right)^{2}+\left(\Delta p\right)^{2}+\left(M_{i}^{2}-M_{f}^{2}\right)c^{4}}{2E_{i}}.$$
Neglecting the term on the right-hand side, one gets
$$\Delta E=\mathbf{v}\cdot\Delta\mathbf{p}=\mathrm{v}\,\Delta p_{z},\label{phonon}$$
where $\Delta p_{z}$ is the momentum transfer along the longitudinal direction.
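As a quick numerical check of this estimate (our sketch, not from the text; the value $\hbar c\approx 197.33$ MeV$\cdot$fm and the choices $\Delta z = 1$–$2$ fm, $\mathrm{v}\simeq c$ are illustrative assumptions):

```python
import math

HBAR_C = 197.327  # hbar*c in MeV*fm (standard value)

def pulse_energy(delta_z_fm, beta=1.0):
    """Energy Delta E = hbar*v / Delta z carried by a nuclear field pulse
    varying over a length delta_z_fm, for velocity v = beta*c (in MeV)."""
    return HBAR_C * beta / delta_z_fm

def momentum_transfer(delta_z_fm):
    """Momentum transfer Delta p = hbar / Delta z (in MeV/c)."""
    return HBAR_C / delta_z_fm

# For Delta z of one to two fermis at beta ~ 1 the pulse carries roughly
# 100-200 MeV, consistent with "several hundred MeV" for a few fermis.
for dz in (1.0, 2.0):
    dE, dp = pulse_energy(dz), momentum_transfer(dz)
    # the phonon dispersion relation Delta E = v * Delta p holds exactly
    assert math.isclose(dE, 1.0 * dp)
```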
The above relation can only be satisfied for nuclear excitations of very small momentum transfers, even for moderately large energy transfers. This is the case for the excitation of giant resonances. Thus, the nuclear interaction in grazing nuclear collisions is an effective tool to probe giant resonances (for a review see, e.g. ref. [@CF95]). For very large impact parameters (larger than the sum of the nuclear density radii) only the electromagnetic interaction is present, and eq. \[phonon\] (with $v\simeq c$) is just the energy-momentum relation of a real photon. In fact, relativistic Coulomb excitation is another useful tool for investigating giant resonances [@BB88; @BP99].
The phonon-like relation, eq. \[phonon\], is also a tool for studying nucleon-nucleon short-range correlations. The energy in eq. \[phonon\] could hardly be absorbed by a single nucleon, since the nucleon would then carry the momentum $\sim\sqrt{2m\Delta E}$, which is appreciably larger than that of eq. \[phonon\]. However, the phonon could be absorbed by a correlated nucleon pair, which can have large kinetic energy and small total momentum when the nucleons move in approximately opposite directions. This mechanism has been exploited by previous authors to study the emission of correlated pairs in relativistic heavy ion collisions [@BCD89; @NN91]. Remarkably, refs. [@FZ77; @Fes80] do not properly treat the nuclear absorption at small impact parameters, leading to very large cross sections for the emission of correlated pairs in peripheral collisions.
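The kinematic mismatch can be made concrete with a rough numerical sketch (our illustration; $\Delta E = 200$ MeV, $mc^{2}\approx 939$ MeV and $\mathrm{v}\simeq c$ are assumed values, not taken from the text):

```python
import math

M_N = 939.0  # nucleon rest mass in MeV/c^2 (approximate, assumed value)

def single_nucleon_momentum(delta_E):
    """Momentum sqrt(2*m*Delta E) a single nucleon would need in order
    to absorb the energy Delta E nonrelativistically (in MeV/c)."""
    return math.sqrt(2.0 * M_N * delta_E)

def phonon_momentum(delta_E, beta=1.0):
    """Momentum Delta p = Delta E / v actually delivered by the
    phonon-like pulse of eq. (phonon), in MeV/c."""
    return delta_E / beta

dE = 200.0                               # illustrative energy transfer, MeV
p_single = single_nucleon_momentum(dE)   # roughly 613 MeV/c
p_pulse = phonon_momentum(dE)            # 200 MeV/c
# A single nucleon cannot match both energy and momentum; a correlated
# back-to-back pair can carry the large relative momentum while keeping
# the small total momentum delivered by the pulse.
assert p_single > 3.0 * p_pulse
```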
In many-body physics the word correlation is used to indicate effects beyond mean-field theories. In nuclear physics one distinguishes between short- and long-range correlations. Nuclear collective phenomena such as vibrations and rotations are known to be ruled by long-range correlations. These effects are relatively well known. Short-range correlations are also a subject of intensive study in nuclear physics (see, e.g. [@Fra81; @Fra88; @Dim00; @BD02; @Tan03; @Ryc04]). Their source is the strong repulsive core of the microscopic nucleon-nucleon interaction at short internucleon distances. The phase shifts for $^{1}$S$_{0}$ and $^{3}$S$_{1}$ are positive at low energies and become negative at higher energies [@MAW69]. This indicates a repulsive core at short distances and attraction at long distances. In the nuclear medium this repulsive interaction is strongly influenced by Pauli blocking. The search for nuclear phenomena showing short-range correlation effects is one of the most discussed topics in the nuclear structure community. For the nuclear reaction community, the importance of Pauli correlations in high energy nucleus-nucleus collisions has prompted the consideration of effects of dynamical short-range correlations. When one treats nucleus-nucleus collisions at high energies with an optical phase shift function one can include both the center-of-mass correlations and two-body correlations in a straightforward manner to obtain a rapidly converging series for the physical observables.
It is thus timely to look for fingerprints of short-range correlations in high-energy collisions involving rare nuclear isotopes. Recent experiments on knockout reactions seem to indicate a quenching of the spectroscopic factor relative to shell-model predictions in neutron-rich nuclei [@HT04]. This reduction is thought to be a consequence of short-range correlations which spread the single particle strength to states with higher energies. In fact, systematic studies with the $A\left(e,e^{\prime}p\right)$ reaction have provided ample evidence for this quenching phenomenon [@Pan97]. In this context, two-
|
{
"pile_set_name": "ArXiv"
}
|