---
abstract: |
Following the work of Burger, Iozzi and Wienhard for representations, in this paper we introduce the notion of maximal measurable cocycles of a surface group. More precisely, let $\mathbf{G}$ be a semisimple algebraic $\operatorname{\mathbb{R}}$-group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is of Hermitian type. If $\Gamma \leq L$ is a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$, given a standard Borel probability $\Gamma$-space $(\Omega,\mu_\Omega)$, we introduce the notion of Toledo invariant for a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with an essentially unique boundary map.
The Toledo invariant is a multiplicative constant, hence it remains unchanged along $G$-cohomology classes and its absolute value is bounded by the rank of $G$. This allows us to define maximal measurable cocycles. We show that the algebraic hull $\mathbf{H}$ of a maximal cocycle $\sigma$ is reductive, the centralizer of $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$ is compact, $H$ is of tube type and $\sigma$ is cohomologous to a cocycle stabilizing a unique maximal tube-type subdomain. This result is analogous to the one obtained for representations.
We conclude with some remarks about boundary maps of maximal Zariski dense cocycles.
address: 'Department of Mathematics, University of Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy'
author:
- 'A. Savini'
bibliography:
- 'biblionote.bib'
date: '© A. Savini 2020'
title: Algebraic hull of maximal measurable cocycles of surface groups into Hermitian Lie groups
---
Introduction
============
Given a torsion-free lattice $\Gamma \leq G$ in a semisimple Lie group $G$, any representation $\rho:\Gamma \rightarrow H$ into a locally compact group $H$ induces a well-defined map at the level of continuous bounded cohomology groups. Hence, having fixed a preferred bounded class in the cohomology of $H$, one can pull it back and compare the resulting class with the fundamental class determined by $\Gamma$ via the Kronecker pairing. This is a standard way to obtain *numerical invariants* for representations, whose importance has become evident in the study of rigidity and superrigidity properties. Indeed, a numerical invariant has bounded absolute value and the maximum is attained if and only if the representation can be extended to a representation $G \rightarrow H$ of the ambient group.
Several examples of these phenomena are given by the work of Bucher, Burger, Iozzi [@iozzi02:articolo; @bucher2:articolo; @BBIborel] in the case of representations of real hyperbolic lattices, by Burger and Iozzi [@BIcartan] and by Duchesne and Pozzetti [@Pozzetti; @duchesne:pozzetti] for complex hyperbolic lattices and by the work of Burger, Iozzi and Wienhard [@BIW07; @BIW09; @BIW1] when the target group is of Hermitian type. In the latter case, of remarkable interest is the analysis of the representation space $\textup{Hom}(\Gamma,G)$ when $G$ is a group of Hermitian type and $\Gamma$ is a lattice in a finite connected covering of $\operatorname{\textup{PU}}(1,1)$, that is a hyperbolic surface group. Burger, Iozzi and Wienhard [@BIW1] exploited the existence of a natural Kähler structure on the Hermitian symmetric space associated to $G$ in order to define the notion of *Toledo invariant* of a representation $\rho:\Gamma \rightarrow G$. That invariant has bounded absolute value and its maximality has important consequences on the Zariski closure $\mathbf{H}=\overline{\rho(\Gamma)}^Z$ of the image of the representation. Indeed, the authors show that in the case of maximality $\mathbf{H}$ is reductive, $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$ has compact centralizer and is of tube type, and the representation $\rho$ is injective with discrete image and preserves a unique maximal tube-type subdomain [@BIW1 Theorem 5]. A domain is of *tube-type* if it can be written in the form $V+i\Omega$, where $V$ is a real vector space and $\Omega \subset V$ is an open convex cone. Maximal tube-type subdomains in a Hermitian symmetric space $\operatorname{\mathcal{X}}$ generalize the notion of complex geodesic in $\operatorname{\mathbb{H}}^n_{\operatorname{\mathbb{C}}}$ and they are all $G$-conjugate.
Partial results in the direction of [@BIW1 Theorem 5] were obtained by several authors. For instance when $G=\operatorname{\textup{PU}}(n,1)$ with $n \geq 2$, Toledo [@Toledo89] proved that maximal representations must preserve a complex geodesic. It is worth mentioning also the papers by Hernández [@Her91], by Koziarz and Maubon [@koziarz:maubon] and by Bradlow, García-Prada and Gothen [@garcia:geom; @garcia:dedicata]. In the latter case those results were obtained using different techniques based on the notion of Higgs bundle.
It is worth noticing that in the particular case of split real groups and surfaces without boundary, the set of maximal representations contains the Hitchin component [@hitchin]. The Hitchin component has been systematically studied by several mathematicians. For instance Labourie [@labourie] focused his attention on the Anosov property, whereas Fock and Goncharov [@Fock:adv; @fock:hautes] related the Hitchin component with the notion of Lusztig’s positivity.
A crucial point in the proof of [@BIW1 Theorem 5] is that maximal representations are *tight*, that is the seminorm of the pullback class is equal to the norm of the bounded Kähler class. The tightness property has an analytic counterpart in terms of maps between symmetric spaces and Burger, Iozzi and Wienhard [@BIW09] give a complete characterization of tight subgroups of a Lie group of Hermitian type.
Recently the author [@savini3:articolo], together with Moraschini [@moraschini:savini; @moraschini:savini:2] and Sarti [@savini:sarti], has applied bounded cohomology techniques to the study of measurable cocycles with an essentially unique boundary map. The existence of a boundary map allows one to define a pullback in bounded cohomology as in [@burger:articolo] and hence to develop a theory of numerical invariants, called *multiplicative constants*, also in the context of measurable cocycles.
The main goal of this paper is the study of measurable cocycles of surface groups. Let $\Gamma \leq L$ be a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$. Consider a standard Borel probability $\Gamma$-space $(\Omega,\mu_\Omega)$ and let $\mathbf{G}$ be a semisimple real algebraic group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is of Hermitian type. If a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ admits an essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow G$, then we can apply the theoretical background developed in [@moraschini:savini; @moraschini:savini:2] to define the *Toledo invariant of $\sigma$*. In an analogous way to what happens for representations, the Toledo invariant is constant along $G$-cohomology classes and has absolute value bounded by $\operatorname{rk}(\operatorname{\mathcal{X}})$, the rank of the symmetric space $\operatorname{\mathcal{X}}$ associated to $G$. Thus it makes sense to speak about *maximal measurable cocycles*. These are particular examples of *tight cocycles* (see Definition \[def:tight:cocycle\]).
Maximality allows us to give a characterization of the *algebraic hull* of a measurable cocycle, as stated in the following result.
\[teor:maximal:alghull\] Let $\Gamma \leq L$ be a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$ and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Let $\mathbf{G}$ be a semisimple algebraic $\operatorname{\mathbb{R}}$-group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is a Lie group of Hermitian type. Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map. Denote by $\mathbf{H}$ the algebraic hull of $\sigma$ in $\mathbf{G}$ and set $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$. If $\sigma$ is maximal, then
1. the algebraic hull $\mathbf{H}$ is reductive;
2. the centralizer $Z_G(H)$ is compact;
3. the symmetric space $\operatorname{\mathcal{Y}}$ associated to $H$ is of tube type;
4. $\sigma$ is cohomologous to a cocycle stabilizing a unique maximal tube-type subdomain of the symmetric space associated to $G$.
---
abstract: 'Cosmic strings are one-dimensional topological defects which could have been formed in the early stages of our Universe. They triggered a lot of interest, mainly for their cosmological implications: they could offer an alternative to inflation for the generation of density perturbations. It was shown however that cosmic strings lead to inconsistencies with the measurements of the cosmic microwave background temperature anisotropies. The picture has changed recently. It was shown that, on the one hand, cosmic strings can be generically formed in the framework of supersymmetric grand unified theories and that, on the other hand, cosmic superstrings could play the rôle of cosmic strings. There is also some possible observational support. All this has led to a revival of cosmic string research, and this is the topic of my lecture.'
address: 'Department of Physics, King’s College London, Strand, London WC2R 2LS, U.K. '
author:
- 'Mairi Sakellariadou[^1]'
title: The Revival of Cosmic Strings
---
Introduction
============
Cosmic strings attracted a lot of interest around the eighties and nineties. They offered an alternative mechanism to cosmological inflation for the generation of the primordial density perturbations leading to the large-scale structure formation one observes. However, towards the turn of the century cosmic strings lost their appeal, since it was shown that they lead to inconsistencies with the Cosmic Microwave Background (CMB) measurements. Nevertheless, the story of cosmic strings does not end here. In the last few years there has been a remarkable revival of the theoretical and observational activity.
In this lecture, I will discuss the present view on the cosmological rôle of cosmic strings. In Section 2, I will discuss aspects of cosmic strings in the framework of Grand Unified Theories (GUTs). I will first analyse the formation and classification of topological as well as embedded defects. I will then briefly discuss the CMB temperature anisotropies and I will compare the predictions of topological defect models with current measurements. I will then conclude that topological defects in general, and cosmic strings in particular, are ruled out as the unique source of density perturbations leading to the observed structure formation. At this point I do not conclude that cosmic strings are ruled out, but I ask instead what the implications are for the models of high energy physics which we employed to construct our cosmological scenario. The first question is whether cosmic strings are expected to be generically formed. I will address this question in the framework of Supersymmetric Grand Unified Theories (SUSY GUTs). I will show that cosmic strings are indeed expected to be generically formed within a large class of models within SUSY GUTs and therefore one has to use mixed models, consisting of inflation with cosmic strings as a sub-dominant partner. I will then examine whether such mixed models are indeed compatible with the CMB data. I will present two well-studied inflationary models within supersymmetric theories, namely F/D-term hybrid inflation. I will impose constraints on the free parameters of the models (masses and couplings) so that there is agreement between theory and measurements. In Section 3, I will address the issue of cosmic superstrings as cosmic string candidates, in the context of braneworld cosmologies. In Section 4, I will discuss a candidate gravitational lensing event by a cosmic string. I will round up with the conclusions in Section 5.
Topological Defects
===================
Topological Defects in GUTs
---------------------------
The Universe has steadily cooled down since the Planck time, leading to a series of Spontaneously Broken Symmetries (SSB). SSB may lead to the creation of topological defects [@td1; @td2], which are false vacuum remnants, such as domain walls, cosmic strings, monopoles, or textures, via the Kibble mechanism [@kibble].
The formation or not of topological defects during phase transitions, followed by SSB, and the determination of the type of the defects, depend on the topology of the vacuum manifold ${\cal M}_n$. The properties of ${\cal M}_n$ are usually described by the $k^{\rm th}$ homotopy group $\pi_k({\cal M}_n)$, which classifies distinct mappings from the $k$-dimensional sphere $S^k$ into the manifold ${\cal M}_n$. To illustrate this, let me consider the symmetry breaking of a group G down to a subgroup H of G. If ${\cal M}_n={\rm G}/{\rm H}$ has disconnected components, or equivalently if the order $k$ of the nontrivial homotopy group is $k=0$, then two-dimensional defects, called [*domain walls*]{}, form. The spacetime dimension $d$ of the defects is given in terms of the order of the nontrivial homotopy group by $d=4-1-k$. If ${\cal M}_n$ is not simply connected, in other words if ${\cal M}_n$ contains loops which cannot be continuously shrunk into a point, then [*cosmic strings*]{} form. A necessary, but not sufficient, condition for the existence of stable strings is that the first homotopy group (the fundamental group) $\pi_1({\cal M}_n)$ of ${\cal M}_n$ is nontrivial, or multiply connected. Cosmic strings are line-like defects, $d=2$. If ${\cal M}_n$ contains unshrinkable surfaces, then [*monopoles*]{} form, for which $k=2, ~d=1$. If ${\cal M}_n$ contains noncontractible three-spheres, then event-like defects, [*textures*]{}, form, for which $k=3, ~d=0$.
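As a concrete illustration (the textbook abelian Higgs model, used here purely as an example and not tied to any particular GUT), consider a gauged $U(1)$ symmetry which is completely broken. The vacuum manifold and its first homotopy group are $${\cal M}_n = U(1)/\{e\} \cong S^1~, \qquad \pi_1(S^1)=\mathbb{Z}\neq 0~,$$ so that $k=1$ and $d=4-1-1=2$: the model supports (local) cosmic strings.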
Depending on whether the symmetry is local (gauged) or global (rigid), topological defects are called local or global. The energy of local defects is strongly confined, while the gradient energy of global defects is spread out over the causal horizon at defect formation. Patterns of symmetry breaking which lead to the formation of local monopoles or local domain walls are ruled out, since they should soon dominate the energy density of the Universe and close it, unless an inflationary era took place after their formation. Local textures are insignificant in cosmology since their relative contribution to the energy density of the Universe decreases rapidly with time [@textures].
Even if the nontrivial topology required for the existence of a defect is absent in a field theory, it may still be possible to have defect-like solutions. Defects may be [*embedded*]{} in such topologically trivial field theories [@embedded]. While stability of topological defects is guaranteed by topology, embedded defects are in general unstable under small perturbations.
Cosmic Microwave Background Temperature Anisotropies
----------------------------------------------------
The CMB temperature anisotropies offer a powerful test for theoretical models aiming at describing the early Universe. The characteristics of the CMB multipole moments can be used to discriminate among theoretical models and to constrain the parameter space.
The spherical harmonic expansion of the CMB temperature anisotropies, as a function of angular position, is given by $$\label{dTT}
\frac{\delta T}{T}({\bf n})=\sum _{\ell m}a_{\ell m} {\cal W}_\ell
Y_{\ell m}({\bf n})~\,
\ \ \ \mbox {with}\ \ \
a_{\ell m}=\int {\rm
d}\Omega _{{\bf n}}\frac{\delta T}{T}({\bf n})Y_{\ell m}^*({\bf n})~;$$ ${\cal W}_\ell $ stands for the $\ell$-dependent window function of the particular experiment. The angular power spectrum of CMB temperature anisotropies is expressed in terms of the dimensionless coefficients $C_\ell$, which appear in the expansion of the angular correlation function in terms of the Legendre polynomials $P_\ell$: $$\biggl \langle 0\biggl |\frac{\delta T}{T}({\bf n})\frac{\delta T}{
T}({\bf n}') \biggr |0\biggr\rangle \left|_{{~}_{\!\!({\bf n\cdot
n}'=\cos\vartheta)}}\right. = \frac{1}{4\pi}\sum_\ell(2\ell+1)C_\ell
P_\ell(\cos\vartheta) {\cal W}_\ell^2 ~.
\label{dtovertvs}$$ It compares points in the sky separated by an angle $\vartheta$. Here, the brackets denote spatial average, or expectation values if perturbations are quantised. Equation (\[dtovertvs\]) holds only if the initial state for cosmological perturbations of quantum-mechanical origin is the vacuum [@jm1; @jm2]. The value of $C_\ell$ is determined by fluctuations on angular scales of the order of $\pi/\ell$. The angular power spectrum of anisotropies observed today is usually given by the power per logarithmic interval in $\ell$, plotting $\ell(\ell+1)C_\ell$ versus $\ell$.
The predictions of the defects models regarding the characteristics of the CMB spectrum are:
- Global ${\cal O}(4)$ textures lead to a position of the first acoustic peak at $\ell\simeq 350$ with an amplitude $\sim 1.5$ times higher than the Sachs-Wolfe plateau [@rm].
- Global ${\cal O}(N)$ textures in the large $N$ limit lead to a quite flat spectrum, with a slow decay after $\ell \sim 100$ [@dkm]. Similar are the predictions of other global ${\cal O}(N)$ defects [@clstrings; @num].
- Local cosmic string predictions are not very well established and range from an almost flat spectrum [@acdkss] to a single wide bump at $\ell \sim 500$.
---
abstract: |
Regions of nested loops are a common feature of High Performance Computing (HPC) codes. In shared memory programming models, such as OpenMP, these structures are the most common source of parallelism. Parallelising these structures requires the programmers to make a static decision on how parallelism should be applied. However, depending on the parameters of the problem and the nature of the code, static decisions on which loop to parallelise may not be optimal, especially as they do not enable the exploitation of any runtime characteristics of the execution. Changes to the iterations of the loop which is chosen to be parallelised might limit the number of processors that can be utilised.
We have developed a system that allows a code to make a dynamic choice, at runtime, of what parallelism is applied to nested loops. The system works using a source-to-source compiler, which we have created, to perform transformations to users’ code automatically, through a directive-based approach (similar to OpenMP). This approach requires the programmer to specify how the loops of the region can be parallelised and our runtime library is then responsible for making the decisions dynamically during the execution of the code.
Our method for providing dynamic decisions on which loop to parallelise significantly outperforms the standard methods for achieving this through OpenMP (using if clauses), and further optimisations were possible with our system when addressing simulations where the number of iterations of the loops changes during the runtime of the program or where loops are not perfectly nested.
author:
-
bibliography:
- 'IEEEabrv.bib'
- 'dynamicloop.bib'
title: Dynamic Loop Parallelisation
---
Introduction
============
High Performance Computing (HPC) codes, and in particular scientific codes, require parallel execution in order to achieve a large amount of performance increase. Depending on the underlying parallel platform which is used, programmers use different programming models in order to achieve parallel execution. In distributed memory systems, the message passing programming model is the most commonly used approach for applying parallelism in the codes. In shared memory systems however, an attractive choice for parallel programming is through OpenMP [@openmp].
The parallelisation of codes with OpenMP is often achieved with loop parallelisation. As long as the iterations of a loop are independent, they can be distributed to the available processors of the system in order to execute them in parallel. A programmer is required to specify a loop that can be parallelised by placing compiler directives before the loop, resolving any dependency issues between the iterations beforehand. HPC codes often consist of regions with nested loops of multiple levels. In order to parallelise these regions, a choice must be made on how parallelism should be applied on the loops. Even though OpenMP supports a variety of strategies for parallelising nested loops, only a single one can be used to parallelise the code.
A static choice, however, cannot exploit any runtime characteristics during the execution of the program. Changes in the input parameters of the executable which affect the iterations of the loops may render the parallelisation decision suboptimal. In addition to this, the iterations of a loop can change at runtime due to the nature of the code. A common feature of HPC codes is to organise the data into hierarchies, for example blocks of multi-dimensional arrays. Depending on the problem, the blocks can have different shapes and sizes. These parameters affect the loops that are responsible for accessing this data. In some situations, a static decision has the potential to impose a limitation on the number of processors that can be used for the parallel execution of the loops. With the current trend of chip manufacturers to increase the number of cores in the processors with each generation, leading to larger and larger shared memory systems being readily available to computational scientists on the desktop and beyond, a more dynamic approach must be considered for taking such decisions.
This report outlines our investigations into various strategies that can be applied at runtime in order to make a dynamic decision on how to parallelise a region with nested loops. Our approach is to automatically perform modifications to users’ code before compilation in order to enable the code to make these decisions dynamically at runtime. Specifically, we investigated the possibility of having multiple versions of a loop within a region of nested loops in order to make a dynamic choice on whether a loop should be executed sequentially or in parallel.
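As a rough sketch of this idea (the function and routine names below are hypothetical, chosen for illustration only; they are not the actual interface of our compiler or runtime library), the transformed code could contain both versions of the loop nest and select between them with a runtime call:
extern void work(void);
/* Hypothetical runtime query: inspects the loop bounds and returns
   the loop level to parallelise (0 = outer, 1 = inner). */
extern int choose_parallel_loop(int outer_iters, int inner_iters);
void region(int I, int J){
    int i, j;
    if(choose_parallel_loop(I, J) == 0){
        /* version with the outer loop parallelised */
        #pragma omp parallel for private (j)
        for(i = 0; i < I; i++){
            for(j = 0; j < J; j++){
                work();
            }
        }
    }else{
        /* version with the inner loop parallelised */
        for(i = 0; i < I; i++){
            #pragma omp parallel for shared (i)
            for(j = 0; j < J; j++){
                work();
            }
        }
    }
}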
OpenMP
======
OpenMP [@openmp] is, arguably, the dominant parallel programming model currently used for writing parallel programs for use on shared memory parallel systems. Now at version 3.1, and supported in C, C++ and Fortran, OpenMP operates using compiler directives. The programmer annotates their code specifying how it should be parallelised. The compiler then transforms the original code into a parallel version when the code is compiled. By providing this higher level of abstraction, OpenMP codes tend to be easier to develop, debug and maintain. Moreover, with OpenMP it is very easy to develop the parallel version of a serial code without any major modifications.
Whilst there are a number of different mechanisms that OpenMP provides for adding parallel functionality to programs, the one that is generally used most often is loop parallelisation. This involves taking independent iterations of loops and distributing them to a group of threads that perform these sets of independent operations in parallel. Since each of the threads can access shared data, it is generally straightforward to parallelise any loop with no structural changes to the program.
Nested Loops
============
HPC codes, and particularly scientific codes, deal with numerical computations based on mathematical formulas. These formulas are often expressed in the form of nested loops, where a set of computations is applied to a large amount of data (generally stored in arrays) and parallelisation can be applied to each loop individually. The arrays often consist of multiple dimensions and the access to the data is achieved with the presence of nested loops. Furthermore it is not uncommon that the arrangement of the data is done in multiple hierarchies, most commonly in blocks with multi-dimensional arrays, where additional loops are required in order to traverse all the data. When such code is present, a choice must be made on which loop level to parallelise (where the parallelisation should occur) [@Duran04runtimeadjustment]. A summary of the available strategies is presented in Table \[tab:nestedloops\].
  [**Name**]{}      [**Description**]{}
  ----------------- ----------------------------------------------------------------
  Outermost Loop    Parallelisation of the outermost loop
  Inner Loop        Parallelisation of one of the inner loops
  Nested            Parallelisation of multiple loops with nested parallel regions
  Loop Collapsing   Collapsing the loops into a single big loop
  Loop Selection    Runtime loop selection using if clauses
  ----------------- ----------------------------------------------------------------

  : Strategies for parallelising nested loop regions[]{data-label="tab:nestedloops"}
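The last row of the table, runtime loop selection through if clauses, is the standard OpenMP mechanism we compare against. A minimal sketch follows; THRESHOLD is an illustrative cut-off of our own choosing, not a value prescribed by OpenMP, and the two conditions are mutually exclusive so that exactly one of the two loops executes in parallel:
#define THRESHOLD 64   /* illustrative cut-off, chosen arbitrarily */
extern void work(void);
void select_loop(int I, int J){
    int i, j;
    /* Parallelise the outer loop only when it has enough iterations;
       otherwise the inner loop's parallel region becomes active. */
    #pragma omp parallel for private (j) if(I >= THRESHOLD)
    for(i = 0; i < I; i++){
        #pragma omp parallel for shared (i) if(I < THRESHOLD)
        for(j = 0; j < J; j++){
            work();
        }
    }
}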
Outermost loop
--------------
The most commonly used approach is to parallelise the outermost loop of a nested loop region, as shown in Listing \[alg:outerloop\]. Using this strategy, the iterations of the loop are distributed to the members of the thread team. The threads operate in parallel, each executing the portion of the iterations assigned to it. The inner loops of the parallel region are executed sequentially by each thread.
#pragma omp parallel for private (j)
for(i = 0; i < I; i++){
    for(j = 0; j < J; j++){
        work();
    }
}
Parallelising the outermost loop is often a good choice, as it minimises the parallel overheads of the OpenMP implementation (such as the initialisation of the parallel region, the scheduling of loop iterations to threads and the synchronisation which takes place at the end of the parallel loops). More extensive work on the overheads of various OpenMP directives can be found in [@Chen:1990:ISG:325164.325150].
Despite the advantages of the Outermost Loop parallelisation strategy in this context, there are drawbacks to this choice. The maximum amount of available parallelism is limited by the number of iterations of the outermost loop. Considering the example code in Listing \[alg:outerloop\], it is only possible to have $I$ tasks being executed in parallel. This restricts the number of threads the code can utilise upon execution, and therefore the number of processors or cores that can be exploited.
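One possible remedy, listed in Table \[tab:nestedloops\] as Loop Collapsing, is to merge the iteration spaces of perfectly nested loops with the collapse clause introduced in OpenMP 3.0, so that $I \times J$ iterations are distributed to the threads. A minimal sketch, for the same loop nest as Listing \[alg:outerloop\]:
/* Both loops are collapsed into a single iteration space of
   I*J iterations before the work is distributed to threads. */
#pragma omp parallel for collapse(2)
for(i = 0; i < I; i++){
    for(j = 0; j < J; j++){
        work();
    }
}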
Inner loop
----------
This is a variant on the outermost loop strategy, with the difference that one of the inner loops of the region is chosen to be parallelised. This approach will only be required or beneficial if the outer loop does not have enough iterations to parallelise efficiently, as this variant introduces additional overheads by requiring the parallelisation to be performed for each iteration of the outer loop rather than once for the whole nest (as shown in Listing \[alg:innerloop\]). Further nesting of the parallelisation (at deeper loop levels) will further increase the performance problems; the parallel overheads are incurred many more times, whereas the amount of work of each iteration becomes finer.
for(i = 0; i < I; i++){
    #pragma omp parallel for shared (i)
    for(j = 0; j < J; j++){
        work();
    }
}
Another issue with this strategy is the scenario where loops are not perfectly nested. In this situation, when there are computations in-between the loops, as shown in Listing \[alg:poorlynestedloop\], parallelising a loop of a deeper level will result in sequential execution of that work. Depending on the proportion of the execution time which is now serialised, this approach has the potential to increase the execution time of the code.
for(i = 0; i < I; i++){
    somework();
    for(j = 0; j < J; j++){
        work();
    }
}
---
abstract: 'Flagella are hair-like appendages attached to microorganisms that allow the organisms to traverse their fluid environment. The algae *Volvox* are spherical swimmers with thousands of individual flagella on their surface and their coordination is not fully understood. In this work, a previously developed minimal model of flagella synchronization is extended to the outer surface of a sphere submerged in a fluid. Each beating flagellum tip is modelled as a small sphere, elastically bound to a circular orbit just above the spherical surface, and a regularized image system for Stokes flow outside of a sphere is used to enforce the no-slip condition. Biologically relevant distributions of rotors result in a rapidly developing and robust symplectic metachronal wave traveling from the anterior to the posterior of the spherical *Volvox* body.'
author:
- 'Forest O. Mannan'
- Miika Jarvela
- Karin Leiderman
title: A Minimal Model of the Hydrodynamical Coupling of Flagella on a Spherical Body with application to Volvox
---
Cilia and flagella are ubiquitous among eukaryotic cells. These small, hair-like appendages extend from cell membranes and play important roles in locomotion and fluid transport by undergoing a periodic motion. Examples include the transport of foreign particles out of the lungs [@TWSC14], the creation of left-right asymmetry in embryonic development [@essner2002left], and filter feeding [@mayne2017particle].
These biologically relevant flows are generally created through the coordinated collective motion of many cilia or flagella. The origin and means of this large-scale coordination has been a long standing area of research [@taylor1951analysis]. In some scenarios, hydrodynamic coupling alone has successfully explained such coordination [@brumley2014flagellar; @brumley2015metachronal; @vilfan2006hydrodynamic; @goldstein2016elastohydrodynamic; @dillon2000integrative; @yang2008integrative; @guo2018bistability]. Experimental approaches range from investigating colloidal oscillators with optical tweezers [@Kotar7669] to observing synchronization between lone flagellum pairs, emanating from two separate cells and tethered at fixed distances via micro-pipettes [@brumley2014flagellar]. Examples of theoretical approaches include the study of filaments with internal driving forces immersed in a fluid [@mannan2018; @dillon2000integrative; @goldstein2016elastohydrodynamic; @yang2008integrative; @guo2018bistability] and so-called minimal models where the cilia or flagella are represented as oscillating ‘rotors’ immersed in a viscous fluid [@niedermayer2008synchronization; @brumley2015metachronal; @brumley2016long]. This latter approach is what we build on in the current work.
Ensembles of large numbers of cilia often exhibit regular variations in the beating phase of adjacent cilia, which are characterized as metachronal waves (MWs) [@elgeti2013emergence; @brumley2015metachronal; @mitran2007metachronal]. The colonial alga *Volvox carteri* (Volvox) has become a model organism for studying the emergence of MWs [@brumley2015metachronal; @matt2016volvox]; an informative review of these studies can be found elsewhere [@goldstein2015green]. *Volvox* is a multicellular green alga whose surface consists of fairly regularly-spaced biflagellated somatic cells, embedded in the extracellular matrix [@matt2016volvox; @kirk2005volvox]. *Volvox* swimming is mainly due to the coordinated beating of their flagella, which exhibit clear MWs traveling from the anterior to posterior of the spherical *Volvox* body [@brumley2015metachronal; @brumley2012hydrodynamic]. Further, *Volvox* flagella beat towards the posterior of the colony with a small 10-20 degree tilt out of the meridional plane [@hoops1983ultrastructure; @hoops1997motility]. The tilt has long been thought to allow *Volvox* to ‘swirl’, where they rotate during forward progression swimming [@mast1926reactions; @hoops1997motility].
Minimal models of coupled rotors [@niedermayer2008synchronization; @brumley2014flagellar] are particularly amenable to the theoretical study of MW formation on *Volvox* due to the number and spacing of flagella on the *Volvox* surface; one flagellum is close enough to another flagellum to influence its periodic beating (via hydrodynamics) but typically not close enough to make physical contact. To represent a single flagellum with a rotor, the tip of the flagellum is modeled as a small, rigid sphere with a preferred circular orbit. The shape of the orbit is controlled with a system of springs and the motion is due to a prescribed driving force. The fluid flow induced by one rotor on another rotor can then be well approximated by a single Stokeslet [@brumley2014flagellar]. Additionally, the leading-order far-field flow induced by a rigid sphere is precisely given by a Stokeslet [@nasouri2016hydrodynamic]. Thus, a model rotor (oscillator) captures both the phase of the beating flagellum and well approximates its corresponding, induced far-field flow.
Previous studies of *Volvox* flagella with minimal models of coupled oscillators were able to reproduce semi-quantitative characteristics of the average metachronal dynamics and the emergence of MWs [@brumley2015metachronal; @brumley2012hydrodynamic], while also using simplifying assumptions about *Volvox* geometry, e.g., the surface of *Volvox* was treated as a no-slip plane. One study considered flagellum beating on a spherical body, but was limited by using a single chain of rotors that all beat in the same direction [@nasouri2016hydrodynamic]; flagella on *Volvox* cover the entire surface and beat from the anterior to the posterior. In this study, we extend these minimal models of coupled oscillators to investigate biologically-relevant distributions of beating flagella on the surface of a sphere.
Following the studies by Brumley *et al.* [@brumley2015metachronal; @brumley2012hydrodynamic], each rotor is a rigid sphere of radius $a$, elastically bound in a circular trajectory of radius $r_0$, about a prescribed center point located a distance $d$ above the spherical *Volvox* body, as depicted in Figure \[fig:RotorSchematics\](a). The preferred plane of orbit is defined by the center of rotation and a vector normal to the plane of rotation, ${{\bf n}}$. The orbit of the rotor is driven by a constant tangential driving force $f^{dr}$ in the ${{\bf e}}_\phi$ direction and the preferred trajectory is elastically enforced through a radial spring and a transverse spring normal to the plane.
To evolve the positions of the rotors in time, the velocity of each rotor is determined from a system of coupled, force-balance equations, one for each rotor. The forces acting on each rotor are the elastic spring forces that resist stretching, the net hydrodynamic drag force, and the prescribed constant driving force.
In the case of one single rotor, the hydrodynamic drag force is assumed to be equal and opposite to the driving force and spring forces yielding the force balance: $$\label{eq:SingleRotorForceBalance}
{{\bf \gamma}}({{\bf x}}){{\bf v}}= -\lambda(r-r_0){{\bf e}}_r - \eta \zeta {{\bf e}}_\zeta + f^{dr}{{\bf e}}_\phi,$$ where $\lambda$ and $\eta$ prescribe the stiffness of the radial and transverse springs and ${{\bf \gamma}}$ is the friction tensor. For simplicity, as in previous studies, we let ${{\bf \gamma}}= \gamma_0 {\bf I}$ where $\gamma_0 = 6\pi\mu a$, the drag on a sphere in free space, and $\mu$ is the dynamic viscosity of the fluid [@nasouri2016hydrodynamic]. With the parameters used in this study we compared this free-space drag to the case where the rotor is above the actual spherical body, and estimated a relative difference of about $2.7\%$; see the Supplementary Material for details [@SupplementaryMaterial].
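As a simple consistency check (not spelled out above, but following directly from equation (\[eq:SingleRotorForceBalance\])), on the preferred orbit ($r=r_0$, $\zeta=0$) the spring terms vanish, the drag balances the driving force, and an isolated rotor circulates at the constant angular speed $$\omega_0 = \frac{f^{dr}}{\gamma_0 r_0} = \frac{f^{dr}}{6\pi\mu a \, r_0}~,$$ which sets the intrinsic beat frequency of each model flagellum.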
When considering a single lone rotor, there is no imposed external fluid flow and thus the hydrodynamic drag on the rotor depends only on the rotor’s own velocity. To evolve $N$ coupled rotors in time, a net drag force on each individual rotor must be considered that includes the effects of the external flow induced by all the other rotors. The external fluid flow imposed on a single rotor by all other rotors is calculated using a far-field approximation with regularized Stokeslets [@cortez2001method; @cortez2005method]. Letting $G$ be the regularized Green’s function in the presence of a no-slip sphere [@wrobel2016regularized], $\{{{\bf x}}_i \}_{i=1}^N$ be the rotor locations, and ${{\bf F}}^\text{ext}_j$ be the external forces acting on the $j^\text{th}$ rotor, then the net hydrodynamic force on the $i^\text{th}$ rotor is ${{\bf F}}_i = -{{\bf \gamma}}({{\bf x}}_i)\big({{\bf v}}_i - \sum_{j\neq i} G({{\bf x}}_i,{{\bf x}}_j){{\bf F}}^\text{ext}_j\big)$, i.e. the drag is set by the velocity of the rotor relative to the ambient flow generated by all the other rotors.
---
abstract: 'The year of 2005 marks the 75th anniversary since Trumpler (1930) provided the first definitive proof of interstellar grains by demonstrating the existence of general absorption and reddening of starlight in the galactic plane. This article reviews our progressive understanding of the nature of interstellar dust.'
address: 'Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA'
author:
- Aigen Li
title: 'Interstellar Grains – The 75$^{\rm TH}$ Anniversary'
---
Introduction: A Brief History for the Studies of Interstellar Dust
==================================================================
In 1930 – exactly 75 years ago, the existence of solid dust particles in interstellar space was firmly established for the first time, based on the discovery of color excesses (Trumpler 1930). But the history of the interstellar dust-related studies is a much longer and more complex subject, and can be dated back to the late 18th century when Herschel (1785) described the dark markings and patches in the sky as “holes in the heavens”. Below is a summary of the highlights of this history. For a more detailed record of the historical development of dust astronomy, I refer the interested readers to Aiello & Cecchi-Pestellini (2000), Dorschner (2003), Li & Greenberg (2003), and Verschuur (2003).
- As early as 1785, Sir William Herschel noticed that the sky looks patchy with stars unevenly distributed and some regions are particularly “devoid” of stars. He described these dark regions (“star voids”) as “[**holes in the heavens**]{}”.
- At the beginning of the 20th century, astronomers started to recognize that the “starless holes” were real physical structures in front of the stars, containing [**dark obscuring masses of matter**]{} able to absorb starlight (Clerke 1903; Barnard 1919), largely thanks to the new technology of photography which made the photographic survey of the dark markings possible. Sir Harold Spencer Jones (1914) also attributed the dark lanes seen in photographs of edge-on spiral galaxies to obscuring matter. Whether the dark lanes in the Milky Way are caused by obscuring material was one of the points of contention in the Curtis-Shapley debate (Shapley & Curtis 1921).
- Wilhelm Struve (1847) noticed that the apparent number of stars per unit volume of space declines in all directions receding from the Sun. He attributed this effect to [**interstellar absorption**]{}.[^1] From his analysis of star counts he deduced a visual extinction of ${{\sim\,}}1{\,{\rm mag\, kpc}^{-1}}$. Many years later, Jacobus Kapteyn (1904) estimated the interstellar absorption to be ${{\sim\,}}1.6{\,{\rm mag\, kpc}^{-1}}$, in order for the observed distribution of stars in space to be consistent with his assumption of a constant stellar density. This value was amazingly close to the current estimates of ${{\sim\,}}1.8{\,{\rm mag\, kpc}^{-1}}$. Max Wolf (1904) demonstrated the existence of discrete clouds of interstellar matter by comparing the star counts for regions containing obscuring matter with those for neighbouring undimmed regions.
- In 1912, Vesto Slipher discovered [**reflection nebulae**]{} from an analysis of the spectrum of the nebulosity in the Pleiades cluster which he found was identical to that of the illuminating stars. It was later recognized that the nebulosity was created by the scattering of light from an intrinsically luminous star by the dust particles in the surrounding interstellar medium (ISM).
- Henry N. Russell (1922) argued that [**dark clouds accounted for the obscuration and this obscuring matter had to be in the form of millimeter-sized fine dust**]{}. Anton Pannekoek (1920) recognized that the obscuration can not be caused by the Rayleigh scattering of gas, otherwise one would require unrealistically high masses for the dark nebulae. He also noticed that, as suggested by Willem de Sitter, the cloud mass problem would vanish if the extinction is due to dust grains with a size comparable to the wavelength of visible light.
- In 1922, Mary L. Heger observed two broad absorption features at 5780${\,{\rm \AA}}$ and 5797${\,{\rm \AA}}$, conspicuously broader than atomic interstellar absorption lines. The interstellar nature of these absorption features was established 12 years later by Paul W. Merrill (1934). These mysterious lines – known as the [**diffuse interstellar bands (DIBs)**]{}, still remain unidentified.
- In 1930, a real breakthrough was made by [**Robert J. Trumpler who provided the first unambiguous evidence for interstellar absorption and reddening which led to the general establishment of the existence of interstellar dust**]{}. Trumpler (1930) based this on a comparison between the photometric distances and geometrical distances of 100 open clusters.[^2] If there was no interstellar absorption, the two distances should be in agreement. However, Trumpler (1930) found that the photometric distances are systematically larger than the geometrical distances, indicating that the premise of a transparent ISM was incorrect.[^3] Using this direct and compelling method he was able to find both absorption and selective absorption or color excess with increasing distance.[^4] Trumpler (1930) also concluded that [**the observed color excess could only be accounted for by “fine cosmic dust”**]{}.
- In 1932, Jan H. Oort demonstrated that the space between the stars must contain a considerable amount of matter. He derived an [**upper limit (“Oort limit”) on the total mass of the matter (including both stars and interstellar matter)**]{} in the solar neighbourhood from an analysis of the motions of K giants perpendicular to the plane of the Galaxy (the $z$-direction). An upper limit of ${{\sim\,}}$$1.0\times 10^{-23}{\,{\rm g}}{\,{\rm cm}}^{-3}$ on the total mass density was obtained from measuring the gravitational acceleration in the $z$-direction. The Oort limit has important implications: (1) [**there has to be more material in the galactic plane than could be seen in stars**]{} since the mass density of known stars is only ${{\sim\,}}$$4.0\times 10^{-24}{\,{\rm g}}{\,{\rm cm}}^{-3}$; and (2) [**the upper limit of ${{\sim\,}}$$6.0\times 10^{-24}{\,{\rm g}}{\,{\rm cm}}^{-3}$ on the mass density of the interstellar matter in the solar neighbourhood places severe restrictions on the source of the obscuration**]{}: what kind of material distributed with this density with what mass absorption coefficient could give rise to the observed visual extinction of about $1{\,{\rm mag\, kpc}^{-1}}$? Apparently, only with small dust grains could so much extinction by so little mass (and the $\lambda^{-1}$ wavelength dependence; see below) be explained.
- In 1936, Rudnick measured for the first time the wavelength dependence of extinction in the wavelength range 4000–6300${\,{\rm \AA}}$, based on differential spectrophotometric observations of reddened and unreddened stars of the same spectral type. Rudnick (1936) found that the measured [**extinction curve was inconsistent with Rayleigh scattering**]{} (which has a $\lambda^{-4}$ wavelength dependence). This so-called “[**pair-match**]{}” method remains the most common way of deriving an interstellar extinction curve.
- By the end of the 1930s, a [**$\lambda^{-1}$ extinction law in the wavelength range 1–3${\,{\rm \mu m^{-1}}}$**]{} had been well established (Hall 1937; Greenstein 1938; Stebbins, Huffer, & Whitford 1939), thanks to the advent of the photoelectric photometry, excluding free electrons, atoms, molecules, and solid grains much larger or much smaller than the wavelength of visible light, leaving solids with a size comparable to the wavelength as the sole possibility.
- In 1936, Struve & Elvey demonstrated the scattering of general starlight by interstellar clouds based on a series of observations of the dark cloud Barnard 15, the core of which is appreciably darker than the rim, although the latter is about as opaque as the former. They attributed the increased brightness of the outer region to [**interstellar scattering**]{}.
- In 1941, Henyey & Greenstein confirmed the existence of [**diffuse interstellar radiation**]{} (which was originally detected by van Rhijn \[1921\]) in the photographic wavelength region. They interpreted the observed intensity of diffuse light as scattered stellar radiation by [**interstellar grains which are strongly forward scattering and have a high albedo (higher than ${{\sim\,}}$0.3)**]{}.
- In 1943, with the advent of the six-colour photometry (at 3530${\,{\rm \AA}}$$<$$\lambda$$<$10300${\,{\rm \AA}}$) Stebbins & Whitford found that the extinction curve exhibits curvature at the near infrared (IR; $\lambda \approx 1.03\,\mu$m) and ultraviolet (UV; $\lambda \approx 0.35\,\mu$m) regions, [**deviating from the simple $\lambda^{-1}$ law**]{}.
- In 1953, Morgan, Harris, & Johnson estimated the ratio of total visual extinction to color excess to be $A_V/E(B-V)\approx 3.0\pm 0.2$. This was supported by a more detailed study carried out by Whitford (1958), who argued that there appeared to be a “very close approach to uniformity of the reddening law” in most directions. [**A uniform extinction curve with a constant $A_V/E(B-V)$**]{} was welcomed by the astronomers.
UUITP-20/08\
[**Chain inflation and the imprint of fundamental physics in the CMBR\
**]{}
[Diego Chialva$^1$ and Ulf H. Danielsson$^2$]{}\
Institutionen för fysik och astronomi\
Uppsala Universitet, Box 803, SE-751 08 Uppsala, Sweden
[diego.chialva@fysast.uu.se ulf.danielsson@fysast.uu.se\
]{}
[**Abstract**]{}
In this work we investigate characteristic modifications of the spectrum of cosmological perturbations and the spectral index due to chain inflation. We find two types of effects. First, modifications of the spectral index depending on interactions between radiation and the vacuum, and on features of the effective vacuum potential of the underlying fundamental theory. Second, a modulation of the spectrum signaling new physics due to bubble nucleation. This effect is similar to those of transplanckian physics. Measurements of such signatures could provide a wealth of information on the fundamental physics at the basis of inflation.
September 2008
Introduction
============
The current most popular models for inflation are based on chaotic inflation. In these models a scalar field rolls slowly subject to Hubble friction in a shallow potential. In [@Chialva:2008zw] we proposed an alternative scenario[^1] that shares many of the features of slow roll chaotic inflation, but also differs in several important aspects. Our model is based on chain inflation and makes use of a series of several first order phase transitions. More precisely, we imagine a potential with a large number of small barriers that separate local, metastable minima. The barriers prevent the field from rolling, and without quantum mechanical tunneling the inflaton is stuck in a local minimum. By appropriately choosing the heights and widths of the barriers, one can obtain tunneling probabilities such that the field effectively rolls slowly down the potential through repeated tunneling events. In this way we can achieve a slow roll in the sense of having a slow change in $H^{2}\sim\rho^{V}$ ($\rho^{V}$ being the vacuum energy density), even if the potential for the fields is steep. The details of this process were worked out in [@Chialva:2008zw], and it was also shown that suitable potentials might be found in flux compactified string theory.
The main features of the model introduced in [@Chialva:2008zw] are as follows. We assume that the bubbles, after their formation through tunneling, rapidly percolate and collide. The energy difference between two subsequent minima is temporarily stored in the bubble walls, and we assume that this energy is rapidly converted into radiation as the bubbles collide. In this way we obtain a coarse grained picture where the main effect of the barriers and the tunneling is to introduce a source term for radiation in the Friedmann equations.
A scalar field can be understood as a fluid consisting of two components: a component corresponding to the kinetic energy, $T$, and a component corresponding to the potential energy, $V$. In the case of slow roll we have $T\sim\varepsilon V\ll V,$ where $\varepsilon$ is the slow roll parameter, and as a consequence the dynamics is dominated by the potential energy leading to accelerated expansion and inflation. In our version of chain inflation the kinetic component is further suppressed relative to the potential energy. On the other hand, radiation is produced through the tunneling leading to $\rho_{rad}\sim\varepsilon V$. As a result we effectively have, to first order in $\varepsilon$, a model consisting of a decaying cosmological constant and a coupled component of radiation. For the case of chaotic inflation, it is important to understand that it is the sub-dominant kinetic energy that determines the spectrum of the fluctuations. The kinetic energy corresponds to a hydrodynamical fluid with an effective speed of sound that is equal to the speed of light. In contrast, the potential energy does not correspond to a hydrodynamical fluid and lacks a well defined speed of sound.
The amplitude of the primordial fluctuations is set by the speed of sound. The general result is $$P\sim\frac{H^{2}}{c_{s}\varepsilon},$$ where $c_{s}$ is the speed of sound of the hydrodynamical component. For chaotic inflation $c_{s}=1$. In our model for chain inflation, the role of the kinetic energy is taken over by the radiation, where $c_{s}=\frac{1}{\sqrt{3}}$. As a result, the primordial spectrum is corrected to $$P\sim\sqrt{3}\frac{H^{2}}{\varepsilon}.$$ The result differs from chaotic inflation through a simple factor of $\sqrt{3}$. While this simple argument captures the main physics of the model, there are many important points of the derivation that are carefully discussed in [@Chialva:2008zw].
In the present paper we discuss the possibility of further effects on the primordial spectrum from various sources.[^2] In the first part of the paper we will discuss the modifications to the spectrum of cosmological perturbations due to the presence of the non-negligible interaction between radiation and vacuum energy. We will discuss how they arise and appear to be specific to our model of chain inflation. In the second part of the paper we will instead consider how bubble nucleation affects the spectrum of perturbations, and in particular we will study the imprint of the size of the bubbles on the CMB.
Effects due to interactions
===========================
In [@Chialva:2008zw] we derived a system of equations that determine the evolution of the comoving curvature perturbations during a period of chain inflation. The approach was based on the traditional analysis of scalar perturbations in field (slow-roll/chaotic) models, as presented in [@kinflGarrMuk]. We start with a brief review of the approach used in [@Chialva:2008zw], and show that the method of [@kinflGarrMuk] is not the most convenient one in the case of chain inflation. We will therefore propose another way of analyzing and re-writing the system of equations that is better suited for our model.
We start from equation (97) in [@Chialva:2008zw], $$\left\{
\begin{aligned} \dot\xi & = {a(\rho+p) {\over }H^2}\mathcal{R} \\ \dot{\mathcal{R}} & = {1 {\over }3} {H^2 {\over }a^3 (\rho+p)}\big(-k^2 -a^24\pi G {Q_V {\over }3H}\big)\xi -{4 {\over }3} H S_{V, r} \end{aligned}\right.
, \label{systpert}$$ where
- $a$ is the cosmological scale factor, $H$ is the Hubble rate, $\phi$ is the gravitational potential, and $G$ is Newton’s constant
- $Q_{V}$ is the energy-momentum transfer vector[^3]
- $a\phi= 4\pi GH\xi$,
- $\mathcal{R}$ is the comoving curvature perturbation
- $\rho,\,p$ are the total energy and pressure density
- $\varepsilon$ is the slow-roll parameter
- $k$ is the comoving wavenumber for the perturbation
- $S_{V,r}$ is the relative entropy perturbation between vacuum ($V$) and radiation ($r$).
In the following we will neglect the term proportional to $S_{V,r}$. As a result our conclusions apply only to models with negligible contributions from isoentropic perturbations, or, alternatively, just to the adiabatic component of the whole spectrum of perturbations.
Comparing equations (\[systpert\]) with the analogous equations in [@kinflGarrMuk] (in flat space), we see the importance of interactions in our multicomponent system, as represented by the term $-a^{2}4\pi G{Q_{V}\ov3H}\xi$. Note also that this can be conveniently re-written as $-a^{2}4\pi G{Q_{V}\ov3H}=a^{2}H^{2}\varepsilon$. Following the literature (see for example [@Mukhanov:1990me]), it is customary to define a standard quantization variable $\varsigma=z\mathcal{R}$ where $$z\equiv{a(\rho+p)^{1\ov2}{\over }{1 {\over }3}H}\left( {\hat{O}\ov(-k^{2}+a^{2}H^{2}\varepsilon)}\right) ^{1\ov2}.$$ The singularity at $k^{2}=a^{2}H^{2}\varepsilon$ is just an artifact of the choice of variables, as is evident from (\[systpert\])[^4]. However, in order to better understand the implications for the spectrum of perturbations due to the new term, we choose to follow an alternative route using another change of variables. The (first order) action inferred from the equations of motion is given by[^5] $$S=\int\big[\dot{\xi}\hat{O}\mathcal{R}+{1\ov2}{H^{2}c_{s}^{2}{\over }a^{3}(\rho+p)}\xi(\Delta+a^{2}H^{2}\varepsilon)\hat{O}\xi-{1\ov2}{a^{2}(\rho+p){\over }H^{2}}\mathcal{R}\hat{O}\mathcal{R}\big]dt\,d^{3}x,$$ where $\hat{O}$ is a time-independent factor which, by comparison with known cases, was found to be[^6]
---
abstract: 'The growing amount of intermittent renewables in power generation creates challenges for real-time matching of supply and demand in the power grid. Emerging ancillary power markets provide new incentives to consumers (e.g., electrical vehicles, data centers, and others) to perform demand response to help stabilize the electricity grid. A promising class of potential demand response providers includes energy storage systems (ESSs). This paper evaluates the benefits of using various types of novel ESS technologies for a variety of emerging smart grid demand response programs, such as regulation services reserves (RSRs), contingency reserves, and peak shaving. We model, formulate and solve optimization problems to maximize the net profit of ESSs in providing each demand response. Our solution selects the optimal power and energy capacities of the ESS, determines the optimal reserve value to provide as well as the ESS real-time operational policy for program participation. Our results highlight that applying ultra-capacitors and flywheels in RSR has the potential to be up to 30 times more profitable than using common battery technologies such as lithium-ion (LI) and lead-acid (LA) batteries for peak shaving.'
author:
-
title: |
Optimizing Energy Storage Participation\
in Emerging Power Markets
---
Acknowledgment {#acknowledgment .unnumbered}
==============
This paper is supported by the NSF Grant 1464388.
---
abstract: 'Using the observational properties of Einstein’s gravitational field it is shown that a minimum of four non-coplanar mass probes are necessary for the Michelson and Morley interferometer to detect gravitational waves within the context of General Relativity. With fewer probes, some alternative theories of gravitation can also explain the observations. The conversion of the existing gravitational wave detectors to four probes is also suggested.'
author:
- |
Ivan S. Ferreira\
University of Brasilia, Institute of Physics, Brasilia, DF 70910-900, ivan@fis.unb.br,\
C. Frajuca\
National Institute for Space Research, Sao Jose dos Campos, 12227-010, frajuca@gmail.com,\
Nadja S. Magalhaes\
Physics Department, Sao Paulo Federal University, SP09913-030, Brazil, nadjasm@gmail.com,\
M. D. Maia\
University of Brasilia, Institute of Physics, Brasilia, DF70910-900, maia@unb.br,\
Claudio M. G. Sousa\
Federal University of Para, Santarem, PA 68040-070, claudiogomes@ufpa.br.
title: The Laser Gravitational Compass
---
The Observable Gravitational Wave
=================================
Einstein’s prediction of gravitational waves (gw) was originally derived from arbitrarily small perturbations of the Minkowski metric $g_{\mu\nu}= \eta_{\mu\nu} + h_{\mu\nu}$, such that Einstein’s equations reduce to a linear wave equation, written in a special (de Donder) coordinate gauge, as follows (except when explicitly stated, Greek indices run from 0 to 3 and lower case Latin indices run from 1 to 3): $$\Box^2 \Psi_{\mu\nu} = 0, \;\; \; \; \Psi_{\mu\nu}= h_{\mu\nu}-\frac{1}{2}h \eta_{\mu\nu},\;\; \;\: h=\eta^{\mu\nu}h_{\mu\nu}. \label{eq:waves}$$ The currently operating laser gw observatories are inspired by the Michelson & Morley (M&M) interferometers[@Giazotto; @Pitkin; @Eardley], where the data acquired by the Fabry-Pérot interferometer (an etalon) is used to generate a numerical simulation, thus producing a template from which the most probable source is estimated[@Abbot2016]. The purpose of this note is to show that detecting gw described by General Relativity (GR) with an M&M interferometer requires a minimum of four non-coplanar mass probes.
The observables of Einstein’s gravitational field are given by the eigenvalues of the Riemann curvature[@Zakharov; @Pirani1956; @Pirani1957], defined by $R(U,V)W = [\nabla_U, \nabla_V]W - \nabla_{[U,V]}W$, whose components in any given basis $ \{e_\mu \} $ are $R(e_\mu,e_\nu)e_\rho = R_{\mu\nu\rho\sigma}e^\sigma$. Then we find that there are at most six independent eigenvectors $X_{\mu\nu}$ and six eigenvalues $\lambda$, solutions of the eigenvalue equation $ R_{\mu\nu\rho\sigma}X^{\rho\sigma}=\lambda X_{\mu\nu}$, including the zero eigenvalue, corresponding to the absence of gravitation[@Pirani1962; @Sachs1962a; @Pirani1967]. Thus, using the language of field theory, Einstein’s gravitation is said to have five non-trivial observables or degrees of freedom $(dof)$. The spin-statistics theorem relates the $dof$ to the helicity or the orbital spin of the field as $s=(dof-1)/2$, so that Einstein’s gravitation is also said to be a spin-2 field.
Alternative theories of gravitation may have distinct definitions of observables (not necessarily related to curvature), and their gravitational waves, if they exist, may require different methods of observation. Well-known examples include the spin-1 gauge theories of gravitation (there are several of them), characterized by dof=3; topological gravitation in three dimensions; projective theories of gravitation; and F(R) theories, among many others. Therefore, in order to understand the observation of a gw, it is essential to specify the observables of the theory on which the experiment is based.
The most general massless spin-2 field $h_{\mu\nu}$ was defined in the Minkowski space-time by Fierz and Pauli [@FierzPauli], as a trace-free field, $h=h_\mu{}^\mu=0$, satisfying the field equations $\Box^2 h_{\mu\nu}=0$. This is a linear field, not to be confused with Einstein’s gravitation. Since in the case of the present observations of gw the supporting theory is Einstein’s gravitation, the observational signature to be sought is that of a $dof=5$, i.e. spin-2, field, characterized by the observable curvature.
The use of the Fierz-Pauli field $h_{\mu\nu}$ as the perturbation of the Minkowski metric makes it possible to free Eq. (\[eq:waves\]) of coordinate gauges, so that its solutions can be written as a superposition of plane polarized gravitational waves, characterized by the Traceless-Transverse-Plane-Polarized (TTPP) gauge conditions [@Giazotto]: $$h=0,\;\; h_{i0}=0, \;\; \Box^2 h_{\mu\nu}=0,\;\; h_{\mu\nu;\rho}=0. \label{eq:TTPP}$$ Then, these conditions are used to simulate a template, from which the source of the gw observations is estimated.
The Equivalence Principle and the M&M gw Detector
=================================================
The use of M&M detectors for gw is based on the principle of equivalence of GR: given two masses A and B with attached mirrors, under the exclusive action of a known gravitational field, they propagate (or “free fall”) along time-like geodesics, with unit tangent vectors $T_A$ and $T_B$ respectively, satisfying the geodesic equations $\nabla_{T_{A}} T_A=0$ and $\nabla_{T_{B}} T_B=0$. Eventually, probe A sends a light signal with velocity $P$ to probe B along the light geodesic with equation $\nabla_P P=0$. After a while, probe A receives back the reflected signal, so that these geodesics describe a closed parallelogram, with the closing condition $\nabla_T P=\nabla_P T$.
The curvature tensor calculated in that parallelogram is $ R(T,P)T = [\nabla_T ,\nabla_P ]T = \nabla_T (\nabla_T P) = \nabla_T a$, where $a = \nabla_T P$ is the acceleration of the signal along the fall. Defining a basis by $\{e_0 =T , e_i= P \} $, where $e_i$ denotes any space-like direction, we obtain the geodesic deviation equation in components, $\frac{d a_i}{cdt}=R_{0i0i}$, where $t$ denotes the time parameter of the time-like geodesic. As we can see, the motion of the probes generates a 2-dimensional world-sheet, whose curvature $R_{0i0i}$ is translated directly into the variation of the fall acceleration.
In the currently operating and some future planned M&M gw detectors, three mass-probes are used, defining a space-like plane in space-time, whose motion under the action of a gw generates a 3-dimensional world-volume, whose curvature is measured by a $2 \times 2$ acceleration array $a_{ij}$, obeying the geodesic deviation equation $$\frac{d a_{ij}}{cdt}= \!\!R_{0i0j}, \; i,j =1,2. \label{eq:GDE2}$$ Therefore, such detectors are capable of measuring only three curvature components, $R_{0101},R_{0102},R_{0202}$, so that at most three degrees of freedom of Einstein’s gravitational field can be obtained. Since the observed gw are very weak, it has been assumed that the degrees of freedom missing in one detector may be complemented by the data collected at another detector located somewhere else. Such an understanding is supported by the numerical simulation, from which an estimate of the wave source capable of reproducing the same observation in two separate detectors is obtained. Although it is possible to parallel-transport the curvature tensor from one detector to another, the left-hand side of Eq. (\[eq:GDE2\]) represents a locally measured quantity which cannot be transported to another detector, under penalty of breaching the principle of equivalence.
The definitive solution for the missing dof problem can be obtained by a direct measure of the curvature tensor using the geodesic deviation equations Eq. (\[eq:GDE2\]), extended to $i,j =1,2,3$. One such detector, the “gravitational compass”, was conceived by P. Szekeres. It consists of four non-coplanar mass probes.
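As a toy numerical check of this probe counting (an illustration added here, not taken from the original analysis), one can verify that arms between coplanar probes only sample the in-plane block of the “electric” curvature $E_{ij}=R_{0i0j}$, while an arm to a fourth, non-coplanar probe senses the remaining components:

```python
import numpy as np

# Electric part of the Riemann tensor, E_ij = R_{0i0j}: symmetric and,
# in vacuum, trace-free, hence 5 independent components (the dof = 5
# counting quoted above). Entries below are random placeholders.
rng = np.random.default_rng(1)
E = rng.normal(size=(3, 3))
E = 0.5 * (E + E.T)
E -= (np.trace(E) / 3.0) * np.eye(3)

def arm_signal(E, s):
    """Tidal signal measured along an arm of direction s: s_i E_ij s_j."""
    s = np.asarray(s, float)
    s = s / np.linalg.norm(s)
    return s @ E @ s

# Arms between three coplanar probes (all separations in the x-y plane)
# only ever sample E_11, E_12, E_22 -- three curvature components.
E_inplane = E.copy()
E_inplane[2, :] = 0.0
E_inplane[:, 2] = 0.0
for s in ([1, 0, 0], [0, 1, 0], [1, 1, 0]):
    assert np.isclose(arm_signal(E, s), arm_signal(E_inplane, s))

# A fourth, non-coplanar probe also senses E_13, E_23, E_33.
print(arm_signal(E, [1, 1, 1]), arm_signal(E_inplane, [1, 1, 1]))  # differ
```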
---
bibliography:
- 'references.bib'
title: A Symbolic Decision Procedure for Symbolic Alternating Finite Automata
---
---
abstract: 'Spatial modulations in the distribution of observed luminosities (computed using redshifts) of $\sim 5\times 10^5$ galaxies from the SDSS Data Release $7$, probe the cosmic peculiar velocity field out to $z\sim 0.1$. Allowing for luminosity evolution, the $r$-band luminosity function, determined via a spline-based estimator, is well represented by a Schechter form with $ M^{\star}(z)-5{\rm log_{10}} h=-20.52-1.6(z-0.1)\pm 0.05$ and $\alpha^{\star}=-1.1\pm 0.03$. Bulk flows and higher velocity moments in two redshift bins, $0.02 < z < 0.07$ and $0.07 < z < 0.22$, agree with the predictions of the $\Lambda$CDM model, as obtained from mock galaxy catalogs designed to match the observations. Assuming a $\Lambda$CDM model, we estimate $\sigma_{8}\approx 1.1\pm 0.4$ for the amplitude of the linear matter power spectrum, where the low accuracy is due to the limited number of galaxies. While the low-$z$ bin is robust against coherent photometric uncertainties, the bias of results from the second bin is consistent with the $\sim1$% magnitude tilt reported by the SDSS collaboration. The systematics are expected to have a significantly lower impact in future datasets with larger sky coverage and better photometric calibration.'
author:
- Martin Feix
- Adi Nusser
- Enzo Branchini
bibliography:
- 'bulk\_ref.bib'
title: Tracing the cosmic velocity field at $z\sim 0.1$ from galaxy luminosities in the SDSS DR7
---
Introduction {#sec:int}
============
In recent years, the amount of available extragalactic data has helped to establish a comprehensive picture of our Universe and its evolution [e.g. @Percival2010; @Riess2011; @Hinshaw2013; @Planck2013]. These data, by and large, have reinforced the standard cosmological paradigm where initial perturbations in the mass density field grow via gravitational instability and eventually form the cosmic structure we observe today. The clustering process is inevitably associated with peculiar motions of matter, namely deviations from a pure Hubble flow. On large scales, these motions exhibit a coherent pattern, with matter generally flowing from underdense to overdense regions. If galaxies indeed move much like test particles, they should appropriately reflect the underlying peculiar velocity field, which contains valuable information and, in principle, could be used to constrain and discriminate between different cosmological models.
Usually relying on galaxy peculiar velocities estimated from measured redshifts and distance indicators, most approaches in the literature have focused on extracting this information within local volumes of up to $100h^{-1}$ Mpc and larger centered on the Milky Way [e.g., @Riess1995; @Dekel1999; @Zaroubi2001; @Hudson2004; @Sarkar2007; @Lavaux2010; @feldwh10; @ND11; @Turnbull2012; @Feindt2013]. Common distance indicators are based on well-established relations between observable intrinsic properties of a given astronomical object, where one of them depends on the object’s distance. A typical example is the Tully-Fisher relation [@TF77] between rotational velocities of spiral galaxies and their absolute magnitudes. Due to observational challenges, the number of galaxies in distance catalogs is relatively small compared to that of redshift catalogs, limiting the possibility of exploring the cosmological peculiar velocity field to low redshifts $z\sim$ 0.02–0.03. Moreover, all known distance indicators are potentially plagued by systematic errors [@lyn88; @Strauss1995] which could give rise to unwanted biases in the inferred velocities and thus render their use for cosmological purposes less desirable.
To probe the flow of galaxies at deeper redshifts, one needs to resort to non-traditional distance indicators. One method, for instance, exploits the kinetic Sunyaev-Zel’dovich effect to measure the cosmic bulk flow, i.e. the volume average of the peculiar velocity field, out to depths of around 100–500$h^{-1}$ Mpc [e.g, @Haehnelt1996; @Osborne2011; @Lavaux2013; @planck_bf]. Another strategy is based on the apparent anisotropic clustering of galaxies in redshift space which is commonly described as redshift-space distortions. This effect is a direct consequence of the additional displacement from distances to redshifts due to the peculiar motions of galaxies, and it yields reliable constraints on the amplitude of large-scale coherent motions and the growth rate of density perturbations [e.g., @Hamilton1998; @Peacock2001; @Scoccimarro2004; @Guz08].
Galaxy peculiar motions also affect luminosity estimates based on measured redshifts, providing another way of tackling the problem. Since the luminosity of a galaxy is independent of its velocity, systematic biases in the estimated luminosities of galaxies can be used to explore the peculiar velocity field. The idea has a long history. It was first adopted to constrain the velocity of the Virgo cluster relative to the Local Group by correlating the magnitudes of nearby galaxies with their redshifts [@TYS]. Although in need of very large galaxy numbers to be effective, methods based on this idea use only measured galaxy luminosities and their redshifts to derive bounds on the large-scale peculiar velocity field. Therefore, these methods do not require the use of traditional distance indicators and they are also independent of galaxy bias. Using the nearly full-sky 2MASS Redshift Survey (2MRS) [@Huchra2012], for example, this approach has recently been adopted to constrain bulk flows in the local Universe within $z\sim 0.01$ [@Nusser2011; @Branchini2012]. Furthermore, it has been used to determine the current growth rate of density fluctuations by reconstructing the full linear velocity field from the clustering of galaxies [@Nusser1994; @Nusser2012].
Here we seek to apply this luminosity-based approach to obtain peculiar velocity information from galaxy redshifts and apparent magnitudes of the Sloan Digital Sky Survey (SDSS) [@York2000]. The goals of our analysis are:
- A demonstration of the method’s applicability to datasets with large galaxy numbers.
- An updated estimate of the $r$-band luminosity function of SDSS galaxies at $z\sim 0.1$, accounting for evolution in galaxy luminosities.
- Novel bounds on bulk flows and higher-order moments of the peculiar velocity field at redshifts $z\sim 0.1$.
- First constraints on the angular peculiar velocity power spectrum and cosmological parameters without additional input such as galaxy clustering information.
The paper is organized as follows: we begin with introducing the luminosity method and its basic equations in section \[section2\]. In section \[section3\], we then describe the SDSS galaxy sample used in our analysis, together with a suite of mock catalogs which will allow us to assess uncertainties and known systematics inherent to the data. After a first test of the method, we attempt to constrain peculiar motions in section \[section4\], assuming a redshift-binned model of the velocity field. Because of the mixing between different velocity moments arising from the SDSS footprint, bulk flow measurements are interpreted with the help of galaxy mocks. Including higher-order velocity moments, we proceed with discussing constraints on the angular velocity power in different redshift bins and their implications. As an example of cosmological parameter estimation, we further infer the quantity $\sigma_{8}$, i.e. the amplitude of the linear matter power spectrum on a scale of $8h^{-1}$ Mpc, and compare the result to the findings from the corresponding mock analysis. Other potential issues and caveats related to our investigation are addressed at the section’s end. In section \[section5\], we finally summarize our conclusions and the method’s prospects in the context of next-generation surveys. For clarity, some of the technical material is separately given in an appendix. Throughout the paper, we adopt the standard notation, and all redshifts are expressed in the rest frame of the cosmic microwave background (CMB) using the dipole from ref. [@Fixsen1996].
Methodology {#section2}
===========
Variation of observed galaxy luminosities {#section2a}
-----------------------------------------
In an inhomogeneous universe, the observed redshift $z$ of an object (a galaxy) is generally different from its cosmological redshift $z_{c}$ defined for the unperturbed background. To linear order in perturbation theory, one finds the well-known expression [@SW] $$\begin{split}
\frac{z-z_{c}}{1+z} &= \frac{V(t,r)}{c} - \frac{\Phi(t,r)}{c^2}\\
&{ } - \frac{2}{c^2}\int_{t(r)}^{t_0}\dd t \frac{\partial\Phi\left\lbrack\hvr r(t),t\right\rbrack}{\partial t}\approx \frac{V(t,r)}{c},
\end{split}
\label{eq:sw}$$ where $V$ is the physical radial peculiar velocity of the object, $\Phi$ denotes the gravitational potential and $\hvr$ is a unit vector along the line of sight to the object. The last step explicitly assumes low redshifts where the velocity $V$ makes the dominant contribution.[^1] Note that all fields are considered relative to their present-day values at a comoving radius of $r(t=t_{0})$ and that we have substituted $z$ for $z_{c}$ in the denominator on the left-hand side of eq. (\[eq:sw\]).
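To gauge the size of the luminosity modulation implied by eq. (\[eq:sw\]), here is a rough numerical sketch (with assumed cosmological parameters, not the paper's estimator): a radial peculiar velocity shifts the observed redshift and hence the absolute magnitude inferred from it.

```python
import numpy as np
from scipy.integrate import quad

H0, Om, c = 70.0, 0.3, 2.998e5          # km/s/Mpc, --, km/s (assumed values)

def d_L(z):
    """Luminosity distance in Mpc for a flat LCDM background."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + 1 - Om)
    return (1 + z) * (c / H0) * quad(integrand, 0.0, z)[0]

z_c, V = 0.1, 600.0                     # cosmological redshift, radial velocity
z_obs = (1 + z_c) * (1 + V / c) - 1     # eq. (sw) to linear order in V/c

# Magnitude estimated from z_obs instead of z_c picks up a systematic offset.
dM = 5 * np.log10(d_L(z_obs) / d_L(z_c))
print("Delta M = %.3f mag for V = %d km/s at z ~ %.2f" % (dM, V, z_c))
# A few hundredths of a magnitude per galaxy: tiny individually, but
# detectable statistically with ~5e5 galaxies.
```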
---
abstract: 'Proton-proton correlations were observed for the two-proton decays of the ground states of $^{19}$Mg and $^{16}$Ne. The trajectories of the respective decay products, $^{17}$Ne+p+p and $^{14}$O+p+p, were measured by using a tracking technique with microstrip detectors. These data were used to reconstruct the angular correlations of fragments projected on planes transverse to the precursor momenta. The measured three-particle correlations reflect a genuine three-body decay mechanism and allowed us to obtain spectroscopic information on the precursors with valence protons in the $sd$ shell.'
author:
- 'I. Mukha'
- 'L. Grigorenko'
- 'K. Sümmerer'
- 'L. Acosta'
- 'M. A. G. Alvarez'
- 'E. Casarejos'
- 'A. Chatillon'
- 'D. Cortina-Gil'
- 'J. Espino'
- 'A. Fomichev'
- 'J. E. García-Ramos'
- 'H. Geissel'
- 'J. Gómez-Camacho'
- 'J. Hofmann'
- 'O. Kiselev'
- 'A. Korsheninnikov'
- 'N. Kurz'
- 'Yu. Litvinov'
- 'I. Martel'
- 'C. Nociforo'
- 'W. Ott'
- 'M. Pfützner'
- 'C. Rodríguez-Tajes'
- 'E. Roeckl'
- 'M. Stanoiu'
- 'H. Weick'
- 'P. J. Woods'
title: 'Proton-proton correlations observed in two-proton decay of $^{19}$Mg and $^{16}$Ne'
---
The recently discovered two-proton (2p) radioactivity is a specific type of genuine three-particle nuclear decay. It occurs when a resonance in any pair of fragments is located at higher energies than in the initial three-body (p+p+“core”) nucleus, and thus simultaneous emission of two protons is the only decay channel. Three-body systems have more degrees of freedom in comparison with two-body systems, hence additional observables appear. In the case of 2p emission, the energy spectra of single protons become continuous, and proton-proton (p–p) correlations are available, which makes them a prospective probe for nuclear structure and/or the decay mechanism. For example, the first p–p correlations observed in the 2p radioactivity of $^{94m}$Ag have revealed strong proton yields either in the same or opposite directions, which called for a theory of 2p emission from deformed nuclei [@mukh06]. Two-proton emission can also occur from short-lived nuclear resonances or excited states (see, e.g., [@boch89; @o12; @o14]). Though in this case the mechanism of 2p emission may depend on the reaction populating the parent state, such nuclei can be easily studied in-flight. E.g., the cases of $^{6}$Be [@boch89; @dan87] and $^{16}$Ne [@korsh_ne16] were studied by analyzing their p–p correlations in the framework of a three-body partial-wave analysis developed for three-particle decays of light nuclei. In particular, the study of $^6$Be revealed the existence of three-particle p+p+$\alpha$ correlations [@boch89] which matched the three-body components found theoretically in the $p$-shell structure of $^6$Be [@thomp00]. Very recently, p–p correlations were also observed in the 2p radioactivity of $^{45}$Fe [@giov07; @mier07], where both the lifetime and the p–p correlations were found to reflect the structure of $pf$-shell 2p precursors [@mier07]. Such a way of obtaining spectroscopic information is a novel feature compared to studies of two-particle decays.
In the present paper, we study for the first time the p–p correlations in $sd$-shell nuclei via examples of the 2p decays of $^{19}$Mg and $^{16}$Ne. These nuclei, with very different half-lives ($T_{1/2} \approx 4 \cdot 10^{-9}$ s [@mukh_mg19] and $T_{1/2} \approx 4 \cdot 10^{-19}$ s [@kek_ne16], respectively) and presumably different spectroscopic properties, may serve as reference cases illuminating the nuclear structure of other possible 2p emitters with $sd$-wave configuration.
The decay properties of the $^{16}$Ne and $^{19}$Mg ground states and the related resonances in $^{15}$F and $^{18}$Na are shown in Fig. \[fig0\], which compiles the data from Refs. [@kek_ne16; @ne16; @ne16_sc; @f15_pet; @f15_gol; @18Na; @mukh_mg19] and this work. The ground states of both isotopes decay only by simultaneous 2p emission, while their excited states are open for sequential 1p decays via intermediate unbound states in $^{15}$F and $^{18}$Na.
![\[fig0\] States observed in $^{16}$Ne, $^{19}$Mg and the corresponding intermediate systems $^{15}$F, $^{18}$Na. Decay energies (in keV) are given relative to the respective p and 2p thresholds. Most values have been taken from the literature (Refs. [@mukh_mg19; @kek_ne16; @ne16; @ne16_sc; @f15_pet; @f15_gol; @18Na]), those in bold print are from the present work.](fig1_grig1a.eps){width="48.00000%"}
The quantum-mechanical theory of 2p radioactivity, which uses a three-body model [@grig00; @grig01; @grig03], predicts the p–p correlations to be strongly influenced by nuclear structure together with Coulomb and three-body centrifugal barriers. In particular, the newly discovered 2p radioactivity of $^{19}$Mg [@mukh_mg19] was predicted to be characterized by p–p correlations which reflect the $sd$ configurations of the valence protons [@grig03a]. A similar effect is found in $^{16}$Ne, where the $s$-wave configuration was predicted to dominate, contrary to its mirror $^{16}$C, thus breaking isospin symmetry [@grig_ne16]. A complementary approach in describing 2p decays is the mechanism of sequential emission of protons via an intermediate state (see, e.g., [@lane]). It also includes the traditional quasi-classical di-proton model with emission of a $^2$He cluster, assuming extremely strong p–p correlations [@gold60; @baz72]. The predictions of these models differ dramatically with respect to the p–p correlations, suggesting them as a sensitive probe of the 2p-decay mechanism (see the detailed predictions below).
Our experiment to investigate 2p emission from $^{19}$Mg and $^{16}$Ne was performed by using a 591[*A*]{} MeV beam of $^{24}$Mg accelerated by the SIS facility at GSI, Darmstadt. The radioactive beams of $^{20}$Mg and $^{17}$Ne were produced at the Projectile-Fragment Separator (FRS) [@frs] with average intensities of 400 and 800 ions s$^{-1}$ and energies of 450[*A*]{} and 410[*A*]{} MeV, respectively. The secondary 1-n-removal reactions ($^{20}$Mg, $^{19}$Mg) and ($^{17}$Ne, $^{16}$Ne) occurred at the mid-plane of the FRS in a secondary 2 g/cm$^{2}$ $^{9}$Be target. Special magnetic optics settings were applied, the first FRS half being tuned in an achromatic mode using a wedge-shaped degrader, while its second half was set for identification of the heavy ions (HI) with high acceptance in angle and momentum.
A sketch of the experimental set-up at the FRS midplane has been shown in Fig. 1 of Ref. [@mukh_mg19] and explained in detail there. A microstrip detector array [@si_proposal] consisting of 4 large-area (7$\times$4 cm$^{2}$) double-sided silicon detectors (with a pitch of 0.1 mm) was positioned downstream of the secondary target. This array was used to measure energy loss and position of coincident hits of two protons and a heavy fragment, thus allowing us to reconstruct all decay-product trajectories and derive the coordinates of the reaction vertex and the angular p–p and proton-HI correlations. The conditions to select the true HI+p+p events were: (i) a minimal distance between the proton and heavy-ion trajectories of less than 150 $\mu$m, and (ii) a difference between the two longitudinal coordinates
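Condition (i) reduces to the classic point-of-closest-approach computation between two fitted straight tracks, which also yields the vertex estimate; the helper below is an illustrative sketch of that geometry only (hypothetical numbers, no detector-resolution effects).

```python
import numpy as np

def closest_approach(p1, d1, p2, d2):
    """Lines x = p_i + t_i * d_i; returns (distance, midpoint of closest points)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2   # a = c = 1 after normalization
    denom = a * c - b * b                  # -> 0 for parallel tracks
    t1 = (b * (d2 @ r) - c * (d1 @ r)) / denom
    t2 = (a * (d2 @ r) - b * (d1 @ r)) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2
    return np.linalg.norm(q1 - q2), 0.5 * (q1 + q2)

# Example: a proton track slightly tilted w.r.t. the heavy-ion track (mm).
dist, vertex = closest_approach(np.array([0.0, 0.1, 0.0]),
                                np.array([0.02, 0.01, 1.0]),
                                np.array([0.0, 0.0, 0.0]),
                                np.array([0.0, 0.0, 1.0]))
print("distance = %.4f mm, vertex ~ %s" % (dist, vertex))
```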
---
abstract: 'Matrix completion and extrapolation (MCEX) are dealt with here over reproducing kernel Hilbert spaces (RKHSs) in order to account for prior information present in the available data. Aiming at a fast and low-complexity solver, the task is formulated as one of kernel ridge regression. The resultant MCEX algorithm can also afford online implementation, while the class of kernel functions also encompasses several existing approaches to MC with prior information. Numerical tests on synthetic and real datasets show that the novel approach is faster than widespread methods such as alternating least-squares (ALS) or stochastic gradient descent (SGD), and that the recovery error is reduced, especially when dealing with noisy data.'
author:
- |
Pere Giménez-Febrer$^1$, Alba Pagès-Zamora$^1$, and Georgios B. Giannakis$^2$\
$^1$SPCOM Group, Universitat Politècnica de Catalunya-Barcelona Tech, Spain\
$^2$Dept. of ECE and Digital Technology Center, University of Minnesota, USA
title: 'Matrix completion and extrapolation via kernel regression [^1]'
---
Matrix completion, extrapolation, RKHS, kernel ridge regression, graphs, online learning
Introduction
============
With only a subset of its entries available, matrix completion (MC) amounts to recovering the unavailable entries by leveraging just the low-rank attribute of the matrix itself [@candes]. The relevant task arises in applications as diverse as image restoration [@ji2010robust], sensor networks [@yi2015partial], and recommender systems [@koren2009matrix]. For instance, to save power, only a fraction of the sensors may collect and transmit measurements to a fusion center, where the available spatio-temporal data can be organized in a matrix format, and the unavailable ones can eventually be interpolated via MC [@yi2015partial]. Similarly, in collaborative filtering, the ratings given by users to a small number of items are stored in a sparse matrix, and the objective is to predict their ratings for the rest of the items [@koren2009matrix].
Existing MC approaches rely on some form of rank minimization or low-rank matrix factorization. Specifically, [@candes] proves that when MC is formulated as the minimization of the nuclear norm subject to the constraint that the observed entries remain unchanged, exact recovery is possible under mild assumptions; see also [@candes2010matrix] where reliable recovery from a few observations is established even in the presence of additive noise. Alternatively, [@koren2009matrix] replaces the nuclear norm by two low-rank factor matrices that are identified in order to recover the complete matrix.
While the low-rank assumption can be sufficient for reliable recovery, prior information about the unknown matrix can also be taken into account to improve the completion outcome. Forms of prior information can include sparsity [@yi2015partial], local smoothness [@cheng2013stcdg], and interdependencies encoded by graphs [@kalofolias2014matrix; @chen; @rao2015collaborative; @ma2011recommender]. These approaches exploit the available similarity information or prior knowledge of the bases spanning the column or row spaces of the unknown matrix. In this regard, reproducing kernel Hilbert spaces (RKHSs) constitute a powerful tool for leveraging available prior information thanks to the kernel functions, which measure the similarity between pairs of points in an input space. Prompted by this, [@abernethy2006low; @bazerque2013; @zhou2012; @stock2018comparative] postulate that columns of the factor matrices belong to a pair of RKHSs spanned by their respective kernels. In doing so, a given structure or similarity between rows or columns is effected on the recovered matrix. Upon choosing a suitable kernel function, [@yi2015partial] as well as [@cheng2013stcdg; @kalofolias2014matrix; @chen; @rao2015collaborative; @ma2011recommender] can be cast into the RKHS framework. In addition to improving MC performance, kernel-based approaches also enable extrapolation of rows and columns, even when all their entries are missing - a task impossible for the standard MC approaches in, e.g., [@candes] and [@koren2009matrix].
One major hurdle in MC is the computational cost as the matrix size grows. In its formulation as a rank minimization task, MC can be solved via semidefinite programming [@candes], or proximal gradient minimization [@ma2011fixed; @cai2010singular; @chen; @gimenez], which entails a singular value decomposition of the recovered matrix per iteration. Instead, algorithms with lower computational cost are available for the bi-convex formulation based on matrix factorization [@koren2009matrix]. These commonly rely on iterative minimization schemes such as alternating least-squares (ALS) [@hastie2015matrix; @jain2013low] or stochastic gradient descent (SGD) [@gemulla2011large; @zhou2012]. With regard to kernel-based MC, the corresponding algorithms rely on alternating convex minimization and semidefinite programming [@abernethy2006low], block coordinate descent [@bazerque2013], and SGD [@zhou2012]. However, algorithms based on alternating minimization only converge to the minimum after infinitely many iterations. In addition, existing kernel-based algorithms adopt a specific sampling pattern or do not effectively make use of the Representer Theorem for RKHSs, which will turn out to be valuable in further reducing the complexity, especially when the number of observed entries is small.
The present contribution offers an RKHS-based approach to MCEX that also unifies and broadens the scope of MC approaches, while offering reduced complexity algorithms that scale well with the data size. Specifically, we develop a novel MC solver via kernel ridge regression as a convex alternative to the nonconvex factorization-based formulation that offers a closed-form solution. Through an explicit sampling matrix, the proposed method offers an encompassing sampling pattern, which further enables the derivation of upper bounds on the mean-square error. Moreover, an approximate solution to our MCEX regression formulation is developed that also enables online implementation using SGD. Finally, means of incorporating prior information through kernels is discussed in the RKHS framework.
The rest of the paper is organized as follows. Section II outlines the RKHS formulation and the kernel regression task. Section III unifies the existing methods for MC under the RKHS umbrella, while Section IV introduces our proposed Kronecker kernel MCEX (KKMCEX) approach. Section V develops our ridge regression MCEX (RRMCEX) algorithm, an accelerated version of KKMCEX, and its online variant. Section VI deals with the construction of kernel matrices. Finally, Section VII presents numerical tests, and Section VIII concludes the paper.
**Notation.** Boldface lower case fonts denote column vectors, and boldface uppercase fonts denote matrices. The $(i,j)$th entry of matrix $\A$ is $\A_{i,j}$, and the $i^{\text{th}}$ entry of vector $\bm a$ is $\bm a_i$. Superscripts $^T$ and $^\dagger$ denote transpose and pseudoinverse, respectively; while hat $\,\hat{}\,$ is used for estimates. Matrix $\F\in\mathcal{H}$ means that its columns belong to a vector space $\mathcal{H}$. The symbols $\I$ and $\bm 1$ stand for the identity matrix and the all-ones vector of appropriate size, specified by the context. The trace operator is $\text{Tr}(\cdot)$, the function eig($\A$) returns the diagonal eigenvalue matrix of $\A$ ordered in ascending order, and $\lambda_k(\A)$ denotes the $k^{\text{th}}$ eigenvalue of $\A$ with $\lambda_k(\A)\leq\lambda_{k+1}(\A)$.
Preliminaries {#sec:background}
=============
Consider a set of $N$ input-measurement pairs $\{(x_i,m_i)\}^N_{i=1}$ in $\mathcal{X}\times\mathbb{R}$, where $\mathcal{X}:=\{x_1,\ldots,x_N\}$ is the input space, $\mathbb{R}$ denotes the set of real numbers, and measurements obey the model $$\label{eq:sigmodel}
m_i = f(x_i) + e_i$$ where $f:\mathcal{X}\rightarrow\mathbb{R}$ is an unknown function and $e_i\in\mathbb{R}$ is noise. We assume this function belongs to an RKHS $$\label{eq:hilb}
\mathcal{H}_x:=\{f:f(x_i) = \sum^N_{j=1} \alpha_j \kapx(x_i,x_j), \:\:\: \alpha_j \in \mathbb{R}\}$$ where $\kapx:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is the kernel function that spans $\mathcal{H}_x$, and $\{\alpha_i\}^N_{i=1}$ are weight coefficients. An RKHS is a complete linear space endowed with an inner product that satisfies the reproducing property [@shawe]. If $\langle f,f'\rangle_{\mathcal{H}_x}$ denotes the inner product in $\mathcal{H}_x$ between functions $f$ and $f'$, the reproducing property states that $f(x_i) = \langle f,\kapx(\cdot,x_i)\rangle_{\mathcal{H}_x}$.
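A minimal numerical sketch of this machinery (toy data and a Gaussian kernel chosen here for illustration; this is not the KKMCEX/RRMCEX algorithm developed later): by the Representer Theorem, ridge regression over $\mathcal{H}_x$ reduces to a linear system in the coefficients $\alpha_j$.

```python
import numpy as np

def gaussian_kernel(X, Y, bw=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 bw^2)) for all pairs of rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(40, 1))              # inputs x_i
m = np.sin(x[:, 0]) + 0.1 * rng.normal(size=40)   # noisy measurements m_i

# Ridge solution in closed form: alpha = (K + lam*I)^{-1} m.
K = gaussian_kernel(x, x)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(x)), m)

# Predict (complete/extrapolate) at a new input.
x_new = np.array([[0.5]])
f_hat = gaussian_kernel(x_new, x) @ alpha
print(f_hat[0], np.sin(0.5))                      # estimate vs. noiseless truth
```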
---
abstract: 'Low-metallicity galaxies exhibit different properties of the interstellar medium (ISM) compared to nearby spiral galaxies. Obtaining a resolved inventory of the various gas and dust components of massive star forming regions and diffuse ISM is necessary to understand how those differences are driven. We present a study of the infrared/submillimeter (submm) emission of the massive star forming complex N158-N159-N160 located in the Large Magellanic Cloud. Combining observations from the [[*Spitzer*]{}]{} Space Telescope (3.6-70 [$\mu$m]{}), the [[*Herschel*]{}]{} Space Observatory (100-500 [$\mu$m]{}) and LABOCA (on APEX, 870 [$\mu$m]{}) allows us to work at the best angular resolution available now for an extragalactic source (a few parsecs for the LMC). We observe a remarkably good correlation between the [[*Herschel*]{}]{} SPIRE and LABOCA emission and resolve the low surface brightnesses emission. We use the [[*Spitzer*]{}]{} and [[*Herschel*]{}]{} data to perform a resolved Spectral Energy Distribution (SED) modelling of the complex. Using modified blackbodies, we derive an average “effective" emissivity index of the cold dust component $\beta$$_c$ of 1.47 across the complex. If $\beta$$_c$ is fixed to 1.5, we find an average temperature of $\sim$27K (maximum of $\sim$32K in N160). We also apply the @Galliano2011 SED modelling technique (using amorphous carbon to model carbon dust) to derive maps of the star formation rate, the grain temperature, the mean starlight intensity, the fraction of Polycyclic Aromatic Hydrocarbons (PAH) or the dust mass surface density of the region. We observe that the PAH fraction strongly decreases in the H[ii]{} regions we study. This decrease coincides with peaks in the mean radiation field intensity map. The dust surface densities follow the far-infrared distribution, with a total dust mass of 2.1 $\times$ 10$^4$ [$M_\odot$]{} (2.8 times less than if carbon dust was modelled by standard graphite grains) in the resolved elements we model. We also find a non-negligible amount of dust in the region called “N159 South", a molecular cloud that does not show massive star formation. We also investigate the drivers of the [[*Herschel*]{}]{}/PACS and SPIRE submm colours and find that the submm ratios correlate strongly with the radiation field intensity and with the near and mid-IR surface brightnesses equally well. Comparing our dust map to H[i]{} and CO observations in N159, we then investigate variations in the gas-to-dust mass ratio (G/D) and the CO-to-H$_2$ conversion factor X$_{CO}$. A mean value of G/D$\sim$356 is derived when using X$_{CO}$ = 7$\times$10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ [@Fukui2009]. If a constant G/D across N159 is assumed, we derive a X$_{CO}$ conversion factor of 5.4$\times$10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. We finally model individual regions to analyse variations in the SED shape across the complex and the 870 [$\mu$m]{} emission in more details. No measurable submm excess emission at 870 [$\mu$m]{} seems to be detected in these regions.'
author:
-
bibliography:
- '/Users/maudgalametz/Documents/Work/Papers/mybiblio.bib'
title: 'The thermal dust emission in N158-N159-N160 (LMC) star forming complex mapped by Spitzer, Herschel and LABOCA'
---
galaxies: ISM – galaxies:dwarf– galaxies:SED model – ISM: dust – submillimeter: galaxies
Introduction
============
As potential templates of primordial environments, low-metallicity galaxies are keystones to understand how galaxies evolve through cosmic time. Studying the Interstellar Medium (ISM) of low-metallicity galaxies is a necessary step to get a handle on the interplay between star formation and the ISM under conditions characteristic of the early universe. Low-metallicity galaxies also have quite different infrared (IR) Spectral Energy Distributions (SEDs) than solar or metal-rich environments. For instance, their aromatic features are diminished compared to dustier galaxies [@Madden2006; @Engelbracht2008]. The paucity of aromatic features is usually attributed to the hardness of the radiation field in low-metallicity environments, destructive processes such as supernova-driven shocks effects [@Galliano2003; @Galliano2005] or delayed injection of carbon dust by asymptotic giant branch (AGB) stars [@Dwek1998; @Galliano_Dwek_Chanial_2008] in dwarf galaxies. More relevant to the present study, the SEDs of low-metallicity galaxies often exhibit a flattening of their submillimeter (submm) slope or a submm excess [@Bottner2003; @Galliano2003; @Galliano2005; @Marleau2006; @Bendo2006; @Galametz2009; @Galametz2011], namely a higher emission beyond 500 [$\mu$m]{} than that extrapolated from IR observations and standard dust properties (Milky Way dust for instance). The origin of this excess is still highly debated. These results highlight the importance of a complete coverage of the thermal dust emission of low-metallicity objects to get a handle on the overall dust population distribution and properties in these environments. Combining gas and dust tracers will also allow us to understand how the matter cycles and the star formation processes evolve with galaxy properties.
The Large Magellanic Cloud (LMC) is our nearest low-metallicity neighbour galaxy [$\sim$ 50 kpc; @Feast1999], well studied at all wavelengths. This proximity enables us to study in detail the physical processes at work in the ISM of the galaxy and to individually resolve its bright star-forming regions. These structures can be isolated without bias thanks to the almost face-on orientation [23-37$^{\circ}$; @Subramanian2012]. Furthermore, the low interstellar extinction along the line of sight facilitates the interpretation of the physical conditions of these star-forming regions compared to Galactic studies strongly affected by this extinction. Thus, the irregular morphology and low metallicity of the LMC make it a perfect laboratory to study the evolution in the physical properties of the ISM of galaxies and the influence of metal enrichment on their star-forming activity.
Before the [[*Spitzer*]{}]{} [*Space Telescope*]{} ([[*Spitzer*]{}]{}) observations, the infrared studies on the LMC suffered from a lack of wavelength coverage or spatial resolution to quantify crucial physical parameters of the ISM properties such as the equilibrium temperature of the big grains, the spatial distribution of the dust grain populations or the interstellar radiation field. A study of the extended infrared emission in the ISM of the LMC was performed by @Bernard2008 as part of the [[*Spitzer*]{}]{} Legacy Program SAGE [Surveying the Agents of a Galaxy’s Evolution, @Meixner2006] project using [[*Spitzer*]{}]{} observations. They found disparities between the overall shape of the SED of the LMC and that of the Milky Way (MW), namely a different mid-infrared (MIR) shape. They also found departures from the linear correlation between the FIR optical depth and the gas column density. Using [[*Spitzer*]{}]{} FIR spectra ($\lambda$ = 52-93[$\mu$m]{}), the studies of @VanLoon2010b also allowed comparisons of compact sources in the LMC and its neighbor, the Small Magellanic Cloud (SMC), that has an even lower metallicity. Their results indicate that while the dust mass differs in proportion to metallicity, the oxygen mass seems to differ less. The photo-electric effect is indistinguishably efficient to heat the gas in both clouds. The SMC finally presents evidence of reduced shielding and reduced cooling.
At submm wavelengths, the LMC was observed by @Aguirre2003 with the TopHat instrument, a balloon-borne telescope [@Silverberg2003], from 470 [$\mu$m]{} to 1.2 mm. They constrained the FIR regime with DIRBE (Diffuse Infrared Background Experiment) observations at 100, 140 and 240 [$\mu$m]{} and estimated an average dust temperature for the LMC of T = 25.0$\pm$1.8 K. Using DIRBE and ISSA (IRAS Sky Survey Atlas) observations, @Sakon2006 found that the submm emission power law index (often referred to as $\beta$) is smaller in the LMC than in the MW and that the 140 and 240 [$\mu$m]{} fluxes seem to deviate from the model predictions, in particular on the periphery of supergiant shells. This excess was modelled by a very cold dust component with temperatures $<$ 9 K, even if their lack of submm constraints prevented them from unbiasedly concluding that cold dust was the explanation for the excess. Using revised DIRBE, WMAP and COBE maps, @israel2010 constructed the global SED of the LMC and confirmed a pronounced excess emission at millimeter and submm wavelengths, i.e. more emission than expected from a submm slope with $\beta$=2. Different hypotheses for this global excess have been proposed.
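For reference, the modified blackbody underlying such fits has the form $S_\nu \propto \nu^{\beta}\, B_\nu(T)$; the snippet below evaluates it at the PACS, SPIRE and LABOCA bands for illustrative parameter values (assumptions, not fit results), so a submm excess would show up as a measured 870 [$\mu$m]{} flux above the predicted colour.

```python
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8    # SI units

def modified_blackbody(wavelength_um, T=27.0, beta=1.5):
    """S_nu ~ nu**beta * B_nu(T), arbitrary normalization."""
    nu = c / (wavelength_um * 1e-6)
    B_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))
    return nu**beta * B_nu

bands = np.array([100., 160., 250., 350., 500., 870.])  # PACS/SPIRE/LABOCA (um)
s = modified_blackbody(bands)
print(dict(zip(bands, np.round(s / s[2], 3))))  # colours relative to 250 um
```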
---
abstract: 'It is shown that signal energy is the only available degree-of-freedom (DOF) for fiber-optic transmission as the input power tends to infinity. With $n$ signal DOFs at the input, $n-1$ DOFs are asymptotically lost to signal-noise interactions. The main observation is that nonlinearity introduces a multiplicative noise in the channel, similar to fading in wireless channels. The channel is viewed in the spherical coordinate system, where signal vector ${\underaccent{\bar}{X}}\in{\mathbb{C}}^n$ is represented in terms of its norm ${\left|{\underaccent{\bar}{X}}\right|}$ and direction $\hat{{\underaccent{\bar}{X}}}$. The multiplicative noise causes signal direction $\hat{{\underaccent{\bar}{X}}}$ to vary randomly on the surface of the unit $(2n-1)$-sphere in ${\mathbb{C}}^{n}$, in such a way that the effective area of the support of $\hat {{\underaccent{\bar}{X}}}$ does not vanish as ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$. On the other hand, the surface area of the sphere is finite, so that $\hat{{\underaccent{\bar}{X}}}$ carries finite information. This observation is used to show several results. Firstly, let ${{\mathcal{C}}}({{\mathcal{P}}})$ be the capacity of a discrete-time periodic model of the optical fiber with distributed noise and frequency-dependent loss, as a function of the average input power ${{\mathcal{P}}}$. It is shown that asymptotically as ${{\mathcal{P}}}\rightarrow\infty$, ${{\mathcal{C}}}=\frac{1}{n}\log\bigl(\log{{\mathcal{P}}}\bigr)+c$, where $n$ is the dimension of the input signal space and $c$ is a bounded number. In particular, $\lim_{{{\mathcal{P}}}\rightarrow\infty}{{\mathcal{C}}}({{\mathcal{P}}})=\infty$ in finite-dimensional periodic models. Secondly, it is shown that capacity saturates to a constant in infinite-dimensional models where $n=\infty$. An expression is provided for the constant $c$, by showing that, as the input ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$, the action of the discrete periodic stochastic nonlinear Schrödinger equation tends to multiplication by a random matrix (with fixed distribution, independent of input). Thus, perhaps counter-intuitively, noise simplifies the nonlinear channel at high powers to a *linear* multiple-input multiple-output fading channel. As ${{\mathcal{P}}}\rightarrow\infty$, signal-noise interactions gradually reduce the slope of ${{\mathcal{C}}}({{\mathcal{P}}})$, to a point where increasing the input power returns diminishing gains. Nonlinear frequency-division multiplexing can be applied to approach capacity in optical networks, where linear multiplexing achieves low rates at high powers.'
author:
- 'Mansoor I. Yousefi'
title: 'The Asymptotic Capacity of the Optical Fiber[^1] '
---
Introduction
============
Several decades since the introduction of the optical fiber, channel capacity at high powers remains a vexing conundrum. Existing achievable rates saturate at high powers because of linear multiplexing and treating the resulting interference as noise in network environments [@yousefi2012nft1; @yousefi2012nft2; @yousefi2012nft3]. Furthermore, it is difficult to estimate the capacity via numerical simulations, because the channel has memory.
The multi-user communication problem for (an ideal model of) the optical fiber can be reduced to a single-user problem using nonlinear frequency-division multiplexing (NFDM) [@yousefi2012nft1; @yousefi2012nft3]. This addresses deterministic distortions, such as inter-channel and inter-symbol interference (signal-signal interactions). The problem is then reduced to finding the capacity of the point-to-point optical fiber as set by noise.
There are two effects in the fiber that impact the Shannon capacity in point-to-point channels. (1) Phase noise. Nonlinearity transforms additive noise to phase noise in the channel. As the amplitude of the input signal tends to infinity, the phase of the output signal tends to a uniform random variable in the zero-dispersion channel [@yousefi2011opc Section IV]. As a result, phase carries finite information in the non-dispersive fiber. (2) Multiplicative noise. Dispersion converts phase noise to amplitude noise, introducing an effect which at high powers is similar to fading in wireless channels. Importantly, the conditional entropy grows strongly with the input signal.
In this paper, we study the asymptotic capacity of a discrete-time periodic model of the optical fiber as the input power tends to infinity. The role of the nonlinearity in point-to-point discrete channels pertains to signal-noise interactions, captured by the conditional entropy.
The main result is the following theorem, describing the capacity-cost function in models with constant and non-constant loss; see Definition \[def:loss\].
Consider the discrete-time periodic model of the NLS channel described in Section \[sec:mssfm\], with non-zero dispersion. Capacity is asymptotically
$${{\mathcal{C}}}({{\mathcal{P}}})=
\begin{cases}
\frac{1}{n}\log\bigl(\log{{\mathcal{P}}}\bigr)+c, & \text{non-constant loss},\\
\frac{1}{n}\log{{\mathcal{P}}}+c, & \text{constant loss},
\end{cases}$$
where $n$ is the dimension of the input signal space, ${{\mathcal{P}}}\rightarrow\infty$ is the average input signal power and $c{\stackrel{\Delta}{=}}c(n,{{\mathcal{P}}})<\infty$. In particular, $\lim\limits_{{{\mathcal{P}}}\rightarrow\infty} {{\mathcal{C}}}({{\mathcal{P}}})=\infty$ in finite-dimensional models. Intensity modulation and direct detection (photon counting) is nearly capacity-achieving in the limit ${{\mathcal{P}}}\rightarrow\infty$, where capacity is dominated by the first terms in the ${{\mathcal{C}}}({{\mathcal{P}}})$ expressions. \[thm:main\]
From Theorem \[thm:main\] and [@yousefi2011opc Theorem 1], the asymptotic capacity of the dispersive fiber is much smaller than the asymptotic capacity of (the discrete-time model of) the zero-dispersion fiber, which is $\frac{1}{2}\log{{\mathcal{P}}}+c$, $c<\infty$. Dispersion reduces the capacity by increasing the conditional entropy. With $n$ DOFs at the input, $n-1$ DOFs are asymptotically lost to signal-noise interactions, leaving signal energy as the only useful DOF for transmission.
There are a finite number of DOFs in all computer simulations and physical systems. However, as a mathematical problem, the following corollary holds true.
Capacity saturates to a constant $c<\infty$ in infinite-dimensional models, including the continuous-time model. \[cor:inf\]
The power level where signal-noise interactions begin to appreciably impact the slope of the ${{\mathcal{C}}}({{\mathcal{P}}})$ is not determined in this paper. Numerical simulations indicate that the conditional entropy does not increase with input in the nonlinear Fourier domain, for a range of power larger than the optimal power in wavelength-division multiplexing [@yousefi2016nfdm Fig. 9 (a)]. In this regime, signal-noise interactions are weak and the capacity is dominated by the (large) number $c$ in the Theorem \[thm:main\]. A numerical estimation of the capacity of the point-to-point fiber at input powers higher than those in Fig. \[fig:nfdm\] should reveal the impact of the signal-dependent noise on the asymptotic capacity.
The contributions of the paper are presented as follows. The continuous-time model is discretized in Section \[sec:mssfm\]. The main ingredient is a modification of the split-step Fourier method (SSFM) that exhibits the influence of noise more directly than the standard SSFM. A *unit*, which plays an important role throughout the paper, is defined in the modified SSFM (MSSFM) model. The MSSFM and units simplify the information-theoretic analysis.
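For orientation, a bare-bones version of the standard SSFM that the MSSFM modifies is sketched below; the fiber parameters, the lumped-noise term, and the step sizes are illustrative assumptions, not the construction of Section \[sec:mssfm\].

```python
import numpy as np

# One split-step Fourier solver for dq/dz = -i b2/2 q_tt + i gamma |q|^2 q
# plus (optionally) additive Gaussian noise injected per step.
def ssfm(q, n_steps, dz, beta2=-21e-27, gamma=1.3e-3, dt=1e-12, sigma=0.0,
         rng=np.random.default_rng(0)):
    n = q.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)      # angular frequencies
    lin = np.exp(0.5j * beta2 * w**2 * dz)       # dispersion over one step
    for _ in range(n_steps):
        q = np.fft.ifft(lin * np.fft.fft(q))     # linear step (freq domain)
        q = q * np.exp(1j * gamma * np.abs(q)**2 * dz)  # nonlinear phase rotation
        q += sigma * np.sqrt(dz) * (rng.standard_normal(n)
                                    + 1j * rng.standard_normal(n))
    return q

t = np.arange(-128, 128) * 1e-12                          # time grid (s)
q0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (25e-12)**2))    # Gaussian pulse
q1 = ssfm(q0.astype(complex), n_steps=1000, dz=100.0)     # 100 km in 100 m steps
print(np.sum(np.abs(q1)**2) / np.sum(np.abs(q0)**2))      # ~1: noiseless steps are unitary
```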
Theorem \[thm:main\] and Corollary \[cor:inf\] are proved in Section \[sec:proof1\]. The main ingredient here is an appropriate partitioning of the DOFs in a suitable coordinate system, and the proof that the achievable rate of one group of DOFs is bounded in the input. No assumption is made on the input power in this first proof.
Theorem \[thm:main\] is proved again in Section \[sec:proof2\] by considering the limit ${{\mathcal{P}}}\rightarrow\infty$, which adds further intuition. Firstly, it is shown that, as the input ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$, the action of the discrete periodic stochastic nonlinear Schrödinger (NLS) equation tends to multiplication by a random matrix (with fixed probability distribution function (PDF), independent of the input). As a result, perhaps counter-intuitively, as ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$ noise simplifies the nonlinear channel to a *linear* multiple-input multiple-output (non-coherent) fading channel. Secondly, the asymptotic capacity is computed, without calculating the conditional PDF of the channel, entropies, or solving the capacity optimization problem. Because of the multiplicative noise, the asymptotic rate depends only on whether the random channel operator has any
---
abstract: 'We study algebraically infinitely many infinitray extensions of predicate intuitionistic logic. We prove several representation theorems that reflect a (weak) Robinson’s joint consistency theorem for the extensions studied with and without equality. In essence a Henkin-Gabbay construction, our proof uses neat embedding theorems and is purely algebraic. Neat embedding theorems, are an algebraic version of Henkin constructions that apply to various infinitary extensions of predicate first order logics; to the best of our knowledge, they were only implemented in the realm of intuitionistic logic in the article ’Amalgamation of polyadic Heyting algebras’ Studia Math Hungarica, in press. [^1]'
author:
- Tarek Sayed Ahmed
title: 'Representability, and amalgamation for various reducts of Heyting polyadic algebras'
---
Introduction
============
Background and History
----------------------
It often happens that a theory designed originally as a tool for the study of a problem, say in computer science, comes subsequently to have purely mathematical interest. When such a phenomenon occurs, the theory is usually generalized beyond the point needed for applications, the generalizations make contact with other theories (frequently in completely unexpected directions), and the subject becomes established as a new part of pure mathematics. The part of pure mathematics so created does not (and need not) pretend to solve the problem from which it arises; it must stand or fall on its own merits.
A crucial addition to the collection of mathematical catalysts, initiated at the beginning of the 20th century, is formal logic and its study using mathematical machinery, better known as metamathematical investigations, or simply metamathematics. Traced back to the works of Frege, Hilbert, Russell, Tarski, Gödel and others, one of the branches of pure mathematics that metamathematics has precipitated is algebraic logic.
Algebraic logic is an interdisciplinary field; it is the art of tackling problems in formal logic using universal algebraic machinery. It is similar in this respect to several branches in mathematics, like algebraic geometry, where algebraic machinery is used guided by geometric intuition. In algebraic logic, the intuition underlying its constructions is inspired from (mathematical) logic.
The idea of solving problems in various branches of logic by first translating them to algebra, then using the powerful methodology of algebra for solving them, and then translating the solution back to logic, goes back to Leibnitz and Pascal. Such a methodology was already fruitfully applied back in the 19th century with the work of Boole, De Morgan, Peirce, Schröder, and others on classical logic. Taking logical equivalence rather than truth as the primitive logical predicate and exploiting the similarity between logical equivalence and equality, those pioneers developed logical systems in which metalogical investigations take on a plainly algebraic character. The ingenious transfer of ”logical equivalence“ to ” equations” turned out immensely useful and fruitful.
In particular, Boole’s work evolved into the modern theory of Boolean algebras, and that of De Morgan, Peirce and Schröder into the well-developed theory of relation algebras, which is now widely used in such diverse areas, ranging from formalizations of set theory to applications in computer science.
From the beginning of the contemporary era of logic, there were two approaches to the subject, one centered on the notion of logical equivalence and the other, reinforced by Hilbert’s work on metamathematics, centered on the notions of assertion and inference.
It was not until much later that logicians started to think about connections between these two ways of looking at logic. Tarski gave the precise connection between Boolean algebra and the classical propositional calculus, inspired by the impressive work of Stone on Boolean algebras. Tarski’s approach builds on Lindenbaum’s idea of viewing the set of formulas as an algebra with operations induced by the logical connectives. When the Lindenbaum-Tarski method is applied to the predicate calculus, it lends itself to cylindric and polyadic algebras rather than relation algebras.
In the traditional mid-20th-century approach, algebraic logic has focused on the algebraic investigation of particular classes of algebras like cylindric, polyadic and relation algebras. When such a connection could be established, there was interest in investigating the interconnections between various metalogical properties of the logical system in question and the algebraic properties of the corresponding class of algebras (obtaining what are sometimes called “bridge theorems”). This branch has now evolved into the relatively new field of universal algebraic logic, in analogy to the well-established field of universal algebra.
For example, it was discovered that there is a natural relation between the interpolation theorems of classical, intuitionistic, intermediate propositional calculi, and the amalgamation properties of varieties of Heyting algebras, which constitute the main focus of this paper. The variety of Heyting algebras is the algebraic counterpart of propositional intuitionistic logic. We shall deal with Heyting algebras with extra (polyadic) operations reflecting quantifiers. Those algebras are appropriate to study (extensions) of predicate intuitionistic logic. Proving various interpolation theorems for such extensions, we thereby extend known amalgamation results of Heyting algebras to polyadic expansions.
A historical comment on the development of intuitionistic logic is in order. It was Brouwer who first initiated the programme of intuitionism, and intuitionistic logic is its rigorous formalization, developed originally by Arend Heyting. Brouwer rejected formalism per se but admitted the potential usefulness of formulating general logical principles expressing intuitionistically correct constructions, such as modus ponens. Heyting realized the importance of formalization, which was fashionable at his time, given the rapid development of mathematics. Formalized intuitionistic logic turned out to be useful for different forms of mathematical constructivism, since it has the existence property. Philosophically, intuitionism differs from logicism by treating logic as an independent branch of mathematics rather than as the foundations of mathematics; from finitism by permitting intuitionistic reasoning about possibly infinite collections; and from platonism by viewing mathematical objects as mental constructs rather than entities with an independent objective existence. There are also analogies between logicism and intuitionism; in fact Hilbert’s formalist program, aiming to base the whole of classical mathematics on solid foundations by reducing it to a huge formal system whose consistency should be established by finitistic, concrete (hence constructive) means, was the most powerful contemporary rival to Brouwer’s and Heyting’s intuitionism.
Subject Matter
--------------
Connections between interpolation theorems in the predicate calculus and amalgamation results in varieties of cylindric and polyadic algebras, were initiated mainly by Comer, Pigozzi, Diagneault and Jonsson.
As it happened, during the course of the development of algebraic logic, dating back to the work of Boole, up to its comeback in the contemporary era through the pioneering work of Halmos, Tarski, Henkin, Monk, Andréka, and Németi, it is now established that the two most famous widely used algebraisations of first order logic are Tarski’s cylindric algebras [@HMT1], [@HMT2], and Halmos’ polyadic algebras [@Halmos]. Each has its advantages and disadvantages. For example, the class of representable cylindric algebras, though a variety, is not finitely axiomatizable, and this class exhibits an inevitable degree of complexity in any of its axiomatizations [@Andreka]. However, its equational theory is recursive. On the other hand, the variety of (representable) polyadic algebras is axiomatized by a finite schema of equations but its equational theory is not recursively enumerable [@NS]. There have been investigations to find a class of algebras that enjoy the positive properties of both. The key idea behind such investigations is to look at (the continuum many) reducts of polyadic algebras [@AUamal], [@S] searching for the desirable finitely axiomatizable variety among them.
Indeed, it is folklore in algebraic logic that cylindric algebras and polyadic algebras belong to different paradigms, frequently manifesting contradictory behaviour. The paper [@S] is a unification of the positive properties of those two paradigms in the Boolean case, and one of the results of this paper can be interpreted as a unification of those paradigms when the propositional reducts are Heyting algebras.
A polyadic algebra is typically an instance of a transformation system. A transformation system can be defined to be a quadruple of the form $(\A, I, G, {\sf S})$, where $\A$ is an algebra of any similarity type, $I$ is a non-empty set (we will only be concerned with infinite sets), $G$ is a subsemigroup of $(^II,\circ)$ (the operation $\circ$ denotes composition of maps) and ${\sf S}$ is a homomorphism from $G$ to the semigroup $End(\A)$ of endomorphisms of $\A$. Elements of $G$ are called transformations.
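To make the definition concrete, the following toy example (our own illustration; all names and the choice $I=U=\{0,1\}$ are ours) realizes a transformation system on the Boolean set algebra of subsets of $^IU$ and checks by brute force that ${\sf S}$ is a semigroup homomorphism.

```python
from itertools import product

# Toy transformation system (our illustration): on the Boolean algebra of
# subsets of U^I, a transformation tau in I^I acts by the substitution
#   S(tau)X = {x in U^I : x o tau in X},
# and S(sigma o tau) = S(sigma) o S(tau) is verified by brute force.
I = range(2)                                        # the dimension set
U = range(2)                                        # the base set
transformations = list(product(I, repeat=len(I)))   # all maps I -> I
points = list(product(U, repeat=len(I)))            # all maps I -> U

def comp(f, g):                                     # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in I)

def S(tau, X):                                      # substitution operator
    return frozenset(x for x in points
                     if tuple(x[tau[i]] for i in I) in X)

for sigma in transformations:
    for tau in transformations:
        for X in (frozenset(points[:1]), frozenset(points[1:3])):
            assert S(comp(sigma, tau), X) == S(sigma, S(tau, X))
print("S is a semigroup homomorphism into End(A) on this toy algebra")
```

Of course, the algebras of interest below are infinite-dimensional and carry quantifiers as well; the sketch only illustrates the substitution part of the structure.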
The set $I$ is called the dimension of the algebra; for a transformation $\tau$ on $I$, ${\sf S}({\tau})\in End(\A)$ is called a substitution operator, or simply a substitution. Polyadic algebras arise when $\A$ is a Boolean algebra endowed with quantifiers and $G={}^II$. There is an extensive literature on polyadic algebras dating back to the fifties and sixties of the last century, [@Halmos], [@J70], [@D], [@DM], [@AUamal], [@S]. Introduced by Halmos, the theory of polyadic algebras is now picking up again; indeed it is regaining momentum, with pleasing progress and a plethora of results, see the references [@MLQ], [@Fer1], [@Fer2],
---
abstract: 'We use pseudodifferential calculus and heat kernel techniques to prove a conjecture by Chamseddine and Connes on rationality of the coefficients of the polynomials in the cosmic scale factor $a(t)$ and its higher derivatives, which describe the general terms $a_{2n}$ in the expansion of the spectral action for general Robertson-Walker metrics. We also compute the terms up to $a_{12}$ in the expansion of the spectral action by our method. As a byproduct, we verify that our computations agree with the terms up to $a_{10}$ that were previously computed by Chamseddine and Connes by a different method.'
author:
- |
Farzad Fathizadeh, Asghar Ghorbanpour, Masoud Khalkhali
title: 'Rationality of Spectral Action for Robertson-Walker Metrics'
---
Department of Mathematics, Western University\
London, Ontario, Canada, N6A 5B7 [^1]\
[**Mathematics Subject Classification (2010).**]{} 81T75, 58B34, 58J42.
[**Keywords.**]{} Robertson-Walker metrics, Dirac operator, Spectral action, Heat kernel, Local invariants, Pseudodifferential calculus.
Introduction
============
Noncommutative geometry in the sense of Alain Connes [@ConBook] has provided a paradigm for geometry in the noncommutative setting based on spectral data. This generalizes Riemannian geometry [@ConReconstruct] and incorporates physical models of elementary particle physics [@ConGravity; @ConMixing; @ChaConMarGS; @ConMarBook; @ChaConConceptual; @ChaConWhy; @GraIocSch; @Sit; @Sui1; @Sui2]. An outstanding feature of the spectral action defined for noncommutative geometries is that it derives the Lagrangian of the physical models from simple noncommutative geometric data [@ConMixing; @ChaConSAP; @ChaConMarGS]. Thus various methods have been developed for computing the terms in the expansion in the energy scale $\Lambda$ of the spectral action [@ChaConUFNCG; @ChaConGravity; @ChaConUncanny; @ChaConRW; @IocLevVasGlobal; @IocLevVasTorsion]. Potential applications of noncommutative geometry in cosmology have recently been carried out in [@KolMar; @Mar; @MarPie; @MarPieTeh2012; @MarPieTeh; @NelOchSal; @NelSak1; @NelSak2; @EstMar].
Noncommutative geometric spaces are described by spectral triples $(\mathcal{A}, \mathcal{H}, D)$, where $\mathcal{A}$ is an involutive algebra represented by bounded operators on a Hilbert space $\mathcal{H}$, and $D$ is an unbounded self-adjoint operator acting in $\mathcal{H}$ [@ConBook]. The operator $D$, which plays the role of the Dirac operator, encodes the metric information and it is further assumed that it has bounded commutators with elements of $\mathcal{A}$. It has been shown that if $\mathcal{A}$ is commutative and the triple satisfies suitable regularity conditions then $\mathcal{A}$ is the algebra of smooth functions on a spin$^c$ manifold $M$ and $D$ is the Dirac operator acting in the Hilbert space of $L^2$-spinors [@ConReconstruct]. In this case, the Seeley-de Witt coefficients $a_{n}(D^2) = \int_M a_n (x, D^2) \,dv(x)$, which vanish for odd $n$, appear in a small time asymptotic expansion of the form $$\textnormal{Tr}(e^{-t D^2}) \sim t^{- \textnormal{dim} (M)/2} \sum_{n\geq 0} a_{2n} (D^2) t^n \qquad (t \to 0).$$ These coefficients determine the terms in the expansion of the spectral action. That is, there is an expansion of the form $$\textnormal{Tr} f(D^2/\Lambda^2) \sim \sum_{n \geq 0} f_{2n}\, a_{2n} (D^2/\Lambda^2),$$ where $f$ is a positive even function defined on the real line, and $f_{2n} $ are the moments of the function $f$ [@ChaConSAP; @ChaConUFNCG]. See Theorem 1.145 in [@ConMarBook] for details in a more general setup, namely for spectral triples with simple dimension spectrum.
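As a small numerical illustration of such asymptotic expansions (our own sketch, not taken from the cited works), one can sum the heat trace directly for a geometry with explicitly known Dirac spectrum. We assume the standard spectrum on the round unit 3-sphere, $\lambda=\pm(k+3/2)$ with multiplicity $(k+1)(k+2)$:

```python
import numpy as np

# Heat trace on the round unit 3-sphere, summed from the Dirac spectrum
# lambda = ±(k + 3/2), multiplicity (k+1)(k+2) (a standard fact assumed
# here).  The product t^{3/2} * Tr exp(-t D^2) should approach a constant
# as t -> 0, exhibiting the leading t^{-dim(M)/2} behaviour.
def heat_trace(t, kmax=4000):
    k = np.arange(kmax, dtype=float)
    mult = (k + 1.0) * (k + 2.0)
    return 2.0 * np.sum(mult * np.exp(-t * (k + 1.5) ** 2))

for t in (0.1, 0.05, 0.025, 0.0125):
    print(t, t ** 1.5 * heat_trace(t))
```

The stabilization of the printed products reflects the leading Seeley-de Witt coefficient; the subleading corrections correspond to the higher $a_{2n}$.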
By devising a direct method based on the Euler-Maclaurin formula and the Feynman-Kac formula, Chamseddine and Connes have initiated in [@ChaConRW] a detailed study of the spectral action for the Robertson-Walker metric with a general cosmic scale factor $a(t)$. They calculated the terms up to $a_{10}$ in the expansion and checked the agreement of the terms up to $a_6$ against Gilkey’s universal formulas [@GilBook1; @GilBook2].
The present paper is intended to compute the term $a_{12}$ in the spectral action for general Robertson-Walker metrics, and to prove the conjecture of Chamseddine and Connes [@ChaConRW] on rationality of the coefficients of the polynomials in $a(t)$ and its derivatives that describe the general terms $a_{2n}$ in the expansion. In passing, we compare the outcome of our computations up to the term $a_{10}$ with the expressions obtained in [@ChaConRW], and confirm their agreement.
In terms of the above aims, explicit formulas for the Dirac operator of the Robertson-Walker metric and its pseudodifferential symbol in Hopf coordinates are derived in §\[DiracinHopf\]. Following a brief review of the heat kernel method for computing local invariants of elliptic differential operators using pseudodifferential calculus [@GilBook1], we compute in §\[Termsupto10\] the terms up to $a_{10}$ in the expansion of the spectral action for Robertson-Walker metrics. The outcome of our calculations confirms the expressions obtained in [@ChaConRW]. This forms a check in particular on the validity of $a_8$ and $a_{10}$, which as suggested in [@ChaConRW] also, seems necessary due to the high complexity of the formulas. In §\[Term12\], we record the expression for the term $a_{12}$ achieved by a significantly heavier computation, compared to the previous terms. It is checked that the reduction of $a_{12}$ to the round case $a(t)=\sin t $ conforms to the full expansion obtained in [@ChaConRW] for the round metric by remarkable calculations that are based on the Euler-Maclaurin formula. In order to validate our expression for $a_{12}$, parallel but completely different computations are performed in spherical coordinates and the final results are confirmed to match precisely with our calculations in Hopf coordinates.
In §\[ProofofConjecture\], we prove the conjecture made in [@ChaConRW] on rationality of the coefficients appearing in the expressions for the terms of the spectral action for Robertson-Walker metrics. That is, we show that the term $a_{2n}$ in the expansion is of the form $Q_{2n}\big(a(t),a'(t),\dots,a^{(2n)}(t)\big)/a(t)^{2n-3}$, where $Q_{2n}$ is a polynomial with rational coefficients. We also find a formula for the coefficient of the term with the highest derivative of $a(t)$ in $a_{2n}$. It is known that values of Feynman integrals for quantum gauge theories are closely related to multiple zeta values and periods in general and hence tend to be transcendental numbers [@MarBook]. In sharp distinction, the rationality result proved in this paper is valid for all scale factors $a(t)$ in Robertson-Walker metrics. Although it might be exceedingly difficult, it is certainly desirable to find all the terms $a_{2n}$ in the spectral action. The rationality result is a consequence of a certain symmetry in the heat kernel and it is plausible that this symmetry would eventually reveal the full structure of the coefficients $a_{2n}$. This is a task for a future work. Our main conclusions are summarized in §\[Conclusions\].
The Dirac Operator for Robertson-Walker Metrics {#DiracinHopf}
===============================================
According to the spectral action principle [@ConGravity; @ChaConSAP], the spectral action of any geometry depends on its Dirac operator since the terms in the expansion are determined by the high frequency behavior of the eigenvalues of this operator. For spin manifolds, the explicit computation of the Dirac operator in a coordinate system is most efficiently achieved by writing its formula after lifting the Levi-Civita connection on the cotangent bundle to the spin connection on the spin bundle. In this section, we summarize this formalism and compute the Dirac operator of the Robertson-Walker metric in Hopf coordinates. Throughout this paper we use Einstein’s summation convention without any further notice.
Levi-Civita connection.
-----------------------
The spin connection of any spin manifold $M$ is the lift of the Levi-Civita connection for the cotangent bundle $T^*M$ to the spin bundle. Let us, therefore, recall the following recipe for computing the Levi-Civita connection and thereby the spin connection of $M$. Given an orthonormal frame $\{\theta_\alpha\}$ for the tangent bundle $TM$ and
---
abstract: 'A code for the numerical evaluation of hyperelliptic theta-functions is presented. Characteristic quantities of the underlying Riemann surface such as its periods are determined with the help of spectral methods. The code is optimized for solutions of the Ernst equation where the branch points of the Riemann surface are parameterized by the physical coordinates. An exploration of the whole parameter space of the solution is thus only possible with an efficient code. The use of spectral approximations allows for an efficient calculation of all quantities in the solution with high precision. The case of almost degenerate Riemann surfaces is addressed. Tests of the numerics using identities for periods on the Riemann surface and integral identities for the Ernst potential and its derivatives are performed. It is shown that an accuracy of the order of machine precision can be achieved. These accurate solutions are used to provide boundary conditions for a code which solves the axisymmetric stationary Einstein equations. The resulting solution agrees with the theta-functional solution to very high precision.'
address:
- 'Institut für Astronomie und Astrophysik, Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany'
- 'LUTh, Observatoire de Paris, 92195 Meudon Cedex, France'
author:
- 'J. Frauendiener'
- 'C. Klein'
title: ' Hyperelliptic Theta-Functions and Spectral Methods '
---
Introduction
============
Solutions to integrable differential equations in terms of theta-functions were introduced with the works of Novikov, Dubrovin, Matveev, Its, Krichever, …(see [@DubNov75; @ItsMat75; @Kric78; @algebro]) for the Korteweg-de Vries (KdV) equation. Such solutions to e.g. the KdV, the Sine-Gordon, and the Non-linear Schrödinger equation describe periodic or quasi-periodic solutions, see [@dubrovin81; @algebro]. They are given explicitly in terms of Riemann theta-functions defined on some Riemann surface. Though all quantities entering the solution are in general given in explicit form via integrals on the Riemann surface, the work with theta-functional solutions admittedly has not reached the importance of soliton solutions.
The main reason for the more widespread use of solitons is that they are given in terms of algebraic or exponential functions. On the other hand the parameterization of theta-functions by the underlying Riemann surface is very implicit. The main parameters, typically the branch points of the Riemann surface, enter the solutions as parameters in integrals on the Riemann surface. A full understanding of the functional dependence on these parameters seems to be only possible numerically. In recent years algorithms have been developed to establish such relations for rather general Riemann surfaces as in [@tretkoff84] or via Schottky uniformization (see [@algebro]), which have been incorporated successively in numerical and symbolic codes, see [@seppala94; @hoeij94; @gianni98; @deconinck01; @deconinck03] and references therein (the last two references are distributed along with Maple 6, respectively Maple 8, and as a Java implementation at [@riemann]). For an approach to express periods of hyperelliptic Riemann surfaces via theta constants see [@enoric2003].
These codes are convenient for studying theta-functional solutions of equations of KdV type, where the considered Riemann surfaces are ‘static’, i.e., independent of the physical coordinates. In these cases the characteristic quantities of the Riemann surface have to be calculated only once; just the comparatively fast summation approximating the theta series by a finite sum, as e.g. in [@deconinck03], has to be carried out in dependence on the space-time coordinates.
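For orientation, a naive version of this finite summation reads as follows (a minimal sketch with a crude cutoff $N$ on each index; the cited codes determine the truncation automatically from the Riemann matrix):

```python
import numpy as np
from itertools import product

# Truncated genus-g Riemann theta series (naive cutoff |n_j| <= N):
#   theta(z | B) = sum_{n in Z^g} exp(pi*i <n, B n> + 2*pi*i <n, z>),
# with B the Riemann matrix (symmetric, positive definite imaginary part).
def theta(z, B, N=8):
    z = np.asarray(z, dtype=complex)
    B = np.asarray(B, dtype=complex)
    val = 0.0 + 0.0j
    for n in product(range(-N, N + 1), repeat=z.size):
        n = np.asarray(n)
        val += np.exp(1j * np.pi * n @ B @ n + 2j * np.pi * n @ z)
    return val
```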
The purpose of this article is to study numerically theta-functional solutions of the Ernst equation [@ernst] which were given by Korotkin [@Koro88]. In this case the branch points of the underlying hyperelliptic Riemann surface are parameterized by the physical coordinates, the spectral curve of the Ernst equation is in this sense ‘dynamical’. The solutions are thus not studied on a single Riemann surface but on a whole family of surfaces. This implies that the time-consuming calculation of the periods of the Riemann surface has to be carried out for each point in the space-time. This includes limiting cases where the surface is almost degenerate. In addition the theta-functional solutions should be calculated to high precision in order to be able to test numerical solutions for rapidly rotating neutron stars such as provided e.g. by the spectral code `LORENE` [@Lorene]. This requires a very efficient code of high precision.
We present here a numerical code for hyperelliptic surfaces where the integrals entering the solution are calculated by expanding the integrands with a Fast Cosine Transformation in MATLAB. The precision of the numerical evaluation is tested by checking identities for periods on Riemann surfaces and by comparison with exact solutions. The code is in principle able to deal with general (non-singular) hyperelliptic surfaces, but is optimized for a genus 2 solution to the Ernst equation which was constructed in [@prl2; @prd3]. We show that an accuracy of the order of machine precision ($\sim 10^{-14}$) can be achieved at a space-time point in general position with 32 polynomials, and, in the case of almost degenerate surfaces (which occurs, e.g., when the point approaches the symmetry axis), with at most 256 polynomials. Global tests of the numerical accuracy of the solutions to the Ernst equation are provided by integral identities for the Ernst potential and its derivatives: the equality of the Arnowitt-Deser-Misner (ADM) mass and the Komar mass (see [@komar; @wald]) and a generalization of the Newtonian virial theorem as derived in [@virial]. We use the numerical data for the theta-functions determined in this way to provide ‘exact’ boundary values on a sphere for the program library `LORENE` [@Lorene], which was developed for a numerical treatment of rapidly rotating neutron stars. `LORENE` solves the boundary value problem for the stationary axisymmetric Einstein equations with spectral methods. We show that the theta-functional solution is reproduced to the order of $10^{-11}$ and better.
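The expansion step at the heart of the code can be sketched in a few lines (a Python analogue of the MATLAB fast cosine transform used here; the function names are ours):

```python
import numpy as np
from scipy.fftpack import dct

# Chebyshev coefficients from samples at the N+1 Chebyshev-Lobatto points,
# obtained with a fast cosine transform (DCT-I), so that
#   f(x) ~ sum_{k=0}^{N} c_k T_k(x)   on [-1, 1].
def cheb_coeffs(f, N):
    x = np.cos(np.pi * np.arange(N + 1) / N)   # Lobatto points
    c = dct(f(x), type=1) / N
    c[0] /= 2.0
    c[-1] /= 2.0
    return c

# exponential decay of the coefficients for a smooth integrand
print(np.abs(cheb_coeffs(np.exp, 32))[-5:])
```

The rapid (spectral) decay of the coefficients for smooth integrands is what allows the periods to be evaluated to near machine precision with a modest number of polynomials.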
The paper is organized as follows: in section \[sec:ernsteq\] we collect useful facts on the Ernst equation and hyperelliptic Riemann surfaces, in section \[sec:spectral\] we summarize basic features of spectral methods and explain our implementation of various quantities. The calculation of the periods of the hyperelliptic surface and the non-Abelian line integrals entering the solution is performed together with tests of the precision of the numerics. In section \[sec:integrals\] we check integral identities for the Ernst potential. The test of the spectral code `LORENE` is presented in section \[sec:lorene\]. In section \[sec:concl\] we add some concluding remarks.
Ernst equation and hyperelliptic Riemann surfaces {#sec:ernsteq}
=================================================
The Ernst equation for the complex valued potential $\mathcal{E}$ (we denote the real and the imaginary part of $\mathcal{E}$ with $f$ and $b$ respectively) depending on the two coordinates $(\rho,\zeta)$ can be written in the form $$\Re \mathcal{E}\left(\mathcal{E}_{\rho\rho}+\frac{1}{\rho}
\mathcal{E}_{\rho}+\mathcal{E}_{\zeta\zeta}\right)=
\mathcal{E}_{\rho}^{2}+\mathcal{E}_{\zeta}^{2}
\label{ernst1}.$$ The equation has a physical interpretation as the stationary axisymmetric Einstein equations in vacuum (see the appendix and references given therein). Its complete integrability was shown by Maison [@maison] and Belinski-Zakharov [@belzak]. For real Ernst potential, the Ernst equation reduces to the axisymmetric Laplace equation for $\ln \mathcal{E}$. The corresponding solutions are static and belong to the so-called Weyl class, see [@exac].
Algebro-geometric solutions to the Ernst equation were given by Korotkin [@Koro88]. The solutions are defined on a family of hyperelliptic surfaces $\mathcal{L}(\xi,\bar{\xi})$ with $\xi=\zeta-i\rho$ corresponding to the plane algebraic curve $$\mu^{2}=(K-\xi)(K-\bar{\xi})\prod_{i=1}^{g}(K-E_{i})(K-F_{i})
\label{hyper1},$$ where $g$ is the genus of the surface and where the branch points $E_{i}$, $F_{i}$ are independent of the physical coordinates and, for each $i$, subject to the reality condition $E_{i}=\bar{F}_{i}$ or $E_{i},F_{i}\in \mathbb{R}$.
Hyperelliptic Riemann surfaces are important since they show up in the context of algebro-geometric solutions of various integrable equations as KdV, Sine-Gordon and Ernst. Whereas it is a non-trivial problem to find a basis for the holomorphic differentials on general surfaces (see e.g. [@deconinck01]), it is given in the hyperelliptic case (see e.g. [@algebro]) by $$d\nu_k = \left( \frac{dK}{\mu}, \frac{KdK}{\mu},\ldots,
---
abstract: 'The control of the spatial distribution of micrometer-sized dust particles in capacitively coupled radio frequency discharges is relevant for research and applications. Typically, dust particles in plasmas form a layer located at the sheath edge adjacent to the bottom electrode. Here, a method of manipulating this distribution by the application of a specific excitation waveform, i.e. two consecutive harmonics, is discussed. Tuning the phase angle $\theta$ between the two harmonics allows one to adjust the discharge symmetry via the Electrical Asymmetry Effect (EAE). An adiabatic (continuous) phase shift leaves the dust particles at an equilibrium position close to the lower sheath edge. Their levitation can be correlated with the electric field profile. By applying an abrupt phase shift the dust particles are transported between both sheaths through the plasma bulk and partially reside at an equilibrium position close to the upper sheath edge. Hence, the potential profile in the bulk region is probed by the dust particles, providing indirect information on plasma properties. The respective motion is understood by means of an analytical model, showing both the limitations and possible ways of optimizing this sheath-to-sheath transport. A classification of the transport depending on the change in the dc self bias is provided, and the pressure dependence is discussed.'
address: |
$^1$ Institute for Plasma and Atomic Physics, Ruhr University Bochum, 44780 Bochum, Germany\
$^2$ Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, H-1525 Budapest POB 49, Hungary\
$^3$ Department of Electronics, Kyushu University, 819-0395 Fukuoka, Japan
author:
- 'Shinya Iwashita$^1$, Edmund Schüngel$^1$, Julian Schulze$^1$, Peter Hartmann$^2$, Zoltán Donkó$^2$, Giichiro Uchida$^3$, Kazunori Koga$^3$, Masaharu Shiratani$^3$, Uwe Czarnetzki$^1$'
title: 'Transport control of dust particles via the Electrical Asymmetry Effect: experiment, simulation, and modeling'
---
Introduction {#Introduction}
============
Dusty plasmas exhibit interesting physical phenomena [@DustyPlasmasBasic; @FortovPysRep2005] such as the interaction of the plasma sheath [@ParticleSheath1; @ParticleSheath2; @ParticleSheath3; @Melzer] and bulk [@ParticleBulk] with the dust particles, the occurrence of waves [@ParticleWaves1] and instabilities [@ParticleInstab1; @ParticleInstab2; @ParticleInstab3], phase transitions [@Phase1; @Phase2; @Phase3; @Phase4; @Phase5], and the formation of Coulomb crystals [@Thomas; @Chu; @Hayashi; @Arp]. They have drawn great attention for industrial applications because dust particles in plasmas play various roles: on one hand the accumulation of dust particles is a major problem for device operation in fusion plasma reactors as well as for semiconductor manufacturing [@Bonitz; @Shukla; @Bouchoule; @Krasheninnikov; @Selwyn], i.e. they are impurities to be removed. On the other hand, they are of general importance for deposition purposes [@DustDepo1; @DustDepo2] and it is well known that an enhanced control of such dust particles in plasmas has the potential to realize the bottom-up approach of fabricating novel materials, e.g., microelectronic circuits, medical components, and catalysts [@ShirataniJPD11; @Koga; @Wang; @Yan; @Fumagalli; @Kim]. In all cases the manipulation of dust particles, which is realized by controlling forces exerted on them such as electrostatic, thermophoretic, ion drag, and gravitational forces, or externally applied ones, e.g., created by a laser beam [@Nosenkoa; @MorfillPoP10; @Laser1; @Laser2], is crucially important. Furthermore, the use of dust particles as probes of these forces, revealing plasma properties, is a current topic of research [@MorfillPRL04; @DustProbes2].\
We have developed a novel method to control the transport of dust particles in a capacitively coupled radio frequency (CCRF) discharge by controlling the electrical symmetry of the discharge [@Iwashita]. Alternative dust manipulation methods using electrical pulses applied to wires have also been reported [@SamsonovPRL2002; @PustylnikPRE2006; @KnapekPRL2007; @PustylnikPoP2009]. Our dust manipulation method is based on the Electrical Asymmetry Effect (EAE) [@Heil]. The EAE allows one to generate and control a dc self bias, $\eta$, electrically, even in geometrically symmetric discharges. It is based on driving one electrode with a particular voltage waveform, $\phi_{\sim}(t)$, which is the sum of two consecutive harmonics with an adjustable phase shift, $\theta$: $$\label{EQappvol}
\phi_{\sim}(t)=\frac{1}{2}\phi_0[\cos(2\pi f t+\theta)+\cos(4 \pi f t) ].$$ Here, $\phi_0$ is the identical amplitude of both harmonics. In such discharges, $\eta$ is an almost linear function of $\theta$. In this way, separate control of the ion mean energy and flux at both electrodes is realized in an almost ideal way. At low pressures of a few Pa, the EAE additionally allows one to control the maximum sheath voltage and width at each electrode by adjusting $\theta$ [@Heil], resulting in the control of forces exerted on dust particles, such as electrostatic and ion drag forces. In contrast to the pulsing methods mentioned above, the change in the phase angle does not require a change in the applied power or RF amplitude. Furthermore, it is a radio frequency technique, i.e. no DC voltage is applied externally and the EAE is, therefore, applicable to capacitive discharge applications with dielectric electrode surfaces, without the need for additional electrodes or power supplies for the pulsing. The EAE can be optimized with respect to the control range of the dc self-bias by choosing non-equal voltage amplitudes for the individual harmonics [@EAE7] or by adding more consecutive harmonics to the applied voltage waveform [@EAE11; @BoothJPD12]. In this study we intend to describe the basic mechanisms of the manipulation of the dust particle distribution in electrically asymmetric CCRF discharges. Thus, we restrict ourselves to the simplest case described by Eq. (\[EQappvol\]). It is important for the analysis carried out in this work that the dust density is sufficiently low so that the plasma parameters are not disturbed by the dust particles. A large concentration of dust particles disturbs the electron density and can cause a significant change of the dc self bias when distributed asymmetrically between the sheaths [@Boufendi2011; @Watanabe1994; @EddiJPD2013]. The critical parameter for the disturbance is Havnes’ value: $P = 695 T_e r_d n_d / n_i$, where $T_e$, $r_d$, $n_d$ and $n_i$ are electron temperature, radius of dust particles, their number density and ion density, respectively [@Thomas; @Havnes1990]. $P$ is basically the ratio of the charge density of dust particles to that of ions. The concentration of dust particles disturbs the electron density for $P > 1$, while it does not for $P \ll 1$. In the critical region ${P_c} = 0.1-1$ the charge of the dust particles becomes significant in the total charge balance [@Havnes1990]. We calculate $P \approx 10^{-3}$ for our experiment, which is well below the $P_c$. For this estimation, direct images of dust particles were analyzed and a mean distance between particles of about 1 mm was determined. Thus, the concentration of dust particles is quite low in this study and they do not disturb the plasma.\
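For illustration, the waveform of Eq. (\[EQappvol\]) and the $\theta$-dependence of its extrema can be reproduced with a few lines (our sketch; the numbers are representative of the experimental parameters used in this work):

```python
import numpy as np

# Driving voltage waveform of Eq. (1); the imbalance of its global
# positive and negative extrema changes with theta, which is what
# generates and controls the dc self bias via the EAE.
f, phi0 = 13.56e6, 200.0        # fundamental frequency [Hz], amplitude [V]
t = np.linspace(0.0, 1.0 / f, 2001)

def phi(theta):
    return 0.5 * phi0 * (np.cos(2 * np.pi * f * t + theta)
                         + np.cos(4 * np.pi * f * t))

for theta in (0.0, 0.25 * np.pi, 0.5 * np.pi):
    w = phi(theta)
    print(f"theta = {theta:5.3f}: max = {w.max():7.1f} V, min = {w.min():7.1f} V")
```

At $\theta=0$ the positive extremum dominates, at $\theta=\pi/2$ the negative one; this asymmetry is the handle used throughout this work.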
This paper is structured in the following way: this introduction is followed by a description of the methods used in this work. There, information on the experimental setup as well as the numerical simulation method is provided, and the analytical approaches to the RF sheath driven by non-sinusoidal voltage waveforms and to the motion of dust particles in the plasma bulk region are explained. The results, which are presented and discussed in the third section, include the control of the dc self bias in dusty plasmas via the EAE, the change of the dust levitation position when changing the phase angle adiabatically (continuously), the motion of dust particles through the plasma bulk when tuning the phase angle abruptly, and a classification of the dust particle transport depending on the change in the dc self bias and the discharge conditions. Finally, concluding remarks are given in section four.
Methods
=======
Experiment
----------
![Sketch of the experimental setup.[]{data-label="FIGsetup"}](fig1.eps)
Figure \[FIGsetup\] shows the experimental setup. The experiments are carried out using a CCRF discharge operated in argon gas at $p$ = 2 - 13 Pa, excited by applying $\phi_{\sim}(t)$ according to Eq. (\[EQappvol\]) with $f$ = 13.56 MHz and $\phi_0$ = 200 - 240 V. The applied voltage and the dc self bias are measured using a high voltage probe. Details of the electrical circuit have been provided in previous papers [@Julian; @Iwashita]. The lower (powered) and upper (grounded) electrodes of 100
---
title: '**Exclusion of measurements with excessive residuals**'
---
Sobolev Astronomical Institute of St. Petersburg State University,\
Universitetskij Prospekt 28, Staryj Peterhof, St. Petersburg 198504, Russia\
\*Email: nii@astro.spbu.ru\
An adjustable algorithm for the exclusion of conditional equations with excessive residuals is proposed. The criteria applied in the algorithm use variable exclusion limits which decrease as the number of equations goes down. The algorithm is easy to use; it possesses rapid convergence, minimal subjectivity, and a high degree of generality.
*Keywords:* Estimation of model parameters; Conditional equations; Large residuals; Criteria of exclusion\
**1. Introduction** {#introduction .unnumbered}
===================
In many astronomical (and not only astronomical) problems of estimation of model parameters, it is important to reasonably exclude unreliable data which produce large residuals, i.e., deviations, $\varepsilon$, of measurements from the accepted model: $|\varepsilon_j|/\sigma_j\gg 1$, where $\sigma_j$ is the standard deviation of the $j$-th measurement, $j=1,\:\ldots,\:N$, and $N$ is the number of measurements, i.e., of conditional equations. The occurrence of large residuals (“blunders”) contradicts the basic assumption of least-squares fitting, namely the normal distribution of measurement errors, and can cause strong biases of parameter estimates. The common “$3\sigma$” criterion to exclude blunders, $$\label{3s}
\frac{|\varepsilon_j|}{\sigma_j}>k=3,$$ does not take into account that the probability of accidental occurrence of a residual satisfying (\[3s\]) increases with $N$ and becomes non-negligible already at $N$ of order several tens.
In this paper, a more adjustable algorithm for the exclusion of equations with excessive residuals, based on a variable criterion limit, is elaborated.
**2. Algorithm of excluding measurements with excessive residuals** {#emalgorithm-of-excluding-measurements-with-excessive-residuals .unnumbered}
=======================================================================
1. For a given $N$, a value of $\kappa$ which satisfies the equation $$\left[1-\psi( \kappa)\right]N=1, \qquad \psi(z)\equiv\sqrt{\frac2\pi}\int^z_0 e^{-\frac{1}{2}t^2}dt,$$ where $\psi(z)$ is the probability integral, is found. The expectation value for the number of conditional equations with residuals $$\label{k1s}
{|\varepsilon_j|/\sigma_j}>\kappa,$$ equals one if the residuals are normally distributed. A larger number of equations with such residuals may be considered as probably excessive.
2. The number $L$ of equations satisfying the criterion (\[k1s\]) is determined.

3. If $L>1$, $L-L'$ equations with the largest values of $|\varepsilon_j|/\sigma_j$ are excluded from consideration. Here, $L'\ge 1$ is a parameter of the algorithm.
4. The criterion (\[3s\]) with $k$ depending on $N$ is applied to the remaining equations, in particular if $L=1$: $$\label{kg}
{|\varepsilon_j|/\sigma_j}>k_\gamma(N),$$ where $k_\gamma$ is the root of the equation $$\label{kg(N)}
1-\left[\psi( k_\gamma)\right]^N=\gamma.$$ Here, $\gamma$ is an accepted confidence level. For low $\gamma$, i.e., for low $1-\psi(k_\gamma)$, in lieu of (\[kg(N)\]) an approximate equation can be used: $$\label{kg(N)1}
[1-\psi( k_\gamma)]N=\gamma.$$
5. Following the exclusion of equations with excessive residuals, a new solution of the problem is found from the remaining equations. Thereupon points 1–4 of this algorithm are applied again with new estimates of the parameters and $\sigma_j$. The iterations are interrupted if no further exclusion happens.
The probability ${\cal P}(L)$ of accidental occurrence of $L$ residuals satisfying (\[k1s\]) can be approximately evaluated with the Poisson distribution, which is ${\cal P}(L)={e^{-1}/L!}$ in this case. This approach gives $${\cal P}(L\ge 2) \approx 0.264,\qquad
{\cal P}(L\ge 3) \approx 0.080,\qquad
{\cal P}(L\ge 4) \approx 0.019.$$ Thus numbers of $L=3$ and $4$ can be considered as excessive, i.e., $L'=2$ or 3 can be correspondingly accepted. However, if unbiased parameters are more important than an unbiased residual variance, $L'=1$ is also allowed.
Point 4 of the algorithm is essential in the case of only a single (or a few) very large blunder(s), when point 3 cannot come into action. A level of $\gamma=0.05$, being the standard one in many statistical criteria, can be accepted.
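For reference, the criteria and one pass of points 1–4 can be written out explicitly (a sketch of our reading of the algorithm; since $\psi(z)=\operatorname{erf}(z/\sqrt{2})$, the defining equations invert in closed form):

```python
import numpy as np
from scipy.special import erfinv

def kappa(N):
    # root of [1 - psi(kappa)] * N = 1, with psi(z) = erf(z / sqrt(2))
    return np.sqrt(2.0) * erfinv(1.0 - 1.0 / N)

def k_gamma(N, gamma=0.05):
    # root of 1 - psi(k)^N = gamma
    return np.sqrt(2.0) * erfinv((1.0 - gamma) ** (1.0 / N))

def exclude_once(res, sigma, Lprime=2, gamma=0.05):
    """One pass of points 1-4; returns a boolean mask of retained equations."""
    r = np.abs(res) / np.asarray(sigma)
    N = r.size
    keep = np.ones(N, dtype=bool)
    L = int(np.count_nonzero(r > kappa(N)))            # points 1-2
    if L > 1:                                          # point 3
        worst = np.argsort(r)[::-1][:max(L - Lprime, 0)]
        keep[worst] = False
    keep &= r <= k_gamma(N, gamma)                     # point 4
    return keep
```

Point 5 then amounts to re-fitting the model on the retained equations and repeating the pass with the updated residuals until the mask no longer changes.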
**Acknowledgments** {#acknowledgments .unnumbered}
===================
The work is partly supported by the Russian Foundation for Basic Research grant 08-02-00361 and the Russian President Grant for State Support of Leading Scientific Schools of Russia no. NSh-1323.2008.2.
---
author:
- |
[calin.barbat@web.de]{}
title: Dualization of projective algebraic sets by using Gröbner bases elimination techniques
---
Introduction
============
I read about the duality principle in the book [@gie] and saw some examples of dualization in the books [@gie], [@kom] for quadrics and in the more recent introductory book [@GF94] for plane curves. This last book gives some particular examples of how the dualization is carried out, but no general method. Some authors mention that variables have to be eliminated from a system. For plane curves the system is derived nicely in [@bri]. For one hypersurface I recently found [@pw], p. 104f. But for intersections of hypersurfaces the only example that I found was the intersection of two hypersurfaces in [@kom], treated as a general example without being specific. In this article I therefore derive the system for the intersection case.
In what follows I recommend reading [@GP] for the theoretical background on projective spaces, homogeneous polynomials, ideals, projective varieties, etc., which is not covered in the present work. For an introduction to Gröbner bases see [@cox] or [@fro]. I used the methods derived in this article to dualize some examples and also checked them against the examples given in [@hr].
Motivation with plane curves
============================
In what follows here, we assume that the denominators do not vanish. Think of the inversion radius $r$ as having value $i=\sqrt{-1}$. (Other values are also permitted, e.g. $1$.) We consider different representations of plane curves.
Parametric
----------
We consider a parametrically given plane curve $c(t)=(x(t), y(t))^t$. Then we can define the pedal curve of $c(t)$ with respect to the origin as $$p(c(t))= \frac{y(t)\,x'(t) - x(t)\,y'(t)}{{x'(t)}^2 + {y'(t)}^2} \left( \begin{array}{c}-y'(t)\\x'(t)\end{array} \right)$$ (the pedal is the locus of the feet of perpendiculars from the origin to the tangents of the curve $c(t)$). We can also define what it means to invert $c(t)$ with respect to the circle of radius $r$ around the origin: $$\iota(c(t))=\frac{r^2}{{x(t)}^2 + {y(t)}^2} \left( \begin{array}{c}x(t)\\y(t)\end{array} \right)$$ By composing the two maps given above we get the dual of $c(t)$ as the inverse of the pedal: $$d(c(t))=\iota(p(c(t)))=\frac{r^2}{y(t)\,x'(t) - x(t)\,y'(t)} \left( \begin{array}{c}-y'(t)\\x'(t)\end{array} \right)$$ This is best explained by a commutative diagram: $$\xymatrix{ & c(t) \ar[dl]_d \ar[d]^p \\
\iota(p(c(t))) \ar[ur] \ar[r]^\iota & p(c(t)) \ar[l] }$$
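As a quick symbolic sanity check of these formulas (our own example), one can compute the dual of the parabola $c(t)=(t,t^2)$ with inversion radius $r=i$, i.e. $r^2=-1$:

```python
from sympy import symbols, Matrix, simplify

# Dual of the parabola c(t) = (t, t^2) via d(c) = r^2/(y x' - x y') (-y', x')^T
t = symbols('t')
x, y = t, t**2
xp, yp = x.diff(t), y.diff(t)
d = (-1 / (y * xp - x * yp)) * Matrix([-yp, xp])   # r^2 = -1
print(simplify(d.T))                               # Matrix([[-2/t, t**(-2)]])
```

Writing $(u,v)=(-2/t,1/t^2)$ gives $v=u^2/4$, so the dual of this parabola is again a parabola.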
Complex
-------
Now we do the same for a curve $z(t)$ in the complex plane. The pedal is $$p(z(t)) = \frac{\overline{z'(t)}\,z(t) - \overline{z(t)}\,z'(t)}{2\,\overline{z'(t)}}$$ The inverse is $$\iota(z(t))=\frac{r^2}{\overline{z(t)}}$$ and the dual is (again by composition) $$d(z(t))=\iota(p(z(t)))=\frac{2\,r^2\,z'(t)}{\overline{z(t)}\,z'(t)-\overline{z'(t)}\,z(t)}$$ A similar commutative diagram as in the parametric case holds.
Implicit
--------
For implicitly given curves $f(x, y)=0$ we cannot give explicit formulas for the pedal curve but we give a method for computing it. We need the gradient $\nabla f(x, y) = \left(\frac{\partial f}{\partial x}(x, y), \frac{\partial f}{\partial y}(x, y)\right)^t$. Assume $p=(x,y)$ is a point of $f$ and $P=(X,Y)$ is a point of the pedal of $f$. Then, by the definition of the pedal, following must hold:
1. $p=(x,y)$ is a point of $f$: $f(x,y)=0$.
2. $P$ lies on the tangent at $f$ through $p$: $(P-p)^t \nabla f(x, y) =0$.
3. $P$ is orthogonal to tangent at $f$ through $p$: $P^t \left(\frac{\partial f}{\partial y}(x, y), -\frac{\partial f}{\partial x}(x, y)\right)^t =0$
By eliminating $(x,y)$ from these three equations we get an equation in $(X,Y)$ which is the pedal curve. For convenience we substitute $(X,Y)\mapsto (x,y)$. In what follows, we will see how the elimination can be done with Gröbner bases.
The inverse of $f(x, y)=0$ is $f\left(\frac{r^2\,x}{x^2 + y^2},\frac{r^2\,y}{x^2 + y^2}\right)=0$. The dual of $f$ is the composition of inversion and pedal as constructed above.
Theory
======
Case of one homogeneous polynomial
----------------------------------
First we consider the following projective algebraic set $$V(p) = \{{\bf x} \in \mathbb{K}^{n+1}\mid p({\bf x})=0\}$$ with $\mathbb{K}$ an algebraically closed field, ${\bf x}=(x_0, x_1, \ldots, x_n)$ a point of $\mathbb{K}^{n+1}$ and $p$ a homogeneous polynomial from $\mathbb{K}[x_0, x_1, \ldots, x_n]$. $V(p)$ consists of all roots ${\bf x}=(x_0, x_1, \ldots, x_n)$ of $p$ and is a hypersurface in the projective space ${\mathbb P}^n$.
Let ${\bf u} = (u_0, u_1, \ldots, u_n)$ be a normal vector to $V(p)$ in a regular point ${\bf x} \in V(p)$. On the other side we know that the gradient $$\nabla p({\bf x}) = \left(\frac{\partial p}{\partial x_0}({\bf x}), \frac{\partial p}{\partial x_1}({\bf x}), \ldots, \frac{\partial p}{\partial x_n}({\bf x})\right)^t$$ is normal to $V(p)$ in $\bf x$. Therefore ${\bf u}$ and $\nabla p({\bf x})$ are linearly dependent. This can be written as ${\bf u}=\lambda\nabla p({\bf x})$ with a factor $\lambda$. We can form the following system $$\begin{aligned}
\left\{\begin{aligned}
\begin{split}
p({\bf x}) &= 0 \\
{\bf u} - \lambda \nabla p({\bf x}) &= {\bf 0} \\
\end{split}
\end{aligned}\right.\label{ds1}\end{aligned}$$
We define the set $V^*(p)=\{{\bf u} \in \mathbb{K}^{n+1}\mid p({\bf x}) = 0, \, {\bf u} - \lambda \nabla p({\bf x}) = 0 \}$ of partial solutions to the system (\[ds1\]) to be the dual of $V(p)$.
Note that we are not interested in a complete solution of (\[ds1\]), but only in the partial solutions, which I call here the $\bf u$-part of the solution. The $\bf u$-part of the solution of this system is the result of applying the Gauß map to $p({\bf x}) = 0$, where the Gauß map (see [@s], p. 103) is given – in Chow coordinates (see [@l]) – by $$\gamma : {\bf x} \mapsto {\bf u} = \lambda \nabla p({\bf x})$$ Now we want to construct a system equivalent to (\[ds1\]) but simpler in structure, describing the same dual algebraic set $V^*(p)$.
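Before treating the general case, it may help to see the elimination at work in the smallest example (our sketch; SymPy stands in for any Gröbner basis engine, and the conic is chosen for its known self-duality):

```python
from sympy import symbols, groebner

# Dualize the smooth conic p = x1^2 + x2^2 - x0^2 by eliminating
# x0, x1, x2, lam from the system (ds1) with a lexicographic Groebner
# basis; basis elements containing only u0, u1, u2 cut out the dual.
x0, x1, x2, lam, u0, u1, u2 = symbols('x0 x1 x2 lam u0 u1 u2')
p = x1**2 + x2**2 - x0**2
system = [p] + [u - lam * p.diff(xi)
                for u, xi in ((u0, x0), (u1, x1), (u2, x2))]
G = groebner(system, x0, x1, x2, lam, u0, u1, u2, order='lex')
dual = [g for g in G.exprs if g.free_symbols <= {u0, u1, u2}]
print(dual)   # contains u0**2 - u1**2 - u2**2: the conic is self-dual
```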
There exists a system $B$ of polynomials in $\mathbb{K}[u_0, u_1, \ldots, u_n]$ with the same solution set $V^*(p)$ as the system (\[ds1\]).
We start with the system (\[ds1\]) viewed as a system of polynomials $$q_j \in \mathbb{K}[x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n]$$ and eliminate the first
---
author:
- Yu Hu
- James Trousdale
- Krešimir Josić
- 'Eric Shea-Brown'
title: Motif Statistics and Spike Correlations in Neuronal Networks
---
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Chris Hoffman and Brent Doiron for their helpful insights. This work was supported by NSF grants DMS-0817649, DMS-1122094, and a Texas ARP/ATP award to KJ, and by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and NSF Grants DMS-1056125 and DMS-0818153 to ESB.
---
abstract: 'We present optical candidates for 75 X-ray sources in a $\sim 1$ deg$^2$ region overlapping with the medium deep ROSAT survey (Molthagen et al. 1997). These candidates are selected using the multi-color CCD imaging observations made for the T329 field of the Beijing-Arizona-Taipei-Connecticut (BATC) Sky Survey, which utilizes the NAOC 0.6/0.9m Schmidt telescope with 15 intermediate-band filters covering the wavelength range 3360-9745 Å. These X-ray sources are relatively faint (CR $\ll 0.2\ \rm s^{-1}$) and thus mostly are not included in the RBS catalog; they also remained X-ray sources without optical candidates in a previous identification program carried out by the Hamburg Quasar Survey. Within their position-error circles, almost all the X-ray sources are observed to have one or more spatially associated optical candidates down to the magnitude $m_V \sim 23.1$. We have classified 149 of the 156 detected optical candidates, associated with 73 of the 75 X-ray sources, with a new method which predicts a redshift for non-stellar objects, which we have termed the SED-based Object Classification Approach (SOCA). These optical candidates include: 31 QSOs, 39 stars, 37 starburst galaxies, 42 galaxies, and 7 “just” visible objects. Twenty-eight X-ray error circles have only one visible object in them: 9 QSOs, 3 normal galaxies, 8 starburst galaxies, 6 stars, and two of the “just” visible objects. We have also cross-correlated the positions of these optical objects with NED, the FIRST radio source catalog and the 2MASS catalog. Separately, we have also SED-classified the remaining 6011 objects in our field of view. Optical objects are found in the error circles at the $6.5\sigma$ level above what one would expect from a random distribution; only QSOs are over-represented in these error circles, at greater than 4$\sigma$ frequency. We estimate redshifts for all extragalactic objects, and find a good correspondence of our predicted redshift with the measured redshift (a mean error of 0.04 in $\Delta z$). There appears to be a supercluster at z $\sim$ 0.3-0.35 in this direction; many of the galaxies in the X-ray error circles are found in this redshift range.'
author:
- |
Haotong Zhang, Suijian Xue, David Burstein,\
Xu Zhou, Zhaoji Jiang, Hong Wu, Jun Ma, Jiansheng Chen,\
and Zhenlong Zou
title: 'Multicolor Photometric Observations of Optical Candidates to Faint ROSAT X-ray Sources in a 1 deg$^2$ field of the BATC Survey'
---
[**keywords:**]{} X-rays: galaxies - galaxies: active - catalog: surveys
Introduction
============
Combined optical and X-ray data allow one to obtain information about the luminosity functions of various types of X-ray sources as well as their evolution with redshift. In turn, this information can be used to further constrain models for the production of the X-ray background at different flux levels. Much effort has so far been devoted to the optical identification of the X-ray sources in the ROSAT/Bright Source (RBS) catalog (e.g., Voges et al. 1999; Rutledge et al. 2000), as well as of X-ray sources in some individual ROSAT deep survey observations (e.g., Lehmann et al. 2001).
Yet, it is often unknown how many of the detections that occur in X-ray error circles are real associations of optical counterparts with these X-ray sources, and how many of these associations are due to random chance. To assess this, one needs to have detected and identified all optical objects in a given image, and then to see what percentages of these objects (QSOs, galaxies, stars) are found near or within the areas covered by the X-ray error circles. This is precisely the kind of data we have for 75 X-ray sources detected with the ROSAT PSPC to a flux limit $S_x({\rm 0.1-2.4\ keV}) \geq 5.3\times10^{-14}$, in a 1 deg$^2$ field of view, as this field of view was also observed in our multicolor images for the Beijing-Arizona-Taipei-Connecticut Sky Survey (BATC survey). The relevant X-ray and optical data are presented in § 2. Details of the object classification procedure as well as the selection of the X-ray candidates are given in § 3. Associated information that can be gleaned from these data is given in § 4. We summarize our results in § 5.
The data and analysis
=====================
The X-ray data
--------------
The X-ray data come from a catalog obtained from a medium deep ROSAT survey in the HQS field HS 47.5/22 (Molthagen, Wendker, & Briel, 1997). The survey consists of 48 overlapping ROSAT PSPC pointings which were added up to produce a final catalog containing 574 X-ray sources with broad band (0.1-2.4 keV) count rates between $\sim3\times10^{-3}\rm cts\ s^{-1}$ and $\sim0.2\rm
cts\ s^{-1}$, in a field of view (FOV) of $\sim 2.3\rm\ deg^2$. Molthagen et al. adopt an X-ray error circle of 2$\sigma$ + 10$''$ in radius, with the value of $\sigma$ coming from their observations. This is the X-ray error circle used in the present analysis.
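The positional association step itself is elementary; a sketch (ours, with hypothetical array layouts for the two catalogs) is:

```python
import numpy as np

# Count optical objects inside each X-ray error circle of radius
# r = 2*sigma + 10 arcsec.  xray: rows (ra_deg, dec_deg, sigma_arcsec);
# optical: rows (ra_deg, dec_deg).
def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cosd = (np.sin(dec1) * np.sin(dec2)
            + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0))) * 3600.0

def candidates(xray, optical):
    out = []
    for ra, dec, sig in xray:
        sep = ang_sep_arcsec(ra, dec, optical[:, 0], optical[:, 1])
        out.append(np.flatnonzero(sep <= 2.0 * sig + 10.0))
    return out
```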
There was a preliminary identification of these X-ray sources with the HQS plates (Molthagen et al. 1997). Only a few objects, all brighter than $m_B\approx18^m.5$, have recognizable spectra. At $m_{\rm B}>18^m.5$, many objects are generally classified as weak and extremely blue, blue or red. For many X-ray sources no spectral classification was possible, the optical object simply being visible or the field of view empty. 75 of the 574 HQS sources fall on one program field of the BATC survey, T329, centered at 09:56:24.46, +47:35:08.4 (J2000), forming a subsample of the ROSAT medium deep survey in a 1 deg$^2$ field. (One-third of the BATC fields are centered on a known quasar. For field T329, this quasar is PC0953+4749 with z = 4.46, originally discovered by Schneider, Schmidt & Gunn 1991. Ironically, this QSO is not an X-ray source in the HQS field.)
Molthagen et al. associate 25 optical candidates for these 75 X-ray sources, or a frequency of 1/3: 6 QSOs or active galaxies; 7 QSO/active galaxy candidates (classified as such or extremely blue); 1 star; 8 stellar candidates; 1 galaxy candidate; 2 faint red objects; 5 unidentified spectra (including overlaps); 39 visible on the HQS direct plate only; and 6 empty fields (i.e., no counterpart on the HQS plate).
The BATC optical data
---------------------
Optical observations of BATC field T329 were carried out from 1996-1999 as part of the BATC Survey. Our survey utilizes the 0.6/0.9m Schmidt telescope of the Xinglong Observing Station of the National Astronomical Observatory of China (NAOC), equipped with 15 intermediate-band filters covering the wavelength range 3360-9745Å. With this facility our survey is designed to do multi-color CCD ($2048\times2048$) imaging of 500 selected, $\sim
1$ deg$^2$ fields-of-view for multiple scientific purposes (cf. Fan et al. 1996; Shang et al. 1998; Zheng et al. 1999; Zhou et al. 1999; Yan et al. 2000; Kong et al. 2000; Ma et al. 2002; Wu et al. 2002).
The dataset for T329 consists of a number of individual direct CCD images in each of the 15 BATC passbands. These images are first treated individually (bias, dark and flat-fielding corrections) and then combined to comprise deep images. Information on the passbands used for the present study, including filter parameters, total exposure time, number of flux calibration images obtained, and the magnitude limit for that passband, is given in Table \[table1\]. Details on the BATC flux calibration procedure are given in several previous papers (Fan et al. 1996; Zhou et al. 1999; Yan et al. 2000) and the reader is referred to those papers for this information. Further discussion of the observations made in field T329 that are separate from the X-ray identification issue is given in Zhou et al. (2003). The final product of the BATC observations of field T329 is a catalog of 6160 point-like optical objects in our $58' \times 58'$ field of view, with astrometry and photometry in 15 colors.
SED classification
------------------
We are in the process of developing a SED-based Object Classification Approach (termed SOCA
---
abstract: |
Experience collected in mesoscopic dynamic modeling of externally driven systems indicates the absence of potentials that could play the role of equilibrium or nonequilibrium thermodynamic potentials, yet thermodynamics-like modeling of such systems is often found to provide a good description, good understanding, and predictions that agree with results of experimental observations. This apparent contradiction is explained by noting that the dynamic and the thermodynamics-like investigations on a given mesoscopic level of description are not directly related. Their relation is indirect. They both represent two aspects of dynamic modeling on a more microscopic level of description. The thermodynamic analysis arises in the investigation of the way the more microscopic dynamics reduces to the mesoscopic dynamics (reducing dynamics), and the mesoscopic dynamic analysis in the investigation of the result of the reduction (reduced dynamics).
author:
- |
Miroslav Grmela[^1]\
École Polytechnique de Montréal,\
C.P.6079 suc. Centre-ville Montréal, H3C 3A7, Québec, Canada
title: ' **Externally driven macroscopic systems: Dynamics versus Thermodynamics**'
---
Introduction {#Intr}
============
The Boussinesq equation is a well-known example of a mathematical formulation of mesoscopic dynamics of externally driven macroscopic systems. The mesoscopic level on which the physics is regarded in this example is the level of fluid mechanics, the system itself is a horizontal layer of fluid heated from below (Rayleigh-Bénard system), and the external driving forces are the gravitational force and the imposed temperature gradient. Analysis of solutions of the Boussinesq equations reveals properties observed in experiments (e.g. the observed passage from less organized to more organized behavior presents itself as a bifurcation in solutions). Many other examples of this type can be found for instance in [@Halp]. One of the common features of the dynamical equations that arise in these examples (a feature that has been noted in [@Halp]) is that it does not seem to be possible, at least in general, to associate them with a potential whose landscape would provide pertinent information about their solutions \[ *“... there is no evidence for any global minimization principles controlling the structure ...” - see the last paragraph of Conclusion in [@Halp]*\]. Since potentials of this type are essential in any type of thermodynamics, the observed common feature seems to point to the conclusion that there is no thermodynamics of externally driven systems.
On the other hand, there is a long tradition (starting with Prigogine in [@Prigogine]) of investigating externally driven systems with methods of thermodynamics. Roughly speaking, responses of macroscopic systems to external forces are seen as adaptations minimizing their resistance. The thermodynamic potentials involved in this type of consideration (i.e. potentials used to characterize the “resistance”) are usually various versions of the work done by external forces and the entropy production. There are many examples of very successful and very useful considerations of this type (see e.g. [@Umb]). In Section \[EX5\] we illustrate the thermodynamic analysis in the context of an investigation of the morphology of immiscible blends. Specifically, we show how the thermodynamic argument provides an estimate of concentrations at the point of phase inversion, i.e. at the point at which the morphology of a mixture of two immiscible fluids changes in such a way that the roles of being encircled and encircling change (i.e. the continuous phase and the dispersed phase exchange their roles).
The experience collected in investigations of externally driven systems can thus be summed up by saying that mesoscopic dynamical modeling indicates an impossibility of using thermodynamics-like arguments, yet arguments of this type are often found to be very useful and pertinent. There are in fact well known examples [@Keizer] in which both dynamic and thermodynamic approaches were developed and the potentials used in the thermodynamic analysis are proven to play no significant role in the dynamic analysis. Our objective in this paper is to suggest an explanation of this apparent contradiction. We show that the dynamic and the thermodynamic analyses made on a given mesoscopic level of description are not directly related. Their relation is indirect. They are both two aspects of a single dynamic analysis made on a more microscopic (i.e. involving more details) level of description. An investigation of the way the microscopic dynamics reduces to the mesoscopic dynamics provides the mesoscopic thermodynamics (Section \[RD\]), and the investigation of the final result of the reduction provides the mesoscopic dynamics.
It is important to emphasize that we are using in this paper the term “thermodynamics” in a general sense (explained in Section \[RD\]). While the classical equilibrium thermodynamics and the Gibbs equilibrium statistical mechanics are particular examples of the general thermodynamics presented in Section \[RD\], they are not the ones that are the most pertinent for discussing externally driven systems.
Multiscale Mesoscopic Models {#MMM}
=============================
Given an externally driven system (or a family of such systems), how do we formulate its dynamical model? The most common way to do it (called hereafter a direct derivation) proceeds in the following three steps. First, behavior of the externally driven macroscopic systems under consideration is observed experimentally in certain types of measurements called hereafter *meso-measurements*. In the second step, the experience collected in the meso-measurements together with an insight into the physics taking place in the observed systems leads to the choice of the level of description, i.e. the choice of state variables (we shall denote them by the symbol $y$), and equations $$\label{Gdyn}
\dot{y}=g(y,\zeta, \mathcal{F}^{meso})$$ governing their time evolution. By $\zeta$ we denote the material parameters (i.e. the parameters through which the individual nature of the physical systems under consideration is expressed) and $\mathcal{F}^{meso}$ denotes the external forces. In the third step, the governing equations (\[Gdyn\]) are solved and the solutions are compared with results of observations. If the comparison is satisfactory, the model represented by (\[Gdyn\]) is called a well established mesoscopic dynamical model (e.g. the Boussinesq model is a well established model of the Rayleigh-Bénard systems). The choice of state variables $y$ in the second step is usually made by trying to formulate the simplest possible model in the sense that the chosen state variables are related as closely as possible to the quantities observed in the *meso* measurements. The original derivation of the Boussinesq equations constituting the dynamic model of the Rayleigh-Bénard system provides a classical example of the direct derivation. The chosen mesoscopic level is in this example the level of fluid mechanics (the classical hydrodynamic fields serve as state variables $y$). The comparison made in the third step shows indeed agreement between predictions of the model and results of experimental observations. Hereafter, we shall refer to the collection of *meso* measurements and the mathematical model (\[Gdyn\]) as a *meso level* description.
We now pick one well established mesoscopic model (e.g. the Boussinesq model). There are immediately two conclusions that we can draw. The first one is that there exist more microscopic levels (i.e. levels involving more details; we shall call them *MESO levels*) on which the physical system under investigation can be described. This is because the chosen *meso level* (e.g. the level of fluid mechanics) ignores many microscopic details that appear to be irrelevant to our interests (determined by meso-measurements and also by intended *meso* applications). We recall that there always exists at least one well established *MESO level* on which states are described by position vectors and velocities of $\sim 10^{23}$ particles composing the macroscopic systems under consideration (provided we remain in the realm of classical physics). Such an ultimately microscopic model will hereafter be denoted as the *MICRO* model.
The second conclusion is that if we choose a *MESO level* and find it to be well established (i.e. its predictions agree with results of more detailed *MESO* measurements), then we have to be able to see in solutions to its governing equations the following two types of dynamics: (i) the reducing dynamics, describing the approach of the *MESO* dynamics to the *meso* dynamics, and (ii) the reduced *MESO* dynamics, which is the *meso* dynamics. This is because both the original *meso* model and the more microscopic *MESO* model have been found to be well established. Following further the second conclusion, we see that we now have an alternative way to derive the governing equations of our original *meso* model. In addition to its direct mesoscopic derivation described above in the first paragraph, we can derive it also by constructing first a more microscopic *MESO* model and then recognizing the *meso* model as a pattern in solutions to its governing equations. This new way of deriving the *meso* model seems complicated and, indeed, it is rarely used. Nevertheless, it is important that this alternative way of derivation exists and that, by following it, we arrive at least at two new results: (a) the material parameters $\zeta$ through which the individual nature of macroscopic systems is expressed in the *meso* model (\[Gdyn\]) appear as functions of the material parameters playing the same role in the more microscopic *MESO* model, and (b) the reducing dynamics, giving rise to thermodynamics (as we show in Section \[RD\]).
The above consideration motivates us to start our investigation of externally forced macroscopic systems with two mesoscopic models instead of with only one such model (\[Gdyn\]). The second model (*MESO* model) is formulated on a more microscopic
[**AMS, a particle spectrometer in space [^1]**]{}
[M. Buénerd]{}\
\
[for the AMS collaboration]{}
Introduction
============
Accurate measurements of particle fluxes close to earth have been performed recently by the AMS experiment, bringing a body of excellent new data on the particle populations in the low altitude terrestrial environment. These results should rejuvenate the long-standing interest of a broad community of scientists in the interactions between the cosmic ray (CR) flux and the atmosphere and in the dynamics of particles in the earth neighborhood. They certainly open new prospects for accurate studies of these phenomena to investigate the interaction mechanisms generating the observed populations.
The AMS experiment took its first data during a precursor flight on June 2-12, 1998, on the Space Shuttle DISCOVERY. The flight was originally intended as a qualification test for the spectrometer instrumentation. The orbit altitude was close to 370 km. During 100 hours of counting, about 10$^8$ events were recorded, providing new results of high quality on the particle distributions at the altitude of the detector. Some of these results were rather unexpected. They illustrate the discovery potential of the experiment in its future steps.
This contribution is devoted to a general presentation of the project, of the results obtained during this first experimental test and of their interpretation, and of the goals and plans of the forthcoming phase II of the experimental program. The first part will deal with a description of the measurements performed and the questions they raise on the dynamics of the detected particles in the earth environment. The second part will describe a phenomenological approach based on a simulation to account for the observed distributions. The third and last part will describe the phase II AMS spectrometer, which will begin operating on the International Space Station in October 2003 and will be very different from the version flown on the shuttle, together with its physics program.
The AMS01 precursor flight
==========================
The spectrometer operation during the flight was very successful, with only a few instrumental defects, none of which had a significant effect on the quality of the measurements achieved.
The spectrometer
----------------
Figure \[AMS01\] shows a cut view in perspective of the spectrometer which was flown on the shuttle. The apparatus included a cylindrical permanent magnet generating a 0.15 Tesla dipole field perpendicular to the axis of the cylinder inside its volume [@AIMANT]. The inner volume was mapped with a tracker consisting of 6 planes of silicon microstrips, partially equipped at this stage, allowing the reconstruction of particle trajectories [@TRACK]. The tracker planes also provided dE/dX measurements of the particles. Above and below the magnet, two double planes of scintillator hodoscopes with perpendicular orientations of their paddles provided both a measurement of the particle time of flight (TOF) and of their specific energy loss (dE/dX). The paddle location and the position sensitivity inside the paddles also provided a complementary determination of the particle hit coordinates, useful for background rejection. A skirt of scintillators around the tracker was used to veto particles outside the fiducial angular acceptance of the counter. At the bottom of the device, a threshold Cherenkov counter equipped with n=1.035 aerogel material allowed $p/e^+$ and $\bar{p}$/e$^-$ discrimination for particles below the $p(\bar{p})$ threshold of around 4 GeV/c [@ATC].
Results
-------
Some of the results have already been published [@HEBAR; @PROT1; @PROT2; @LEPT; @HE]. The measured data are still under analysis however, and the physics issues addressed by the experiment are being actively investigated. Some of the latter are discussed in the following.
### Search for antimatter
The first stated objective of the experiment is the search for primordial antimatter in space. It was therefore very important to investigate the capability of the spectrometer to identify antiparticles with Z$\geq$2, and to identify and reject background events.\
$\bullet$ [**Antihelium [[@HEBAR]]{} -** ]{}
Figure \[AHE\] shows the spectral distribution of Z=2 particles as a function of their rigidity, i.e., momentum/charge, the sign of the charge being measured by the sign of the trajectory curvature in the tracker. Positive rigidities correspond to He nuclei, whereas antihelium nuclei are expected on the negative side. A few fake antihelium candidates, due to soft interactions in the detector, were rejected by means of appropriate cuts on the energy deposit in the tracker planes.
Finally the experiment has allowed a new upper limit to be set on the $\overline{He}$/$He$ fraction in cosmic rays, of $1.1\times 10^{-6}$. See [@BESS] for recent results from the BESS experiment.\
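The order of magnitude of such a limit can be recovered with elementary Poisson statistics. The sketch below is our own illustration: the helium event count used is a placeholder, and the published analysis additionally folds in rigidity-dependent acceptance and efficiencies.

```python
# Hedged sketch: upper limit on the antihelium fraction when zero
# antihelium candidates are observed among n_he detected helium events.
# For zero observed counts, the CL upper limit on the Poisson mean is
# mu_up = -ln(1 - CL); dividing by the helium count gives the ratio limit.
import math

def fraction_upper_limit(n_he, cl=0.95):
    return -math.log(1.0 - cl) / n_he

print(fraction_upper_limit(2.9e6))   # ~1e-6 for a few 10^6 He events
```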
$\bullet$ [**Antimatter nuclei Z$>$2 -** ]{} The particle identification capabilities of the spectrometer have been used to search also for antimatter nuclei with Z$>$2. This search has been negative so far. The limit obtained will be reported in a future publication.
### Protons [[@PROT1; @PROT2]]{}
The CR proton distribution was already very well known from previous experiments before the AMS flight. The measurements were intended to be used for checking and calibrating the experiment, no new result being expected. Figure \[PROTONS\] shows the kinetic energy distributions of incoming particles (towards earth) measured by AMS in bins of latitude. The spectra show some expected features like the power law decrease with energy. The geomagnetic cutoff (GC), due to the sweeping away of particles by the earth magnetic field below a critical momentum, is clearly observed in the spectra, decreasing from about 15 GeV around the equator down to zero in the polar region. The spectrum at high latitudes is in good agreement with previous measurements. Although no significant flux was expected below GC, a strong rise of the spectra at low energy is observed at all low latitudes, with a strong enhancement in the equatorial region. The albedo (outgoing particles) spectra at the same latitudes do not show, as expected, the high energy features due to the incoming CR flux, but they display one single component peaked at low energy and overlapping almost perfectly (to within 1%) with the low-energy component of the incoming flux. These features indicate that we are dealing with a population of trapped particles spiralling around earth magnetic field lines, exactly as in the Van Allen belts but at much higher energy and much closer to earth. This will be confirmed by the analysis reported below.
### Leptons [[@LEPT]]{}
The flux of leptons has been measured up to about 100 GeV for electrons. It was limited to about 3 GeV for positrons by the $p/e^+$ discrimination range set by the Cherenkov counter threshold for protons.
$\bullet$ [**Electrons**]{} The electron spectra show quite similar features to the proton spectra, with the low energy component of the downgoing flux and the upgoing flux almost perfectly overlapping in the equatorial region. In addition, these components of the lepton flux have, to within statistical errors, exactly the same shape as for protons, indicating that the particles are likely involved in the same dynamical process.
$\bullet$ [**Positrons**]{} The positron spectra are similar to the electrons’ over the range investigated. The surprising feature is that the positron to electron flux ratio is about 4 in the equatorial region, while in the cosmic flux it is about 0.1, and about one in the atmosphere. The origin of this feature is an open question which is being addressed by the groups of the collaboration.
Figure \[LEPTONS\] shows the distributions of electrons and positrons over the positron ID range in the equatorial region (left) and the e$^+$/e$^-$ ratio as a function of latitude.
### Ions
$\bullet$ [**Deuterium [[@DEUT]]{} -** ]{} The flux of deuterium has been measured and some preliminary results are available.
$\bullet$ [**Helium [[@HE]]{} -** ]{}
The measured flux of Helium is in agreement with previous measurements and doesn’t show a strong rise of flux below GC as the proton flux does. However, a small flux of $^3$He is found below GC, which probably originates, at least partly, from the fragmentation of cosmic $^4$He (figure \[HELIUMS\]). A consistent picture based on known nuclear reaction mechanisms is being investigated to account for these populations of light nuclei [@DERHE].
$\bullet$ [**Z$>$2 Nuclei -** ]{} Some significant samples of light ions with 2$<$Z$\lesssim$10 have been measured during this run. They are still being analyzed.
Origin of the measured proton flux [[@DER00]]{}
===============================================
Simulation program
------------------
The inclusive spectrum of protons at the altitude of AMS (390-400 km) has been calculated by means of a computer simulation program built for this purpose. CR particles are generated with their natural abundance and momentum distributions. They are propagated inside the earth magnetic field. Particles are allowed to interact with atmospheric nuclei and produce secondary protons with cross sections and multiplicities as discussed below. Each secondary proton is then propagated and allowed to collide as in the previous step. A reaction cascade can thus develop through the atmosphere. The reaction products are counted when they cross the virtual sphere at the altitude of the AMS spectrometer, upward and downward. Particles undergo energy loss by ionisation before and after the interaction. Multiple scattering effects have not been included at this stage. Each event is propagated until the particle disappears by either colliding with a nucleus, or being stopped in the atmosphere,
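The overall structure of such a program can be conveyed by a deliberately simplified, runnable toy (our own sketch, not the collaboration's code): a proton moves in a dipole geomagnetic field and crossings of the virtual detector sphere are counted. The field model, the crude Euler stepping and the loss criteria are placeholders for the detailed treatment described in the text.

```python
# Toy propagation loop: a relativistic proton in a dipole field, with
# crossings of a sphere at the detector altitude counted on the fly.
# (A production code would use a Boris or Runge-Kutta pusher and add
# interactions, secondary production, energy loss and a real field model.)
import numpy as np

RE = 6.371e6                      # Earth radius [m]
M_DIP = 8.0e15                    # dipole moment [T m^3], roughly Earth's
Q, M, C = 1.602e-19, 1.673e-27, 3.0e8

def b_dipole(r):
    rn = np.linalg.norm(r)
    m = np.array([0.0, 0.0, -M_DIP])
    return 3.0 * r * np.dot(m, r) / rn**5 - m / rn**3

def propagate(r, v, alt_det=380e3, dt=1e-4, steps=200_000):
    r_det, crossings = RE + alt_det, 0
    for _ in range(steps):
        gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
        v = v + dt * Q * np.cross(v, b_dipole(r)) / (gamma * M)  # Euler step
        r_new = r + dt * v
        if (np.linalg.norm(r) - r_det) * (np.linalg.norm(r_new) - r_det) < 0:
            crossings += 1        # crossed the virtual sphere (up or down)
        r = r_new
        if np.linalg.norm(r) < RE:
            break                 # "absorbed": stopped below the atmosphere
    return crossings

d = np.array([-1.0, 0.0, 0.2]); d /= np.linalg.norm(d)
print(propagate(np.array([2 * RE, 0.0, 0.0]), 0.95 * C * d))
```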
---
abstract: 'Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. We study the behavior of analytic structure in the boundary of $\Omega$ and obtain a compactness result for Hankel operators on the Bergman space of $\Omega$.'
address: 'Bowling Green State University, Department of Mathematics and Statistics, Bowling Green, Ohio 43403 '
author:
- 'Timothy G. Clos'
bibliography:
- 'rrrefs.bib'
title: Hankel Operators on the Bergman spaces of Reinhardt Domains and Foliations of Analytic Disks
---
Introduction
============
Let $\Omega\subset \mathbb{C}^n$ for $n\geq 2$ be a bounded domain. We let $dV$ be the (normalized) Lebesgue volume measure on $\Omega$. Then $L^2(\Omega)$ is the space of measurable, square integrable functions on $\Omega$. Let $\mathcal{O}_{\Omega}$ be the collection of all holomorphic (analytic) functions on $\Omega$. Then the Bergman space $A^2(\Omega):=\mathcal{O}_{\Omega}\cap L^2(\Omega)$ is a closed subspace of $L^2(\Omega)$, a Hilbert space. Therefore, there exists an orthogonal projection $P:L^2(\Omega)\rightarrow A^2(\Omega)$ called the Bergman projection. Then the Hankel operator with symbol $\phi\in L^{\infty}(\Omega)$ is defined as $$H_{\phi}f:=(I-P)(\phi f)$$ where $I$ is the identity operator and $f\in A^2(\Omega)$.
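To make the definitions concrete, the following numerical sketch (ours, carried out on the unit disk in $\mathbb{C}$ rather than the domains of this paper, simply because the monomial basis is explicit there) evaluates $H_{\phi}$ for $\phi(z)=\bar{z}$ against the orthonormal basis $e_n(z)=\sqrt{n+1}\,z^n$ of $A^2(\mathbb{D})$ and checks the exact value $\|H_{\phi}e_n\|^2=1/((n+1)(n+2))\to 0$, so this particular Hankel operator is compact.

```python
# Monte Carlo check of ||(I - P)(phi e_n)||^2 on the Bergman space of the
# unit disk with normalized area measure, for the symbol phi = conj(z).
import numpy as np

rng = np.random.default_rng(0)
N = 400_000
z = np.sqrt(rng.random(N)) * np.exp(2j * np.pi * rng.random(N))  # uniform in D

def e(n):
    return np.sqrt(n + 1) * z**n          # orthonormal monomial basis

def hankel_norm_sq(n, n_modes=40):
    f = np.conj(z) * e(n)                 # phi * e_n
    # Bergman projection P f: expand against enough basis vectors
    proj = sum(np.mean(f * np.conj(e(m))) * e(m) for m in range(n_modes))
    return np.mean(np.abs(f - proj) ** 2) # ||(I - P)(phi e_n)||^2

for n in range(5):
    print(n, hankel_norm_sq(n), 1 / ((n + 1) * (n + 2)))
```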
Previous Work
=============
Compactness of Hankel operators on the Bergman spaces of bounded domains, and its relationship with analytic structure in the boundary of these domains, is an ongoing research topic. In one complex dimension, Axler in [@Axler86] completely characterizes compactness of Hankel operators with conjugate holomorphic, $L^2$ symbols. There, the emphasis is on whether the symbol belongs to the little Bloch space. This requires that the derivative of the complex conjugate of the symbol satisfy a growth condition near the boundary of the domain.\
The situation is different in several variables for conjugate holomorphic symbols. In [@clos], the author completely characterizes compactness of Hankel operators with conjugate holomorphic symbols on convex Reinhardt domains in $\mathbb{C}^n$ if the boundary contains a certain class of analytic disks. The proof relied on using the analytic structure in the boundary to show that a compact Hankel operator with a conjugate holomorphic symbol must be the zero operator, assuming certain conditions on the boundary of the domain. In particular, the symbol is identically constant if certain conditions are satisfied. An example of a domain where these conditions are satisfied is the polydisk in $\mathbb{C}^n$ (as seen in [@Le10] and [@clos]).\
In [@CelZey] the authors studied the compactness of Hankel operators with symbols continuous up to the closure of bounded pseudoconvex domains via compactness multipliers. They showed if $\phi\in C(\overline{\Omega})$ is a compactness multiplier then $H_{\phi}$ is compact on $A^2(\Omega)$. The authors of [@CelZey] approached the problem using the compactness estimate machinery developed in [@StraubeBook].\
Hankel operators with symbols continuous up to the closure of the domain is also studied in [@CuckovicSahutoglu09] and [@ClosSahut]. The paper [@CuckovicSahutoglu09] considered Hankel operators with symbols that are $C^1$-smooth up to the closure of bounded convex domains in $\mathbb{C}^2$. The paper [@ClosSahut] considered symbols that are continuous up to the closure of bounded convex Reinhardt domains in $\mathbb{C}^2$. Thus the regularity of the symbol was reduced at the expense of a smaller class of domains.\
Many of these results characterize the compactness of these operators by the behavior of the symbol along analytic structure in the domain. For bounded pseudoconvex domains in $\mathbb{C}^n$, compactness of the $\overline{\partial}$-Neumann operator implies the compactness of Hankel operators with symbols continuous up to the closure of the domain. See [@FuSt] and [@StraubeBook] for more information on compactness of the $\overline{\partial}$-Neumann operator. For example the ball in $\mathbb{C}^n$ has compact $\overline{\partial}$-Neumann operator and hence any Hankel operator with symbol continuous up the closure of the ball is compact on the Bergman space of the ball. The compactness of the $\overline{\partial}$-Neumann operator on the ball in $\mathbb{C}^n$ follows from the convexity of the domain and absence of analytic structure in the boundary of the domain. See [@StraubeBook].\
As shown in [@dbaressential], the existence of analytic structure in the boundary of bounded convex domains is an impediment to the compactness of the $\overline{\partial}$-Neumann operator. It is therefore natural to ask whether the Hankel operator with symbol continuous up to the closure of the domain can be compact if the $\overline{\partial} $-Neumann operator is not compact. As we shall see, the answer is yes: on the polydisk in $\mathbb{C}^n$, [@Le10] showed that such Hankel operators can be compact despite the non-compactness of the $\overline{\partial}$-Neumann operator. For bounded convex domains in $\mathbb{C}^n$ for $n\geq 2$, relating the compactness of the Hankel operator with continuously differentiable symbols to the geometry of the boundary is well studied. See [@CuckovicSahutoglu09]. They give a more general characterization than [@Le10] for symbols that are $C^1$-smooth up to the closure of the domain. For symbols that are only continuous up to the closure of bounded convex Reinhardt domains in $\mathbb{C}^2$, there is a complete characterization in [@ClosSahut].\
The Main Result
===============
In this paper we investigate the compactness of Hankel operators on the Bergman spaces of smooth bounded pseudoconvex complete Reinhardt domains. These domains may not be convex as in [@ClosSahut] but are instead almost locally convexifiable. That is, for any $(p_1,p_2)\in b\Omega$ away from the coordinate axes, there exists $r>0$ so that $$B((p_1,p_2),r):=\{(z_1,z_2)\in \mathbb{C}^2: |z_1-p_1|^2+|z_2-p_2|^2<r^2\}$$ and a biholomorphism $T:B((p_1,p_2),r)\rightarrow \mathbb{C}^2$ so that $ B((p_1,p_2),r)\cap \Omega$ is a domain and $T(B((p_1,p_2),r)\cap \Omega)$ is convex. We will use this fact along with a result in [@CuckovicSahutoglu09] to localize the problem. We then analyze the geometry of the analytic structure in the resulting convex domain, and finally perform the analysis on the boundary of this convex domain, using the boundary geometry previously established, to prove the main result.\
\[thmmain\] Let $\Omega\subset\mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. Then $\phi\in C(\overline{\Omega})$ so that $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$ if and only if $H_{\phi}$ is compact on $A^2(\Omega)$.
We will assume $\phi\circ f$ is holomorphic for any holomorphic function $f:\mathbb{D}\rightarrow b\Omega$ and show that $H_{\phi}$ is compact on $A^2(\Omega)$, as the converse of this statement appears as a corollary in [@CCS].
Analytic structure in the boundary of pseudoconvex complete Reinhardt domains in $\mathbb{C}^2$
===============================================================================================
We will first investigate the geometry of non-degenerate analytic disks in the boundary of Reinhardt domains. We define the following collection for any bounded domain $\Omega\subset \mathbb{C}^n$. $$\Gamma_{\Omega}:=\overline{\bigcup_{f\in A(\mathbb{D})\cap C(\overline{\mathbb{D}})\, , f \,\text{non-constant}}\{f(\mathbb{D}) |f:\mathbb{D}\rightarrow b\Omega\}}$$
Let $\Omega\subset \mathbb{C}^n$ for $n\geq 2$ be a domain. We say $\Gamma\subset b\Omega$ is an analytic disk if there exists $F:\mathbb{D}\rightarrow \mathbb{C}^n$ so that every component function of $F$ is holomorphic on $\mathbb{D}$ and continuous up to the boundary of $\mathbb{D}$ and $F(\mathbb{D})=\Gamma$.\
One observation is for any Reinhardt domain $\Omega\subset \mathbb{C}^n$, if $F(\mathbb{D})\subset b\Omega$ is an analytic disk where $F(\zeta):=(F_1(\zeta), F_2(\zeta),...,F_n(\zeta))$, then for any $(\theta_1,\theta_2,...,\theta_n)\in \mathbb{R}^
---
abstract: 'We discuss the time evolution of quotations of stocks and commodities and show that corrections to the orthodox Bachelier model inspired by quantum mechanical time evolution of particles may be important. Our analysis shows that traders’ tactics can interfere as waves do and traders’ strategies can be reproduced from the corresponding Wigner functions. The proposed interpretation of the chaotic movement of market prices implies that the Bachelier behaviour follows from short-time interference of tactics adopted (paths followed) by the rest of the world considered as a single trader, and that the Ornstein-Uhlenbeck corrections to the Bachelier model should qualitatively matter only for large time scales. The famous Smithian invisible hand is interpreted as a short-time tactics of the whole market considered as a single opponent. We also propose a solution to the currency preference paradox.'
author:
- |
Edward W. Piotrowski\
Institute of Theoretical Physics, University of Białystok,\
Lipowa 41, Pl 15424 Białystok, Poland\
e-mail: <ep@alpha.uwb.edu.pl>\
Jan Sładkowski\
Institute of Physics, University of Silesia,\
Uniwersytecka 4, Pl 40007 Katowice, Poland\
e-mail: <sladk@us.edu.pl>
title: Quantum diffusion of prices and profits
---
Introduction
============
We have formulated a new approach to quantum game theory [@1]-[@3] that is suitable for description of market transactions in terms of supply and demand curves [@4]-[@8]. In this approach quantum strategies are vectors (called states) in some Hilbert space and can be interpreted as superpositions of trading decisions. Tactics or moves are performed by unitary transformations on vectors in the Hilbert space (states). The idea behind using quantum games is to explore the possibility of forming linear combinations of amplitudes that are complex Hilbert space vectors (interference, entanglement [@3]) whose squared absolute values give probabilities of players’ actions. It is generally assumed that a physical observable (e.g. energy, position), defined by the prescription for its measurement, is represented by a linear Hermitian operator. Any measurement of an observable produces an eigenvalue of the operator representing the observable with some probability. This probability is given by the squared modulus of the coordinate corresponding to this eigenvalue in the spectral decomposition of the state vector describing the system. This is often an advantage over the classical probabilistic description where one always deals directly with probabilities. The formalism has potential applications outside physical laboratories [@4]. Strategies, and not the apparatus or installation for actual playing, are at the very core of the approach. Spontaneous or institutionalized market transactions are described in terms of projective operations acting on Hilbert spaces of strategies of the traders. Quantum entanglement is necessary (non-trivial linear combinations of vectors-strategies have to be formed) to strike the balance of trade. This approach predicts the property of indivisibility of traders’ attention (no-cloning theorem) and unifies the English auction with the Vickrey one, attenuating the motivational properties of the latter [@5]. Quantum strategies create unique opportunities for making profits during intervals shorter than the characteristic thresholds for an effective market (Brownian motion) [@5]. On such a market, prices correspond to Rayleigh particles approaching an equilibrium state. Although the effective market hypothesis assumes immediate price reaction to new information concerning the market, the information flow rate is limited by physical laws such as the constancy of the speed of light. Entanglement of states allows one to apply quantum protocols of super-dense coding [@6] and to get ahead of the “classical trader”. Besides, a quantum version of the famous Zeno effect [@4] controls the process of reaching the equilibrium state by the market. Quantum arbitrage based on such phenomena seems to be feasible. Interception of profitable quantum strategies is forbidden by the impossibility of cloning of quantum states.
There are apparent analogies with quantum thermodynamics that allow one to interpret market equilibrium as a state with vanishing financial risk flow. Euphoria, panic or herd instinct often cause violent changes of market prices. Such phenomena can be described by non-commutative quantum mechanics. A simple tactic that maximizes the trader’s profit on an effective market follows from the model: [*accept profits equal to or greater than the one you have formerly achieved on average*]{} [@7].\
The player strategy $|\psi\rangle$[^1] belongs to some Hilbert space and has two important representations: $\langle q|\psi\rangle$ (demand representation) and $\langle p|\psi\rangle$ (supply representation), where $q$ and $p$ are logarithms of prices at which the player is buying or selling, respectively [@4; @8]. After consideration of the following facts:
- error theory: second moments of a random variable describe errors
- M. Markowitz’s portfolio theory
- L. Bachelier’s theory of options: the random variable $q^{2} + p^{2}$ measures the joint risk for a stock buying-selling transaction (and the Merton & Scholes works that earned them the Nobel Prize in 1997)
we have defined canonically conjugate Hermitian operators (observables) of demand $\mathcal{Q}_k$ and supply $\mathcal{P}_k$ corresponding to the variables $q$ and $p$ characterizing the strategy of the $k$-th player. This led us to the definition of the observable that we call [*the risk inclination operator*]{}: $$H(\mathcal{P}_k,\mathcal{Q}_k):=\frac{(\mathcal{P}_k-p_{k0})^2}{2\,m}+\frac{m\,\omega^2(\mathcal{Q}_k-q_{k0})^2}{2}\,,
\label{hamiltonian}$$ where $p_{k0}:=\frac{{}_k\langle\psi|\mathcal{P}_k|\psi\rangle_k}{{}_k\langle\psi|\psi\rangle_k}$, $q_{k0}:=\frac{{}_k\langle\psi|\mathcal{Q}_k|\psi\rangle_k}{{}_k\langle\psi|\psi\rangle_k}$ and $\omega:=\frac{2\pi}{\theta}$. Here $\theta$ denotes the characteristic time of a transaction [@7; @8], which is, roughly speaking, the average time spread between two opposite moves of a player (e.g. buying and selling the same commodity). The parameter $m>0$ measures the risk asymmetry between buying and selling positions. Analogies with the quantum harmonic oscillator allow for the following characterization of quantum market games. One can introduce the constant $h_E$ that describes the minimal inclination of the player to risk, $[\mathcal{P}_k,\mathcal{Q}_k]=\frac{i}{2\pi}h_E$. As the lowest eigenvalue of the positive definite operator $H$ is $\frac{1}{2}\frac{h_E}{2\pi}\omega$, $h_E$ is equal to the product of the lowest eigenvalue of $H(\mathcal{P}_k,\mathcal{Q}_k)$ and $2\theta$. $2\theta$ is in fact the minimal interval during which it makes sense to measure the profit. Let us consider a simple market with a single commodity $\mathfrak{G}$. A consumer (trader) who buys this commodity measures his/her profit in terms of the variable $\mathfrak{w}=-\mathfrak{q}$. The producer who provides the consumer with the commodity uses $\mathfrak{w}=-\mathfrak{p}$ to this end. Analogously, an auctioneer uses the variable $\mathfrak{w}=\mathfrak{q}$ (we neglect the additive or multiplicative constant brokerage) and a middleman who reduces the store and sells twice as much as he buys would use the variable $\mathfrak{w}=-2\,\mathfrak{p}-\mathfrak{q}$. Various subjects active on the market may manifest different levels of activity. Therefore it is useful to define a standard for the “canonical” variables $\mathfrak{p}$ and $\mathfrak{q}$ so that the risk variable [@8] takes the simple form $\tfrac{\mathfrak{p}^2}{2}+\tfrac{\mathfrak{q}^2}{2}$ and the variable $\mathfrak{w}$ measuring the profit of a concrete market subject dealing in the commodity $\mathfrak{G}$ is given by $$u\,\mathfrak{q}+v\,\mathfrak{p}+\mathfrak{w}(u,v)=0\,,
\label{rowprosrad}$$ where the parameters $u$ and $v$ describe the activity. The dealer can modify his/her strategy $|\psi\rangle$ to maximize the profit but this should be done within the specification characterized by $u$ and $v$. For example, let us consider a fundholder who restricts himself to purchasing realties. From his point of view, there is no need nor opportunity of modifying the supply representation of his strategy because this would not increase the financial gain from the purchases. One can easily show by recalling the explicit form of the probability amplitude $|\psi\rangle\negth
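A quick numerical confirmation of the oscillator analogy (our own sketch, in arbitrary units): discretizing the risk inclination operator on a grid, with the commutation relation above supplying an effective “Planck constant” $h_E/2\pi$, reproduces the spectrum $\frac{h_E}{2\pi}\,\omega\,(n+\tfrac{1}{2})$, whose lowest value is the one quoted above.

```python
# Finite-difference spectrum of H = P^2/(2m) + m w^2 Q^2 / 2 with
# [P, Q] = (i/2pi) h_E, i.e. heff = h_E/(2pi).  Values are illustrative.
import numpy as np

heff, m, w = 1.0, 1.0, 2 * np.pi           # h_E/(2pi), asymmetry, 2pi/theta
n, L = 1500, 10.0
q, dq = np.linspace(-L, L, n, retstep=True)

# kinetic term via central second differences
main = heff**2 / (m * dq**2) + 0.5 * m * w**2 * q**2
off = -heff**2 / (2 * m * dq**2) * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:4] / (heff * w))   # ~ [0.5, 1.5, 2.5, 3.5]
```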
---
abstract: 'In a recent paper, a new parametrization for the dark matter (DM) speed distribution $f(v)$ was proposed for use in the analysis of data from direct detection experiments. This parametrization involves expressing the *logarithm* of the speed distribution as a polynomial in the speed $v$. We present here a more detailed analysis of the properties of this parametrization. We show that the method leads to statistically unbiased mass reconstructions and exact coverage of credible intervals. The method performs well over a wide range of DM masses, even when finite energy resolution and backgrounds are taken into account. We also show how to select the appropriate number of basis functions for the parametrization. Finally, we look at how the speed distribution itself can be reconstructed, and how the method can be used to determine if the data are consistent with some test distribution. In summary, we show that this parametrization performs consistently well over a wide range of input parameters and over large numbers of statistical ensembles and can therefore reliably be used to reconstruct both the DM mass and speed distribution from direct detection data.'
author:
- 'Bradley J. Kavanagh'
bibliography:
- 'Model\_Indep.bib'
title: 'Parametrizing the local dark matter speed distribution: a detailed analysis'
---
Introduction
============
The dark matter (DM) paradigm has enjoyed much success in explaining a wide range of astronomical observations (for a review, see e.g. Ref. [@Bertone:2005]). As yet no conclusive evidence has been provided for the identity of particle DM, though there are a variety of candidates, including the supersymmetric neutralino [@Jungman:1996], sterile neutrinos [@Dodelson:1994], axions [@Duffy:2009] and the lightest Kaluza-Klein particle [@Kolb:1984]. Here, we focus on the search for particles which belong to the generic class of Weakly Interacting Massive Particles (WIMPs). Direct detection experiments [@Goodman:1985; @Drukier:1986] aim to measure the energies of nuclear recoils induced by WIMP DM in the Galactic halo. Under standard assumptions about the DM halo, this data can be used to extract the WIMP mass and interaction cross section, allowing us to check for consistency with other search channels (such as indirect detection [@Lavalle:2012] and collider experiments [@Battaglia:2010]) and to probe underlying models of DM.
Direct detection experiments are traditionally analyzed within the framework of the Standard Halo Model (SHM), in which WIMPs are assumed to have a Maxwell-Boltzmann speed distribution in the Galactic frame. The impact of uncertainties in the WIMP speed distribution has been much studied (see e.g. Refs. [@Green:2010; @Peter:2011; @Fairbairn:2012]), leading to the conclusion that such uncertainties may introduce a bias into any reconstruction of the WIMP mass from direct detection data. As yet the speed distribution is unknown, although a number of proposals have been put forward for its form, including analytic parametrizations (e.g. Ref. [@Lisanti:2010]) and distributions reconstructed from the potential of the Milky Way [@Bhattacharjee:2012] or from N-body simulations [@Vogelsberger:2009; @Kuhlen:2010; @Kuhlen:2012; @Mao:2012]. Recent results from N-body simulations which attempt to include the effects of baryons on structure formation also report the possible presence of a dark disk in the Milky Way [@Read:2009; @Read:2010; @Kuhlen:2013]. With such a wide range of possibilities, we should take an agnostic approach to the speed distribution, not only to avoid introducing bias into the analysis of data, but also with the hope of measuring the speed distribution and thereby probing the formation history of the Milky Way.
Several methods of evading these uncertainties have been proposed. These include simultaneously fitting the parameters of the SHM and dark matter properties [@Strigari:2009; @Peter:2009], fitting to empirical forms of the speed distribution (e.g. Ref. [@Pato:2011]) and fitting to a self-consistent distribution function [@Pato:2013]. However, these methods typically require that the speed distribution can be well fitted by a particular functional form. More model-independent methods, such as fitting the moments of the speed distribution [@Drees:2007; @Drees:2008] or using a step-function speed distribution [@Peter:2011], have also been presented. However, these methods can still introduce a bias into the measurement of the WIMP mass and perform less well with the inclusion of realistic experimental energy thresholds.
In a recent paper [@Kavanagh:2013a] (hereafter referred to as Paper 1), a new parametrization of the speed distribution was presented, which allowed the WIMP mass to be extracted from hypothetical direct detection data without prior knowledge of the speed distribution itself. Paper 1 demonstrated this for a WIMP of mass 50 GeV, using several underlying distribution functions. In the present paper, we extend this analysis to a wider range of masses. We also aim to demonstrate the statistical properties of the method and show how realistic experimental parameters affect its performance. Finally, we will also elaborate on some of the technical details of the method and assess its ability to reconstruct the underlying WIMP speed distribution.
Section \[sec:DDRate\] of this paper explains the direct detection event rate formalism and presents the parametrization of the speed distribution introduced in Paper 1. In Sec. \[sec:ParameterRecon\], the methodology for testing the parametrization is outlined. In Sec. \[sec:Parametrization\], we consider the choice and number of basis functions for the method. We then study the performance of the method as a function of input WIMP mass (Sec. \[sec:mass\]) and when Poisson fluctuations in the data are taken into account (Sec. \[sec:stats\]). In Sec. \[sec:Recon\], we demonstrate how the speed distribution can be extracted from this parametrization and examine whether or not different distribution functions can be distinguished. Finally, we summarize in Sec. \[sec:Conclusions\] the main results of this paper.
Direct detection event rate {#sec:DDRate}
===========================
Dark matter direct detection experiments aim to measure the energies $E$ of nuclear recoils induced by interactions with WIMPs in the Galactic halo. Calculation of the event rate at such detectors has been much studied (e.g. Refs. [@Goodman:1985; @Drukier:1986; @Lewin:1996; @Jungman:1996]). For a target nucleus with nucleon number $A$, interacting with a WIMP of mass $m_\chi$, the event rate per unit detector mass is given by: $$\label{eq:Rate}
\frac{\textrm{d}R}{\textrm{d}E} = \frac{\rho_0 \sigma_p}{2 m_\chi \mu_{\chi p}^2} A^2 F^2(E) \eta(v_\textrm{min})\,,$$ where $\rho_0$ is the local dark matter mass density, $\sigma_p$ is the WIMP-proton spin-independent cross section and the reduced mass is defined as $\mu_{A B} = m_A m_B/(m_A + m_B)$. The Helm form factor $F^2(E)$ [@Helm:1956] describes the loss of coherence of spin-independent scattering due to the finite size of the nucleus. A wide range of possible interactions have been considered in the literature, including inelastic [@Smith:2001], isospin-violating [@Kurylov:2003] and more general non-relativistic interactions [@Fan:2010; @Fitzpatrick:2012; @Fitzpatrick:2013]. We focus here on the impact of the WIMP speed distribution on the direct detection event rate. We therefore restrict ourselves to considering only spin-independent scattering, which is expected to dominate over the spin-dependent contribution for heavy nuclei, due to the $A^2$ enhancement in the rate.
Information about the WIMP velocity distribution $f(\textbf{v})$ is encoded in the function $\eta$, sometimes referred to as the mean inverse speed, $$\label{eq:eta}
\eta(v_\textrm{min}) = \int_{v > v_\textrm{min}} \frac{f(\textbf{v})}{v} \, \textrm{d}^3\textbf{v}\,,$$ where $\textbf{v}$ is the WIMP velocity in the reference frame of the detector. The integration is performed only over those WIMPs with sufficient speed to induce a nuclear recoil of energy $E$. The minimum required speed for a target nucleus of mass $m_N$ is $$\label{eq:v_min}
v_\textrm{min}(E) = \sqrt{\frac{m_N E}{2\mu_{\chi N}^2}}\,.$$
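A minimal numerical sketch of Eqs. (\[eq:Rate\])-(\[eq:v\_min\]) may help fix orders of magnitude (our own illustration; the Maxwellian halo parameters, the sharp escape-speed cut applied before the boost, and the xenon target are simplifying assumptions, not results of this paper):

```python
# v_min(E) for a xenon target and the mean inverse speed eta(v_min) for a
# Maxwell-Boltzmann halo boosted to the detector frame.  Units: km/s, keV.
import numpy as np

V0, VESC, VE = 220.0, 544.0, 232.0          # dispersion scale, escape, Earth

def v_min(E_keV, m_chi_GeV, A=131):
    m_N = 0.9315 * A                        # nuclear mass [GeV]
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)
    return 3e5 * np.sqrt(m_N * E_keV * 1e-6 / (2 * mu**2))   # c = 3e5 km/s

def eta(vmin, n=200_000):
    rng = np.random.default_rng(1)
    v = rng.normal(0.0, V0 / np.sqrt(2), size=(n, 3))        # galactic frame
    v = v[np.linalg.norm(v, axis=1) < VESC]                  # crude escape cut
    speed = np.linalg.norm(v + np.array([0.0, VE, 0.0]), axis=1)
    return np.mean((speed > vmin) / speed)                   # [(km/s)^-1]

vm = v_min(10.0, 50.0)                      # 10 keV recoil, 50 GeV WIMP
print(vm, eta(vm))
```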
We distinguish between the directionally averaged velocity distribution $$f(v) = \oint f(\textbf{v}) \, \textrm{d}\Omega_{\textbf{v}}\,,$$ and the 1-dimensional speed distribution $$f_1(v) = \oint f(\textbf{v}) v^2 \textrm{d}\Omega_{\textbf{v}}\,.$$ The distribution function should in principle be time-dependent, due to the motion of the Earth around the Sun. However, this is expected to be a percent-level effect (for a review, see e.g. Ref. [@Freese:2013]) and we therefore assume that
---
abstract: 'In contrast with the invertible setting, Anosov endomorphisms may have infinitely many unstable directions. Here we prove, under a transitivity assumption, that an Anosov endomorphism on a closed manifold $M$ is either special (that is, every $x \in M$ has only one unstable direction) or for a typical point in $M$ there are infinitely many unstable directions. Another result of this work is the semi-rigidity of the unstable Lyapunov exponent of a $C^{1+\alpha}$ codimension one Anosov endomorphism that is $C^1$ close to a linear endomorphism of $\mathbb{T}^n$ for $(n \geq 2).$ In the appendix we give a proof of ergodicity for $C^{1+\alpha}, \alpha > 0,$ conservative Anosov endomorphisms.'
address:
- 'Departamento de Matemática, IM-UFAL Maceió-AL, Brazil.'
- 'Departamento de Matemática, ICMC-USP São Carlos-SP, Brazil.'
author:
- 'F. Micena'
- 'A. Tahzibi'
title: On the Unstable Directions and Lyapunov Exponents of Anosov Endomorphisms
---
[^1]
Introduction {#section.preliminaries}
============
In the 1970s, the works [@PRZ] and [@MP] generalized the notion of Anosov diffeomorphism to non-invertible maps, introducing the notion of Anosov endomorphism. Throughout, we consider $M$ a $C^{\infty}$ closed manifold.
[@PRZ] \[defprz\] Let $f: M \rightarrow M$ be a $C^1$ local diffeomorphism. We say that $f$ is an Anosov endomorphism if there are constants $C> 0$ and $\lambda > 1$ such that, for every $f-$orbit $(x_n)_{n \in \mathbb{Z}},$ there is a splitting
$$T_{x_i} M = E^s_{x_i} \oplus E^u_{x_i}, \forall i \in \mathbb{Z},$$
which is preserved by $Df$ and for all $n > 0 $ we have
$$||Df^n(x_i) \cdot v|| \geq C^{-1} \lambda^n ||v||, \;\mbox{for every}\; v \in E^u_{x_i} \;\mbox{and for any} \; i \in \mathbb{Z}$$ $$||Df^n(x_i) \cdot v|| \leq C\lambda^{-n} ||v||, \;\mbox{for every}\; v \in E^s_{x_i} \;\mbox{and for any} \; i \in \mathbb{Z}.$$
Anosov endomorphisms can be defined in an equivalent way ([@MP]):
[@MP] \[defmp\] A $C^1$ local diffeomorphism $f: M \rightarrow M$ is said to be an Anosov endomorphism if $Df$ uniformly contracts a continuous sub-bundle $E^s \subset TM$ into itself, and the action of $Df$ on $TM/E^s$ is uniformly expanding.
Sakai, in [@SA], proved that Definitions $\ref{defprz}$ and $\ref{defmp}$ are in fact equivalent.
A contrast between Anosov diffeomorphisms and Anosov endomorphisms is the non-structural stability of the latter. Indeed, $C^1-$close to any linear Anosov endomorphism $A$ of the torus, Przytycki [@PRZ] constructed an Anosov endomorphism which has infinitely many unstable directions for some orbit, and consequently he showed that $A$ is not structurally stable. However, it is curious to observe that the topological entropy is locally constant among Anosov endomorphisms. Indeed, take the lift of an Anosov endomorphism to the inverse limit space (see preliminaries for the definition). At the level of the inverse limit space, two nearby Anosov endomorphisms are conjugate ([@PRZ], [@BerRov]), and lifting to the inverse limit space does not change the entropy.
Two endomorphisms (permitting singularities) $f_1, f_2$ are $C^1-$inverse limit conjugate if there exists a homeomorphism $h : M^{f_1} \rightarrow M^{f_2}$ such that $h \circ \tilde{f_1} = \tilde{f_2} \circ h,$ where $\tilde{f_i}$ are the lifts of $f_i$ to the orbit space (see preliminaries).
Denote by $p$ the natural projection $p: \overline{M} \rightarrow M,$ where $\overline{M}$ is the universal covering. Note that an unstable direction $E^u_{\overline{f}}(y)$ projects onto an unstable direction of $T_x M, x = p(y),$ following definition $\ref{defprz},$ that is, $Dp(y) \cdot (E^u_{\overline{f}}(y)) = E^u(\tilde{x}),$ where ${\tilde{x}}= p (\mathcal{O}(y)).$
\[propMP\][@MP] $f$ is an Anosov endomorphism of $M,$ if and only if, the lift $\overline{f}: \overline{M} \rightarrow \overline{M} $ is an Anosov diffeomorphism of $\overline{M},$ the universal cover of $M.$
An advantage to work with the latter definition is that in $\overline{M}$ we can construct invariant foliations $\mathcal{F}^s_{\overline{f}}, \mathcal{F}^u_{\overline{f}}.$
Given an Anosov endomorphism and ${\tilde{x}}= (x_n)_{n \in \mathbb{Z}}$ an $f-$orbit, we denote by $ E^u({\tilde{x}})$ the unstable bundle subspace of $T_{x_0}(M)$ corresponding to the orbit $(x_n)_{n \in \mathbb{Z}}.$ In [@PRZ] one constructs examples of Anosov endomorphisms such that $E^u({\tilde{x}}) \neq E^u (\tilde{y})$ when $x_0 = y_0$ but $(x_n)_n \neq (y_n)_n.$ In fact, it is possible that $x_0 \in M$ has uncountably many unstable directions, see [@PRZ]. An Anosov endomorphism for which $E^u({\tilde{x}})$ depends only on $x_0$ (a unique unstable direction for each point) is called a special Anosov endomorphism. A linear Anosov endomorphism of the torus is an example of a special Anosov endomorphism.
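As a concrete sanity check (our own sketch), the linear case can be verified directly: an integer matrix with $|\det|\geq 2$ and eigenvalues off the unit circle induces a genuinely non-invertible Anosov endomorphism of $\mathbb{T}^2$, and its unstable Lyapunov exponent is the logarithm of the expanding eigenvalue.

```python
# The matrix below has det = 2, so it induces a two-to-one covering map of
# T^2 (an endomorphism, not a diffeomorphism); its eigenvalues straddle
# the unit circle, giving the invariant splitting of the definitions
# above with direction fields independent of the orbit (hence "special").
import numpy as np

A = np.array([[3, 1],
              [1, 1]])

evals = np.linalg.eigvals(A)
assert round(abs(np.linalg.det(A))) == 2
assert (np.abs(evals) > 1).sum() == 1 and (np.abs(evals) < 1).sum() == 1

print("eigenvalues:", evals)                         # ~3.414 and ~0.586
print("unstable exponent:", np.log(np.abs(evals).max()))
```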
A natural question is whether it is possible to find an example of a (non-special) Anosov endomorphism such that every $x \in M$ has a finite number of unstable directions. It is also interesting to understand the structure of points with infinitely many unstable directions. For transitive Anosov endomorphisms we prove the following dichotomy:
\[teo1\] Let $f: M \rightarrow M$ be a transitive Anosov endomorphism, then:
1. Either $f$ is an special Anosov endomorphism,
2. Or there exists a residual subset $\mathcal{R} \subset M,$ such that for every $x \in \mathcal{R},$ $x$ has infinitely many unstable directions.
Observe that when $M$ is the torus $\mathbb{T}^n, n \geq 2,$ all Anosov endomorphisms of $\mathbb{T}^n$ are transitive, see [@AH].
Analysing the unstable Lyapunov exponents of Anosov endomorphisms, similarly to [@MT], we can prove:
\[teo2\] Let $A: \mathbb{T}^n \rightarrow \mathbb{T}^n, n \geq 2,$ be a linear Anosov endomorphism with $dim E^u_A = 1.$ Then there is a $C^1$ open set $\mathcal{U}$ containing $A$ such that for every $f \in \mathcal{U},$ a $C^{1 + \alpha}, \alpha> 0,$ conservative Anosov endomorphism, we have $\lambda^u_f(x) \leq \lambda^u(A)$ for $m-$almost every $x \in \mathbb{T}^n,$ where $m$ is the Lebesgue measure of $\mathbb{T}^n.$
To prove Theorem \[teo2\], the neighbourhood $\mathcal{U}$ can be chosen very small, so that every $f \in \mathcal{U}$ has its lift conjugated to $A$ in $\mathbb{R}^n.$ By this fact, we may assume a priori that $\dim E^u_f = 1$ as well.
General Preliminaries Results. {#section.preliminaries}
==============================
In this section we present some classical results on the theory of Anosov endomorphism, that will be important for our purposes for the rest of this work.
The Limit Inverse Space.
------------------------
Consider $(X,d) $ a compact metric space and $f: X \rightarrow X$ a continuous map, we define a new compact metric space
---
abstract: |
We identify general trends in the (in)civility and complexity of political discussions occurring on Reddit between January 2007 and May 2017 – a period spanning both terms of Barack Obama’s presidency and the first 100 days of Donald Trump’s presidency.
We then investigate four factors that are frequently hypothesized as having contributed to the declining quality of American political discourse – (1) the rising popularity of Donald Trump, (2) increasing polarization and negative partisanship, (3) the democratization of news media and the rise of fake news, and (4) merging of fringe groups into mainstream political discussions.
author:
- Rishab Nithyanand
- Brian Schaffner
- Phillipa Gill
bibliography:
- 'bibliography.bib'
title: Online Political Discourse in the Trump Era
---
Introduction
============
The 2016 election featured the two most disliked candidates in modern US presidential election history competing in the context of decades of increasing partisan polarization [@schaffner2017making]. In this paper we explore how online political discourse during the election differed from discourse occurring prior to it, in terms of incivility and linguistic complexity. We find that incivility in online political discourse, even in non-partisan forums, is at an all-time high and the linguistic complexity of discourse in partisan forums has declined from a seventh-grade level to a first-grade level (Sec. \[sec:discourse\]).
The election was noteworthy for the high levels of incivility and declining complexity of discourse among political elites, particularly Donald Trump [@schaffnertrump]. Research has shown that when people are exposed to incivility from political elites that they themselves will respond by using more offensive rhetoric [@gervais2014following; @kwon2017aggression]. We explore how Trump’s increasing popularity impacted the civility and complexity of discourse in partisan forums. Our work uncovers a strong correlation between Trump’s rise in popularity and the increasing incivility observed in Republican forums on Reddit ().
In many ways, the 2016 campaign was the logical culmination of two decades of affective polarization that witnessed Democrats and Republicans grow increasingly negative in their feelings about the opposing party. Political scientists have documented the increasing polarization among Americans for quite some time [@abramowitz2008polarization]; however, more recent work has emphasized the emotion-based (affective) nature of this polarization. Drawing on social identity theory [@tajfel1979integrative], studies have found that one of the defining features of partisan polarization is the increasingly negative feelings that members of one party have for the other party [@iyengar2012affect]. We measure the incidence of negative partisanship in political forums and find a strong correlation with incivility, supporting the theory that partisan identity leads people to experience emotions of both enthusiasm and anger [@mason2016cross; @huddy2015expressive]. Anger, in particular, is likely to give rise to incivility due to its ability to motivate political action [@groenendyk2014emotional; @valentino2011election; @huddy2015expressive]. Thus as Americans experience political anger more frequently they are likely to be motivated to go online to engage in political discussions [@ryan2012makes]. While we see that the 2016 election was not very dissimilar to 2012 (in terms of incidence of negative partisanship), we find that negative partisanship has shown an upward trend even after inauguration day (unlike 2012). We also find that hatred towards political entities of both parties was at an all-time high during the 2016 elections, reinforcing the theory that 2016 was the ideal year for a non-establishment candidate ().
The 2016 campaign also witnessed unprecedented rhetoric from a major presidential candidate regarding the credibility of the news media. Additionally, during this time, public distrust of and anger at the political establishment and traditional news media was at an all-time high [@gallup-media]. Taken together, these conditions can lead individuals to engage in partisan motivated reasoning [@weeks2015emotions], which can fuel the spread and belief of “fake news”. We explore how frequently misinformation was shared and discussed online. We find that during the elections, Republican forums shared and discussed articles from outlets known to spread conspiracy theories, heavily biased news, and fake news at a rate 16 times higher than prior to the election – and higher than at any other time in the past decade. Our study shows that this misinformation fuels the uncivil nature of discourse ().
The racism (Trump’s statements concerning Mexicans, Muslims, and other broad groups), sexism (the Access Hollywood recordings), and general incivility exhibited by the Trump campaign did not have any significant impact on his presidential run. In fact, recent events ([*e.g.,* ]{}Charlottesville and other Unite the Right rallies) have shown that these actions have emboldened and brought fringe groups into the mainstream. We investigate partisan forums and find a significant overlap between participants in mainstream Republican and extremist forums. We uncover a strong correlation between the rise in offensive discourse and discourse participation from extremists ().
Reddit and the Reddit Dataset
=============================
Reddit is the fourth most visited site in the United States and ninth most visited site in the world [@Alexa-Reddit]. At a high level, Reddit is a social platform which enables its users to post content to individual forums called *subreddits*. Reddit democratizes the creation and moderation of these subreddits – [*i.e.,* ]{}any user may create a new subreddit and most content moderation decisions are left to moderators chosen by the individual subreddit. Subscribers of a subreddit are allowed to up-vote and down-vote posts made by other users. These votes determine which posts are visible on the front page of the subreddit (and even the front page of Reddit). Reddit also allows its users to discuss and have conversations about each post through the use of comments. Specifically, subscribers of a subreddit can make and also reply to comments on posts made within the subreddit. Like posts, the comments may also be up-voted and down-voted. These votes determine which comments are visible to users reading the discussion.
Reddit is an attractive platform for analyzing political behaviour for three main reasons: First, the democratization of content moderation and discussion combined with the ability of participants to use pseudonymous identities has resulted in a strong online disinhibition effect and free-speech culture on Reddit [@Reddit-Freespeech]. This is unlike Facebook which has stronger moderation policies and requires accounts to register with their email addresses and real names (although the enforcement of both are questionable). Second, Reddit enables users to participate in long conversations and complex discussions which are not limited by length. This is unlike Twitter which limits posts and replies to 280 characters (prior to Sep 26, 2017 this limit was 140 characters [@TweetLength]). Finally, Reddit allows scraping of its content and discussions. This has enabled the community to build a dataset [^1] including every comment and post made since the site was made public in 2005.
As of October 2017, the Reddit dataset includes a total of 3.5 billion comments from 25.3 million authors made on 398 million posts. We categorize the posts and comments in the dataset into two categories: political and non-political. Posts and comments made in subreddits categorized by *r/politics* moderators as “related” subreddits [^2] are tagged as political. We also tag the subreddits dedicated to all past Democratic, Libertarian, and Republican presidential candidates as political. All other subreddits are tagged as non-political. In total our political dataset contained comments and posts from 124 subreddits – each individually categorized as general-interest, democratic, libertarian, republican, international, or election-related. In our study we focus on comments and posts made between December 1$^{st}$, 2005 and May 1$^{st}$, 2017 – 100 days into Donald Trump’s presidency. We analyze every comment and post made in our set of political subreddits during this period – 130 million comments in 3 million posts – and contrast these with a random (10%) sample of non-political comments made during the same period – a total of 332 million comments in 12 million posts. Figure \[fig:comments-analyzed\] shows the number of political and non-political comments analyzed during each month from December 2005 to May 2017. It should be noted that the first political subreddit appeared only in January 2007 – therefore we have no political content to analyze before this period.
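A schematic version of this tagging step might look as follows (our sketch, not the authors' pipeline; the subreddit names shown and the one-JSON-object-per-line layout are assumptions about the public dump):

```python
# Stream the Reddit comment dump, tag comments using the curated subreddit
# list, and keep a 10% sample of the non-political remainder.
import json, random

POLITICAL = {"politics", "democrats", "Republican", "Libertarian",
             "hillaryclinton", "The_Donald"}   # stand-in for the 124 subs

def stream(path, sample_rate=0.10, seed=0):
    rng = random.Random(seed)
    with open(path) as fh:
        for line in fh:
            c = json.loads(line)               # one comment per line
            if c["subreddit"] in POLITICAL:
                yield "political", c
            elif rng.random() < sample_rate:
                yield "non-political", c
```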
![**(log-scale)** Number of comments analyzed during each month from December 2005 to June 2017. For each election year, P indicates the start of the primaries, R/DNom indicates the month when the Republican/Democrat candidate became the presumptive nominee, R/DNC indicates the month of the Republican/Democratic National Conventions, E indicates the election month, and I indicates the Presidential Inauguration.[]{data-label="fig:comments-analyzed"}](./figures/comments-analyzed-logscale.png){width=".49\textwidth"}
Civility and Complexity of Discourse {#sec:discourse}
====================================
In order to understand how online political discourse has evolved, we focus on two concepts: (in)civility and complexity of discourse.
Incivility in political discourse
---------------------------------
We use the prevalence of offensive speech in political discussions on Reddit as a metric for incivility. Previous work [@mutz2006hearing] has defined uncivil discourse as “communication that violates the norms of politeness” – a definition that clearly includes offensive speech.
[**Identifying offensive speech.** ]{} In order to identify if a Reddit comment contains offensive speech, we make use of the offensive speech classifier proposed by Nithyanand [*et al.*]{}[@Nithyan
---
abstract: 'The method of fusion barrier distribution has been widely used to interpret the effect of nuclear structure on heavy-ion fusion reactions around the Coulomb barrier. We discuss a similar, but less well known, barrier distribution extracted from large-angle quasi-elastic scattering. We argue that this method has several advantages over the fusion barrier distribution, and offers an interesting tool for investigating unstable nuclei.'
author:
- 'K. Hagino'
- 'N. Rowley'
title: ' Quasi-elastic barrier distribution as a tool for investigating unstable nuclei'
---
Introduction
============
It has been well recognized that heavy-ion collisions at energies around the Coulomb barrier are strongly affected by the internal structure of the colliding nuclei [@DHRS98; @BT98]. The couplings of the relative motion to the intrinsic degrees of freedom (such as collective inelastic excitations of the colliding nuclei and/or transfer processes) result in a single potential barrier being replaced by a number of distributed barriers. It is now well known that a barrier distribution can be extracted experimentally from the fusion excitation function $\sigma_{\rm fus}(E)$ by taking the second derivative of the product $E\sigma_{\rm fus}(E)$ with respect to the center-of-mass energy $E$, that is, $d^2(E\sigma_{\rm fus})/dE^2$ [@RSS91]. The extracted fusion barrier distributions have been found to be very sensitive to the structure of the colliding nuclei [@DHRS98; @L95], and thus the barrier distribution method has opened up the possibility of exploiting the heavy-ion fusion reaction as a “quantum tunneling microscope” in order to investigate both the static and dynamical properties of atomic nuclei.
The same barrier distribution interpretation can be applied to the scattering process as well. In particular, it was suggested in Ref. [@ARN88] that the same information as the fusion cross section may be obtained from the cross section for quasi-elastic scattering (a sum of elastic, inelastic, and transfer cross sections) at large angles. Timmers [*et al.*]{} proposed to use the first derivative of the ratio of the quasi-elastic cross section $\sigma_{\rm qel}$ to the Rutherford cross section $\sigma_R$ with respect to energy, $-d (d\sigma_{\rm qel}/d\sigma_R)/dE$, as an alternative representation of the barrier distribution [@TLD95]. Their experimental data have revealed that the quasi-elastic barrier distribution is indeed similar to that for fusion, although the former may be somewhat smeared and thus less sensitive to nuclear structure effects (see also Refs.[@PKP02; @MSS03; @SMO02] for recent measurements). As an example, we show in Fig. 1 a comparison between the fusion and the quasi-elastic barrier distributions for the $^{16}$O + $^{154}$Sm system [@HR04].

In this contribution, we undertake a detailed discussion of the properties of the quasi-elastic barrier distribution [@HR04], which are less well known than those of the fusion counterpart. We shall discuss possible advantages of its exploitation, putting a particular emphasis on future experiments with radioactive beams.
Quasi-elastic barrier distributions
===================================
Let us first discuss heavy-ion reactions between inert nuclei. The classical fusion cross section is given by, $$\sigma^{cl}_{\rm fus}(E)=\pi
R_b^2\left(1-\frac{B}{E}\right)\,\theta(E-B),$$ where $R_b$ and $B$ are the barrier position and the barrier height, respectively. From this expression, it is clear that the first derivative of $E\sigma^{cl}_{\rm fus}$ is proportional to the classical penetrability for a 1-dimensional barrier of height $B$ or eqivalently the s-wave penetrability, $$\frac{d}{dE}[E\sigma^{cl}_{\rm fus}(E)]=\pi R_b^2\,\theta(E-B)
=\pi R_b^2\,P_{cl}(E),$$ and the second derivative to a delta function, $$\frac{d^2}{dE^2}[E\sigma^{cl}_{\rm fus}(E)]=\pi R_b^2\,\delta(E-B).
\label{clfus}$$ In quantum mechanics, the tunneling effect smears the delta function in Eq. (\[clfus\]). If we define the fusion test function as $$G_{\rm fus}(E)=\frac{1}{\pi R_b^2}\frac{d^2}{dE^2}
[E\sigma_{\rm fus}(E)],$$ this function has the following properties: i) it is symmetric around $E=B$, ii) it is centered on $E=B$, iii) its integral over $E$ is unity, and iv) it has a relatively narrow width of around $\hbar\Omega\ln(3+\sqrt{8})/\pi \sim 0.56 \hbar\Omega$, where $\hbar\Omega$ is the curvature of the Coulomb barrier.
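The width quoted in property iv) is easy to check numerically. The following Python sketch (our illustration, not taken from Ref. [@HR04]; the values of $B$ and $\hbar\Omega$ are assumed) constructs $G_{\rm fus}(E)$ from the parabolic-barrier $s$-wave penetrability, for which $d(E\sigma_{\rm fus})/dE=\pi R_b^2 P(E)$, and verifies properties iii) and iv):

```python
import numpy as np

# Hill-Wheeler penetrability of a parabolic barrier (height B, curvature
# hbar*Omega); for a single barrier the test function is G_fus(E) = dP/dE.
B, hbar_Omega = 60.0, 4.0                       # assumed values, in MeV
E = np.linspace(B - 20.0, B + 20.0, 40001)
P = 1.0 / (1.0 + np.exp(-2.0 * np.pi * (E - B) / hbar_Omega))
G = np.gradient(P, E)                           # fusion test function

half = G.max() / 2.0
inside = E[G >= half]
print((inside[-1] - inside[0]) / hbar_Omega)    # ~0.561, property iv)
print(np.log(3.0 + np.sqrt(8.0)) / np.pi)       # 0.5611... for comparison
print(np.trapz(G, E))                           # ~1, property iii)
```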
We next ask ourselves the question of how best to define a similar test function for a scattering problem. In the purely classical approach, in the limit of a strong Coulomb field, the differential cross section for elastic scattering at $\theta=\pi$ is given by, $$\sigma_{\rm el}^{cl}(E,\pi)=\sigma_R(E,\pi)\,\theta(B-E),$$ where $\sigma_R(E,\pi)$ is the Rutherford cross section. Thus, the ratio $\sigma_{\rm el}^{cl}(E,\pi)/\sigma_R(E,\pi)$ is the classical reflection probability $R(E)$ ($=1-P(E)$), and the appropriate test function for scattering is [@TLD95], $$G_{\rm qel}(E)=-\frac{dR(E)}{dE}
=-\frac{d}{dE}\left(\frac{\sigma_{\rm el}(E,\pi)}{\sigma_R(E,\pi)}\right).
\label{qeltest}$$ In realistic systems, due to the effect of nuclear distortion, the differential cross section deviates from the Rutherford cross section even at energies below the barrier. Using the semi-classical perturbation theory, we have derived a semi-classical formula for the backward scattering which takes into account the nuclear effect to the leading order [@HR04]. The result for a scattering angle $\theta$ reads, $$\frac{\sigma_{\rm el}(E,\theta)}{\sigma_R(E,\theta)}
=\alpha(E,\lambda_c)\cdot |S(E,\lambda_c)|^2,
\label{ratio}$$ where $S(E,\lambda_c)$ is the total (Coulomb + nuclear) $S$-matrix at energy $E$ and angular momentum $\lambda_c = \eta\cot(\theta/2)$, with $\eta$ being the usual Sommerfeld parameter. Note that $|S(E,\lambda_c)|^2$ is nothing but the reflection probability of the Coulomb barrier, $R(E)$. For $\theta=\pi$, $\lambda_c$ is zero, and $|S(E,\lambda_c=0)|^2$ is given by $$|S(E,\lambda_c=0)|^2 = R(E) =
\frac{\exp\left[-\frac{2\pi}{\hbar\Omega}(E-B)\right]}
{1+\exp\left[-\frac{2\pi}{\hbar\Omega}(E-B)\right]}$$ in the parabolic approximation. $\alpha(E,\lambda_c)$ in Eq. (\[ratio\]) is given by $$\begin{aligned}
\alpha(E,\lambda_c)&=&1+\frac{V_N(r_c)}{ka}\,
\frac{\sqrt{2a\pi k\eta}}{E}\,\\
&\times&
\left[1-\frac{r_c}{Z_PZ_Te^2}\cdot
2V_N(r_c)
\left(\frac{r_c}{a}-1\right)\right],\end{aligned}$$ where $k=\sqrt{2\mu E/\hbar^2}$, with $\mu$ being the reduced mass for the colliding system. The nuclear potential $V_N(r_c)$ is evaluated at the Coulomb turning point $r_c=(\eta+\sqrt{\eta^2+\lambda_c^2})/k$, and $a$ is the diffuseness parameter in the nuclear potential.
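For a rough numerical feel of Eq. (\[ratio\]) at $\theta=\pi$ (so that $\lambda_c=0$ and $r_c=2\eta/k$), the sketch below evaluates the ratio and the corresponding test function $G_{\rm qel}(E)$ for the $^{16}$O + $^{154}$Sm system; the Woods–Saxon parameters and the barrier height and curvature are assumed for illustration only and are not taken from the text:

```python
import numpy as np

hbarc, e2 = 197.327, 1.44              # MeV fm; e^2 = alpha*hbar*c
Zp, Zt = 8, 62                         # 16O + 154Sm
mu = 14.49 * 931.494                   # reduced mass (MeV/c^2)
B, hbar_Omega = 59.0, 4.5              # assumed barrier parameters (MeV)
V0, R0, a = 105.0, 9.6, 0.63           # assumed Woods-Saxon parameters

def VN(r):                             # nuclear potential at radius r (fm)
    return -V0 / (1.0 + np.exp((r - R0) / a))

E = np.linspace(50.0, 70.0, 2001)      # c.m. energy (MeV)
k = np.sqrt(2.0 * mu * E) / hbarc      # wave number (1/fm)
eta = Zp * Zt * e2 * k / (2.0 * E)     # Sommerfeld parameter
rc = 2.0 * eta / k                     # Coulomb turning point at lambda_c = 0

x = np.exp(-2.0 * np.pi * (E - B) / hbar_Omega)
S2 = x / (1.0 + x)                     # |S|^2 = R(E) in the parabolic approx.
alpha = 1.0 + VN(rc) / (k * a) * np.sqrt(2.0 * a * np.pi * k * eta) / E * \
        (1.0 - rc / (Zp * Zt * e2) * 2.0 * VN(rc) * (rc / a - 1.0))
ratio = alpha * S2                     # sigma_el / sigma_R at theta = pi
G_qel = -np.gradient(ratio, E)         # quasi-elastic test function
```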
![ The ratio of elastic scattering to the Rutherford cross section at $\theta=\pi$ (upper panel) and the quasi-elastic test function $G_{\rm qel}(E)=-d/dE (\sigma_{\rm el}/\sigma_R)$ (lower panel).]
---
abstract: 'Given a directed graph $D=(N,A)$ and a sequence of positive integers $1 \leq c_1< c_2< \dots < c_m \leq |N|$, we consider those path and cycle polytopes that are defined as the convex hulls of simple paths and cycles of $D$ of cardinality $c_p$ for some $p \in \{1,\dots,m\}$, respectively. We present integer characterizations of these polytopes by facet defining linear inequalities for which the separation problem can be solved in polynomial time. These inequalities can simply be transformed into inequalities that characterize the integer points of the undirected counterparts of cardinality constrained path and cycle polytopes. Beyond that, we investigate some further inequalities, in particular inequalities that are specific to odd/even paths and cycles.'
author:
- Volker Kaibel and Rüdiger Stephan
title: On cardinality constrained cycle and path polytopes
---
Introduction
============
Let $D=(N,A)$ be a directed graph on $n$ nodes that has neither loops nor parallel arcs, and let $c =
(c_1,\dots,c_m)$ be a nonempty sequence of integers such that $1 \leq
c_1 < c_2 < \dots < c_m \leq n$ holds. Such a sequence is called a *cardinality sequence*. For two different nodes $s,t \in N$, the *cardinality constrained (s,t)-path polytope*, denoted by ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$, is the convex hull of the incidence vectors of simple directed $(s,t)$-paths $P$ such that $|P| = c_p$ holds for some $p \in \{1,\dots,m\}$. The *cardinality constrained cycle polytope* ${P_C^{c}(D)}$, similarly defined, is the convex hull of the incidence vectors of simple directed cycles $C$ with $|C|=c_p$ for some $p$. Note that, since $D$ does not have loops, we may assume $c_1 \geq 2$ when we investigate cycle polytopes. The undirected counterparts of these polytopes are defined similarly. We denote them by ${P_{s,t-\mbox{\scriptsize path}}^{c}(G)}$ and ${P_C^{c}(G)}$, where $G$ is an undirected graph. The associated polytopes without cardinality restrictions we denote by $P_{s,t-\mbox{\scriptsize
path}}(D)$, $P_{s,t-\mbox{\scriptsize path}}(G)$, $P_C(D)$, and $P_C(G)$.
Cycle and path polytopes, with and without cardinality restrictions, defined on graphs or digraphs, are already well studied. For a literature survey on these polytopes see Table 1.
---------------------------------------------- -------------------------------------------------------------
Schrijver [@Schrijver2003], chapter 13: dominant of $P_{s,t-\mbox{\scriptsize path}}(D)$
Stephan [@Stephan]: $P_{s,t-\mbox{\scriptsize path}}^{(k)}(D)$
Dahl, Gouveia [@DG]: $P_{s,t-\mbox{\scriptsize path}}^{\leq
k}(D):= P_{s,t-\mbox{\scriptsize path}}^{(1,\dots,k)}(D)$
Dahl, Realfsen [@DR]: $P_{s,t-\mbox{\scriptsize path}}^{\leq
k}(D)$, $D$ acyclic
Nguyen [@Nguyen]: dominant of $P_{s,t-\mbox{\scriptsize path}}^{\leq
k}(G)$
Balas, Oosten [@BO]: directed cycle polytope $P_C(D)$
Balas, Stephan [@BST]: dominant of $P_C(D)$
Coullard, Pulleyblank [@CP], Bauer [@Bauer]: undirected cycle polytope $P_C(G)$
Hartmann, Özlük [@HO]: $P_C^{(k)}(D)$
Maurras, Nguyen [@MN1; @MN2]: $P_C^{(k)}(G)$
Bauer, Savelsbergh, Linderoth [@BLS]: $P_C^{\leq k}(G)$
---------------------------------------------- -------------------------------------------------------------
: **Literature survey on path and cycle polyhedra**
Those publications that treat cardinality restrictions discuss only the cases $\leq k$ or $= k$, while we address the general case. In particular, we assume $m \geq 2$. The main contribution of this paper will be the presentation of IP-models (or IP-formulations) for cardinality constrained path and cycle polytopes whose inequalities generally define facets with respect to complete graphs and digraphs. Moreover, the associated separation problem can be solved in polynomial time.
The basic idea of this paper can be presented best for cycle polytopes. Given a finite set $B$ and a cardinality sequence $b=(b_1,\dots,b_m)$, the set ${\mbox{CHS}}^{b}(B):=\{F \subseteq B : |F|=b_p \mbox{ for some
} p\}$ is called a *cardinality homogeneous set system*. Clearly, $P_C^c(D) = \mbox{conv} \{\chi^C \in \mathbb{R}^A\;|\;
C \mbox{ simple cycle}, \, C \in CHS^c(A)\}$, where $CHS^{c}(A)$ is the cardinality homogeneous set system defined on the arc set $A$ of $D$. According to Balas and Oosten [@BO], the integer points of the cycle polytope $P_C(D)$ can be characterized by the system $$\label{model1}
\begin{array}{rcll}
x(\delta^{\mbox{\scriptsize out}}(i))- x(\delta^{\mbox{\scriptsize in}}(i)) & = & 0 & \mbox{for all } i \in N,\\
x(\delta^{\mbox{\scriptsize out}}(i)) & \leq & 1 & \mbox{for all } i \in N,\\
- x((S:N \setminus S)) + x(\delta^{\mbox{\scriptsize out}}(i))+ x(\delta^{\mbox{\scriptsize out}}(j)) & \leq & 1&
\mbox{for all }
S \subset N,\\
& & & 2 \leq |S| \leq n-2,\\
& & & i \in S, j \in N \setminus S,\\
x(A) & \geq & 2,\\
\multicolumn{3}{r}{x_{ij} \in \{0,1\}} & \mbox{for all } (i,j) \in A.
\end{array}$$ Here, $\delta^{\mbox{\scriptsize out}}(i)$ and $\delta^{\mbox{\scriptsize in}}(i)$ denote the set of arcs leaving and entering node $i$, respectively; for an arc set $F \subseteq A$ we set $x(F):=\sum_{(i,j) \in F} x_{ij}$; for any subsets $S,T$ of $N$, $(S:T)$ denotes $\{(i,j) \in A| i \in S, j \in T\}$. Moreover, for any $S \subseteq N$, we denote by $A(S)$ the subset of arcs whose both endnodes are in $S$.
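As a quick sanity check of model (\[model1\]) (our illustration, not part of the original text), the incidence vector of a simple directed cycle on the complete digraph can be tested against the flow equations, the degree constraints, the cardinality bound and one representative cut inequality:

```python
# Verify the Balas-Oosten constraints for a 3-cycle in the complete digraph.
n = 5
N = range(n)
A = [(i, j) for i in N for j in N if i != j]
cycle = [(0, 1), (1, 2), (2, 0)]
x = {arc: int(arc in cycle) for arc in A}

out = lambda i: sum(x[i, j] for j in N if j != i)   # x(delta_out(i))
inn = lambda i: sum(x[j, i] for j in N if j != i)   # x(delta_in(i))

assert all(out(i) == inn(i) for i in N)             # flow conservation
assert all(out(i) <= 1 for i in N)                  # degree constraints
assert sum(x.values()) >= 2                         # x(A) >= 2

S = {0, 1}                                          # 2 <= |S| <= n - 2
cut = sum(x[u, v] for u in S for v in N if v not in S)
assert -cut + out(0) + out(3) <= 1                  # i = 0 in S, j = 3 outside
```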
Grötschel [@Groetschel] presented a complete linear description of a cardinality homogeneous set system. For $CHS^{c}(A)$, the model reads: $$\label{model2}
\begin{array}{l}
\hspace{3.7cm}
\begin{array}{rcccll}
0 & \leq & x_{ij}& \leq & 1 & \mbox{for all } (i,j) \in A,\\
c_1 & \leq & x(A) & \leq & c_m,
\end{array} \\
\\
(c_{p+1} - |F|) \; x(F)-(|F| - c_p) \; x(A \setminus F) \leq
c_p(c_{p+1}-|F|) \\
\hspace{1cm} \mbox{for all } F \subseteq A \mbox{ with } c_p < |F| <
c_{p+1} \mbox{ for some } p \in \{1,\dots,m-1\}.
\end{array}$$ The *cardinality bounds* $c_1 \leq x(A) \leq c_m$ exclude all subsets of $A$ whose cardinalities are out of the bounds $c_1$ and $c_m$, while the latter class of inequalities of model (\[model2\]), which are called *cardinality forcing inequalities*, cut off all arc sets $F \subseteq A$ of forbidden cardinality between the bounds, since for each such $F$, the cardinality forcing inequality associated with $F$ is violated by $\chi^F$: $$(c_{p+1}-|F|) \chi^F(F)-(|F|-c_p)\chi^F(A \setminus F)=
|F|(c_{p+1}-|F|) > c_p(c_{p+1}-|F|),$$ since $\chi^F(F)=|F|$, $\chi^F(A \setminus F)=0$ and $c_p < |F| < c_{p+1}$.
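A tiny numerical instance (with values assumed purely for illustration) makes the violation explicit:

```python
# Cardinality sequence c = (2, 5, 9); F has the forbidden size 4 with
# c_1 = 2 < |F| = 4 < c_2 = 5, so take the bracketing pair (c_p, c_{p+1}) = (2, 5).
cp, cp1, F_size = 2, 5, 4

lhs = (cp1 - F_size) * F_size - (F_size - cp) * 0   # chi^F(F)=|F|, chi^F(A\F)=0
rhs = cp * (cp1 - F_size)
print(lhs, rhs, lhs > rhs)                          # 4 2 True: chi^F is cut off
```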
---
abstract: 'We propose a method to improve image clustering using sparse text and the wisdom of the crowds. In particular, we present a method to fuse two different kinds of document features, image and text features, and use a common dictionary or “wisdom of the crowds” as the connection between the two different kinds of documents. With the proposed fusion matrix, we use topic modeling via non-negative matrix factorization to cluster documents.'
author:
-
-
-
-
bibliography:
- 'asilomar\_imgtxt.bib'
title: Improving Image Clustering using Sparse Text and the Wisdom of the Crowds
---
Introduction
============
There has been substantial research in organizing large image databases. Often, these images have corresponding text, such as captions in textbooks and metatags. We investigate strategies to use this text information to improve clustering image documents into groups of similar images. Image clustering is used for image database management, content-based image searches, and image classification. In this paper, we present a method for improving image clusters using sparse text and freely obtainable information from the internet. The motivation behind our method stems from the idea that we can fuse image and text documents and use the “wisdom of the crowds” (WOC), the freely obtainable information, to connect the sparse text documents, where each WOC document acts as a representative of a single class.
In Section 2, we briefly touch upon related material. In Section 3, we introduce our method of fusing text and image documents using the term frequency-inverse document frequency weighting scheme. We then describe how non-negative matrix factorization is used for the purpose of topic modeling in Section 4. In Section 5, we present results from an application of our method.
Related Works
=============
There have been many studies on text document clustering and image clustering. A general joint image and text clustering strategy proceeds in two steps: first, the two different types of documents must be combined into a single document feature matrix; then, a clustering technique is implemented.
The term frequency-inverse document frequency (TF-IDF) is a technique to create a feature matrix from a collection, or corpus, of documents. TF-IDF is a weighting scheme that weighs features in documents based on how often a word occurs in an individual document compared with how often it occurs in other documents [@tf-idf_original]. TF-IDF has been used for text mining, near duplicate detection, and information retrieval. When dealing with text documents, the natural features to use are words (i.e. delimiting strings by white space to obtain features). We can represent each word by a unique integer.
In order to use text processing techniques for image databases, we generate a collection of image words using two steps. First, we obtain a collection of image features, and then define a mapping from the image features to the integers. To obtain image features, we use the scale invariant feature transform (SIFT) [@lowe2004distinctive]. We then use k-means to cluster the image features into $K$ different clusters. The mapping from the image feature to the cluster is used to identify image words, and results in the image Bag-Of-Words model [@fei2005bayesian].
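A minimal sketch of this image bag-of-words construction is given below; it assumes an OpenCV build with SIFT support and scikit-learn are available, and the image paths are placeholders:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

K = 50                                          # number of image words
paths = ["img0.jpg", "img1.jpg"]                # placeholder image files

sift = cv2.SIFT_create()
per_image = []
for path in paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)  # 128-dim SIFT descriptors
    per_image.append(desc)

# k-means over all descriptors defines the mapping feature -> image word.
kmeans = KMeans(n_clusters=K, n_init=10).fit(np.vstack(per_image))

A = np.zeros((len(paths), K))                   # bag-of-words count matrix
for i, desc in enumerate(per_image):
    words, counts = np.unique(kmeans.predict(desc), return_counts=True)
    A[i, words] = counts
```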
Topic modeling is used to uncover a hidden topical structure of a collection of documents. There have been studies on using large-scale data collections to improve classification of sparse, short segments of text, which usually cluster inaccurately due to sparseness of text [@phan2008learning]. Latent Dirichlet Allocation (LDA), singular value decomposition (SVD), and non-negative matrix factorization (NNMF) are just some of the models that have been used in topic modeling [@arora2012learning].
In our method, we integrate these techniques to combine and cluster different types of documents. We use SIFT to obtain image features and term frequency-inverse document frequency to generate a feature matrix in the fused collection of documents. Then, we use the non-negative matrix factorization to learn a representation of the corpus which is used to cluster the documents.
Fusing Image and Text Documents
===============================
We denote a collection of image documents $D = \{d_1, \dots, d_n\}$ and a collection of sparse text documents $S = \{s_1,\dots,s_m\}$, where text document $s_i$ describes image document $d_i$ for $i=1,\dots,m$. Some of the text documents may be empty, indicating the absence of any labeled text.
**Image Documents** Using the scale invariant feature transform (SIFT) and k-means, we obtain $A \in \mathbb{R}^{n \times p}$, where $p$ is the number of image features, $n$ is the number of image documents, and element $A_{i,j}$ represents the number of times the image document $d_i$ contains the $j^{th}$ feature.
**Wisdom of the Crowds** Due to the sparse nature of the text documents we are considering, the WOC is needed to link features that represent a single class. For example, if one wishes to obtain a class of documents and images about cats, text and images from a Wikipedia page on cats can be used as the wisdom of the crowds. Using Wikipedia, we collect WOC documents $W = \{w_1, \dots, w_k\}$ where $k$ is the number of clusters we wish to cluster the images into. Each $w_i$ is a text document that contains features that collectively describe a single class. To create text features, we parse text documents by white space (i.e. break up text by words) and obtain a corpus $f = (f_1, \dots, f_q)$ of $q$ unique features. Let $C \in \mathbb{R}^{k \times q}$. Each $C_{i,j}$ is the number of times the feature $f_j$ appears in $w_i$.
**Text Documents** In the same manner as with WOC documents, we parse text documents into features to obtain a corpus. In most cases, the features in this corpus have already appeared somewhere in the WOC documents, so we use the same $f = (f_1,\dots,f_q)$ from the previous step. If that is not the case, “missing” features can simply be appended to the list of features and the $C$ matrix extended to reflect the absence of the missing features. We calculate $B \in \mathbb{R}^{m \times q}$, where $m$ is the number of text documents, $q$ is the number of features in corpus $f$, and element $B_{i,j}$ is the number of times text document $s_i$ contains the feature $f_j$. We then extend $B \in \mathbb{R}^{m \times q}$ to $B \in \mathbb{R}^{n \times q}$ such that $B_{i,j}= 0$ for $i = m+1,\dots,n$, $j=1,\dots,q$. Intuitively, this means that none of the text features knowingly describes the image documents $d_{m+1}, \dots, d_n$.
We combine the image feature matrix $A$, the text feature matrix $B$, and the WOC matrix $C$ to initialize matrix $M$: $$M = \begin{bmatrix}
A & B\\
0 & C\\
\end{bmatrix},$$ where $M \in \mathbb{R}^{(n+k) \times (p+q)}$ and $0 \in \{0\}^{k \times p}$. We call $M$ our mixed document feature matrix. Each row represents a document and each column represents a feature (either an image feature or a text feature).
Without the reweighting using IDF, it is difficult to use sparse text to aid in image classification. This is because the frequency of the image features outweighs any effect the sparse text has in the classification of image documents. The inverse document frequency matrix $\text{IDF} \in \mathbb{R}^{(p+q) \times (p+q)}$ is defined as the diagonal matrix with nonzero elements: $$\text{IDF}_{j,j} = \log\dfrac{n+k}{|\{i : M_{i,j}>0\}|},$$ where ${|\{i : M_{i,j}>0\}|}$ is the number of documents containing the $j^{th}$ feature. We then re-evaluate $M$ to be $M = M \times \text{IDF}$.
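The construction of $M$ and the IDF reweighting can be sketched as follows; the counts are random stand-ins for real data, and the guard against empty feature columns is our addition:

```python
import numpy as np

n, k, p, q = 6, 2, 8, 5                      # documents, classes, features
rng = np.random.default_rng(0)
A = rng.poisson(3.0, size=(n, p))            # image bag-of-words counts
B = np.zeros((n, q))
B[:3] = rng.poisson(1.0, size=(3, q))        # sparse text: only 3 labeled docs
C = rng.poisson(5.0, size=(k, q))            # one WOC document per class

M = np.block([[A, B], [np.zeros((k, p)), C]])

df = (M > 0).sum(axis=0)                     # document frequency per feature
idf = np.log((n + k) / np.maximum(df, 1))    # IDF diagonal; guard df = 0
M = M * idf                                  # M <- M x IDF
```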
![Example of an M matrix with 4500 image documents, 9 WOC documents, and 450 text documents.[]{data-label="fig:mixmat"}](mixMat.jpg)
Topic Modeling using Non-negative Matrix Factorization
======================================================
We use non-negative matrix factorization (NNMF) on the document feature matrix to cluster documents into topics. We consider the document feature matrix as a set of $(n+k)$ points in a $(p+q)$-dimensional space. Each document is a point and each feature is a dimension. We want to reduce the dimensionality of this space to $k^* \ll \min(n+k, p+q)$ dimensions [@Lsas]. NNMF is a method that takes a non-negative matrix $M_+ \in \mathbb{R}^{(n+k) \times (p+q)}$ and factors it into two non-negative matrices $U_+ \in \mathbb{R}^{(n+k) \times k^*}$ and $V_+ \in \mathbb{R}^{k^* \times (p+q)}$, where $k^*$ is the rank of the desired lower-dimensional approximation.
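A minimal sketch of this step, using scikit-learn's NMF on the IDF-weighted matrix $M$ built in the previous section and reading off one cluster label per document row:

```python
from sklearn.decomposition import NMF

k_star = 2                           # desired number of topics/clusters
model = NMF(n_components=k_star, init="nndsvd", max_iter=500)
U = model.fit_transform(M)           # (n+k) x k*, non-negative weights
V = model.components_                # k* x (p+q), non-negative topics
labels = U.argmax(axis=1)            # dominant topic = cluster assignment
```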
Introduction
============
The single band two dimensional Hubbard Hamiltonian[@HUBBARD] has recently received considerable attention due to possible connections with high temperature superconductors. Indeed, evidence is accumulating that this Hamiltonian may describe, at least qualitatively, some of the normal state properties of the cuprates.[@review] Exact Diagonalization (ED) and Quantum Monte Carlo (QMC) have been used to model static properties like the behavior of spin correlations and magnetic susceptibility both at half-filling and with doping.[@review] Comparisons of dynamic quantities like the spectral weight and density of states with angle-resolved photoemission results[@flat-exper; @flat; @bulut; @hanke; @berlin] have also proven quite successful. Significantly, while analytic calculations have pointed towards various low temperature superconducting instabilities, such indications have been absent in numerical work.[@review]
Historically, however, the Hubbard model was first proposed to model magnetism and metal-insulator transitions in 3D transition metals and their oxides,[@HUBBARD] rather than superconductivity. Now that the technology of numerical work has developed, it is useful to reconsider some of these original problems. A discussion of possible links between the 3D Hubbard model and photoemission results for ${\rm YTiO_3}$, ${\rm Sr VO_3}$ and others[@fujimori; @inoue; @morikawa] has already recently occurred. In such perovskite ${\rm Ti^{3+}}$ and ${\rm V^{4+}}$ oxides, which are both in a $3d^1$ configuration, the hopping amplitude $t$ between transition-metal ions can be varied by modifying the $d-d$ neighboring overlaps through a tetragonal distortion. Thus, the strength of the electron correlation $U/t$ can be varied by changing the composition. In fact, a metal-insulator transition has been reported in the series ${\rm SrVO_3}-{\rm Ca VO_3}-{\rm La Ti O_3}-{\rm YTiO_3}$. On the metallic side, a quasiparticle band is experimentally observed near the Fermi energy $E_F$, as well as a high energy satellite associated to the lower Hubbard band (LHB).[@fujimori; @rrmp] Spectral weight is transferred from the quasiparticle to the LHB as $U/t$ is increased at half-filling.
In this paper, we report the first use of Quantum Monte Carlo, combined with analytic continuation techniques, to evaluate the spectral function and density of states for the 3D Hubbard Hamiltonian. The motivation is twofold. First, we want to compare general properties of the 3D Hubbard Hamiltonian with the extensive studies already reported in the literature[@WHITE; @jarrell; @rrmp; @review] for the 2D and infinite-D cases. Of particular importance is the presence of quasiparticles near the half-filling regime, as well as the evolution of spectral weight with doping. Many of the high-Tc cuprates contain ${\rm CuO_2}$ planes that are at least weakly coupled with each other, and thus the study of the 3D system may help in understanding part of the details of the cuprates. More generally, the Hubbard Hamiltonian is likely to continue being one of the models used to capture the physics of strongly correlated electrons, so we believe it is important to document its properties in as many environments as possible for potential future comparisons against experiments.
Secondly, we discuss a particular illustration of such contact between Hubbard Hamiltonian physics and experiments on 3D transition metal oxides. In addition to the studies of half-filled systems with varying correlation energy mentioned above, experiments where the band filling is tuned by changing the chemical composition have also been reported.[@fujimori2; @morikawa; @tokura] One compound that has been carefully investigated in this context is ${\rm Y_{1-x}
Ca_x Ti O_3}$. At $x=0$ the system is an antiferromagnetic insulator. As $x$ increases, a metal-insulator transition is observed in photoemission spectroscopy (PES) studies. The lower and upper Hubbard bands (LHB and UHB) are easily identified even with $x$ close to 1, which would naively correspond to small electronic density in the single band Hubbard model, i.e. a regime where $U/t$ is mostly irrelevant. In the experiments, a very small amount of spectral weight is transferred to the Fermi energy, filling the gap observed at half-filling (i.e. generating a “pseudogap”).
Analysis of the PES results of these compounds using the paramagnetic solution of the Hubbard Hamiltonian in infinite-D [@metzner], a limit where dynamic mean field theory becomes exact (see section II), has resulted in qualitative agreement [@jarrell; @georges; @rrmp] with the experimental results. At and close to half-filling there is an antiferromagnetic (AF) solution which becomes unstable against a paramagnetic (PM) solution at a critical concentration of holes. In the PM case, weight appears in the original Hubbard gap as reported experimentally. However, this analysis of the spectral weight in terms of the infinite-D Hamiltonian is in contradiction with results for the density of states reported in the 2D Hubbard model[@review] where it is found that upon hole (electron) doping away from half-filling the chemical potential $\mu$ moves to the top (bottom) of the valence (conduction) band. The results at $\langle n
\rangle =1$ in 2D already show the presence of a robust quasiparticle peak which is absent in the insulating PM solution of the $D=\infty$ model. That is, in the 2D system the large peak in the density of states observed away from half-filling seems to evolve from a robust peak already present at half-filling. On the other hand, at $D=\infty$ a feature resembling a “Kondo-resonance” is *generated* upon doping if the paramagnetic solution is used. This peak in the density of states does not have an analog at half-filling unless frustration is included.[@jarrell] Studies in 3D may help in the resolution of this apparent discontinuity of the physics of the Hubbard model when the dimension changes from 2 to $\infty$. The proper way to carry out a comparison between $D=3$ and $\infty$ features is to base the analysis on ground state properties. With this restriction, i.e. using the AF solution at $D=\infty$ and close to half-filling, rather than the PM solution, we found that the $D=3$ and $\infty$ results are in good agreement.
In this paper we will consider which of these situations the 3D Hubbard Hamiltonian better corresponds to, and therefore whether the single band Hubbard Hamiltonian provides an adequate description of the density of states of 3D transition-metal oxides.
Model and Methods
=================
The single band Hubbard Hamiltonian is $$\begin{aligned}
H & = & -t \sum_{\bf \langle ij \rangle } ( c^\dagger_{ {\bf i} \sigma}
c_{{\bf j} \sigma} + h.c.)
- \mu \sum_{{\bf i}\sigma} n_{{\bf i}\sigma} \nonumber \\
& & + U \sum_{\bf i} (n_{{\bf i} \uparrow} - 1/2 )
(n_{{\bf i} \downarrow} - 1/2 ),
\label{hubbard}\end{aligned}$$ where the notation is standard. Here ${\bf \langle ij \rangle }$ represents nearest-neighbor links on a 3D cubic lattice. The chemical potential $\mu$ controls the doping. For $\mu=0$ the system is at half filling ($\langle n \rangle=1$) due to particle-hole symmetry. $t\equiv 1$ will set our energy scale.
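As an elementary consistency check of these conventions (our illustration, unrelated to the QMC computation), the $U=0$ limit of Eq. (\[hubbard\]) is diagonal in momentum space with $\varepsilon({\bf k})=-2t(\cos k_x+\cos k_y+\cos k_z)-\mu$, and particle-hole symmetry at $\mu=0$ indeed yields $\langle n \rangle=1$ on a finite cubic lattice:

```python
import numpy as np

L, t, mu, beta = 20, 1.0, 0.0, 5.0     # lattice size, hopping, mu, 1/T
ks = 2.0 * np.pi * np.arange(L) / L
kx, ky, kz = np.meshgrid(ks, ks, ks, indexing="ij")
eps = -2.0 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) - mu

f = 1.0 / (1.0 + np.exp(beta * eps))   # Fermi occupation per spin and k
print(2.0 * f.mean())                  # <n> = 1.0 at mu = 0
```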
We will study the 3D Hubbard Hamiltonian using a finite temperature, grand canonical Quantum Monte Carlo (QMC) method [@blankenbecler] which is stabilized at low temperatures by the use of orthogonalization techniques [@white]. The algorithm is based on a functional-integral representation of the partition function obtained by discretizing the “imaginary-time” interval $[0,\beta]$ where $\beta$ is the inverse temperature. The Hubbard interaction is decoupled by a two-valued Hubbard-Stratonovich transformation [@hirsch] yielding a bilinear time-dependent fermionic action. The fermionic degrees of freedom can hence be integrated out analytically, and the partition function (as well as observables) can be written as a sum over the auxiliary fields with a weight proportional to the product of two determinants, one for each spin species. At half-filling ($\langle n \rangle=1$), it can be shown by particle-hole transformation of one spin species $(c_{{\bf i}\downarrow} \rightarrow
(-1)^{{\bf i}} c_{{\bf i}\downarrow}^\dagger)$ that the two determinants differ only by a positive factor, hence their product is positive definite. At general fillings, however, the product can become negative, and this “minus-sign problem” restricts the application of QMC to relatively high temperature (of order 1/30 of the bandwidth) off half-filling.
The QMC algorithm provides a variety of static and dynamic observables. One equal time quantity in which we are interested is the magnetic (spin-spin) correlation function, $$C({\bf l}) = \frac{1}{N} \sum_{{\bf j}} \langle m_{\bf j} m_{{\bf j+l}} \rangle.
\label{correl}$$ Here $m_{\bf j}=\sum_\sigma\sigma n_{{\bf j}\sigma}$ is the local spin operator, and $N$ is the total number of
---
abstract: 'In this paper, we study the multifractal Hausdorff and packing dimensions of Borel probability measures and their behavior under orthogonal projections. In particular, we use these results in an attempt to improve the main result of M. Dai in [@D] about the multifractal analysis of a measure of exact multifractal dimension.'
address:
- 'bilel.selmi@fsm.rnu.tn'
- |
Faculty of Sciences of Monastir\
Department of Mathematics\
5000-Monastir\
Tunisia
author:
- Bilel SELMI
date: 'May 20, 2011'
title: Multifractal dimensions for projections of measures
---
Multifractal analysis, Dimensions of measures, Projection.
Introduction
============
The notion of dimension is an important tool in the classification of subsets of $\mathbb{R}^n$. The Hausdorff and packing dimensions appear as some of the most common examples in the literature. The determination of a set’s dimension is naturally connected to the auxiliary Borel measures supported by the set. Moreover, the estimation of a set’s dimension is naturally related to the dimension of a probability measure $\nu$ in $\mathbb{R}^n$. In this way, considering in particular sets of measure zero or one leads to the respective definitions of the lower and upper Hausdorff dimensions of $\nu$ as follows $$\underline{\dim}(\nu)=\inf\Big\{\dim(E);\; E \subseteq
\mathbb{R}^n\; \text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\dim}(\nu)=\inf\Big\{\dim(E);\; E \subseteq \mathbb{R}^n\;
\text{and}\;\nu(E)=1\Big\},$$ where $\dim(E)$ denotes the Hausdorff dimension of $E$ (see[@F]). If $\underline{\dim}(\nu)= \overline{\dim}(\nu)$, this common value is denoted by ${\dim}(\nu)$. In this case, we say that $\nu$ is unidimensional. Similarly, we define respectively the lower and upper packing dimensions of $\nu$ by $$\underline{\operatorname{Dim}}(\nu)=\inf\Big\{\operatorname{Dim}(E);\;E \subseteq \mathbb{R}^n\;
\text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\operatorname{Dim}}(\nu)=\inf\Big\{\operatorname{Dim}(E);\;E \subseteq \mathbb{R}^n\;
\text{and}\; \nu(E)=1\Big\},$$ where $\operatorname{Dim}(E)$ is the packing dimension of $E$ (see [@F]). Also, if the equality $\underline{\operatorname{Dim}}(\nu)= \overline{\operatorname{Dim}}(\nu)$ is satisfied, we denote by ${\operatorname{Dim}}(\nu)$ their common value.
The lower and upper Hausdorff dimensions of $\nu$ were studied by A.H. Fan in [@FF; @FF1]. They are related to the Hausdorff dimension of the support of $\nu$. A similar approach, concerning the packing dimensions, was developed by Tamashiro in [@T]. There are numerous works in which estimates of the dimension of a given measure are obtained [@NBC; @B; @D; @F; @FLR; @H1; @H2; @H; @L; @BBSS]. When $\overline{\dim}(\nu)$ is small (resp. $\underline{\dim}(\nu)$ is large), it means that $\nu$ is singular (resp. regular) with respect to the Hausdorff measure. Similar considerations apply to the upper and lower packing dimensions.\
Note that, in many works (see for example [@F; @H1; @H2; @H]), the quantities $\underline{\dim}(\nu)$, $\overline{\dim}(\nu)$, $\underline{\operatorname{Dim}}(\nu)$ and $\overline{\operatorname{Dim}}(\nu)$ are related to the asymptotic behavior of the function $\alpha_\nu(x,r)=
\frac{\log\nu(B(x,r))}{\log r}$.
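As a numerical illustration of this quantity (our addition, not taken from the cited works), the sketch below estimates $\alpha_\nu(x,r)$ from samples of the middle-third Cantor measure, whose local dimension equals $\log 2/\log 3\approx 0.6309$ at $\nu$-almost every $x$:

```python
import numpy as np

rng = np.random.default_rng(2)
digits = 2 * rng.integers(0, 2, size=(200_000, 30))     # ternary digits 0 or 2
pts = np.sort(digits @ (3.0 ** -np.arange(1, 31)))      # samples of nu

x = pts[100_000]                                        # a nu-typical point
rs = 3.0 ** -np.arange(4, 12)
alpha = [np.log(np.mean(np.abs(pts - x) < r)) / np.log(r) for r in rs]
print(np.mean(alpha), np.log(2) / np.log(3))            # both ~ 0.63
```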
One of the main problems in multifractal analysis is to understand the multifractal spectrum, the Rényi dimensions and their relationship with each other. During the past 20 years, there has been enormous interest in computing the multifractal spectra of measures in the mathematical literature, and within the last 15 years the multifractal spectra of various classes of measures in Euclidean space $\mathbb{R}^n$ exhibiting some degree of self-similarity have been computed rigorously (see [@F; @LO; @Pe] and the references therein). In an attempt to develop a general theoretical framework for studying the multifractal structure of arbitrary measures, Olsen [@LO] and Pesin [@Pes] suggested various ways of defining an auxiliary measure in very general settings. For more details and background on multifractal analysis and its applications, the reader may also be referred to the following essential references [@NB; @NBC; @BJ; @BB; @BBH; @BenM; @CO; @DB; @BD1; @FM1; @MMB; @MMB1; @LO; @O2; @O1; @Ol; @SH1; @SELMI1; @SELMI; @W; @W1; @W3; @W4].
In this paper, we give a multifractal generalization of the results about the Hausdorff and packing dimensions of measures. We first estimate the multifractal Hausdorff and packing dimensions of a Borel probability measure. We use these results in an attempt to improve the main result of M. Dai in [@D Theorem A] about the multifractal analysis of a measure of exact multifractal dimension. We rely especially on the multifractal formalism developed by Olsen in [@LO]. Then, we investigate a relationship between the multifractal dimensions of a measure $\nu$ and its projections onto a lower-dimensional linear subspace.
Preliminaries
=============
We start by recalling the multifractal formalism introduced by Olsen in [@LO]. This formalism was motivated by Olsen’s wish to provide a general mathematical setting for the ideas present in the physics literature on multifractals.
Let $E\subset \mathbb{R}^n$ and $\delta>0$, we say that a collection of balls $\big(B(x_i, r_i)\big)_i$ is a centered $\delta$-packing of $E$ if $$\forall i,\; 0<r_i<\delta,\quad x_i\in E,\; \text{and} \quad
B(x_i,r_i)\cap B(x_j, r_j)=\emptyset,\quad{\forall}\; i\neq j.$$ Similarly, we say that $\big(B(x_i, r_i)\big)_i$ is a centered $\delta$-covering of $E$ if $$\forall i,\; 0<r_i<\delta, \quad x_i\in E, \quad \text{and} \qquad
E\subset \bigcup_i \; B(x_i, r_i).$$
Let $\mu$ be a Borel probability measure on $\mathbb{R}^n$. For $q,
t\in\mathbb{R}$, $E \subseteq{\mathbb R}^n$ and $\delta>0$, we define $$\overline{{\mathcal P}}^{q,t}_{\mu,\delta}(E) =\displaystyle \sup
\left\{\sum_i \mu(B(x_i,r_i))^q (2r_i)^t\right\},$$ where the supremum is taken over all centered $\delta$-packings of $E$. The generalized packing pre-measure is given by $$\overline{{\mathcal P}}^{q,t}_{\mu}(E)
=\displaystyle\inf_{\delta>0}\overline{{\mathcal
P}}^{q,t}_{\mu,\delta}(E).$$ In a similar way, we define $$\overline{{\mathcal H}}^{q,t}_{\mu,\delta}(E) = \displaystyle\inf
\left\{\sum_i \mu(B(x_i,r_i))^q(2r_i)^t\right\},$$ where the infimum is taken over all centered $\delta$-coverings of $E$. The generalized Hausdorff pre-measure is defined by $$\overline{{\mathcal H}}^{q,t}_{\mu}(E) = \displaystyle\sup_{
\delta>0}\overline{{\mathcal H}}^{q,t}_{\mu,\delta}(E).$$ Here we adopt the conventions $0^q=\infty$ for $q\leq0$ and $0^q=0$ for $q>0$.
Olsen [@LO] introduced the following modifications on the generalized Hausdorff and packing measures, $${\mathcal H}^{q,t}_{\mu}(E)=\displaystyle\sup_{F\subseteq
E}\overline{{\mathcal H}}^{q,t}_{\mu}(F)\quad\text{and}\quad
{\mathcal P}^{q,t}_{\mu}(E) = \inf_{E \subseteq \bigcup_{i}E_i}
\sum_i \overline{\mathcal P}^{q,t}_{\mu}(E_i).$$
The functions ${\mathcal H}^{q,t}_{\mu}$ and ${\mathcal
P}^{q,t}_{\mu}$ are metric outer measures and thus measures on the family of Borel subsets of $\mathbb{R}^n$. An important feature of the Hausdorff and packing measures is that ${\mathcal H}^{q,t}_{\mu} \leq {\mathcal P}^{q,t}_{\mu}$.
---
abstract: 'In conventional imaging experiments, objects are localized in a position space, and such optically responsive objects can be imaged with a convex lens and seen by a human eye. In this paper, we introduce an experiment on three-dimensional imaging of a pattern which is localized in a three-dimensional phase space. The phase space pattern cannot be imaged with a lens in a conventional way and cannot be seen by a human eye. In this experiment, a phase space pattern is produced from object transparencies and imprinted onto the phase space of an atomic gaseous medium, with a Doppler-broadened absorption profile at room temperature, by utilizing velocity-selective hole burning in the absorption profile. The pattern is localized in a unique three-dimensional phase space which is a subspace of the six-dimensional phase space. Imaging of the localized phase space pattern is performed at different momentum locations. In addition, imaging of the imprinted pattern of an object of nonuniform transmittance is presented.'
author:
- Mandip Singh and Samridhi Gambhir
title: 'Three-dimensional imaging of a pattern localized in a phase space'
---
In most imaging experiments, the structure of an object is defined in a position space. The structural pattern can be stationary or, for a dynamic object, nonstationary *w.r.t.* time. An image of such an optically responsive object can be produced with a convex lens; therefore, such an object can be seen with a camera or by a human eye. In this paper, we go beyond this conventional notion of imaging. The structural pattern of the objects in our experiment is defined in a phase space; therefore, such a pattern cannot be imaged with a lens or a camera, and a human eye cannot visualize it. In this paper, we introduce three-dimensional (3D) imaging of a pattern localized in a phase space. The pattern is localized in a unique 3D subspace of the six-dimensional (6D) phase space, involving two position coordinates and one momentum coordinate. However, the pattern is delocalized in a 3D position subspace and in a 3D momentum subspace of the 6D phase space, separately.
In the experiment presented in this paper, the pattern of interest is produced by object transparencies and imprinted onto the phase space of an atomic gaseous medium at room temperature. The experiment is performed by utilizing velocity-selective hole-burning [@lamb; @bennet; @haroche; @hughes2; @scholten1; @boudot; @schm] in the Doppler-broadened absorption profile of an atomic gaseous medium. Tomographic images of the pattern localized in a 3D phase space are then captured with an imaging laser beam. The imaging laser beam does not interact with the actual objects used to produce the localized phase space pattern. Imaging of objects localized in a position space has been realized with quantum $\&$ classical methods with undetected photons [@zeilinger_1; @wong]. Quantum imaging with undetected photons, unlike ghost imaging [@bar; @imphase; @qimaging; @ghim; @boyd; @shih; @lugiato], does not rely on coincidence detection of photons. In this paper, a pattern of an object of nonuniform transmittance is also imprinted onto the phase space of an atomic medium, and the pattern is then imaged at a constant location of momentum.
A localized pattern in a 3D subspace of the 6D phase space is shown in Fig. \[fig1\] (a), where a two dimensional position space is spanned by orthogonal position unit vectors $\hat{x}$ and $\hat{y}$ and a third dimension corresponds to the $z$-component of momentum, $p_{z}$.
![\[fig1\] *(a) A localized pattern in a 3D phase space and its three tomograms. (b) Experimental schematic diagram, a linearly polarized imaging laser beam is overlapped in an atomic gaseous medium with counter propagating object laser beams. A 2D transverse intensity profile of the imaging laser beam at different detunings is captured with an EMCCD camera. (c) A 2D transverse intensity profile of the overlapped object laser beams prior to their entrance into the atomic medium. All three alphabets are overlapped with each other. (d) Transmittance, for imaging laser beam, of the atomic medium in presence of object laser beams without masks. Three peaks labeled as $1$, $2$ and $3$ correspond to a velocity selective hole-burning, in doppler broadened absorption profile, produced by object laser beams of frequencies $\nu_{1}$, $\nu_{o}$ and $\nu_{2}$, respectively.*](fig1.png)
In the experiment, $p_{z}$ is the $z$-component of the momentum of the atoms. The pattern is stationary *w.r.t.* time. A 2D planar section of a localized 3D phase space pattern at a constant $p_{z}$ represents a tomogram of the localized pattern. In Fig. \[fig1\] (a), three different tomograms at three different momenta are shown. Tomograms with images of the English alphabet letters $\bf{C}$, $\bf{A}$ and $\bf{T}$ are localized at $p_{z}$ equal to $p_{1}$, $p_{2}$ and $p_{3}$, respectively. Furthermore, in a 3D position space, spanned by orthogonal position unit vectors $\hat{x}$, $\hat{y}$ and $\hat{z}$, each tomogram is completely delocalized on the $z$-axis; that implies all images are overlapped with each other and distributed at all points on the $z$-axis. However, in a 3D momentum space, spanned by orthogonal unit vectors $\hat{p}_{x}$, $\hat{p}_{y}$ and $\hat{p}_{z}$ of momentum components along the $\hat{x}$, $\hat{y}$ and $\hat{z}$ directions, each tomogram is delocalized in all planes parallel to the $p_{x}$-$p_{y}$ plane. The subspace where the pattern is completely localized is a unique 3D subspace of the 6D phase space, as shown in Fig. \[fig1\] (a). In the remaining 3D subspaces of the 6D phase space the pattern is delocalized. In this paper, the stationary localized 3D phase space pattern of interest is produced from objects located in the position space. The pattern is then imprinted onto the phase space of an atomic gas obeying the Maxwell velocity distribution, in the form of the difference between the number densities of atoms in the ground and excited states. Tomograms of the 3D phase space pattern are then imaged with an imaging laser beam; by varying the frequency of the laser beam the location, $p_{z}$, of the tomogram can be shifted.
In the experiment, a stationary pattern in the phase space of atoms is produced at room temperature (25$^{\circ}$C) by velocity-selective hole-burning in the Doppler-broadened absorption profile of an atomic gaseous medium. Consider a linearly polarized object laser beam, of frequency $\nu_{p}$ and transverse intensity profile $I_{p}(x,y,\nu_{p})$, propagating in an atomic gaseous medium in the direction opposite to the $z$-axis. The intensity profile $I_{p}(x,y,\nu_{p})$ represents a 2D image of an object in position space. This image information is transferred to a velocity class of the atomic gaseous medium at temperature $T$ by velocity-selective atomic excitation. Consider an atomic gaseous medium where an isolated stationary atom has a ground quantum state $|g\rangle$ of energy $E_{g}$ and an excited quantum state $|e\rangle$ of energy $E_{e}$ with linewidth $\Gamma$. The object laser beam is on resonance with the velocity class of atoms whose $z$-component of velocity equals $v_{r}=2\pi(\nu_{o}-\nu_{p})/k$, where $\nu_{o}=(E_{e}-E_{g})/h$ and $k=2\pi/\lambda$ is the magnitude of the propagation vector of the object laser beam of wavelength $\lambda$. Atoms of other velocity classes are out of resonance due to the Doppler shift. The transverse Doppler shift is negligible because of the nonrelativistic velocity regime at room temperature. In the absence of an object laser beam, all the atoms are in the ground state. Let $n$ be the number of atoms per unit volume of the gaseous medium. According to the Maxwell velocity distribution, the fraction of atoms with velocity in an interval $dv_{z}$ around $v_{z}$ at temperature $T$ is $f(v_{z}) dv_{z}=(m/2 \pi k_{B} T)^{1/2}e^{-m v^{2}_{z}/2 k_{B} T} dv_{z}$, where $k_{B}$ is the Boltzmann constant and $m$ is the mass of an atom. Let $L$ be the length of the atomic medium along the beam propagation direction. In the presence of an object laser beam, the ground-state atoms of the resonant velocity class are excited. The steady-state difference of the atomic number densities in the ground state ($n_{1}$) and in the excited state ($n_{2}$) at $v_{z}$ is $n_{1}(x,y,v_{z})-n_{2}(x,y,v_{z})=n f(v_{z})/(1+I_{p}({x,y,\nu_{p}})\Gamma^{2}/(4I_{s}((2 \pi \nu_{p}-2 \pi \nu_{o}+kv_{z})^{2}+\Gamma^{2}/4)))$, where $I_{s}$ is the saturation intensity of the atomic transition.
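The resulting population-difference profile, with its velocity-selective hole, is easy to visualize numerically; the atomic parameters below (Rb-like mass, wavelength and linewidth) and the saturation ratio are assumed for illustration and are not the parameters of the experiment:

```python
import numpy as np

kB, m = 1.381e-23, 1.44e-25            # J/K; atomic mass (kg), Rb-like
T = 298.0                              # room temperature (K)
lam, Gamma = 780e-9, 2*np.pi*6.07e6    # wavelength (m), linewidth (rad/s)
k = 2.0 * np.pi / lam
det = 2.0*np.pi*(-80e6)                # 2*pi*(nu_p - nu_o), rad/s
s = 2.0                                # I_p / I_s on resonance

v = np.linspace(-400.0, 400.0, 4001)   # v_z (m/s)
f = np.sqrt(m / (2*np.pi*kB*T)) * np.exp(-m * v**2 / (2*kB*T))
lorentz = (Gamma**2/4) / ((det + k*v)**2 + Gamma**2/4)
n_diff = f / (1.0 + s * lorentz)       # n1 - n2, hole at v_z = -det/k = v_r
```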
---
abstract: |
#### Morphology and defects: {#morphology-and-defects .unnumbered}
Issues of Ge hut cluster array formation and growth at low temperatures on the Ge/Si(001) wetting layer are discussed on the basis of explorations performed by high-resolution STM and [*in-situ*]{} RHEED. The dynamics of the RHEED patterns in the process of Ge hut array formation is investigated at low and high temperatures of Ge deposition. Different dynamics of the RHEED patterns during the deposition of Ge atoms are observed in different growth modes, reflecting the differences in adatom mobility and in the ‘condensation’ fluxes from the Ge 2D gas on the surface for the different modes, which in turn control the nucleation rates and densities of Ge clusters. Data of HRTEM studies of multilayer Ge/Si heterostructures are presented with a focus on the low-temperature formation of perfect films.
#### Photo-emf spectroscopy: {#photo-emf-spectroscopy .unnumbered}
Heteroepitaxial Si [*p–i–n*]{}-diodes with multilayer stacks of dense Ge/Si(001) quantum dot arrays built into their intrinsic domains have been investigated and found to exhibit the photo-emf in a wide spectral range from 0.8 to 5$\mu$m. An effect of wide-band irradiation by infrared light on the photo-emf spectra has been observed. Photo-emf in different spectral ranges has been found to be differently affected by the wide-band irradiation. A significant increase in photo-emf is observed in the fundamental absorption range under the wide-band irradiation. The observed phenomena are explained in terms of positive and neutral charge states of the quantum dot layers and the Coulomb potential of the quantum dot ensemble. A new design of quantum dot infrared photodetectors is proposed.
#### Terahertz spectroscopy: {#terahertz-spectroscopy .unnumbered}
By using a coherent source spectrometer, the first measurements of terahertz dynamical conductivity (absorptivity) spectra of Ge/Si(001) heterostructures were performed at frequencies ranging from 0.3 to 1.2 THz in the temperature interval from 300 to 5K. The effective dynamical conductivity of the heterostructures with Ge quantum dots has been discovered to be significantly higher than that of the structure with the same amount of bulk germanium (not organized in an array of quantum dots). The excess conductivity is not observed in the structures with a Ge coverage of less than 8Å. When a Ge/Si(001) sample is cooled down, the conductivity of the heterostructure decreases.
address: |
(1) A M Prokhorov General Physics Institute of RAS, 38 Vavilov Street, Moscow 119991, Russia\
(2)Technopark of GPI RAS, 38 Vavilov Street, Moscow, 119991, Russia\
(3)Moscow Institute of Physics and Technology, Institutsky Per. 9, Dolgoprudny, Moscow Region, 141700, Russia
author:
- 'Vladimir A Yuryev$^{1,2}$'
- 'Larisa V Arapkina$^{1}$'
- 'Mikhail S Storozhevykh$^{1}$'
- 'Valery A Chapnin$^{1}$'
- 'Kirill V Chizh$^{1}$'
- 'Oleg V Uvarov$^{1}$'
- 'Victor P Kalinushkin$^{1,2}$'
- 'Elena S Zhukova$^{1,3}$'
- 'Anatoly S Prokhorov$^{1,3}$'
- 'Igor E Spektor$^{1}~$ and Boris P Gorshunov$^{1,3}$'
bibliography:
- 'EMNC2012-article.bib'
title: |
Ge/Si(001) heterostructures with dense arrays of Ge\
quantum dots: morphology, defects, photo-emf spectra\
and terahertz conductivity
---
Introduction {#introduction .unnumbered}
============
Artificial low-dimensional nano-sized objects, like quantum dots, quantum wires and quantum wells, as well as structures based on them, are promising systems for the improvement of existing devices and for the development of principally new devices for opto-, micro- and nano-electronics. Besides, the investigation of the physical properties of such structures is also of fundamental importance. In both regards, remarkable prospects are offered by quantum dots, which can be considered as artificial atoms with a controlled number of charge carriers and a discrete energy spectrum [@Pchel_Review-TSF; @Pchel_Review]. Arrays of a *large* number of quantum dots, including multilayer heterostructures, make it possible to create artificial “solids" whose properties can be controllably changed by varying the characteristics of the constituent elements (“atoms") and/or the environment (semiconductor matrix). The rich set of exciting physical properties in this kind of system originates from single-particle and collective interactions that depend on the number and mobility of carriers in quantum dots, the Coulomb interaction between the carriers inside a quantum dot and in neighbouring quantum dots, charge coupling between neighbouring quantum dots, polaron and exciton effects, etc. Since the characteristic energy scales of these interactions (distance between energy levels, Coulomb interaction between charges in quantum dots, one- and multiparticle exciton and polaron effects, plasmon excitations, etc.) are of the order of several meV [@3-Colomb_interactions-Dvur; @4-Drexler-InGaAs; @5-Lipparini-far_infrared], an appropriate experimental tool for their study is provided by optical spectroscopy in the far-infrared and terahertz bands.
To get access to these effects, one has to extend the operation range of the spectrometers to the corresponding frequency domain, that is, to the terahertz frequency band. Because this band, and especially its lowest-frequency part below 1 THz (that is, $\apprle 33$cm$^{-1}$), is inaccessible to standard infrared Fourier-transform spectrometers, the corresponding data are presently missing in the literature. In this paper, we present the results of the first detailed measurements of the absolute dynamical (AC) conductivity of multilayer Ge/Si heterostructures with Ge quantum dots, at terahertz and sub-terahertz frequencies and in the temperature range from 5 to 300K.
In addition, for at least two decades, multilayer Ge/Si heterostructures with quantum dots have been candidates for the role of photosensitive elements of monolithic IR arrays, promising to replace and surpass platinum silicide in this important branch of sensor technology [@Wang-properties; @Wang-Cha; @Dvur-IR-20mcm]. Unfortunately, to date achievements in this field have been less than modest.
We believe that this state of affairs may be improved by a rigorous investigation of the formation, defects and other materials-science aspects of such structures, especially those which may affect device performance and reliability, focusing on the identification of the reasons for low quantum efficiency and detectivity, high dark current and the tendency to degrade with time, as well as on the search for ways to overcome these deficiencies. New approaches to device architecture and design as well as to principles of functioning are also desirable.
This article reports our latest data on morphology and defects of Ge/Si heterostructures. On the basis of our recent results on the photo-emf in the Si [*p–i–n*]{}-structures with Ge quantum dots, which are also reported in this article, we propose a new design of photovoltaic quantum dot infrared photodetectors.
Methods {#methods .unnumbered}
=======
Equipment and techniques {#equipment-and-techniques .unnumbered}
------------------------
The Ge/Si samples were grown and characterized using an integrated ultrahigh vacuum instrument [@classification; @stm-rheed-EMRS; @CMOS-compatible-EMRS; @VCIAN2011] built on the basis of the Riber SSC2 surface science center with the EVA32 molecular-beam epitaxy (MBE) chamber equipped with the RH20 reflection high-energy electron diffraction (RHEED) tool (Staib Instruments) and connected through a transfer line to the GPI-300 ultrahigh vacuum scanning tunnelling microscope (STM) [@gpi300; @STM_GPI-Proc; @STM_calibration]. Electron-beam evaporation sources were used for Ge or Si deposition. A Knudsen effusion cell was used when boron doping was applied for QDIP [*p–i–n*]{}-structure formation. A pressure of about $5\times 10^{-9}$Torr was maintained in the preliminary sample cleaning (annealing) chamber. The MBE chamber was evacuated down to about $10^{-11}$Torr before processes; the pressure increased to nearly $2\times 10^{-9}$Torr at most during the Si substrate cleaning and $10^{-9}$Torr during Ge or Si deposition. The residual gas pressure did not exceed $10^{-10}$Torr in the STM chamber. Additional details of the experimental instruments and process control can be found in Ref. [@VCIAN2011].
RHEED measurements were carried out [*in situ*]{}, i.e., directly in the MBE chamber during a process [@stm-rheed-EMRS]. STM images were obtained in the constant tunnelling current mode at room temperature. The STM tip was zero-biased while the sample was positively or negatively biased when scanned in empty- or filled-state imaging mode.
#### Introduction. {#introduction. .unnumbered}
Two new large colliders with relativistic heavy nuclei, the RHIC and the LHC, are scheduled to be in operation in the near future. The charge numbers $Z_1=Z_2=Z$ of the nuclei with masses $M_1=M_2=M$ and their Lorentz factors $\gamma_1=\gamma_2=\gamma=E/M$ are the following: $$\begin{aligned}
Z=79\,, \ \gamma &=&\,\;108 \ {\mathrm {for \ RHIC \ (Au--Au \ collisions)}}\,
\nonumber \\
Z=82\,, \ \gamma &=&3000 \ {\mathrm {for \ LHC \ \ (Pb--Pb \ collisions)}}\,.
\label{1}\end{aligned}$$ Here $E$ is the heavy ion energy in the c.m.s. One of the important processes at these colliders is $$Z_1Z _2\to Z_1Z_2 \, e^+e^- \,.
\label{2}$$ Its cross section is huge. In the Born approximation (see Fig. \[f1\] with $n=n'=1$) the total cross section according to
the Racah formula [@R] is equal to $\sigma_{\mathrm{Born}} = 36 $ kbarn for the RHIC and 227 kbarn for the LHC. Therefore it will contribute as a serious background to a number of experiments, besides, this process is the leading beam loss mechanism (for details see review [@BB]).
The cross sections of the process (\[2\]) in the Born approximation are known with accuracy $\sim 1/ \gamma^2$ (see, for example, Refs. [@R; @KLBGMS] and more recent calculations reviewed in Refs. [@BB; @BHT]). However, besides the Born amplitude $M_{\mathrm {Born}} =M_{11}$, other amplitudes $M_{nn'}$ (see Fig. \[f1\]) also have to be taken into account for heavy nuclei since in this case the parameter of the perturbation series $Z\alpha$ is of the order of unity. Therefore, the whole series in $Z\alpha$ has to be summed to obtain the cross section with sufficient accuracy. Following Ref. [@BM], we call the Coulomb correction (CC) the difference $d\sigma_{\mathrm{Coul}}$ between the whole sum $d \sigma$ and the Born approximation $$d\sigma = d \sigma_{\mathrm{Born}} + d \sigma_{\mathrm{Coul}}\,.
\label{4}$$
This kind of CC is well known in the photoproduction of $e^+e^-$ pairs on atoms (see Ref. [@BM] and §98 of [@BLP]). The Coulomb correction to the total cross section of that process decreases the Born contribution by about 10 % for a Pb target. For the pair production of reaction (\[2\]) with $Z_1\alpha \ll
1$ and $ Z_2 \alpha \sim 1$, the CC has been obtained in Refs. [@NP; @BB]. Recently this correction has been calculated for the pair production in the collisions of muons with heavy nuclei [@IKSS]. The results of Refs. [@NP; @BB; @IKSS] agree with each other in the corresponding kinematic regions and noticeably change the Born cross sections. Formulae for CC for two heavy ions were suggested ad hoc in Sect. 7.3 of [@BB]. However, our calculations presented here do show that this suggestion is incorrect.
In the present paper we calculate the Coulomb correction for process (\[2\]) omitting terms of the order of $1$ % compared with the main term given by the Born cross section. We find that these corrections are negative and quite important: $$\begin{aligned}
\sigma_{\mathrm{Coul}}/ \sigma_{\mathrm{Born}} &=& -25\, \% \;\;
{\mathrm for \ \ RHIC}\,, \nonumber \\
\sigma_{\mathrm{Coul}}/ \sigma_{\mathrm{Born}} &=& -14\, \% \;\;
{\mathrm for \ \ LHC}\,.
\label{5}\end{aligned}$$ This means that at the RHIC the background process with the largest cross section will have a production rate 25 % smaller than expected.
Our main notations are given in Eq. (\[1\]) and Fig. \[f1\], besides, $(P_1+P_2)^2 = 4E^2 = 4 \gamma^2 M^2$, $q_i= (\omega_i,\, {\bf q}_i)= P_i-P_i'$, $\varepsilon=
\varepsilon_++\varepsilon_-$ and $$\sigma_0=\frac{\alpha^4 Z_1^2 Z_2^2}{\pi m^2} \,, \;\;
L= \ln{P_1P_2 \over 2M_1 M_2}= \ln{\gamma^2}
\label{3}$$ where $m$ is the electron mass. The quantities ${\mathbf
q}_{i\perp}$ and ${\mathbf p}_{\pm\perp}$ denote the transverse part of the corresponding three–momenta. Throughout the paper we use the well-known function [@BM] $$f(Z) = Z^2\alpha^2 \sum_{n=1}^{\infty}
{1\over n(n^2+Z^2\alpha^2)}\,,$$ its particular values for the colliders under discussion are $f(79)=0.313$ and $f(82)=0.332$.
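The quoted values are reproduced by a direct truncation of the series (a simple numerical check, not part of the original text):

```python
import numpy as np

alpha = 1.0 / 137.035999
n = np.arange(1, 200_001, dtype=float)

def f(Z):
    za2 = (Z * alpha) ** 2
    return za2 * np.sum(1.0 / (n * (n * n + za2)))

print(f(79), f(82))   # ~0.313 and ~0.332
```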
#### Selection of the leading diagrams and the structure of the amplitude. {#selection-of-the-leading-diagrams-and-the-structure-of-the-amplitude. .unnumbered}
Let ${\cal M}$ be the sum of the amplitudes $M_{nn'}$ of Fig. \[f1\]. It can be presented in the form $$\begin{aligned}
\label{7a}
{\cal M}&=& \sum_{nn'\geq 1 } M_{nn'}= M_{\mathrm{Born}}
+M_1+{\tilde M}_1+ M_2\,,\\
M_1 &=& \sum_{n'\geq 2} M_{1n'}\,, \ \
\tilde M_1 = \sum_{n\geq 2} M_{n1}\,, \ \
M_2= \sum_{nn'\geq 2} M_{nn'} \,. \nonumber\end{aligned}$$ The Born amplitude $M_{\mathrm{Born}}$ contains the one–photon exchange both with the first and the second nucleus, whereas the amplitude $M_1$ ($\tilde M_1$) contains the one–photon exchange only with the upper (lower) nucleus. In the last amplitude $M_2$ we have no one–photon exchange. According to this classification we write the total cross section as $$\sigma = \sigma_{\mathrm{Born}} +\sigma_1 +\tilde\sigma_1 + \sigma_2
\label{7}$$ where $$\begin{aligned}
&&d\sigma_{\mathrm{Born}} \propto |M_{\mathrm{Born}}|^2\,,
\nonumber \\
&&d\sigma_1 \propto 2\, \mathrm{Re}(M_{\mathrm{Born}} M_1^*) +|M_1|^2 \,,
\nonumber \\
&&d\tilde\sigma_1 \propto 2\, \mathrm{Re}(M_{\mathrm{Born}} \tilde M_1^*) +|\tilde M_1|^2 \,,
\nonumber \\
&&d\sigma_2 \propto 2\, \mathrm{Re}\left( M_{\mathrm{Born}} M_2^* +
M_1\tilde M_1^* +M_1M_2^* \right.
\nonumber \\
&& \left. \hspace{1cm}+\tilde M_1M_2^* \right) + |M_2|^2 \,.
\nonumber\end{aligned}$$
It is not difficult to show that the ratio $\sigma_i /
\sigma_{\mathrm{Born}}$ is a function of $(Z\alpha)^2$ only, not of $Z \alpha$ itself. Additionally we estimate the leading logarithms appearing in the cross sections $\sigma_i$. The integration over the transferred momenta squared $q_1^2$ and $q_2^2$ results in two large Weizsäcker–Williams (WW) logarithms $\sim L^2$ for $\sigma_{\mathrm{Born}}$ and in one large WW logarithm $\sim L$ for $\sigma_1$ and $\tilde\sigma_1$. The cross section $\sigma_2$ contains no large WW logarithm. Therefore, the relative contributions of the cross sections $\sigma_i$ are $\sigma_1 / \sigma_{\mathrm{Born}}
=\tilde\sigma_1 / \sigma_{\mathrm{Born}} \sim (Z\alpha)^2 /L$ and ${\sigma_2 / \sigma_{\mathrm{Born}}} \sim (Z\alpha)^2 /L^2 \, <
0.4\,\%$ for the colliders (\[1\]). As a result, with an accuracy of the order of $1\,\%$ we can neglect $\sigma_2$ in the total cross section and use the equation $$\sigma = \sigma_{\mathrm{Born}} +\sigma_1 +\tilde\sigma_1\,.
\label{9}$$ With that accuracy it is sufficient to calculate $\sigma_1$ and $\tilde\sigma
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: |
This paper is a contribution to the study of the subgroup structure of exceptional algebraic groups over algebraically closed fields of arbitrary characteristic. Following Serre, a closed subgroup of a semisimple algebraic group $G$ is called irreducible if it lies in no proper parabolic subgroup of $G$. In this paper we complete the classification of irreducible connected subgroups of exceptional algebraic groups, providing an explicit set of representatives for the conjugacy classes of such subgroups. Many consequences of this classification are also given. These include results concerning the representations of such subgroups on various $G$-modules: for example, the conjugacy classes of irreducible connected subgroups are determined by their composition factors on the adjoint module of $G$, with one exception.
A result of Liebeck and Testerman shows that each irreducible connected subgroup $X$ of $G$ has only finitely many overgroups and hence the overgroups of $X$ form a lattice. We provide tables that give representatives of each conjugacy class of connected overgroups within this lattice structure. We use this to prove results concerning the subgroup structure of $G$: for example, when the characteristic is $2$, there exists a maximal connected subgroup of $G$ containing a conjugate of every irreducible subgroup $A_1$ of $G$.
address: 'School of Mathematics, University of Bristol, Bristol, BS8 1TW, UK, and Heilbronn Institute for Mathematical Research, Bristol, UK'
author:
- 'Adam R. Thomas'
bibliography:
- 'biblio.bib'
title: The Irreducible Subgroups of Exceptional Algebraic Groups
---
[^1]
[^1]: The author is indebted to Prof. M. Liebeck for his help in producing this paper. He would also like to thank Dr A. Litterick and Dr T. Burness for their comments on previous versions of this paper. Finally, the author would like to thank the anonymous referee for their careful reading of this paper and many insightful comments and corrections.
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'In this work the diffusion in the quenched trap model with diverging mean waiting times is examined. The approach of randomly stopped time is extensively applied in order to obtain an asymptotically exact representation of the disorder-averaged positional probability density function. We establish that the dimensionality and the geometric properties of the lattice, on top of which the disorder is imposed, dictate the plausibility of a mean-field approximation that includes only annealed disorder. Specifically, in any case when the probability to return to the origin ($Q_0$) is less than $1$, i.e. the transient case, the quenched trap model can be mapped onto the continuous time random walk. The explicit form of the mapping is provided. In the case when an external force is applied to a tracer particle in a medium described by the quenched trap model, the response to such a force is calculated and a non-linear response for sufficiently low dimensionality is observed.'
author:
- Stanislav Burov
bibliography:
- './quenchedLiterature.bib'
title: The Transient Case of The Quenched Trap Model
---
Introduction
============
Brownian motion is probably the simplest manifestation of transport in a random environment. In this case the particle path is constantly modified by collisions with the molecules that compose the surrounding medium. The trajectory appears as if the direction of motion changes randomly as a function of time, and a simple random walk (RW) is quite useful to describe the motion. The continuum representation of an RW is regular diffusion [@Weiss]. When the motion of the particle occurs in a complex medium, the simple RW might be insufficient for a proper description of the transport. In many materials the basic linear time dependence of the mean squared displacement (MSD), $\langle x^2(t) \rangle$, is missing and instead $\langle x^2(t)\rangle\sim t^{\alpha} $ with $0<\alpha<1$. Such behavior is termed anomalous subdiffusion, and materials where it appears include living cells [@Metzler2011; @LiveCell; @Tabei; @Bariers], blinking quantum dots [@QuantumD], the plasma membrane [@Krapf2011], filamentous networks [@BurovPnas] and many more [@Sokolov2005]. The modeling of transport in these systems is quite complicated when compared to the original RW. In the works of Scher and Montroll [@ScherMontroll] the continuous time random walk (CTRW) approach for transport in amorphous materials was developed. The idea behind the CTRW is the existence of regions of local arrest, i.e. traps, where the traced particle waits for some random time before it continues its motion inside the medium. When the expected random waiting times diverge, the behavior is non-ergodic [@Bel2005; @YongHe] and the CTRW produces the mentioned subdiffusive scaling of the MSD. While the CTRW became extremely popular and widely applicable [@Bouchaud; @Klafter; @Kutner2017], this approach treats the disorder in the medium as annealed and uncorrelated. Quenched disorder is more physically appealing in many situations, but it implies the existence of strong correlations that in turn introduce significant difficulties in calculating basic properties of the transport [@Kehr]. When the local dwell times of the CTRW are fixed, the model is known as the quenched trap model (QTM).
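To illustrate the subdiffusive scaling just mentioned, the minimal Python sketch below (our addition, not part of the original text) simulates an unbiased CTRW with heavy-tailed waiting times whose survival probability decays as $s^{-\alpha}$; averaging the squared displacement at a fixed long time $t$ then reproduces $\langle x^2(t)\rangle\sim t^{\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, t_max, n_walkers = 0.5, 1.0e6, 2000   # exponent, time, realizations

msd = 0.0
for _ in range(n_walkers):
    t, x = 0.0, 0
    while True:
        # Pareto-type waiting time: P(tau > s) = s**(-alpha) for s >= 1.
        tau = (1.0 - rng.random()) ** (-1.0 / alpha)
        if t + tau > t_max:
            break                      # the particle stays trapped until t_max
        t += tau
        x += rng.choice((-1, 1))       # unbiased nearest-neighbour jump
    msd += x * x
print(f"MSD at t = {t_max:.0e}: {msd / n_walkers:.1f}")  # grows ~ t_max**alpha
```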
The QTM was found to be an important model that describes glassy behavior such as aging, weak ergodicity breaking and non-self-averaging [@BouchaudAg; @Monthus1996; @Rinn2000; @Rinn2001; @Bertin; @Burov2007]. Beyond the applications of the QTM, the difficulty of untangling the behavior dictated by the quenched disorder has established this model, and methods for its solution, as a fundamental problem of anomalous transport [@Bouchaud]. The presence of the mentioned correlations, imposed by the quenched disorder, makes the treatment of the QTM a highly non-trivial task. Over the years many theoretical methods have been devised to advance the general understanding of the QTM. The method of semi-equilibration [@Derrida] made it possible to determine the average velocity and the diffusion constant in the one-dimensional ($d=1$) case for non-anomalous transport. The description of the QTM in terms of master equations and their representation in Fourier space produced the scaling behavior of the QTM propagator at the origin [@Bernasconi; @Alexander]. A renormalization group approach [@Machta] and scaling arguments [@Bouchaud1987] established the existence of a critical dimension, $d=2$, for the QTM and the scaling behavior of the MSD. These works led to the qualitative understanding that for sufficiently high dimension ($d>2$) the behavior of the QTM can be mapped onto the mean-field representation, i.e. the CTRW. Further, the behavior of the QTM was studied for various lattices under the simplification of a directed walk, i.e. without returns to previously visited traps [@Aslangul]. The decimation of disorder allowed Monthus to calculate (among other quantities) the behavior of the positional probability density function (PDF) in the $d=1$ case in the limit of very low temperatures [@Monthus; @MonthusSec]. A rigorous probabilistic approach to the QTM led to mathematically exact scaling theorems [@BenArous1; @BenArous2] and to further generalizations of the QTM to models such as the randomly trapped random walk [@BenArous3; @Cerny01]. The effect of fractal structures on the QTM [@Akimoto2015] and the behavior of the QTM under the influence of a bias [@Akimoto2019] are part of current research.
The previously obtained results suggest that for any dimension $d>2$ the behavior of the QTM converges to that of the CTRW. A simple hand-waving argument that supports this qualitative result is that in sufficiently high dimensions the traced particle rarely returns to the same lattice point, thus reducing the effect of the strong correlations imposed by the quenched disorder. P[ó]{}lya’s theorem [@Weiss] states that the probability to return to the origin (or to any previously occupied position) is less than $1$ for any dimension above $d=2$. A valid question is: what is the quantitative form of the mapping between the QTM and the CTRW? Can one extend this mapping to cases where the dimensionality is low but the hand-waving argument raised above still holds, i.e. the biased case? In this manuscript we provide an explicit form of the mapping between the QTM and the CTRW for any transient case in any dimension. By using the randomly stopped time approach, originally developed for the $d=1$ case [@Burov1; @Burov2], we manage to obtain a subordination of the spatial process to the temporal $\alpha$-stable process. Unlike the CTRW, where the subordinated spatial process advances as a function of the number of jumps [@Bouchaud; @Fogedby; @Barkai], for the QTM the local time of the spatial process is quite different. A brief summary of part of our results was published in Ref. [@Burov2017].
This paper is organized as follows. In Sec. \[section\_def\] the QTM is defined together with the local time, the measurement time and the subordination approach. In Sec. \[salphaSec\] the local time $S_\alpha$ is explored; its mean value is computed in Sec. \[meansalpha\] and its second moment in Sec. \[secondsalpha\]. In Sec. \[deltafunction\] we summarize the results of the first- and second-moment calculations and show that the local time converges to the number of jumps that the process has performed. In Section \[doublesubordination\] the previously established convergence of the local time is exploited in order to establish an explicit mapping between the CTRW and the QTM, by means of double subordination. The formulas are applied to the one-dimensional case of the biased QTM. In Sec. \[nonlinresp\] we obtain analytic expressions for the moments of the transient case of the QTM and show how the quenched disorder gives rise to a non-linear response to an externally applied field. A summary is provided in Sec. \[summary\]. Several appendices supply specific technical calculations and are referred to in the manuscript.
The Quenched Trap Model and Subordination {#section_def}
=========================================
The QTM is defined as a random jump process of a particle on top of a lattice of dimension $d$. For every lattice point ${\bf x}$ a quenched random variable $\tau_{\bf x}$ is defined. This quenched variable $\tau_{\bf x}$ defines the time that the particle is going to spend at ${\bf x}$ before jumping to some other site ${\bf x}'$, i.e. $\tau_{\bf x}$ is the local dwell time. The probability to jump from ${\bf x}$ to ${\bf x}'$ is provided by $p({\bf x}',{\bf x})$. In the following we will assume translational invariance of the lattice, which leads to $p({\bf x}',{\bf x})$ of the form $p({\bf x}'-{\bf x})$. The quenched dwell times $\{\tau_{\bf x}\}$ are real, positive and independently distributed random variables with $$\psi(\tau_{\bf x})\sim\tau^{-(1+\alpha)}A\big/|\Gamma(-\alpha)|\qquad \left(\tau_{\bf x}\
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'The quantum critical dynamics of quantum phase transitions is considered. In the framework of a unified theory based on the Keldysh technique, we consider the crossover from the classical to the quantum description of the dynamics of a boson many-particle system close to a second-order quantum phase transition. It is shown that in this case the upper critical space dimension of this model is $d_c^{+}=2$; therefore, the quantum critical dynamics approach is useful in the case of $d<2$. In the one-dimensional system the phase coherence time diverges at the quantum critical point, $g_c$, and has the form $\tau \propto -\ln |g-g_c|/(g-g_c) $, while the correlation radius diverges as $r_c\propto |g-g_c|^{-\nu } (\nu = 0.6)$.'
author:
- 'Vasin M.G.'
title: 'Quantum critical dynamics of the boson system in the Ginsburg-Landau model'
---
Introduction
============
We consider the dissipative critical dynamics of quantum phase transitions (QPT) taking place in a system of coupled anharmonic oscillators with a one-component order parameter ($n=1$) corresponding, for example, to the Ising magnet [@Sachdev].
It is well known that at $T=0$, in the regime of quantum fluctuations (zero-point fluctuations), an ordering phase transition is possible in these systems [@Kagan]. In addition, it is believed that the critical exponents of this phase transition are determined with the help of a simple rule: the exponents of the phase transition in a $d$-dimensional system at $T=0$ are the same as at $T\neq 0$ but in a system whose dimension is larger by one unit: $d_{eff}=d+z=d+1$ [@Pankov]. Hence one can conclude that the upper critical space dimension of the considered system is $d^+_{cr}=3$. Let us call this the quantum mechanical (QM) approach. However, when describing the dynamics of a statistical ensemble of coupled oscillators ($N\to \infty$) one needs to take into account the dissipation effect [@Weiss]. In the case of $\hbar \omega \gg kT$ this leads to a change of the critical exponents and of the universality class of the phase transition in the one-dimensional system: $z\thickapprox 2$, $d^+_{cr}=2$ [@HH].
In this paper we describe the critical dynamics of the Ising magnet system close to the QPT using the Keldysh technique [@K]. This approach was developed for the description of the dynamics of non-equilibrium quantum systems. Therefore, we expect that it will allow us to describe the crossover from the critical dynamics to the quantum critical dynamics (QCD) using a uniform technique. We also believe that it will help us to outline the borders of applicability of the QM and QCD approaches to the QPT description.
Crossover from the critical dynamics to the quantum critical dynamics in the Keldysh technique
==============================================================================================
Let us consider the quantum critical dynamics of the Ginsburg-Landau model in terms of the Keldysh technique. The Lagrangian of this model has the following form: $$\begin{gathered}
\mathcal{L}\thickapprox(\vec\partial \phi )^2+\mu(g)\phi^2+v(g)\phi^4.
\label{I}\end{gathered}$$ where $\phi $ is the scalar order parameter field, which obeys Bose statistics. We suppose $\mu $ and $v$ to depend on some external parameter $g$ that controls the state of the system.
It is convenient to describe the non-equilibrium dynamics of quantum systems in terms of the Keldysh technique. Since we assume the uniform description of both quantum ($T\to 0$) and classical ($T\gg 0$) systems, we consider the system interacting with a heat bath at the temperature $T$.
According to the Keldysh approach to the description of the non-equilibrium dynamics of the system, one should write the generating functional in the form $$\begin{gathered}
W=\int \mathfrak{D}\vec\phi \exp\left\{i\int d^{d+1}x \mathcal{L}(\phi_{cl},\,\phi_q;\,g_{cl},\,g_q)\right\},\end{gathered}$$ where $\vec \phi=\{\phi_q,\,\phi_{cl}\}$, $\phi_{cl}$ and $\phi_q $ are the “classical” and “quantum” parts of the order parameter accordingly, $g_{cl}$ and $g_q$ are the sources of these fields, and $\mathcal{L}$ is the fields lagrangian density. Below it will be more convenient to move from the Minkowski space to the Euclidean one by the Wick rotation, $t= -ix_4 $. Then $$\begin{gathered}
W=\int \mathfrak{D}\vec\phi \exp\left\{-\int d^{d}kd\omega \mathcal{L}(\phi_{cl},\,\phi_q;\,g_{cl},\,g_q)\right\}.\end{gathered}$$ Note that in this case every contact of the system with any environment, including external noise, is described as an interaction with the heat bath, while the “internal (quantum) noise” is included directly in the description. In this case, according to [@K], one can write the Keldysh Lagrangian in the form $$\begin{gathered}
\mathcal{L}=\mathcal{L}_{free}+\mathcal{L}_{int}+\mathcal{L}_{noise},\end{gathered}$$ where $$\begin{gathered}
\mathcal{L}_{free}=\phi_q\left( \varepsilon_k-i\gamma \omega\right)\phi_{cl}+\phi_{cl}\left( \varepsilon_k+i\gamma \omega\right)\phi_q,\\[10pt]
\mathcal{L}_{int}= -U(\phi_{cl}+\phi_q,\,g_{cl}+g_q)+U(\phi_{cl}-\phi_q,\,g_{cl}-g_q),\\
\mathcal{L}_{noise}=\phi_q \left(2\gamma \omega \coth {\frac {\displaystyle \omega }{\displaystyle T}}\right)\phi_q,\end{gathered}$$ $\varepsilon_k=k^2+\mu(g)$, and $U(\phi)$ is the interaction part.
According to the Keldysh approach to the description of non-equilibrium dynamics, one can write an expression for the retarded, advanced and Keldysh parts of the Green function (matrix) in the form $$\begin{gathered}
G^K=G^R\circ F-F\circ G^A ,\end{gathered}$$ where $F$ is the Hermitian matrix ($F=F^{\dag}$), and the circular multiplication sign implies integration over the intermediate time (matrix multiplication) [@K]. One can check that $$\begin{gathered}
[G^{-1}]^K=[G^R]^{-1}\circ F-F\circ [G^A]^{-1} .\end{gathered}$$ After the Wigner transform (WT) in the frequency representation we come to $$\begin{gathered}
G^K=f(\omega )(G^R-G^A) ,\\
[G^{-1}]^K=f(\omega)\left([G^R]^{-1}-[G^A]^{-1}\right),\end{gathered}$$ where $f(\omega )$ is the distribution function. For a boson system in thermal equilibrium $f=-i\coth (\omega/T)$, where $T$ is the temperature of the heat bath [@K]. This is the fluctuation-dissipation theorem (FDT), which, as shown later, takes a different form in the classical and quantum limits.
If we consider the system with dissipation, then $$\begin{gathered}
[G^R]^{-1}=\varepsilon_k +i\gamma\omega ,\quad [G^A]^{-1}=\varepsilon_k -i\gamma\omega ,\\
[G^{-1}]^K=2\gamma\omega \coth (\omega/T),
\label{a3}\end{gathered}$$ where $\gamma $ is the kinetic coefficient. In the quantum case $T\ll \omega $ (see Fig.\[f1\]) $$\begin{gathered}
\coth (\omega/T)\to \mbox{sign}(\omega ) \quad \Rightarrow \quad [G^{-1}]^K=2\gamma |\omega |.\end{gathered}$$ The FDT has the following form: $ G^K=i\,\mbox{sign}(\omega )(G^R-G^A)$. In the classical case $T\gg \omega$ (see Fig.\[f1\]) $$\begin{gathered}
\coth (\omega/T)\to {\frac {\displaystyle T}{\displaystyle \omega }} \quad \Rightarrow \quad [G^{-1}]^K=2\gamma T,\end{gathered}$$ and the system satisfies the usual classical form of the FDT: $G^K=T(G^R-G^A)/i\omega $.
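A quick numerical illustration of this crossover (our addition, not part of the original text): for $T\ll\omega$ the exact $\coth(\omega/T)$ saturates at the quantum value $\mbox{sign}(\omega)=1$, while for $T\gg\omega$ it approaches the classical value $T/\omega$.

```python
import numpy as np

omega = 4.0  # same value as in Fig. [f1]
for T in (0.2, 1.0, 40.0, 400.0):
    exact = 1.0 / np.tanh(omega / T)          # coth(omega/T)
    print(f"T = {T:6.1f}: coth = {exact:8.3f}, "
          f"quantum limit = 1.000, classical limit T/omega = {T / omega:7.3f}")
```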
![The red line is the graph of $\coth (\omega/T)$ as a function of $T$ (with $\omega =4$); the green line is the $T/\omega $ function. At high temperatures these graphs coincide, which corresponds
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'A simple variant of a realistic flavour symmetry scheme for fermion masses and mixings provides a possible interpretation of the diphoton anomaly as an electroweak singlet “flavon”. The TeV-scale vector-like T-quarks required to provide adequate values for the CKM parameters can also naturally account for the diphoton anomaly. Correlations of $V_{ub}$ and $V_{cb}$ with the vector-like T-quark mass can be predicted. Should the diphoton anomaly survive in a future run, our proposed interpretation can also be tested in upcoming B-physics and LHC studies.'
author:
- Cesar Bonilla
- Miguel Nebot
- Rahul Srivastava
- ' José W. F. Valle'
title: A flavour physics scenario for the $750$ GeV diphoton anomaly
---
The ATLAS [@atlas750] and CMS [@cms750] collaborations have presented the first results obtained from proton collisions at the LHC at 13 TeV center-of-mass energy. The ATLAS collaboration sees a bump in the invariant mass distribution of diphoton events at 750 GeV, with a 3.9 sigma significance, while CMS sees a 2.6 sigma excess at roughly the same value. Taking these hints at face value, we suggest a possible theoretical framework to interpret these findings. We propose that the new particle is a singlet scalar boson carrying a flavour quantum number. Our proposed framework accounts for three important aspects of the flavour puzzle:
- the observed value of the Cabibbo angle arises mainly from the down-type quark sector through the Gatto-Sartori-Tonin relation [@Gatto:1968ss];
- the observed pattern of neutrino oscillations [@Forero:2014bxa] is reproduced in a restricted parameter range [@Morisi:2013eca];
- the observed masses of the “down-type” fermions are well described by the generalized b-tau unification formula [@Morisi:2011pt; @Morisi:2013eca; @King:2013hj; @Bonilla:2014xla] $$\label{eq:massrelation}
\frac{m_{\tau}}{\sqrt{m_{e}m_{\mu}}}\approx \frac{m_{b}}{\sqrt{m_{s}m_{d}}},$$ predicted by the flavour symmetry of the model.
There are in principle several possible realizations of the 750 GeV anomaly as a flavon [@Morisi:2012fg; @King:2014nza]: a flavour-carrying singlet scalar. Our main idea is to obtain a scheme where the CERN anomaly may also be probed in the flavour sector. For this purpose we consider a simple variant of that proposed in [@Morisi:2013eca] in order to address the points above. Phenomenological consistency of the model requires the presence of vector-like fermions in order to account for the observations in the quark sector. Their presence can naturally account for a production cross section of the scalar anomaly through gluon–gluon fusion similar to that indicated by ATLAS and CMS [@Morisi:2013eca] [^1]
Here we investigate the allowed parameter space of our scheme, which provides an adequate joint description of the CKM physics describing the B sector and of the recent CERN diphoton data, illustrating how the two aspects are interrelated. For definiteness and simplicity, here we focus on a nonsupersymmetric version of the model discussed in [@Morisi:2013eca]. The charge assignments of the fields are as shown in Table \[tab1\].
Fields $L$ $E^c$ $Q$ $U^c$ $D^c$ $H^u$ $ H^d$ $T$ $T^c$ $\sigma$ $\sigma'$ $ \xi $
-------------------- ----- ------- ----- ------- ------- ------- -------- ---------- ------------ ------------ ------------ ----------
$\mathrm{SU(2)_L}$ $2$ $1$ $2$ $1$ $1$ $2$ $2$ $1$ $1$ $1$ $1$ $1$
$A_4$ $3$ $3$ $3$ $3$ $3$ $3$ $3$ $1$ $1$ $3$ $3$ $1$
$\mathrm{Z}_4$ $1$ $1$ $1$ $1$ $1$ $1$ $1$ $\omega$ $\omega^2$ $\omega^3$ $\omega^2$ $\omega$
: Matter content of the model, where $\omega^4=1$.[]{data-label="tab1"}
Here, $T, T^c$ are a pair of vector-like “quarks” transforming as $(3, 1, 4/3)$ and $(\bar{3}, 1, -4/3)$ under the Standard Model gauge group. The scalars $\sigma, \sigma'$ are singlets under the Standard Model gauge group but transform as $A_4$ triplets and carry $\mathrm{Z}_4$ charge. The scalar $\xi$ is also a singlet under the Standard Model as well as under the $A_4$ symmetry, but transforms as $\omega$ under the $\mathrm{Z}_4$ symmetry. In addition to the above charges, the scalars and fermions also carry an additional $\mathrm{Z}_2$ charge such that the scalar $H^u$ only couples to the up-type quarks, while $H^d$ only couples to the down-type quarks and charged leptons (this $\mathrm{Z}_2$ symmetry would not be needed if supersymmetry were assumed). The invariant Yukawa Lagrangian of the model is given by $$\begin{aligned}
\mathcal{L}_f & = & y^u_{ijk} Q_i H^u_j U^c_k + y^d_{ijk} Q_i H^d_j D^c_k +
y^l_{ijk} L_i H^d_j E^c_k \nonumber \\
& + & X' T U^c_i \sigma_i + \frac{Y'}{\Lambda} Q_i (H^u \cdot \sigma')_i T^c +
y_T T T^c \xi
\label{yuk}\end{aligned}$$ where we take all couplings $y_T$, $y^a_{ijk}$ and $X'$, $Y'$ as real for simplicity; $a = u,d,l$ and $i,j,k = 1,2,3$.
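As a quick consistency check that we add here (not in the original), each of the new terms in Eq. (\[yuk\]) is indeed $\mathrm{Z}_4$-invariant given the charges of Table \[tab1\]: $$T U^c_i \sigma_i:\ \omega\cdot 1\cdot\omega^3=\omega^4=1\,,\qquad Q_i (H^u \cdot \sigma')_i T^c:\ 1\cdot 1\cdot\omega^2\cdot\omega^2=1\,,\qquad T T^c \xi:\ \omega\cdot\omega^2\cdot\omega=1\,.$$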
Following Ref. [@Morisi:2013eca], after electroweak symmetry breaking and requiring a certain hierarchy in the flavon vevs, $\vev{H^{u,d}} = (v^{u,d},\varepsilon^{u,d}_1,\varepsilon^{u,d}_2)$ where $\varepsilon_{1,2}^u \ll v^u$ and $\varepsilon_{1,2}^d\ll v^d$, one gets the mass relation between the down-type quarks and charged leptons given by Eq. (\[eq:massrelation\]). The up-type quark sector gets modified due to the presence of the vector-like quarks, so that the full up-type quark mass matrix is $4 \times 4$ and given by $$\begin{aligned}
\label{Mu}
M_{u} = \left( \begin{array}{cccc}
0 & a^u \alpha^u & b^u & Y'_1 \\
b^u \alpha^u & 0 & a^u r^u & Y'_2 \\
a^u & b^u r^u & 0 & Y'_3 \\
X'_1 & X'_2 & X'_3 & M'_T
\\
\end{array}\right) \end{aligned}$$ where $a^u = y_1^u \varepsilon_1^u $, $b^u = y_2^u \varepsilon_1^u$; $y_{1,2}^u$ being the only two possible Yukawa couplings arising from the $A_4$-tensor in Eq. (\[yuk\]). Also, $r^u = v^u/\varepsilon_1^u$ and $\alpha^u = \varepsilon_2^u / \varepsilon_1^u$. Moreover, $X'_i =
X' \vev{\sigma_i}$, $Y'_i= Y' \vev{(H^u\cdot \sigma')_i}/\Lambda$ and $M'_T = y_T \vev{\xi}$. The mass matrix in Eq. \[Mu\] is the same as that obtained in [@Morisi:2013eca], where the detailed treatment of the Yukawa sector is given. Notice that the addition of the vector-like quarks only changes the up-sector mass matrix, while the down-sector mass matrix remains unchanged; thus the relation in Eq. (\[eq:massrelation\]) also remains valid, see [@Morisi:2013eca] for further details.
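To make the effect of the vector-like state concrete, the minimal numerical sketch below (with purely illustrative parameter values of our own choosing, not fit values from the paper) obtains the physical up-type masses as the singular values of $M_u$; in the full analysis the associated left-handed rotation feeds into the CKM matrix.

```python
import numpy as np

# Illustrative (hypothetical) inputs in GeV; not fit values from the paper.
au, bu, ru, alpha_u = 0.002, 0.005, 80.0, 0.5
Xp = [0.3, 0.4, 0.5]        # X'_i = X' <sigma_i>
Yp = [0.01, 0.02, 0.05]     # Y'_i = Y' <(H^u . sigma')_i> / Lambda
MT = 1200.0                 # M'_T = y_T <xi>

Mu = np.array([
    [0.0,          au * alpha_u, bu,      Yp[0]],
    [bu * alpha_u, 0.0,          au * ru, Yp[1]],
    [au,           bu * ru,      0.0,     Yp[2]],
    [Xp[0],        Xp[1],        Xp[2],   MT],
])

# The singular values of M_u are the physical up-type masses (u, c, t, T).
print("mass eigenvalues [GeV]:", np.sort(np.linalg.svd(Mu, compute_uv=False)))
```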
The scalar sector of the model consists of $\mathrm{SU(2)_L}$ doublet scalars $H^u, H^d$ both transforming as triplets under the $A_4$ symmetry. In addition it contains
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'Using angle-resolved photoemission spectroscopy, we report electronic structure for representative members of ternary topological insulators. We show that several members of this family, such as Bi$_2$Se$_2$Te, Bi$_2$Te$_2$Se, and GeBi$_2$Te$_4$, exhibit a singly degenerate Dirac-like surface state, while Bi$_2$Se$_2$S is a fully gapped insulator with no measurable surface state. One of these compounds, Bi$_2$Se$_2$Te, shows tunable surface state dispersion upon its electronic alloying with Sb (Sb$_x$Bi$_{2-x}$Se$_2$Te series). Other members of the ternary family such as GeBi$_2$Te$_4$ and BiTe$_{1.5}$S$_{1.5}$ show an in-gap surface Dirac point, the former of which has been predicted to show nonzero weak topological invariants such as (1;111); thus belonging to a different topological class than BiTe$_{1.5}$S$_{1.5}$. The measured band structure presented here will be a valuable guide for interpreting transport, thermoelectric, and thermopower measurements on these compounds. The unique surface band topology observed in these compounds contributes towards identifying designer materials with desired flexibility needed for thermoelectric and spintronic device fabrication.'
author:
- 'M. Neupane'
- 'S.-Y. Xu'
- 'L. A. Wray'
- 'A. Petersen'
- 'R. Shankar'
- 'N. Alidoust'
- Chang Liu
- 'A. Fedorov'
- 'H. Ji'
- 'J. M. Allred'
- 'Y. S. Hor'
- 'T.-R. Chang'
- 'H.-T. Jeng'
- 'H. Lin'
- 'A. Bansil'
- 'R. J. Cava'
- 'M. Z. Hasan'
title: Topological surface states and Dirac point tuning in ternary topological insulators
---
Introduction
============
A topological insulator (TI), as experimentally realized in bismuth-based materials, is a novel electronic state of quantum matter characterized by a bulk-insulating band gap and spin-polarized metallic surface states. [@Kane; @PRL; @David; @Nature08; @Hasan; @SCZhang; @Suyang_1; @Ran; @Nature; @physics; @David; @Science; @BiSb; @Matthew; @Nature; @physics; @BiSe; @Chen; @Science; @BiTe; @David; @Nature; @tunable; @Pedram; @Nature; @BiSb; @Hor; @PRB; @BiSe; @Essin; @PRL; @Magnetic; @Galvanic; @effect; @Yu; @Science; @QAH; @Qi; @Science; @Monopole; @Linder; @PRL; @Superconductivity; @Liang; @Fu; @PRL; @Superconductivity; @Phuan; @Hor; @arXiv; @BiTe; @superconducting] Owing to time reversal symmetry, topological surface states are protected from backscattering and localization in the presence of weak perturbation, resulting in spin currents with reduced dissipation. On the other hand, bismuth-based materials are also being studied for enhanced thermoelectric device performance. [@Moore] Therefore, it is of general importance to study the band structure of these materials as a starting point. Using angle-resolved photoemission spectroscopy (ARPES) and spin-resolved ARPES, several Bi-based topological insulators have been identified, such as the Bi$_{1-x}$Sb$_x$ alloys [@David; @Nature08; @David; @Science; @BiSb], the Bi$_2X_3$ ($X$ = Se, Te) series and their derivatives.[@Matthew; @Nature; @physics; @BiSe; @Chen; @Science; @BiTe] Although significant efforts have been made to realize multifunctional electronic properties in the existing materials, little success has been obtained so far due to residual bulk conduction.[@Phuan; @Hor; @arXiv; @BiTe; @superconducting; @Suyang] This led to the search for other topological materials, which might potentially be optimized for the realization of functional devices.
Recently, ternary topological insulators such as Bi$_2$Se$_2$Te, Bi$_2$Te$_2$Se, Bi$_2$Te$_2$S, GeBi$_2$Te$_4$, and PbBi$_4$Te$_{7}$ have been theoretically predicted to feature multifunctional and flexible electronic structures [@Suyang_1; @Wang_Johnson; @Lin]. However, only limited ARPES studies have been reported even on Bi$_2$Te$_2$Se to date [@Suyang; @Wang_Johnson; @Lin; @Sergey; @BTS_Ando; @Ong_BTS; @Arakane; @Kimura; @Souma]. In this paper, we investigate the electronic structure of four distinct and unique compounds, namely, Bi$_2$Se$_2$Te (Se-rich), Bi$_2$Te$_2$Se (Te-rich), Bi$_2X_{3-x}$S$_x$ ($X$ = Se, Te; $x$ = 1, 1.5), and GeBi$_2$Te$_4$, as representative members of the ternary family. Surface state properties relevant for the enhanced functionality are identified in these materials. First-principles band calculations are also presented for comparison with our experimental data.
Our experimental findings are itemized as follows. First, our data suggest that the ternary compound Bi$_2$Se$_2$Te (Se-rich) has a large effective bulk band gap. By tuning the ratio of bismuth to antimony, we are able not only to lower the Fermi level into the band gap but also to fine-tune the Fermi level so that it lies exactly at the Dirac point. Second, we show that the Dirac point of Bi$_2$Te$_2$Se (Te-rich) is not isolated from the bulk valence bands when the chemical potential is placed at the Dirac point. Third, we report band structure properties of sulfur-doped Bi$_2X_3$ \[Bi$_2X_{3-x}$S$_x$ ($X$ = Se, Te; $x$ = 1, 1.5)\] in some detail. The compound Bi$_2$Te$_{1.5}$S$_{1.5}$, derived from Bi$_2$Te$_3$ by replacing Te with S, shows a large bulk band gap and a single Dirac cone surface state, where the Dirac point is located inside the bulk band gap, in contrast to the related Bi$_2$Te$_3$, where the Dirac point is buried inside the bulk valence band. The details of the crystal growth of this compound are described in Ref. \[40\]. The replacement of Te by S is a critically important process to realize the exposed Dirac point electronic structure in the Te-rich sample. Finally, we discuss the electronic structure of GeBi$_2$Te$_4$, which serves as a single Dirac cone topological insulator belonging to a class with nonzero weak topological invariants. Despite its high Te content, this compound exhibits an in-gap Fermi level and an isolated Dirac node. This is likely due to the change of the global crystal potential associated with the Ge sub-lattice.
![(Color online) Crystal structure and topological surface states in ternary spin-orbit compounds: $B_2X_2X'$, $AB_2X_4$, $A_2B_2X_5$ and $AB_4X_7$ \[$A$ = Pb, Ge; $B$ = Bi, Sb; $X, X'$ = Se, Te\]. (a)-(d) crystal structure and calculated bulk and surface band structures for the (111) surface of $B_2X_2X'$, $AB_2X_4$, $A_2B_2X_5$ and $AB_4X_7$, respectively. The bulk band projections are represented by shaded areas.](Fig1){width="80.00000%"}
Methods
=======
The first-principles band calculations were performed with the linear augmented plane-wave (LAPW) method using the WIEN2K package[@wien2k] and the projected augmented wave method[@PAW] using the VASP package[@VASP] in the framework of density functional theory (DFT). The generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof[@PBE] was used to describe the exchange correlation potentials. Spin-orbit coupling (SOC) was included as a second variational step using a basis of scalar relativistic eigenfunctions. The surface electronic structure computation was performed with a symmetric slab of six quintuple layers; a vacuum region with thickness larger than 10 $\mathrm{\AA}$ was used.
Single crystalline samples of ternary topological insulators were grown using the Bridgman method, which is described elsewhere. [@Hor; @PRB; @BiSe; @BTS_Ando; @Jia] ARPES measurements for the low energy electronic structures were performed at the Synchrotron Radiation Center (SRC), Wisconsin, the Stanford Synchrotron Radiation Lightsource (SSRL
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
author:
- Dejan Lavbič and Marjan Krisper
title: Facilitating Ontology Development with Continuous Evaluation
---
> **Dejan Lavbič** and Marjan Krisper. 2010. **Facilitating Ontology Development with Continuous Evaluation**, [Informatica **(INFOR)**](https://www.mii.lt/informatica/), 21(4), pp. 533 - 552.
Abstract {#abstract .unnumbered}
========
In this paper we propose facilitating ontology development by constant evaluation of the steps in the process of ontology development. Existing methodologies for ontology development are complex and require technical knowledge that business users and developers don’t possess. By introducing an ontology completeness indicator, the developer is guided throughout the development process and constantly aided by recommendations to progress to the next step and to improve the quality of the ontology. In evaluating the ontology, several aspects are considered: description, partition, consistency, redundancy and anomaly. The applicability of the approach was demonstrated on the Financial Instruments and Trading Strategies (FITS) ontology, with a comparison to other approaches.
Keywords {#keywords .unnumbered}
========
Ontology development methodology, ontology evaluation, ontology completeness, rapid ontology development, semantic web
Introduction
============
The adoption of Semantic Web technologies is lower than expected and is mainly limited to the academic environment. We are still waiting for wide adoption in industry. We could seek reasons for this in the technologies themselves and also in the process of development, because the existence of verified approaches is a good indicator of maturity. As far as technologies are concerned, there are numerous options available for all aspects of Semantic Web applications: from languages for capturing the knowledge, persisting data and inferring new knowledge to querying for knowledge. In the methodological sense there is also a great variety of methodologies for ontology development available, as will be further discussed in section \[related-work\], but the simplicity of using approaches for ontology construction is another issue. Current approaches to ontology development are technically very demanding, require a long learning curve and are therefore inappropriate for developers with few technical skills and little technical knowledge. In the majority of existing approaches an additional role of a knowledge engineer is required to mediate between the actual knowledge that developers possess and the ontology engineers who encode the knowledge in one of the selected formalisms. The use of a business rules management approach [@smaizys_business_2009] seems an appropriate way to simplify the development and use of ontologies in business applications. Besides simplifying the process of ontology creation, we also have to focus on the very important aspect of ontology completeness. The problem of error-free ontologies has been discussed in [@fahad_ontological_2008; @porzel_task-based_2004] and several types of errors were identified - inconsistency, incompleteness, redundancy, design anomalies etc. All of these problems have to be addressed already in the development process, not only after the development has reached its final steps.
In this paper we propose a Rapid Ontology Development (ROD) approach, where ontology evaluation is performed during the whole lifecycle of the development. The idea is to enable developers to focus on the content rather than on the formalisms for encoding knowledge. The developer can therefore, based on recommendations, improve the ontology and eliminate errors or bad design. It is also very important that the ontology is error-free before it is applied. Thus we define the ROD model, which introduces detailed steps of ontology manipulation. The starting point was to improve existing approaches by simplifying the process, giving the developer support throughout the lifecycle with continuous evaluation, and not concluding with the developed ontology but enabling its use in various scenarios. By doing that we try to achieve two things:
- guide developer through the process of ontology construction and
- improve the quality of developed ontology.
The remainder of the paper is structured as follows. In the following section \[related-work\] the state of the art is presented, with a review of existing methodologies for ontology development and approaches for ontology evaluation. After highlighting some drawbacks of current approaches, section \[ROD\] presents the ROD approach. A short overview of the process and its stages is given, with the emphasis on the ontology completeness indicator. The details of ontology evaluation and of the ontology completeness indicator are given in section \[indicator\], where all the components that are evaluated (description, partition, redundancy and anomaly) are presented. In section \[evaluation\] an evaluation of and discussion about the proposed approach, based on the results obtained in the **Financial Instruments and Trading Strategies (FITS)** experiment, are presented. Finally, in section \[conclusion-and-future-work\] conclusions and future work are given.
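To fix ideas, the toy Python sketch below shows one way such a completeness indicator could aggregate per-aspect evaluation scores into a single value; the aspect names follow the paper, but the weights and the linear form are illustrative assumptions of ours, not the published ROD formula.

```python
# Toy completeness indicator: a weighted aggregate of per-aspect scores
# in [0, 1]. The weights below are illustrative assumptions only.
ASPECTS = {"description": 0.25, "partition": 0.20, "consistency": 0.15,
           "redundancy": 0.20, "anomaly": 0.20}

def completeness(scores):
    """Combine per-aspect evaluation scores into one indicator in [0, 1]."""
    return sum(w * scores.get(a, 0.0) for a, w in ASPECTS.items())

# Example: a draft ontology that is well described but still redundant.
print(completeness({"description": 0.9, "partition": 1.0, "consistency": 1.0,
                    "redundancy": 0.6, "anomaly": 0.8}))  # -> 0.855
```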
Related work
============
Review of related approaches
----------------------------
An ontology is a vocabulary that is used for describing and presenting a domain, together with the meaning of that vocabulary. The definition of an ontology can be highlighted from several aspects: as a taxonomy [@corcho_methodologies_2003; @sanjuan_text_2006; @veale_analogy-oriented_2006], i.e. knowledge with a minimal hierarchical structure; as a vocabulary [@bechhofer_thesaurus_2001; @miller_wordnet:_1995] with words and synonyms; as topic maps [@dong_hyo-xtm:_2004; @park_xml_2002] with support for traversing large amounts of data; as a conceptual model [@jovanovic_achieving_2005; @mylopoulos_information_1998] that emphasizes more complex knowledge; and as a logic theory [@corcho_methodologies_2003; @dzemyda_optimization_2009; @waterson_verifying_1999] with very complex and consistent knowledge.
Ontologies are used for various purposes such as natural language processing [@staab_system_1999], knowledge management [@davies_semantic_2006], information extraction [@wiederhold_mediators_1992], intelligent search engines [@heflin_searching_2000], digital libraries [@kesseler_schema_1996], business process modeling [@brambilla_software_2006; @ciuksys_reusing_2007; @magdalenic_dynamic_2009] etc. While the use of ontologies was primarily in the domain of academia, the situation is now improving with the advent of several methodologies for ontology manipulation. Existing methodologies for ontology development in general try to define the activities for ontology management, the activities for ontology development and the support activities. Several methodologies exist for ontology manipulation and will be briefly presented below. CommonKADS [@schreiber_knowledge_1999] is in fact not a methodology for ontology development, but is focused on knowledge management in information systems, with analysis, design and implementation of knowledge. CommonKADS puts an emphasis on the early stages of software development for knowledge management. Enterprise Ontology [@uschold_towards_1995] recommends three simple steps: definition of intention; capturing of concepts, their mutual relations and expressions based on concepts and relations; and persisting the ontology in one of the languages. This methodology is the groundwork for many other approaches and is also used in several ontology editors. METHONTOLOGY [@fernandez-lopez_building_1999] is a methodology for ontology creation from scratch or by reusing existing ontologies. The framework enables building ontologies at the conceptual level, and this approach is very close to prototyping. Another approach is TOVE [@uschold_ontologies:_1996], where the authors suggest using questionnaires that describe the questions to which the ontology should give answers. That can be very useful in environments where domain experts have very little expertise in knowledge modeling. Moreover, the authors of HCONE [@kotis_human_2003] present a decentralized approach to ontology development by introducing regions where the ontology is saved during its lifecycle. The OTK Methodology [@sure_methodology_2003] defines the steps in ontology development in detail and introduces two processes – the Knowledge Meta Process and the Knowledge Process. The steps are also supported by a tool. UPON [@nicola_building_2005] is an interesting methodology that is based on the Unified Software Development Process and is supported by the UML language, but it has not yet been fully tested. The latest proposal is DILIGENT [@davies_semantic_2006], which is focused on different approaches to distributed ontology development.
From the information systems development point of view there are several methodologies that share ideas similar to those found in ontology development. The Rapid Ontology Development (ROD) model presented in this paper follows examples mainly from blended, object-oriented, rapid development and people-oriented methodologies [@avison_information_2006]. Among blended methodologies, which are formed from (the best) parts of other methodologies, the most influential for our approach was Information Engineering [@martin_information_1981], which is viewed as a framework within which a variety of techniques are used to develop good quality information systems in an efficient way. Among object-oriented approaches there are two representatives – Object-Oriented Analysis (OOA; @booch_object_1993) and the Rational Unified Process (RUP; @jacobson_unified_1999). Especially OOA, with its five major activities (finding classes and objects, identifying structures, identifying subjects, defining attributes and defining services), had a profound effect on our research; we extended it with support for the design and implementation phases that are not included in OOA. The idea of rapid development methodologies is closely related to the ROD approach, which addresses rapid ontology development based on rapid development methodologies for information systems. James Martin’s RAD [@martin_rapid_1991] is based on well-known techniques and tools but adopts a prototyping approach and focuses on obtaining commitment from the business users. Another rapid approach is the Dynamic Systems Development Method (DSDM; @consortium_dsdm_2005), which has some similarities with Extreme Programming (XP; @beck_extreme_2004). XP attempts to support quicker development of software, particularly for small and medium-sized applications.
Comparing to techniques involved
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'The IceCube Neutrino Observatory with its 1-km$^3$ in-ice detector and the 1-km$^2$ surface detector (IceTop) constitutes a three-dimensional cosmic ray detector well suited for general cosmic ray physics. Various measurements of cosmic ray properties, such as energy spectra, mass composition and anisotropies, have been obtained from analyses of air showers at the surface and/or atmospheric muons in the ice.'
address: 'Humboldt-Universität zu Berlin and DESY'
author:
- 'H. Kolanoski (for the IceCube Collaboration)'
bibliography:
- 'ecrs\_pcr2\_hlt\_Kolanoski.bib'
title: Cosmic Ray Physics with the IceCube Observatory
---
Introduction
============
The IceCube Neutrino Observatory [@achterberg06; @Kolanoski_HLT_icrc2011] is a detector situated in the ice of the geographic South Pole at a depth of about 2000 m. The observatory is primarily designed to measure neutrinos from below, using the Earth as a filter to discriminate against the muon background induced by cosmic rays (neutrino results are reported elsewhere in these proceedings [@kappes_HLT_ecrs2012]). IceCube also includes an air shower array on the surface, called IceTop, extending IceCube’s capabilities for cosmic ray physics. Construction of the IceCube Neutrino Observatory was completed in December 2010.
IceCube can be regarded as a cubic-kilometer scale three-dimensional cosmic ray detector with the air showers (mainly the electromagnetic component) measured by the surface detector IceTop and the high energy muons and neutrinos measured in the ice. In particular the measurement of the electromagnetic component in IceTop in coincidence with the high energy muon bundle, originating from the first interactions in the atmosphere, has a strong sensitivity to composition. Here IceCube offers the unique possibility to clarify the cosmic ray composition and spectrum in the range between about 300 TeV and 1 EeV, including the ‘knee’ region and a possible transition from galactic to extra-galactic cosmic rays.
Detector
========
#### IceCube:
The main component of the IceCube Observatory is an array of 86 strings equipped with 5160 light detectors in a volume of 1 km$^3$ at a depth between 1450 m and 2450 m (Fig.\[fig:I3Array\]). The nominal IceCube string spacing is 125 m on a hexagonal grid. A part of the detector, called DeepCore, is more densely instrumented, resulting in a lower energy threshold.
![Left: The IceCube detector with its components DeepCore and IceTop in the final configuration (January 2011). In this paper we present data taken with the still incomplete detector. We will refer to the configuration as IC79/IT73, for example, meaning 79 strings in IceCube and 73 stations in IceTop. The final detector has the configuration IC86/IT81. Right: View of a cosmic ray event which hits IceTop and IceCube. The size of the colored spots is proportional to the signal in the DOMs, the colors encode the signal times, separately for IceCube and IceTop. []{data-label="fig:I3Array"}](I3Array_vector_Jan2011_modHK_red "fig:"){width="62.00000%"}![Left: The IceCube detector with its components DeepCore and IceTop in the final configuration (January 2011). In this paper we present data taken with the still incomplete detector. We will refer to the configuration as IC79/IT73, for example, meaning 79 strings in IceCube and 73 stations in IceTop. The final detector has the configuration IC86/IT81. Right: View of a cosmic ray event which hits IceTop and IceCube. The size of the colored spots is proportional to the signal in the DOMs, the colors encode the signal times, separately for IceCube and IceTop. []{data-label="fig:I3Array"}](BigEvent.pdf "fig:"){width="37.00000%"}
Each string, except those of DeepCore, is equipped with 60 light detectors, called ‘Digital Optical Modules’ (DOMs), each containing a $10''$ photomultiplier tube (PMT) to record the Cherenkov light of charged particles traversing the ice. In addition, a DOM houses complex electronic circuitry supplying signal digitisation, readout, triggering, calibration, data transfer and various control functions. The most important feature of the DOM electronics is the recording of the analog waveforms in $3.3{\,\mathrm{ns}}$ wide bins for a duration of $422{\,\mathrm{ns}}$. With a coarser binning, a ‘fast ADC’ extends the time range to 6.4$\mu$s.
#### IceTop:
The 1-km$^2$ IceTop air shower array [@ITDet-IceCube:2012nn] is located above IceCube at a height of 2835 m above sea level, corresponding to an atmospheric depth of about 680 g/cm$^2$. It consists of 162 ice Cherenkov tanks, placed at 81 stations mostly near the IceCube strings (Fig.\[fig:I3Array\]). In the center of the array, a denser station distribution forms an in-fill array with a lower energy threshold (about 100 TeV). Each station comprises two cylindrical tanks, 10 m apart, with an inner diameter of $1.82{\,\mathrm{m}}$ and filled with ice to a height of $90{\,\mathrm{cm}}$.
Each tank is equipped with two DOMs which are operated at different PMT gains to cover linearly a dynamic range of about $10^5$ with a sensitivity to a single photoelectron (the thresholds, however, are around 20 photoelectrons). DOMs, electronics and readout scheme are the same as for the in-ice detector.
Cosmic Ray spectrum {#sec:spectrum}
===================
![Energy spectrum between 1 and 100 PeV derived from four months of data taken in 2007 with the 26-station configuration of IceTop (see text). []{data-label="fig:IT26_spectrum-v2_2"}](IT26_spectrum-v2_2.pdf){width="100.00000%"}
![First evaluation of one year of data taken with the 73-station configuration of IceTop in 2010. The events were required to have more than 5 stations and zenith angles in the range $\cos\theta \geq 0.8$. The spectrum is shown for the two assumptions ‘pure proton’ and ‘pure iron’ for the primary composition. []{data-label="fig:IT73-spectrum-p-Fe"}](FullYear_cosZenith_above_08_3and_MoreStations.pdf){width="100.00000%"}
![Composition analysis (IC40/IT40 configuration) [@ITIC40-composition_Abbasi:2012]. Left: Simulated correlation between the energy loss of the muon bundles in the ice (K70) and the shower size at the surface (S125) for proton and iron showers. The shading indicates the percentage of protons over the sum of protons and iron in a bin. The lines of constant primary energy are labeled with the logarithms of the energies. Right: IceCube result for the average logarithmic mass of primary cosmic rays compared to other measurements (references in [@ITIC40-composition_Abbasi:2012]). []{data-label="fig:composition_ITIC40"}](pretty_plot_berries_ICRC_zaxis_v2-eps-converted-to.pdf "fig:"){width="43.00000%"} ![Composition analysis (IC40/IT40 configuration) [@ITIC40-composition_Abbasi:2012]. Left: Simulated correlation between the energy loss of the muon bundles in the ice (K70) and the shower size at the surface (S125) for proton and iron showers. The shading indicates the percentage of protons over the sum of protons and iron in a bin. The lines of constant primary energy are labeled with the logarithms of the energies. Right: IceCube result for the average logarithmic mass of primary cosmic rays compared to other measurements (references in [@ITIC40-composition_Abbasi:2012]). []{data-label="fig:composition_ITIC40"}](compositionplot2-eps-converted-to.pdf "fig:"){width="54.00000%"}
Figure \[fig:IT26\_spectrum-v2\_2\] shows the energy spectrum from 1 to 100 PeV [@IT26-spectrum_Abbasi:2012wn] determined from 4 months of data taken in the IT26 configuration in 2007. The relation between the measured shower size and the primary energy is mass dependent. Good agreement of the spectra in three zenith angle ranges was found for the assumption of pure proton and for a simple two-component model (see [@IT26-spectrum_Abbasi:2012wn]). For zenith angles below 30[$^{\circ}$]{}, where the mass dependence is smallest, the knee in the cosmic ray energy spectrum was observed at about 4.3 PeV, with the largest uncertainty coming from the composition dependence (+0.38 PeV and -1.1 PeV). The spectral index changes from 2.76 below the knee to 3.11 above the knee. There is an indication of a flattening of the spectrum above
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'As part of a study of BSM corrections to leptonic decays of the $B_c$ meson, Tran et al. [@Tran:2018kuv] use the covariant confining quark model (CCQM) to estimate the matrix element of the pseudo-scalar current between the vacuum and the $B_c$ meson. We note that this matrix element can be determined using existing lattice QCD results.'
author:
- 'C. T. H. Davies'
- 'C. McNeile'
title: 'Comment on Implications of new physics in the decays $B_c \to (J/\psi,\eta_c)\tau\nu$'
---
Introduction
============
The paper by Tran et al. [@Tran:2018kuv] discusses Beyond the Standard Model (BSM) contributions to leptonic and semi-leptonic decays of the $B_c$ meson. This is a very topical calculation because of the tantalizing hints of lepton universality violations in various $B$ and $B_c$ meson decays found by the LHCb collaboration [@Aaij:2017tyk]. To quantify the constraints from these analyses it is important to have reliable values for the operator matrix elements involved, with quantified uncertainties.
The pseudo-scalar matrix element {#sec:psme}
================================
Ref. [@Tran:2018kuv] considers a Hamiltonian of corrections to the standard model: $${\cal H}_{eff} =
\frac{4 G_F V_{cb}}{\sqrt{2}}
(
{\cal O}_{V_L}
+ \sum_{X=S_i, V_i, T_L} \delta_{l \tau} X {\cal O_X}
)
\label{eq:Hnew}$$ and works out the phenomenology for the leptonic and semi-leptonic decays of the $B_c$ meson. The operators considered are: $$\begin{aligned}
{\cal O}_{V_i} & = & (\overline{c} \gamma^\mu P_i b )
(\overline{l} \gamma_{\mu} P_L \nu_l ) , \\
{\cal O}_{S_i} & = & (\overline{c} P_i b )
(\overline{l} P_L \nu_l ), \\
{\cal O}_{T_L} & = & (\overline{c} \sigma^{\mu\nu} P_L b )
(\overline{l} \sigma_{\mu\nu} P_L \nu_l ) ,
\label{eq:BSMcontribution}\end{aligned}$$ where $\sigma_{\mu\nu} = i [\gamma_\mu , \gamma_\nu] / 2 $, $P_L = (1 - \gamma_5 ) / 2 $, and $P_R = (1 + \gamma_5 ) / 2 $.
The delta function in the Hamiltonian in equation \[eq:Hnew\] takes into account lepton flavor violation in this model. The complex coefficients $X$ are the Wilson coefficients from the Beyond the Standard Model (BSM) theory. We note that there is no suppression of the operators by the scale of the BSM physics, because the three additional operators all have the same dimension as the operators in the standard model.
The leptonic decay constant of the $B_c$ meson, $f_{B_c}$, defined by $$\label{eq:fbc}
\langle 0 \mid \overline{c} \gamma_5 \gamma_{\mu} b \mid B_c \rangle =
f_{Bc} p_{\mu} ,$$ is used in the standard model calculation of the annihilation rate of the $B_c$ meson to leptons via a $W$ boson. The additional operators in equation \[eq:BSMcontribution\] require the introduction of the pseudo-scalar matrix element of the $B_c$ meson defined via $$\langle 0 \mid \overline{c} \gamma_5 b \mid B_c \rangle = f_{Bc}^{P}(\mu)
M_{bc}.$$ The matrix element $f_{Bc}^{P}$ depends on the renormalization scale $\mu$ in QCD. A physical result is obtained when it is combined with the Wilson coefficient, which also depends on $\mu$, from the BSM theory.
The leptonic branching fraction of the $B_c$ meson is $$\begin{gathered}
{\cal B} (B_c \rightarrow \tau \nu) =
\frac{G_F^2}{8 \pi} \mid V_{cb} \mid^2 \tau_{B_c} m_{B_c} m_{\tau}^2 \\
\left( 1 - \frac{m_\tau^2}{m_{B_c}^2} \right)^2
f_{B_c}^2
A_{BSM},\end{gathered}$$ where $A_{BSM}$ is
$$A_{BSM} = \mid 1 - (V_R - V_L) +
\frac{m_{B_c}}{m_\tau} \frac{f_{B_c}^{P} }{f_{B_c}} (S_R - S_L)
\mid^2 .$$
In the standard model $A_{BSM}$ = 1. If there are experimental deviations of the leptonic decay of the $B_c$ meson from the value in the standard model, then the values of $f_{B_c}$ and $f_{B_c}^{P}$ are required to constrain the values of the Wilson coefficients $V_R$, $V_L$, $S_R$, and $S_L$ of the BSM theory. The Wilson coefficients also contribute to semi-leptonic decays of heavy-light mesons, so additional constraints on them can be obtained. This is a modern update of the experimental origins of the V-A theory in the standard model, where experimental data were used to constrain the interactions between quarks (see [@Das:2009zzd] for example). Although the leptonic decay of the $B_c$ meson has not been observed experimentally, the constraints from a LEP1 measurement allowed Tran et al. [@Tran:2018kuv] to put bounds on the $S_L$ and $S_R$ couplings. Refs. [@Tran:2018kuv; @Ivanov:2016qtw] use the CCQM to estimate $f_{Bc}^{P}(\mu)$, although without giving the scale $\mu$ at which it is determined.
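For orientation, here is a small numerical illustration of our own (not from the paper): inserting the lattice value of $f_{B_c}$ quoted in the next section into the branching-fraction formula above with $A_{BSM}=1$ gives a SM branching fraction near $2\,\%$.

```python
import math

# Illustrative inputs (approximate PDG-style values; not from the paper).
GF    = 1.1664e-5              # Fermi constant [GeV^-2]
Vcb   = 0.041                  # |V_cb|
mBc   = 6.2745                 # B_c mass [GeV]
mtau  = 1.77686                # tau mass [GeV]
fBc   = 0.427                  # f_{B_c} [GeV], lattice value quoted below
tauBc = 0.510e-12 / 6.582e-25  # B_c lifetime converted to GeV^-1

# SM rate (A_BSM = 1) from the branching-fraction formula above.
width = (GF**2 / (8 * math.pi)) * Vcb**2 * mBc * mtau**2 \
        * (1 - mtau**2 / mBc**2)**2 * fBc**2
print(f"B(Bc -> tau nu) = {width * tauBc:.3f}")   # about 0.02
```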
Lattice QCD results {#sec:lattice}
===================
The decay constant of the $B_c$ has been calculated in lattice QCD using two different approaches which give results in good agreement [@McNeile:2012qf; @Colquhoun:2015oha]. The most accurate result comes from using the Highly Improved Staggered Quark (HISQ) formalism [@Follana:2006rc]. In this formalism there is an exact partially conserved axial current (PCAC) [@Kilcup:1986dg] relation $$\partial_{\mu} A_{\mu} = (m_1 + m_2) P \;\;.
\label{eq:PCACdefn}$$ From the pseudoscalar matrix element times quark mass we can then obtain the matrix element of the temporal axial current (at zero spatial momentum) needed for eq. (\[eq:fbc\]) with absolute normalisation. This is done in [@McNeile:2012qf] for heavy-charm pseudoscalar mesons for a range of heavy quark masses and values of the lattice spacing, $a$. This enables the heavy quark mass dependence of the heavy-charm decay constant to be mapped out in the continuum ($a \rightarrow 0$) limit and a result for $f_{B_c}$ to be obtained when the heavy quark mass corresponds to that of the $b$. The value obtained is $$\label{eq:fbcresult}
f_{B_c} = 0.427(6)(2) \,\mathrm{GeV},$$ and a complete error budget is given in [@McNeile:2012qf].
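The PCAC relation also fixes the size of the pseudo-scalar matrix element: sandwiching Eq. (\[eq:PCACdefn\]) between $\langle 0 \mid$ and $\mid B_c \rangle$ and using the two matrix-element definitions of the previous section gives $f_{B_c}^{P}(\mu) = f_{B_c}\, m_{B_c}/(m_b(\mu)+m_c(\mu))$. The sketch below evaluates this relation; the quark-mass values (and the scale at which they are taken) are illustrative assumptions, not numbers from the text.

```python
# Hypothetical illustration of f^P_Bc(mu) = f_Bc * m_Bc / (m_b(mu) + m_c(mu));
# the quark masses below are assumed MS-bar-like values at some common scale.
f_Bc, m_Bc = 0.427, 6.2749      # GeV
m_b, m_c = 4.18, 1.27           # GeV (assumed, scale-dependent)

f_P = f_Bc * m_Bc / (m_b + m_c)
print(f"f^P_Bc(mu) ~ {f_P:.3f} GeV")    # about 0.49 GeV with these inputs
```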
A completely different approach for $f_{B_c}$ based on the lattice discretisation of nonrelativistic QCD (NRQCD) [@Lepage:1992tx] is given in [@Colquhoun:2015oha]. There the matrix element of the temporal axial current is calculated directly but, since there is no PCAC relation on the lattice in this case, the current is matched to that of continuum QCD using lattice QCD perturbation theory through $\mathcal{O}(\alpha_s)$ [@Monahan:2012dq]. A result for $f_{B_c}$ of 0.434(15) GeV is obtained, where the uncertainty is dominated by lattice discretisation effects and by the systematic uncertainty in matching the current. Although the uncertainty is larger here than in the HISQ case, the agreement between the two results confirms our understanding of the errors from the two approaches.
Since, in the HISQ case [@McNeile:2012qf], the lattice PCAC relation was used to determine $f_{B_c}$, it is clear that we could also have determined $f_{B_c}^{P}(\mu)$. Since $f_{B_c}^{P}(\mu)$ runs with $\mu
---
author:
-
-
-
bibliography:
- '../bibliography.bib'
title: 'Generating Optimal Privacy-Protection Mechanisms via Machine Learning'
---
---
abstract: |
We present a general approach to deriving bounds on the generalization error of randomized learning algorithms. Our approach can be used to obtain bounds on the average generalization error as well as bounds on its tail probabilities, both for the case in which a new hypothesis is randomly generated every time the algorithm is used—as often assumed in the probably approximately correct (PAC)-Bayesian literature—and in the single-draw case, where the hypothesis is extracted only once.
For this last scenario, we present a novel bound that is explicit in the central moments of the information density. The bound reveals that the higher the order of the information density moment that can be controlled, the milder the dependence of the generalization bound on the desired confidence level.
Furthermore, we use tools from binary hypothesis testing to derive a second bound, which is explicit in the tail of the information density. This bound confirms that a fast decay of the tail of the information density yields a more favorable dependence of the generalization bound on the confidence level.
author:
- '\'
bibliography:
- 'reference.bib'
title: Generalization Error Bounds via $m$th Central Moments of the Information Density
---
Introduction {#sec:introduction}
============
A recent line of research, initiated by the work of Russo and Zou [@russo16-05b] and then followed by many recent contributions [@xu17-05a; @bassily18-02a; @bu19-01a; @esposito19-12a], has focused on obtaining bounds on the generalization error of randomized learning algorithms in terms of information-theoretic quantities, such as mutual information. The resulting bounds are *deterministic*, i.e., data-independent, and allow one to assess the speed of convergence of a given learning algorithm in terms of sample complexity [@shalev-shwartz14-a p. 44].
A parallel development has taken place in the machine learning and statistics community, where the probably approximately correct (PAC)-Bayesian framework, pioneered by McAllester [@mcallester98-07a], has resulted in several upper bounds on the generalization error. These bounds, which are expressed in terms of the relative entropy between a prior and a posterior distribution on the hypothesis class (see, e.g., [@guedj19-01a] for a recent review), are typically *empirical*, i.e., data-dependent, and can be used to design learning algorithms [@catoni07-a].
One difficulty in comparing the bounds on the generalization error available in the literature is that they sometimes pertain to different quantities. To illustrate this point, we need to introduce some key quantities, which will be used in the remainder of the paper. Following the standard terminology in statistical learning theory, we let $\setZ$ be the instance space, $\setW$ be the hypothesis space, and $\ell: \setW\times \setZ \rightarrow \positivereals$ be the loss function. A training data set $Z^n=[Z_1,\dots,Z_n]$ is a set of $n$ samples drawn from a distribution $P_Z$ defined on $\setZ$. We denote by $P_{Z^n}$ the product distribution induced by $P_Z$. A randomized learning algorithm is characterized by a conditional probability distribution $P_{W\!\given\! Z^n}$ on $\mathcal{W}$. Finally, we let the generalization error for a given hypothesis $w$ be defined as the difference between the empirical and population risks $$\label{eq:gen}
{\textnormal{gen}}(w,z^n)=\frac{1}{n}\sum_{k=1}^{n}\ell(w,z_k) -\Ex{P_Z}{\ell(w,Z)}.$$ Throughout the paper, we shall assume that the loss function $\ell(w,Z)$ is $\sigma$-subgaussian [@wainwright19-a Def. 2.2] under $P_Z$ for all $w\in \setW$.
The line of work initiated with [@russo16-05b] deals with bounding the average generalization error $$\label{eq:average-gen}
\Ex{P_{W\! Z^n}}{ {{\textnormal{gen}}(W,Z^n)}}.$$ Specifically, upper bounds on the absolute value of this quantity were first presented in [@russo16-05b] and then improved in [@xu17-05a Thm. 1] and [@bu19-01a Prop. 1].
On the contrary, the PAC-Bayesian approach seeks lower bounds on the probability [@guedj19-01a] $$\label{eq:pac-bayesian}
P_{Z^n}\lefto[\abs{\Ex{P_{W\!\given\! Z^n}}{{{\textnormal{gen}}(W,Z^n)}}} \leq \epsilon \right].$$ Characterizing such a probability, which is in the spirit of the PAC framework, is relevant when a new hypothesis $W$ is drawn from $P_{W\!\given\! Z^n}$ every time the algorithm is used. As can be verified by, e.g., comparing the proof of [@xu17-05a Lemma 1] and the proof of [@guedj19-10a Prop. 3],[^1] for the subgaussian case, one can obtain bounds both on \[eq:average-gen\] and on \[eq:pac-bayesian\] that are explicit in the mutual information $I(W;Z^n)$ and in the relative entropy $\relent{P_{W\!\given\! Z^n}}{P_W}$, respectively, by using the Donsker-Varadhan variational formula for relative entropy.
One may also be interested in the scenario in which the hypothesis $W$ is drawn from $P_{W\!\given\! Z^n}$ only once, i.e., it is kept fixed for all uses of the algorithm. In such a scenario, which, following the terminology used in [@catoni07-a p. 12], we shall refer to as a *single-draw* scenario, the probability of interest is $$\label{eq:single-draw}
P_{W\! Z^n}\lefto[\abs{{{\textnormal{gen}}(W,Z^n)}} \leq \epsilon \right].$$ Bounds on this probability that depend on the mutual information $I(W;Z^n)$ were provided in [@xu17-05a Thm. 3] and [@bassily18-02a]. Several novel bounds, which are explicit in information-theoretic quantities such as $f$-divergence, $\alpha$-mutual information, and maximal leakage, were recently derived in [@esposito19-12a]. Interestingly, all these bounds make use of a different set of tools compared with the ones used to establish bounds on \[eq:average-gen\] and \[eq:pac-bayesian\], with one of the main ingredients being the data processing inequality for $f$-divergences.
Furthermore, they yield drastically different estimates for the generalization error. Specifically, let us assume that we want \[eq:single-draw\] to be greater than $1-\delta$ where, throughout the paper, $\delta \in (0,1)$. Then a slight refinement of the analysis in [@bassily18-02a] yields the following bound on $\epsilon$: $$\label{eq:sample_complexity_mi}
\epsilon\geq \sqrt{\frac{2\sigma^2}{n}\left(\frac{I(W;Z^n)+H_b(\delta)}{\delta}+\log 2\right)}. $$ Here, $H_b(\delta)$ denotes the binary entropy function. Throughout the paper, $\log(\cdot)$ denotes the natural logarithm. In contrast, the analysis in [@esposito19-12a Cor. 5], yields the following bound for $\alpha>1$: $$\label{eq:sample_complexity_alpha_mi}
\epsilon\geq \sqrt{\frac{2\sigma^2}{n} \left[I_{\alpha}(W;Z^n)+\log 2 + \frac{\alpha}{\alpha-1} \log \frac{1}{\delta}\right]}.$$ Here, $I_{\alpha}(\cdot,\cdot)$ is the $\alpha$-mutual information
$$\label{eq:alpha_MI}
I_{\alpha}(W;Z^n) = \frac{\alpha}{\alpha-1}\log \Ex{P_{Z^n}}{\left(\Ex{P_W}{\left(\frac{\dv P_{W\! Z^n}}{\dv P_W\! P_{Z^n}}\right)^{\alpha}}\right)^{1/\alpha}},$$
where $\dv P_{W\! Z^n}/\dv P_W\! P_{Z^n}$ is the Radon-Nikodym derivative. Note that, since $\lim_{\delta\to 0} \left(H_b(\delta)/\delta+\log \delta\right) = 1$, the dependence of $\epsilon$ on $\delta$ in \[eq:sample\_complexity\_mi\] is of order $1/\sqrt{\delta}$. In contrast, it is of order $\sqrt{(\alpha/(\alpha-1)) \log(1/\delta)}$ in \[eq:sample\_complexity\_alpha\_mi\], which is typically more favorable. For example, in the limit $\alpha\to\infty$, the $\alpha$-mutual information converges to the maximal leakage [@issa16-a Thm. 1], and $\epsilon$ depends on $\delta$ only through the term $\sqrt{\log(1/\delta)}$.
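To make the comparison concrete, the following sketch evaluates both bounds over a range of confidence parameters. The information measures are set to an assumed common value (5 nats) so that only the $\delta$-dependence is compared; all numbers are illustrative.

```python
import numpy as np

# Compare the delta-dependence of the two epsilon bounds above; sigma^2, n,
# alpha and the information measures are assumed illustrative values.
sigma2, n, alpha = 1.0, 1000, 2.0
I_mi = I_alpha = 5.0                      # assumed common value, in nats

def Hb(d):                                # binary entropy (natural log)
    return -d * np.log(d) - (1 - d) * np.log(1 - d)

for delta in [1e-2, 1e-4, 1e-6]:
    eps_mi = np.sqrt(2*sigma2/n * ((I_mi + Hb(delta))/delta + np.log(2)))
    eps_am = np.sqrt(2*sigma2/n * (I_alpha + np.log(2)
                                   + alpha/(alpha - 1)*np.log(1/delta)))
    print(f"delta={delta:.0e}: eps_MI={eps_mi:8.3f}, eps_alphaMI={eps_am:.3f}")
# eps_MI blows up like 1/sqrt(delta); eps_alphaMI grows only like sqrt(log(1/delta)).
```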
The analysis in [@esposito19-12a], however, does not reveal why using $\alpha$-mutual information rather than mutual information results in a more benign dependence of the generalization error on the confidence parameter $\delta$. Moreover, the choice $\alpha=1$, for which $I_{\alpha}(
---
abstract: 'We study the spin-orbit coupling induced by the splitting between TE and TM optical modes in a photonic honeycomb lattice. Using a tight-binding approach, we calculate analytically the band structure. Close to the Dirac point, we derive an effective Hamiltonian. We find that the local reduced symmetry ($\mathrm{D_{3h}}$) transforms the TE-TM effective magnetic field into an emergent field with a Dresselhaus symmetry. As a result, particles become massive, but no gap opens. The emergent field symmetry is revealed by the optical spin Hall effect.'
author:
- 'A. V. Nalitov'
- 'G. Malpuech'
- 'H. Terças'
- 'D. D. Solnyshkov'
bibliography:
- 'reference.bib'
title: 'Spin-orbit coupling and optical spin Hall effect in photonic graphene'
---
Spin-orbit coupling in crystals makes it possible to create and control spin currents without applying external magnetic fields. These phenomena have been described in the seventies [@Dyakonov] and are nowadays called the spin Hall effect (SHE) [@Hirsch1999; @reviewSHE]. In 2005, the interplay between the spin-orbit coupling and the specific crystal symmetry of graphene[@Geim2007] has been proposed [@Kane2005] to be at the origin of a new type of spin Hall effect, the quantum spin Hall effect, in which the spin currents are supported by surface states and are topologically protected [@QSHE; @Kane2010]. This result has a special importance, since it defines a new class of $Z_2$-topological insulator [@Kane2005b], associated not with the quantization of the total conductance, but with the quantization of the spin conductance. However, from an experimental point of view, the realization of any kind of SHE is difficult, because spin-orbit coupling leads not only to the creation of spin currents, but also to spin decoherence [@Dyakonov2]. In graphene, the situation is even worse, since the spin-orbit coupling is extremely weak. Deposition of adatoms has been proposed to increase the spin-orbit coupling [@Gmitra2013], and it allowed the recent observation of the SHE [@Balakrishnan2013], albeit with a very short spin relaxation length, of the order of 1 $\mu$m.
On the other hand, artificial honeycomb lattices for atomic Bose-Einstein condensates (BEC) [@hon_atom] and photons [@Peleg2007; @Kuhl2010; @Polini2013; @Kalesaki2014; @Jacqmin2014] have been realized. These systems are gaining a lot of attention due to the large possible control over the system parameters, up to complete Hamiltonian engineering[@Hafezi; @Umucalilar]. In BECs, the recent implementation of synthetic magnetic fields [@Lin1] and of non-Abelian, Rashba-Dresselhaus gauge fields [@Lin2] appears promising in view of the achievement of topological insulator analogs. Photonic systems, and specifically photonic honeycomb lattices, appear even more promising. They are based on coupled wave guide arrays [@Rechtsman2013], on photonic crystals with honeycomb symmetry [@Won2011], and on etched planar cavities [@Jacqmin2014]. A photonic Floquet topological insulator has been recently reported [@Chong2013], and others, based on the magnetic response of metamaterials, have been predicted [@Khanikaev]. In photonic systems, spin-orbit coupling naturally appears from the energy splitting between the TE and TM optical modes and from structural anisotropies. Both effects can be described in terms of effective magnetic fields acting on the photon (pseudo)-spin [@Shelykh2010]. In planar cavity systems, the TE-TM effective field breaks the rotational symmetry, but preserves both time reversal and spatial inversion symmetries. It is characterized by a $k^2$ scaling and a double azimuthal dependence. This spin-orbit coupling is at the origin of the optical spin Hall effect (OSHE)[@Kavokin2005; @Leyder2007] and of the acceleration of effective magnetic monopoles [@Hivet; @Bramwell2012; @Solnyshkov2013]. As recently shown [@Tercas2014], the specific TE-TM symmetry can be locally transformed into a non-Abelian gauge field in a structure with a reduced spatial symmetry.
In this work, we calculate the band structure of photonic graphene in the presence of the intrinsic spin-orbit coupling induced by the TE-TM splitting. We derive an effective Hamiltonian which allows us to extract an effective magnetic field acting on the photon pseudo-spin only. We find that the low symmetry ($\mathrm{D_{3h}}$) induced by the honeycomb lattice close to the Dirac points transforms the TE-TM field into an emergent field with a Dresselhaus symmetry. Particles become massive but no gap opens. The dispersion topology shows large similarities with that of bilayer graphene [@McCann2006] and of monolayer graphene with Rashba spin-orbit coupling [@Rakyta2010], featuring trigonal warping [@Dresselhaus1974] and a Lifshitz transition [@Lifshitz1960]. The symmetry of these states is revealed by the optical spin Hall effect (OSHE), which we describe by simulating resonant optical excitation of the $\Gamma$, K and K’ points. The OSHE at the $\Gamma$ point shows four spin domains associated with the TE-TM symmetry. The OSHE at the K and K’ points shows two domains characteristic of the Dresselhaus symmetry. The spin domains at the K and K’ points are inverted, which is a signature of the restored $\mathrm{D_{6h}}$ symmetry when the two valleys are included.
In what follows, in order to be specific, we consider a honeycomb lattice based on a patterned planar microcavity similar to the one recently fabricated and studied [@Jacqmin2014]. This does not reduce the generality of our description, which can apply to other physical realizations of honeycomb lattices, in optical and non-optical systems. In [@Jacqmin2014], quantum wells were embedded in the cavity, which provided the strong coupling regime and the formation of cavity exciton-polaritons. Here, we will consider the linear regime, a parabolic in-plane dispersion, and no applied magnetic field. In such a case, photons and exciton-polaritons behave in a similar way and our formalism applies to both types of particles.
*Tight-binding model.* First, we describe the spin-orbit coupling in the photonic graphene structure (figure 1a) within the tight-binding approximation. We take a basis of $\sigma\pm$ polarized photon states localized on each pillar of the lattice as a zeroth-order approximation for the tight-binding model and introduce the hopping of photons from a pillar to one of its nearest neighbors as a perturbation $\hat{V}$ on this basis.
To illustrate the polarization dependence of the hopping probability, let us consider two neighbouring pillars $A$ and $B$, shown in figure 1(b). The photon hopping between them may be described as propagation through a “waveguide”-like link. The TE-TM energy splitting imposes a slight difference $\delta J$ in the tunneling matrix elements for states linearly polarized longitudinally ($L$) and transversely ($T$) to the vector $\mathbf{d}_\varphi$ linking the pillars [@suppl], as was recently shown for the eigenstates of a photonic benzene molecule [@Vera]. In that framework, the matrix elements read: $$\langle A, L \vert \hat{V} \vert B, L\rangle \equiv -J-\delta J/2, \quad
\langle A, T \vert \hat{V} \vert B, T\rangle \equiv -J+\delta J/2. \notag$$ While a photon is in a link, TE-TM field does not rotate its eigenstate polarizations $L$ and $T$, implying no cross-polarization matrix elements: $$\langle A, L \vert \hat{V} \vert B, T\rangle = \langle A, T \vert \hat{V} \vert B, L\rangle = 0. \notag$$ In $\sigma\pm$ basis, the probability of spin flip during hopping is linear in $\delta J$ and its phase gain depends on the angle $\varphi$ between the link and the horizontal axis: $$\langle A, \pm \vert \hat{V}
\vert B, \pm \rangle = -J, \quad
\langle A, + \vert \hat{V}
\vert B,- \rangle =
- \delta J e^{-2\mathrm{i}\varphi}. \notag$$ This phase factor reflects the fact that when a link is rotated by 90 degrees, the $L$ and $T$ polarization basis is inverted: if $L$ was horizontal, it becomes vertical and vice versa.
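The hopping amplitudes above fully determine the Bloch Hamiltonian of the lattice. A minimal numerical sketch is given below: it assembles and diagonalizes the $4\times4$ Hamiltonian in the $(A+,A-,B+,B-)$ basis, with the three nearest-neighbor links at angles $\varphi_j$. The link geometry and the values of $J$ and $\delta J$ are illustrative assumptions, not parameters taken from the text.

```python
import numpy as np

# Sketch: 4x4 Bloch Hamiltonian of photonic graphene in the (A+, A-, B+, B-)
# basis, using the spin-conserving (-J) and spin-flip (-dJ e^{-2i phi})
# hopping elements derived above. J, dJ and the geometry are assumed values.
J, dJ, a = 1.0, 0.1, 1.0
phis = np.array([np.pi/2, np.pi/2 + 2*np.pi/3, np.pi/2 - 2*np.pi/3])
d = a * np.stack([np.cos(phis), np.sin(phis)], axis=1)   # three A->B links

def bands(k):
    ph = np.exp(1j * d @ k)                  # Bloch phase of each link
    f = ph.sum()                             # spin-conserving sum
    gm = (ph * np.exp(-2j * phis)).sum()     # spin-flip, + -> -
    gp = (ph * np.exp(+2j * phis)).sum()     # spin-flip, - -> +
    hAB = -np.array([[J * f, dJ * gm],
                     [dJ * gp, J * f]])
    H = np.block([[np.zeros((2, 2)), hAB],
                  [hAB.conj().T, np.zeros((2, 2))]])
    return np.linalg.eigvalsh(H)             # four tight-binding bands

K = np.array([4*np.pi/(3*np.sqrt(3)*a), 0.0])   # a Dirac point
print(bands(K))   # for dJ = 0 all four bands touch at zero here
```

For $\delta J=0$ the two circular polarizations decouple and the usual graphene spectrum is recovered; switching on $\delta J$ splits the bands around the Dirac point, consistent with the massive but gapless behavior described in the abstract.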
A photon state may be described in the bispinor form $\Phi = \left( \Psi_A^+, \Psi_A^-, \Psi_B^+, \Psi_B^- \right)^{\mathrm{
---
abstract: 'We present predictions for the flux averaged muon energy spectra of quasi-elastic (QE) and 1-pion production events for the K2K long-baseline experiment. Using general kinematical considerations, we show that the muon energy spectra closely follow the neutrino energy spectrum with a downward shift of the energy scale by $0.15\ \gev$ (QE) and $0.4\ \gev$ (1-pion production). These predictions seem to agree with the observed muon energy spectra in the K2K nearby detector. We also show the spectral distortion of these muon energy spectra due to neutrino oscillation at the SK detector. Comparison of the predicted spectral distortions with the observed muon spectra of the 1-Ring and 2-Ring muon events in the SK detector will help to determine the oscillation parameters. The results will be applicable to other LBL experiments as well.'
author:
- 'Ji–Young Yu$^1$, E. A. Paschos$^1$, D. P. Roy$^2$, I. Schienbein$^3$'
title: 'Muon spectra of Quasi-Elastic and 1-pion production events at the KEK LBL neutrino oscillation experiment'
---
Introduction
============
Recently the KEK to Kamioka long-baseline neutrino oscillation experiment (K2K) has published its first result [@Ahn:2002up], which confirms the existence of ${\ensuremath{\nu_\mu}}$ oscillation as seen in the Super-Kamiokande (SK) atmospheric neutrino data [@Fukuda:1998mi]. The observed oscillation parameters from K2K agree well with the neutrino mass and mixing angles deduced from the atmospheric neutrino oscillation data [@Fukuda:1998mi] $$\sin^2 2 \theta \simeq 1\quad \text{and} \quad
\Delta m^2 \simeq 3 \times 10^{-3} \evsq \ .$$
As is well known, in a two-flavor scenario, the probability for a muon neutrino with energy $E_\nu$ to remain a muon neutrino after propagating a distance $L$ is given by the following expression $$P_{\mu\mu} = 1-\sin^2 2\theta\sin^2
\Big(\frac{\Delta m^2 L}{4 E_\nu}\Big) \ .
\label{eq:pmumu}$$ Basically, the standard approach to measure the oscillation parameters is to determine the oscillation probability in Eq. \[eq:pmumu\] as a function of $E_\nu$. At the position of the minimum $\Delta m^2$ can be determined from the condition $\tfrac{\Delta m^2 L}{4 E_{\nu,{\rm min}}}
\overset{!}{=} \tfrac{\pi}{2}$ and $\sin^2 2\theta$ from $P_{\mu\mu}(E_{\nu,{\rm min}})\overset{!}{=} 1-\sin^2 2\theta$. The neutrino energy is not directly measurable but can be reconstructed from the simple kinematics of quasi-elastic (QE) scattering events. Measuring the energy $E_\mu$ and the scattering angle $\theta_\mu$ of the produced muon allows one to reconstruct $E_\nu$ with the help of the following relation (even if the scattered proton is not observed) $$E_\nu=E_\nu[E_\mu,\cos \theta_\mu] =
\frac{M E_\mu - {\ensuremath{m_{\mu}}}^2/2}{M - E_\mu + |\vec{k}_\mu| \cos\theta_\mu} \ .
\label{eq:E-reconstruction}$$ Here $M$ denotes the proton mass, $m_\mu$ the muon mass and $\vec{k}_\mu$ is the three-momentum of the muon in the laboratory system.
However, in practice there are some difficulties. First of all, the experimental one-ring muon events (${\ensuremath{1\rm{R}\mu}}$) are not pure QE event samples. About 30$\%$ of the ${\ensuremath{1\rm{R}\mu}}$ events are 1-pion production events with unidentified or absorbed pions. For the 1-pion events Eq. \[eq:E-reconstruction\] would systematically underestimate the true neutrino energy [@Walter:NuInt02]. Secondly, the reconstruction of $E_\nu$ becomes more complicated when the binding energy $\epsilon_B$ and the Fermi motion of the target nucleons are included $$\begin{aligned}
E_\nu &=& E_\nu[E_\mu,\cos \theta_\mu,\vec{p},\epsilon_B]
\\
&=& \frac{(E_p+\epsilon_B) E_\mu -
(2 E_p \epsilon_B +\epsilon^2_B+ {\ensuremath{m_{\mu}}}^2)/2-\vec{p}\cdot \vec{k}_\mu}
{E_p+\epsilon_B-E_\mu+|\vec{k}_\mu|\cos\theta_\mu-|\vec{p}|\cos\theta_p} \ ,
\nonumber
\label{eq:E-reconstruction1}\end{aligned}$$ where $\vec{p}$ is the three momentum and $E_p = \sqrt{M^2 + \vec{p}^2}$ the energy of the initial nucleon. Further, $\theta_p$ is the polar angle of the target nucleon w.r.t. the direction of the incoming neutrino. Neglecting $\epsilon_B$ and the momentum $\vec{p}\,$, Eq. \[eq:E-reconstruction\] is recovered. Since the momentum $\vec{p}$ is unknown, $0 \le |\vec{p}| \le p_F$ where $p_F$ is the Fermi momentum, this will lead to an uncertainty of the reconstructed neutrino energy at given values $E_\mu$, $\cos\theta_\mu$, and $\epsilon_B$ of about -9$\%$ to +6$\%$ for a single event.
Hence we see no reliable way of reconstructing the neutrino energy for the ${\ensuremath{1\rm{R}\mu}}$ sample on an event-by-event basis. On the other hand, the muon energy is a directly measurable quantity for each event. Therefore it seems to us to be a better variable for testing the spectral distortion phenomenon than the reconstructed neutrino energy.
In this talk we summarize the basic ideas and the main results in [@Paschos:2003ej] where we have used kinematic considerations to predict the muon energy spectra of the QE and 1-pion resonance production events which constitute the bulk of the charged-current ${\ensuremath{\nu_\mu}}$ scattering events in the K2K experiment. These predictions can be checked with the observed muon energy spectra from the nearby detector. We also present the distortion of these muon spectra due to ${\ensuremath{\nu_\mu}}$ oscillation, which one expects to see at the SK detector. Comparison of the predicted muon spectra with those of the observed QE and 1-pion events at the SK detector will be very useful in determining the oscillation parameters.
Flux averaged muon energy spectra
=================================
The flux averaged muon energy spectra for QE and 1-pion events are given by $$\big<\frac{{\rm d}\sigma^R}{{\rm d} E_\mu}\big>
\equiv
\int f(E_\nu) \frac{{\rm d}\sigma^R}{{\rm d} E_\mu} {\rm d} E_\nu
\label{eq:xs}$$ where $f(E_\nu)$ is the neutrino flux at K2K for the nearby detector (ND) and ’R’ denotes the QE and the $\Delta$ resonance contribution to 1-pion production, which dominates the latter. Simple kinematic considerations lead to the following approximation for the flux averaged muon energy spectra, both for QE and for 1-pion production [@Paschos:2003ej] $$\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>\
\simeq \sigma_{tot}^R(\overline{E_R})\ f(\overline{E_R})\ ,
\label{eq:app}$$ with $$\begin{aligned}
\overline{E_R} &=&
E_\mu + \Delta E^R
=
E_\mu +
\begin{cases}
0.15\ \gev & \text {for \ QE}\\
0.4\ \gev & \text {for \ 1-pion} \ . \end{cases}
\label{eq:shift}\end{aligned}$$ Furthermore, it is well known that the total cross sections for QE and $\Delta$ production tend to constant values for neutrino energies of about $1\ \gev$ and $1.4\ \gev$, respectively: $\sigma_{tot}^R[E_\nu] \to N^R$. Hence, for muon energies larger than about $1.2\ \gev$, Eq. can be further simplified by replacing $\sigma_{tot}^R$ by its constant asymptotic value $N^R$: $$\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>\
\simeq N^R \times f(\overline{E_R})
\quad \text{for}\quad E_\mu \gtrsim 1.2\ \gev \ ,
\label{eq:app2}$$ with $$\begin{aligned}
N^R &=&
\begin{
---
author:
- 'Tim Adamo,'
- Eduardo Casali
- '& Stefan Nekovar'
bibliography:
- 'biblio.bib'
title: Ambitwistor string vertex operators on curved backgrounds
---
Introduction
============
Ambitwistor strings [@Mason:2013sva; @Berkovits:2013xba] have many surprising properties; while much attention has rightly been paid to their utility for computing scattering amplitudes, they can also be defined on non-linear background fields [@Adamo:2014wea; @Adamo:2018hzd]. On such curved backgrounds the ambitwistor string is described by a chiral worldsheet CFT with free OPEs, which allows for many *exact* computations in these backgrounds, in stark contrast to conventional string theories where an expansion in the inverse string tension is needed (cf., [@Fradkin:1985ys; @Callan:1985ia; @Abouelsaood:1986gd]). For instance, the fully non-linear equations of motion for NS-NS supergravity [@Adamo:2014wea] and gauge theory [@Adamo:2018hzd] emerge as exact worldsheet anomaly cancellation conditions, and ambitwistor strings have been used to compute 3-point functions on gravitational and gauge field plane wave backgrounds [@Adamo:2017sze] correctly reproducing results found with ‘standard’ space-time techniques [@Adamo:2017nia].
Thus far, only an RNS formalism for the ambitwistor string has been shown to be quantum mechanically consistent at the level of the worldsheet. While pure spinor and Green-Schwarz versions of the ambitwistor string (or deformations thereof) have been defined on curved backgrounds [@Chandia:2015sfa; @Chandia:2015xfa; @Azevedo:2016zod; @Chandia:2016dwr], it is not clear that they are anomaly-free since only classical worldsheet calculations have been done in these frameworks. In this paper we study the heterotic and type II ambitwistor strings in the RNS formalism, at the expense of only working with NS-NS backgrounds. These backgrounds will be non-linear, and generic apart from constraints imposed by nilpotency of the BRST operator (i.e., anomaly cancellation): the Yang-Mills equations in the heterotic case and the NS-NS supergravity equations in the type II case.
For each of these models, we construct vertex operators in the $(-1,-1)$ picture for all NS-NS perturbations of the backgrounds and investigate the constraints imposed on the operators by BRST closure. In the heterotic model we consider only one such vertex operator whose BRST closure imposes the linearised gluon equations of motion (as well as gauge-fixing conditions) on the perturbation around a Yang-Mills background. In the type II model we consider three vertex operator structures, corresponding to symmetric rank-two tensor, anti-symmetric rank-two tensor, and scalar perturbations. With a background metric (obeying the vacuum Einstein equations), BRST closure fixes the two tensorial perturbations to be a linearised graviton and a $B$-field, respectively. On a general NS-NS background (composed of a non-linear metric, $B$-field and dilaton), the three structures are combined into a single vertex operator, whose BRST closure imposes the linearised supergravity equations of motion on the perturbations.
We comment on the descent procedure for obtaining vertex operators in picture number zero, as well as the prospects for obtaining integrated vertex operators. We also mention some unresolved issues regarding the GSO projection in curved background fields.
Heterotic ambitwistor string
============================
As a warm-up we first describe the vertex operator for a gluon in the heterotic ambitwistor string on a generic Yang-Mills background field, since the calculations here are mostly straightforward. This model was defined in a gauge background in [@Adamo:2018hzd]; as usual for ambitwistor strings the worldsheet action is free $$\begin{aligned}
\label{wsa2}
S=\frac{1}{2\,\pi}\int_{\Sigma}\Pi_{\mu}\,\dbar X^{\mu}+\frac{1}{2}\,\Psi_{\mu}\,\dbar\Psi^{\mu} +S_{C}\,,\end{aligned}$$ where $\Sigma$ is a closed Riemann surface and $S_{C}$ is the action for a holomorphic current algebra for some gauge group. The bosonic field $X^\mu$ is a worldsheet scalar, and $\Pi_{\mu}$ is its spin $1$ conjugate. The real fermions $\Psi^{\mu}$ are spin $\frac{1}{2}$ fields on the worldsheet. The action implies free OPEs for the worldsheet fields, along with the usual OPE for a holomorphic worldsheet current algebra: $$\begin{aligned}
\begin{split}\label{OPEs}
&X^{\mu}(z)\,\Pi_{\nu}(w)\sim \frac{\delta^{\mu}_{\nu}}{z-w}\,, \qquad \Psi^{\mu}(z)\,\Psi^{\nu}(w)\sim \frac{\eta^{\mu\nu}}{z-w}\,,\\
&j^{{\mathsf{a}}}(z)\,j^{\mathsf{b}}(w)\sim \frac{k\,\delta^{\mathsf{ab}}}{(z-w)^2} + \frac{f^{\mathsf{abc}}\,j^{\mathsf{c}}(w)}{z-w}\,,
\end{split}\end{aligned}$$ where $\eta_{\mu\nu}$ is the $d$-dimensional Minkowski metric, $k$ is the level of the current algebra, and $f^{\mathsf{abc}}$ are the structure constants of the gauge group. At the level of the worldsheet fields, the dependence on a background gauge field enters through the non-standard gauge transformations of the field $\Pi_{\mu}$. From now on we take the $k\rightarrow 0$ limit to decouple gravitational degrees of freedom from the model [@Berkovits:2004jj; @Adamo:2018hzd].
In addition to the stress-energy tensor $T$, two other (holomorphic) currents are gauged: one is fermionic of spin $\frac{3}{2}$ while the other is bosonic of spin $2$. These currents depend explicitly on the background gauge field $A_{\mu}^{{\mathsf{a}}}$; the spin $\frac{3}{2}$ current is $$\begin{aligned}
\label{Gcurr}
\mathsf{G}=\Psi^{\mu}\left(\Pi_{\mu}-A^{\mathsf{a}}_{\mu}\,j^{\mathsf{a}}\right)\,,\end{aligned}$$ and the spin $2$ current is $$\begin{aligned}
\label{Hcurr}
\mathsf{H} = \Pi^2 - 2\, \Pi^\mu A_{\mu}^{\mathsf{a}} j^\mathsf{a} + A_\mu^\mathsf{a} A^{\mu \mathsf{b}} j^\mathsf{a} j^\mathsf{b} + \Psi^\mu \Psi^\nu F_{\mu\nu}^\mathsf{a} j^\mathsf{a} - \partial\left( \partial_\mu A^{\mu \mathsf{a}} j^\mathsf{a} \right) + f^{\mathsf{a}\mathsf{b}\mathsf{c}} j^\mathsf{c} A^{\mu \mathsf{b}} \partial A_\mu^\mathsf{a}\,.\end{aligned}$$ Here $F_{\mu\nu}^\mathsf{a}$ is the field strength of $A_\mu^{\mathsf{a}}$. It is straightforward to show that these currents obey $$\begin{aligned}
\label{Hcurr0}
\mathsf{G}(z)\,\mathsf{G}(w)\sim \frac{\mathsf{H}}{z-w}\,,\end{aligned}$$ without any conditions on the background field.
Constraints on $A_{\mu}^{{\mathsf{a}}}$ emerge by requiring the gauging of the currents \[Gcurr\] and \[Hcurr\] to be quantum mechanically consistent on the worldsheet. Indeed, this gauging leads to the modification of the worldsheet action by ghost systems $$\label{hghosts}
S\rightarrow S+S_{bc}+S_{\tilde{b}\tilde{c}}+S_{\beta\gamma}\,,$$ and an associated BRST charge $$\label{hBRST}
Q=\oint c\,T+bc\,\partial c+\gamma\,\mathsf{G}+\tilde{c}\,\mathsf{H}+\tilde{b}\,\gamma^{2}\,,$$ for $T$ the full stress-energy tensor (including all ghost and current algebra contributions, except the $(b,c)$ system) and all expressions assumed to be normal-ordered. Here $(b,c)$ are the fermionic ghosts associated to gauging holomorphic worldsheet gravity, $(\beta,\gamma)$ are the bosonic ghosts associated to gauging $\mathsf{G}$, and $(\tilde{b},\tilde{c})$ are the fermionic ghosts associated to gauging $\mathsf{H}$. Both $c,\tilde{c}$ are spin $-1$ while $\gamma$ is spin $-\frac{1}{2}$.
Requiring $Q^2=0$ gives the anomaly cancellation conditions for the theory. The holomorphic conformal anomaly – controlled entirely through $T$ – constrains the space-time dimension in terms of the central charge of the current algebra, but puts no restrictions on $A_{\mu}^{{\mathsf{a}}}$. However the $\{\mathsf{G},\mathsf{H}\}$ algebra is also anomalous unless it closes: $\mathsf{G}(z)\mathsf{H}(w)\sim 0$. This requirement *does* constrain the background gauge field: $$\begin{aligned}
\mathsf{G}(z)\mathsf{H}(w)\sim 0 \iff D_{[\mu}F_{\nu\alpha]}^\mathsf{a}=0=D^\mu F_{\mu\nu
---
abstract: 'We discuss the relation between the solutions of the Skyrme model of lower degrees and the corresponding axially symmetric Hopfions, which is given by the projection onto the coset space $SU(2)/U(1)$. The interaction energy of the Hopfions is evaluated directly from the product ansatz. Our results show that, if the separation between the constituents is not very small, the product ansatz can be considered as a relatively good approximation to the general pattern of the interaction of charge-one Hopfions, both in the repulsive and in the attractive channel.'
author:
- |
[A. Acus]{}$^{\dagger}$, [E. Norvaišas]{}$^{\dagger}$ and [Ya. Shnir]{}$^{\star \ddagger}$\
\
\
$^{\dagger}$[Vilnius University, Institute of Theoretical Physics and Astronomy]{}\
[Goštauto 12, Vilnius 01108, Lithuania]{}\
$^{\star}$[BLTP, JINR, Dubna, Russia]{}\
$^{\ddagger}$[Institute of Physics, Carl von Ossietzky University Oldenburg, Germany]{}
title: Hopfions interaction from the viewpoint of the product ansatz
---
Introduction
============
Spatially localized particle-like non-perturbative soliton field configurations have a number of applications in a wide variety of physical systems, from modern cosmology and quantum field theory to condensed matter physics. The study of the interaction between the solitons and their dynamical properties has attracted a lot of attention in many different contexts (for a general review see e.g. [@Manton-Sutcliffe]). One of these interesting contexts is the investigation of a new family of materials known as topological insulators, which also makes basic research involving topological solitons relevant. Perhaps the most interesting possibility is the discovery that frustrated magnetic materials may support topological insulator phases, for which wave functions are classified by the Hopf invariant [@Moore2008].
A simple example of topological soliton solutions is given by the class of scalar models from the Skyrme family: the original Skyrme model [@Skyrme:1961vq], the Faddeev-Skyrme model [@Faddeev] in $d=3+1$, and the low-dimensional baby Skyrme model in $2+1$ dimensions [@Bsk]. The Lagrangians of all these models, as they were formulated originally, have a similar structure: they include the usual sigma-model kinetic term, the Skyrme term, which is quartic in derivatives, and a potential term which does not contain derivatives. According to Derrick's theorem [@Derrick], the latter term is optional in $d=3+1$; however, it is necessary to stabilise the soliton configurations in the baby Skyrme model.
A peculiar feature of these models is that the corresponding soliton solutions, Skyrmions and Hopfions, do not saturate the topological bound. In order to attain the topological lower bound and get a relation[^1] between the masses of the solitons and their topological charges $Q$, one has to modify the model, for example drop the quadratic kinetic term [@Adam:2010fg; @Foster:2010zb] or extend the model by coupling the Skyrmions to an infinite tower of vector mesons [@Sutcliffe:2011ig]. Thus, the powerful methods of differential geometry cannot be directly applied to describe the low-energy dynamics of the Skyrmions and Hopfions; one has to analyse the processes of their scattering, radiation and annihilation numerically [@Piette:1994mh; @Battye:1996nt]. Interestingly, the numerical simulations of the head-on collision of charge-one Skyrmions reveal the celebrated picture of the $\pi/2$ scattering through the intermediate axially-symmetric charge-two Skyrmion [@Battye:1996nt], which is typical for BPS configurations like vortices or monopoles (see [@Manton-Sutcliffe]). The same pattern was observed in the baby Skyrme model using the collective coordinate method [@Sutcliffe:1991aua]. However, a recent attempt to model the Hopfion dynamics [@Hietarinta:2011qk] failed to find the channel of right-angle scattering in head-on collisions.
Typically, the problem of direct simulation of the soliton dynamics requires sophisticated numerical methods, and the calculations demand a considerable amount of computational resources; in fact, this problem has been fully investigated only for the low-dimensional baby Skyrme model. Even the simpler task of a full numerical investigation of spinning solitons beyond the rigid body approximation was performed only recently in the Faddeev-Skyrme model [@BattyMareike; @JHSS] and in the baby Skyrme model [@Halavanau:2013vsa; @Battye:2013tka]; in the case of the original Skyrme model in $d=3+1$ this problem has not been investigated yet.
Alternatively, one can make assumptions about the character of the soliton interaction by analogy with the dynamical properties of the Bogomol’nyi type solitons [@Manton:1988ba; @Sutcliffe:1991aua; @Schroers:1993yk]. Then the moduli space approximation for low-energy soliton dynamics can be applied. This approach works especially well for the low-dimensional baby Skyrme model because it can be considered as a deformation of the $O(3)$ sigma model. It also explains the observations of the right-angle scattering in the head-on collisions of the Skyrmions in $d=3+1$; however, the validity of the moduli space approximation for the low-energy dynamics of the Hopfions is not quite clear.
Another approach to the problem of interaction between the solitons is to consider the asymptotic field of the configurations; then, for example, the Skyrmions can be treated as triplets of scalar dipoles [@Schroers:1993yk; @Manton1994; @Manton:2002pf]. Similarly, the asymptotic fields of both the baby Skyrmion and the Hopfion in the sector of degree one correspond to a doublet of orthogonal dipoles [@Piette:1994mh; @Gladikowski:1996mb; @Ward:2000qj]. Considering this system, Ward predicted the existence of three attractive channels in the interaction of charge-one Hopfions with different orientations [@Ward:2000qj]. It was suggested recently to use a simplified dipole-dipole picture of the interaction between the baby Skyrmions in the “easy plane” model; in this description the interaction energy depends only on the average orientation of the dipoles [@Jaykka:2010bq].
In his pioneering paper [@Skyrme:1961vq] Skyrme suggested applying the product ansatz, which yields a good approximation to a configuration of well-separated unit charge Skyrmions. The ansatz is constructed by the multiplication of individual Skyrmion fields; besides the rational map ansatz [@Houghton:1997kg], it can be used to produce an initial multi-Skyrmion configuration for subsequent numerical calculations [@Battye1998].
In a similar way one can construct a system of well-separated baby Skyrmions using the parametrization of the scalar triplet in terms of the $SU(2)$-valued hermitian matrix fields [@Acus:2009df]. Evidently, the same approach can be used to model the configuration of well-separated static Hopfions of degree one. On the other hand, the product ansatz can be applied in the Faddeev-Skyrme model to approximate various multicomponent configurations whose position curve consists of a few disjoint loops, like the $Q=4$ soliton.
In this Letter we discuss the relation between the solutions of the Skyrme model of lower degree and the corresponding axially symmetric Hopfions, which is given by the projection onto the coset space $SU(2)/U(1)$. Using this approach we construct the product ansatz of two well-separated single-Hopfion configurations. We confirm that the product ansatz correctly reproduces the channels of interaction. Indeed, it is known that, similarly to the case of the Skyrmions, the interaction between two Hopfions can be repulsive or attractive depending upon the relative orientation of the solitons [@Ward:2000qj].
The model
=========
Let us consider a Faddeev-Skyrme model Lagrangian in 3+1 dimensions with metric $(+,-,-,-)$: $$\label{model}
{\cal L} = \frac{1}{32\pi^2}\left(\partial_\mu \phi^a \partial^\mu \phi^a -
\frac{1}{4}(\varepsilon_{abc}\phi^a\partial_\mu \phi^b\partial_\nu \phi^c)^2 \right)\,.$$ Here $\phi^a = (\phi^1, \phi^2,\phi^3)$ denotes a triplet of real scalar fields which satisfy the constraint $|\phi^a|^2=1$. The finite energy configurations should approach a constant value at spatial infinity, which we select to be $\phi^a(\infty) = (0,0,1)$. Thus, the static field $\mathbf{\phi}(\mathbf{x})$ defines a map $R^3 \rightarrow S^2$, which can be characterized by the Hopf invariant $Q$, since $\pi_3(S^2) = \mathbb{Z}$. Then the finite energy solutions of the model, the Hopfions, are maps $S^3 \to S^2$ and the target space $S^2$ by construction is the coset space $SU(2)/U(1)$.
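For readers who wish to experiment numerically, the static energy corresponding to Eq. (\[model\]) can be evaluated on a grid by finite differences. The sketch below does this for a toy configuration that tends to the vacuum $(0,0,1)$ at infinity; it is *not* a true Hopfion, the grid parameters are arbitrary, and the quartic term uses the all-pairs summation convention for $(\varepsilon_{abc}\phi^a\partial_i\phi^b\partial_j\phi^c)^2$.

```python
import numpy as np

# Toy static-energy evaluation for a discretized field phi: R^3 -> S^2,
# normalization as in Eq. (model); the configuration is illustrative only.
N, L = 40, 8.0
ax = np.linspace(-L/2, L/2, N)
dx = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')

w = np.exp(-(X**2 + Y**2 + Z**2))                 # localized profile
v = np.stack([w * X, w * Y, np.ones_like(w)])     # -> (0,0,1) at infinity
phi = v / np.linalg.norm(v, axis=0)               # enforce |phi| = 1

# dphi[a, i] = d_i phi^a, by central finite differences
dphi = np.stack([np.stack(np.gradient(phi[a], dx)) for a in range(3)])
grad2 = np.einsum('ai...,ai...->...', dphi, dphi)

# F_ij = phi . (d_i phi x d_j phi), the quartic (Skyrme) term's field strength
F = np.einsum('a...,aij...->ij...',
              phi, np.cross(dphi[:, :, None], dphi[:, None, :], axis=0))
quartic = 0.25 * np.einsum('ij...,ij...->...', F, F)

E = (grad2 + quartic).sum() * dx**3 / (32 * np.pi**2)
print(E)    # static energy of the toy configuration
```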
It follows that any coset space
---
author:
- 'Yiwei Sun, Suhang Wang$^{\mathsection}$, Xianfeng Tang, Tsung-Yu Hsieh, Vasant Honavar'
title: Node Injection Attacks on Graphs via Reinforcement Learning
---
[^1]
[^1]: $^{\mathsection}$Corresponding Author
Although memristive devices with threshold voltages are the norm rather than the exception in experimentally realizable systems, their SPICE programming is not yet common. Here, we show how to implement such systems in the SPICE environment. Specifically, we present SPICE models of a popular voltage-controlled memristive system specified by five different parameters for PSPICE and NGSPICE circuit simulators. We expect this implementation to find widespread use in circuit design and testing.
In the last few years, circuit elements with memory, namely, memristive [@chua76a], memcapacitive and meminductive [@diventra09a] systems have attracted considerable attention from different disciplines due to their capability of non-volatile low-power information storage, potential applications in analog and digital circuits, and their ability to store and manipulate information on the same physical platform [@pershin11a]. However, as these elements are combined into complex circuits, progress in this field relies significantly on the tools at our disposal. One such tool is the SPICE simulation environment, commonly used in circuit simulations and testing. While several SPICE models of memristive [@Biolek2009-1; @Benderli2009-1; @Biolek2009-2; @Shin10a; @Rak10a; @Yakopcic11a; @Kvatinsky12a], memcapacitive [@Biolek2009-2; @Biolek10b] and meminductive [@Biolek2009-2; @Biolek11b] elements are already available, they typically [@Biolek2009-1; @Benderli2009-1; @Biolek2009-2; @Shin10a; @Rak10a] rely on physical models without a threshold (see, e.g., Refs. [@strukov08a; @joglekar09a]).
Threshold-type switching is instead an extremely important common feature of memristive devices (for examples, see Ref. [@pershin11a]) and, due to physical constraints, likely to be common in memcapacitive and meminductive elements as well [@diventra13a]. Indeed, it is the threshold-type switching which is responsible for non-volatile information storage, serves as a basis for logic operations [@borghetti10a; @pershin12a], etc., and therefore it cannot be neglected. For instance, experimentally demonstrated memristive logic circuits [@borghetti10a] and emerging memory architectures [@linn10a] support fixed-threshold modeling [@pershin09b] of memristive devices. Moreover, the atomic migration responsible for resistance switching in many important experimental systems is induced by the applied field and not by the electric current flow. Therefore, models with a voltage threshold [@pershin09b; @Yakopcic11a] are physically better justified than those with a current threshold [@Kvatinsky12a].
In the present paper we introduce a SPICE model for a memristive device with threshold voltage that has been proposed by the present authors [@pershin09b]. Using this type of memristive device, we have already demonstrated and analyzed several electronic circuits, including a learning circuit [@pershin09b], memristive neural networks [@pershin10c], logic circuits [@pershin12a], analog circuits [@pershin10d] and circuits transforming the memristive response into memcapacitive and meminductive ones [@pershin09e]. These previous results thus demonstrate the range of applicability of the selected physical model. As a consequence, we expect its SPICE implementation to find numerous applications as well.
{width="6.5cm"}
The equations describing memristive systems can be formulated in the voltage- or current-controlled form [@chua76a]. In some cases, a voltage-controlled memristive system can be easily re-formulated as a current-controlled one and vice versa [@pershin11a]. Let us then focus on voltage-controlled memristive systems whose general definition (for an $n$th-order voltage-controlled memristive system) is given by the following relations $$\begin{aligned}
I(t)&=&R_M^{-1}\left(X,V_M,t \right)V_M(t) , \label{Condeq1}\\
\dot{X}&=&f\left( X,V_M,t\right) \label{Condeq2}\end{aligned}$$ where $X$ is the vector representing $n$ internal state variables, $V_M(t)$ and $I(t)$ denote the voltage and current across the device, and $R_M$ is a scalar, called the [*memristance*]{} (for memory resistance).
* Memristive device with voltage threshold, Eqs. (3)-(5); the memristance
* X is stored as the voltage on node x and integrated on capacitor Cx.
.subckt memristor pl mn PARAMS: Ron=1K Roff=10K Rinit=5K alpha=0 beta=1E13 Vt=4.6
* Behavioral source: dX/dt = f1(V_M), windowed so that Ron <= X <= Roff
Bx 0 x I='(f1(V(pl,mn))>0) && (V(x)<Roff) ? {f1(V(pl,mn))}: (f1(V(pl,mn))<0) && (V(x)>Ron) ? {f1(V(pl,mn))}: {0}'
Cx x 0 1 IC={Rinit}
* Large shunt resistance added to aid convergence
R0 pl mn 1E12
* Behavioral resistor realizing R_M = X
Rmem pl mn r={V(x)}
* Threshold function of Eq. (5)
.func f1(y)={beta*y+0.5*(alpha-beta)*(abs(y+Vt)-abs(y-Vt))}
.ends
* Variant with sigmoid-smoothed step functions (smoothing parameters nu1, nu2)
* and separate positive/negative thresholds Vtp and Vtm.
.subckt memristor pl mn PARAMS: Ron=1K Roff=10K Rinit=5K beta=1E13 Vtp=4.6 Vtm=4.6 nu1=0.0001 nu2=0.1
* dX/dt with smoothed windowing: f2 selects the sign of f1, f3 limits X to [Ron, Roff]
Gx 0 x value={f1(V(pl)-V(mn))*(f2(f1(V(pl)-V(mn)))*f3(Roff-V(x))+f2(-f1(V(pl)-V(mn)))*f3(V(x)-Ron))}
Raux x 0 1E12
Cx x 0 1 IC={Rinit}
* Memristive current I = V_M/X, Eq. (3)
Gpm pl mn value={(V(pl)-V(mn))/V(x)}
* Smoothed version of the threshold function f(V_M) of Eq. (5) with alpha=0
.func f1(y)={beta*(y-Vtp)/(exp(-(y-Vtp)/nu1)+1)+beta*(y+Vtm)/(exp(-(-y-Vtm)/nu1)+1)}
.func f2(y1)={1/(exp(-y1/nu1)+1)}
.func f3(y)={1/(exp(-y/nu2)+1)}
.ends
A specific realization of a voltage-controlled memristive system [*with threshold*]{} has been suggested by the present authors in Ref. [@pershin09b]. Such a memristive system is described by $$\begin{aligned}
I&=&X^{-1}V_M, \label{eq3} \\
\frac{\textnormal{d}X}{\textnormal{d}t}&=&f\left( V_M\right) \left[
\nonumber \theta\left( V_M\right)\theta\left( R_{off}-X\right) + \right. \\
&{} &\qquad \qquad \qquad \left.
\theta\left(-V_M\right)\theta\left( X-R_{on}\right)\right], \label{eq4}\end{aligned}$$ with $$f(V_M)=\beta V_M+0.5\left( \alpha-\beta\right)\left[ |V_M+V_t|-|V_M-V_t| \right] \label{eq5}$$ where $V_t$ is the threshold voltage, $R_{on}$ and $R_{off}$ are limiting values of the memristance $R_M\equiv X$, and the $\theta$-functions (step functions) are used to limit the memristance to the region between $R_{on}$ and $R_{off}$. The important model parameters are the coefficients $\alpha$ and $\beta$ that characterize the rate of memristance change at $|V_M|< V_t$ and $|V_M|> V_t$, respectively. These two coefficients define the slopes of the $f(V_M)$ curve below and above the threshold (see Fig. \[fig1\]). When $\alpha=0$ (Fig. \[fig1\](b)), the device state changes only if $\left| V_M \right|>V_t$. Note that Eqs. (\[eq3\])-(\[eq5\]) are written in such a way that a positive/negative voltage applied to the top terminal with respect to the bottom terminal denoted by the black thick line always tends to increase/decrease the memristance $R_M$ (the opposite convention has been used in Ref. [@pershin09b]).
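The model of Eqs. (\[eq3\])-(\[eq5\]) is simple enough to be cross-checked against the SPICE listings above by direct time stepping. The sketch below uses the same device parameters as the listings; the drive amplitude, frequency and time step are illustrative choices.

```python
import numpy as np

# Euler integration of Eqs. (3)-(5); device parameters match the SPICE
# listings (Ron=1k, Roff=10k, Vt=4.6 V, alpha=0, beta=1e13); drive is assumed.
Ron, Roff, Rinit = 1e3, 1e4, 5e3
alpha, beta, Vt = 0.0, 1e13, 4.6

def f(V):   # Eq. (5): sensitivity function with threshold
    return beta * V + 0.5 * (alpha - beta) * (abs(V + Vt) - abs(V - Vt))

dt = 1e-7
t = np.arange(0.0, 2e-3, dt)
V = 6.0 * np.sin(2 * np.pi * 1e3 * t)     # 6 V, 1 kHz: exceeds Vt = 4.6 V

X, I = Rinit, np.empty_like(t)
for n, v in enumerate(V):
    I[n] = v / X                                              # Eq. (3)
    dX = f(v) * ((v > 0) * (X < Roff) + (v < 0) * (X > Ron))  # Eq. (4)
    X = min(max(X + dX * dt, Ron), Roff)

print(X)   # memristance after two periods; plotting I vs V shows hysteresis
```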
{width="4.5cm"}
The SPICE model for these devices is formulated following the general idea of Ref. [@Biolek2009-1]. For NGSPICE circuit simulator, the memristive system is realized as a sub-circuit combining a behavioral resistor $
---
address: Stanford University
author:
- Zhangsihao Yang
- Or Litany
- Tolga Birdal
- Srinath Sridhar
- Leonidas Guibas
bibliography:
- 'references.bib'
title: Continuous Geodesic Convolutions for Learning on 3D Shapes
---
Geometric Deep Learning, Shape Descriptors, Shape Segmentation, Shape Matching
---
abstract: 'The mineral barlowite, Cu$_4$(OH)$_6$FBr, has been the focus of recent attention due to the possibility of substituting the interlayer Cu$^{2+}$ site with non-magnetic ions to develop new quantum spin liquid materials. We re-examine previous methods of synthesizing barlowite and describe a novel hydrothermal synthesis method that produces large single crystals of barlowite and Zn-substituted barlowite (Cu$_3$Zn$_x$Cu$_{1-x}$(OH)$_6$FBr). The two synthesis techniques yield barlowite with indistinguishable crystal structures and spectroscopic properties at room temperature; however, the magnetic ordering temperatures differ by 4 K and the thermodynamic properties are clearly different. The dependence of properties upon synthetic conditions implies that the defect chemistry of barlowite and related materials is complex and significant. Zn-substituted barlowite exhibits a lack of magnetic order down to *T* = 2 K, characteristic of a quantum spin liquid, and we provide a synthetic route towards producing large crystals suitable for neutron scattering.'
address:
- 'Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA'
- 'Department of Chemistry, Stanford University, Stanford, California 94305, USA'
- 'Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, USA'
- 'Department of Applied Physics, Stanford University, Stanford, California 94305, USA'
author:
- 'Rebecca W. Smaha'
- Wei He
- 'John P. Sheckelton'
- Jiajia Wen
- 'Young S. Lee'
bibliography:
- 'mendeley.bib'
title: 'Synthesis-Dependent Properties of Barlowite and Zn-Substituted Barlowite'
---
Crystal growth, Quantum spin liquid, Magnetic properties, Crystal structure determination, Spectroscopy, Heat capacity
Introduction
============
Quantum spin liquid (QSL) materials have an exotic magnetic ground state characterized by the spins evading conventional magnetic long-range order down to *T* = 0 K and possessing long-range quantum entanglement.[@Balents2010; @Norman2016] One way to explain this ground state is as a resonating valence bond state, in which singlets of entangled spins fluctuate over the lattice but never break translational symmetry.[@Anderson1973] Through the possibility of obtaining long-range quantum entanglement of spins, a better understanding of the QSL ground state opens avenues to develop materials for topological quantum computing applications.[@Ioffe2002] In addition, investigating QSL candidate materials may have important implications for our understanding of high temperature superconductivity.[@Anderson1987; @Savary2017]
One of the best experimentally realized QSL candidates is the metal oxyhalide mineral herbertsmithite, ZnCu$_3$(OH)$_6$Cl$_2$.[@Braithwaite2004; @Shores2005; @Han2012; @Fu2015] Herbertsmithite has a rhombohedral, layered structure consisting of alternating kagomé lattice planes of Cu$^{2+}$ ions with layers of nonmagnetic Zn$^{2+}$ ions that serve to magnetically isolate the kagomé layers. Extreme magnetic frustration can be found when there are competing antiferromagnetic (AFM) interactions between nearest-neighbor S = 1/2 spins on a kagomé lattice, which consists of a network of corner-sharing triangles. The physics of herbertsmithite has been studied extensively, but chemical and synthetic limitations have held it back: a small fraction of excess Cu$^{2+}$ impurities on the interlayer Zn site results in interlayer magnetic coupling that obscures the intrinsic QSL behavior.[@DeVries2012; @Han2016b]
The mineral barlowite[@Elliott2014; @Han2014], Cu$_4$(OH)$_6$FBr, is another rare example of a material that has an isolated, undistorted S = 1/2 kagomé lattice. It contains Cu$^{2+}$ ions on its interlayer site, presumably causing it to have a transition to long-range magnetic order at 15 K.[@Han2014; @Jeschke2015] Barlowite, therefore, is not a QSL material; however, DFT calculations show that substituting the interlayer site with nonmagnetic Zn$^{2+}$ or Mg$^{2+}$ should suppress the long-range magnetic order and lead to a QSL state.[@Guterding2016a; @Liu2015a] It has a different coordination environment around the interlayer Cu$^{2+}$ (trigonal prismatic as opposed to octahedral in herbertsmithite) and perfect AA stacking of the kagomé layers, while herbertsmithite has ABC stacking. It has been predicted that these differences will yield a significantly lower amount of Cu$^{2+}$ impurities on the interlayer site in Zn- or Mg-substituted barlowite compared to herbertsmithite, opening up new avenues to study the intrinsic physics of QSL materials.[@Liu2015a]
Here, we re-examine the synthesis of barlowite after noting a discrepancy between the morphology of crystals of natural barlowite (described as “platy" along the *c*-axis[@Elliott2014]) and crystals of synthetic barlowite (rods down the *c*-axis).[@Han2016; @Pasco2018] We present a new method of synthesizing large single crystals of barlowite that are structurally and spectroscopically identical to polycrystalline barlowite at room temperature. However, at low temperatures, the magnetic transition temperature shifts by 4 K. Slight modifications of these two methods produce polycrystalline and single crystalline Zn-substituted barlowite (Cu$_3$Zn$_x$Cu$_{1-x}$(OH)$_6$FBr) showing a lack of magnetic order down to *T* = 2 K, consistent with a QSL ground state. This comparison of synthesis methods has implications for past and future studies of related synthetic minerals, especially copper oxysalts produced hydrothermally. We find that the large dependence of properties on synthetic route suggests that the defect chemistry of copper oxysalts is more complex than previously believed, implying that a true understanding of this class of materials requires careful control over synthesis.
Experimental Details
====================
Materials and Methods
---------------------
(Alfa, Cu 55%), (Alfa, 96%), HBr (Alfa, 48% wt), (BTC, 99.999%), (BTC, 99.5%), (Alfa, 99%), (Alfa, 99%), deionized (DI) H$_2$O (EMD Millipore), and D$_2$O (Aldrich, 99.9%) were used as purchased. Mid- and near-infrared (IR) measurements were performed on a Thermo Fisher Scientific Nicolet 6700 Fourier transform infrared spectrometer (FTIR) with a Smart Orbit diamond attenuated total reflectance (ATR) accessory. Raman measurements were performed on a Horiba LabRAM Aramis spectrometer with a CCD detector, 1800 grooves/mm grating, and 532 nm laser. DC magnetization measurements were performed on a Quantum Design Physical Properties Measurement System (PPMS) Dynacool from 2 to 350 K under applied fields of 0.005 T, 1.0 T, and 9.0 T. Heat capacity measurements were performed in the PPMS Dynacool on either a pressed single pellet of powder mixed with Ag powder in a 1:2 mass ratio or on a single crystal affixed to a sapphire platform using Apiezon-N grease.
Syntheses
---------
**1**: (1.5477 g), (0.2593 g), and HBr (0.8 mL) were sealed in a 45 mL PTFE-lined stainless steel autoclave with 36 mL DI H$_2$O or D$_2$O. This was heated over 3 hours to 175 $^{\circ}$C and held for 72 hours before being cooled to room temperature over 48 hours. The products were recovered by filtration and washed with DI H$_2$O, yielding polycrystalline barlowite.
**1-a**: This was prepared as **1** above but with the following heating profile: it was heated over 3 hours to 175 $^{\circ}$C and held for 17 days before being cooled to room temperature over 48 hours.
**Zn-1**: (0.5307 g), (0.0593 g), and (0.5405 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 10 mL DI H$_2$O. This was heated over 3 hours to 210 $^{\circ}$C and held for 24 hours before being cooled to room temperature over 30 hours. The products were recovered by filtration and washed with DI H$_2$O, yielding polycrystalline Zn-substituted barlowite.
**2**: (0.4569 g) and (0.9119 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 15 mL DI H$_2$O. **Zn-2**: (0.2742 g), (0.4653 g), and (1.1724 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 15 mL DI H$_2$O. For both, the autoclave was heated over 3 hours to 175 $^{\circ}$C and held for 72 hours, then cooled to 80 $^{\circ}$C over 24 hours. It was held at 80 $^{\circ}$C for 24 hours before being cooled to room temperature over 12 hours. The products were recovered by filtration and washed with DI H$_2$O, yielding barlowite or Zn-substituted barlowite crystals mixed with polycrystalline , which was removed by sonication in acetone.
X-ray Diffraction
-----------------
Single crystal diffraction (SCXRD) experiments were conducted at Beamline 15-ID at the Advanced Photon Source (APS), Argonne National Laboratory, using a Bruker D8 diffractometer equipped with a PILATUS3 X CdTe 1M detector or
---
abstract: |
Le texte concerne des généralisations de l’équation de Markoff en théorie des nombres, déduites des fractions continues. Il décrit la méthode pour une résolution complète de ces nouvelles équations, ainsi que leur interprétation en algébre et en géométrie algébrique. Cette approche algébrique est complétée par un développement analytique concernant les groupes fuchsiens. Le lien avec la théorie de Teichmüller des tores percés est complètement décrit, les classifiant au moyen d’une théorie de la réduction. Des considérations plus générales au sujet des surfaces de Riemann, les géodésiques et leur étude hamiltonienne sont citées, de même que des applications à la physique, au bruit en $1/f$ et à la fonction zéta. Des idées relatives à d’importantes conjectures sont présentées. On donne aussi des raisons pour lesquelles la théorie de Markoff apparaît dans différents contextes géométriques, grâce à des résultats de décomposition valables dans le groupe $GL(2,\mathbb{Z})$.\
<span style="font-variant:small-caps;">Abstract.</span> The text deals with generalizations of the Markoff equation in number theory, arising from continued fractions. It gives the method for the complete resolution of such new equations, and their interpretation in algebra and algebraic geometry. This algebraic approach is completed with an analytical development concerning Fuchsian groups. The link with the Teichmüller theory of punctured tori is completely described, giving their classification with a reduction theory. More general considerations about Riemann surfaces, geodesics and their Hamiltonian study are quoted, together with applications in physics, $1/f$-noise and the zeta function. Ideas about important conjectures are presented. Reasons why the Markoff theory appears in different geometrical contexts are given, thanks to decomposition results in the group $GL(2,\mathbb{Z})$.
author:
- Serge Perrine
date: '6 April 2003 (version 6)'
title: |
Recherches autour\
de\
la théorie de Markoff
---
“To see everything, to hear everything, to lose no idea.”
**Évariste Galois**

“To grasp the properties of things according to their mode of existence in the infinitely small.”
**Félix Klein, address on Bernhard Riemann and his influence**

“Without hope, one will not find the unhoped-for, which cannot be found and is inaccessible.”
**Heraclitus**
Acknowledgments
===============
My thanks go to the various people without whom this text would never have seen the light of day, and to all those who helped me put it into shape. I am thinking in particular of the following people:
- Georges Rhin, who throughout these last years paid attention to the various documents I sent him periodically.
- Michel Planat, with whom a regular collaboration and fascinating discussions around physical observations he had made did much to sustain my curiosity about the Markoff theory. My interest in this subject came from considerations on the coding of information. But seeing the Markoff spectrum appear in the physical characteristics of a phase-locked oscillator considerably revived my work. By observing the behavior of custom-built oscillators, could we understand certain parts of this theory that still remain enigmatic; could we, conversely, construct certain noise models useful to physics? These questions have guided my work.
- Michel Mendès France and Michel Waldschmidt, who on several occasions took an interest in my work and gave me the opportunity to refine it and present it. I thank them very warmly for their encouragement and for their uncompromising comments, which I have always regarded as a source of progress.
I would also like to thank Cécile and the children for their great patience in putting up with the considerable time I have spent on this work.
General presentation
====================
The aim of the present work is to present a line of research conducted around the Markoff theory, together with the results it has produced. This theory is a branch of what Hermann Minkowski called the “geometry of numbers” [@Minkowski][@Cassels2]. It provides a partial answer to the following problem:
Given a real quadratic form $f(x,y)=ax^2+bxy+cy^2\in \mathbb{R}[x,y]$, what is the minimal value of the number $\mid f(x,y)\mid$ when $x$ and $y$ are integers, not both simultaneously zero?
For a definite form $f(x,y)$, that is, one such that $\Delta (f)=b^2-4ac<0$, this problem was solved by Joseph Louis Lagrange. Its solution also follows from a more general result of Charles Hermite [@Hermite], which gives: $$C(f)=\frac{\inf_{(x,y)\in \mathbb{Z}^2-\{(0,0)\}}\mid f(x,y)\mid
}{\sqrt{\mid \Delta (f)\mid }}\leq \frac
1{\sqrt{3}}=C(x^2+xy+y^2).$$ It has also been shown ([@Cassels2] p. 33) that for every number $\rho \in \left]0,(1/\sqrt{3})\right]$ one can find a definite quadratic form $f(x,y)\in \mathbb{R}[x,y]$ such that: $$\rho =C(f).$$ If the form $f(x,y)$ is indefinite, that is, such that $\Delta (f)=b^2-4ac>0$, it has been known since [@Korkine] that: $$C(f)\leq \frac 1{\sqrt{5}}=C(x^2-xy-y^2).$$ For the remaining values one has [@Korkine]: $$C(f)\leq \frac 1{\sqrt{8}}=C(x^2-2y^2).$$ It was to understand the indefinite case better that Andrei A. Markoff developed his theory [@Markoff]. It identifies the infinitely many values $C(f)$ lying between $(1/\sqrt{
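These constants are easy to check numerically. A small illustrative sketch follows (the search window `N` is an arbitrary choice; it suffices here because the minima are attained at small integer pairs), approximating $C(f)$ for the three forms quoted above:

```python
import math
from itertools import product

def C(a, b, c, N=50):
    """Approximate C(f) = inf |f(x,y)| / sqrt(|Delta(f)|) over integers |x|, |y| <= N."""
    m = min(abs(a*x*x + b*x*y + c*y*y)
            for x, y in product(range(-N, N + 1), repeat=2) if (x, y) != (0, 0))
    return m / math.sqrt(abs(b*b - 4*a*c))

print(C(1, 1, 1))    # definite form x^2+xy+y^2  -> 1/sqrt(3) ~ 0.5774
print(C(1, -1, -1))  # Markoff form  x^2-xy-y^2  -> 1/sqrt(5) ~ 0.4472
print(C(1, 0, -2))   # form          x^2-2y^2    -> 1/sqrt(8) ~ 0.3536
```

For these integral forms the minimum over the finite box equals the true infimum, since the nonzero values are integers and zero is never attained.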
---
abstract: 'We present the formation of a Kinematically Decoupled Core (KDC) in an elliptical galaxy, resulting from a major merger simulation of two disk galaxies. We show that although the two progenitor galaxies are initially following a prograde orbit, strong reactive forces during the merger can cause a short-lived change of their orbital spin; the two progenitors follow a retrograde orbit right before their final coalescence. This results in a central kinematic decoupling and the formation of a large-scale ($\sim$2 kpc radius) counter-rotating core (CRC) at the center of the final elliptical-like merger remnant ($M_*=1.3 \times 10^{11}$ M$_\odot$), while its outer parts keep the rotation direction of the initial orbital spin. The stellar velocity dispersion distribution of the merger remnant galaxy exhibits two symmetrical off-centered peaks, comparable to the observed “2-$\sigma$ galaxies”. The KDC/CRC consists mainly of old, pre-merger population stars (older than 5 Gyr), remaining prominent in the center of the galaxy for more than 2 Gyr after the coalescence of its progenitors. Its properties are consistent with KDCs observed in massive elliptical galaxies. This new channel for the formation of KDCs from prograde mergers is in addition to previously known formation scenarios from retrograde mergers and can help towards explaining the substantial fraction of KDCs observed in early-type galaxies.'
author:
- 'Athanasia Tsatsi, Andrea V. Macciò, Glenn van de Ven, and Benjamin P. Moster'
bibliography:
- 'ms.bib'
title: |
A new channel for the Formation of Kinematically Decoupled Cores\
in Early-type galaxies
---
Introduction
============
Early-type galaxies (ETGs) are the end-products of complex assembly and evolutionary processes that determine their shape and dynamical structure. Signatures of such past processes in present-day ETGs are likely to be in the form of peculiar kinematic subsystems that reside in their central regions. Such subsystems are called Kinematically Decoupled Cores (KDCs) and they are defined as central stellar components with distinct kinematic properties from those of the main body of the galaxy [e.g. @McDermid_2006; @Krajnovic_2011; @Toloba_2014].
KDCs were first discovered using one-dimensional long-slit spectroscopic observations of the stellar kinematics of ETGs. More recently, integral-field unit spectroscopic surveys such as SAURON [@Bacon_2001], ATLAS^3D^ [@Cappellari_2011], or CALIFA [@Sanchez_2012], being able to provide full two-dimensional observations of the stellar kinematics, have favored the detection of KDCs and revealed that a substantial fraction of ETGs in the nearby universe show kinematic decoupling in their central regions. This fraction varies between surveys, depending mainly on technical and sample-selection biases.
Notably, the fraction of ETGs that host KDCs in the SAURON sample of 48 E+S0 galaxies [@deZeeuw_2002] is remarkably high, especially in the centers of slow-rotating ETGs: 8 out of the 12 slow rotators ($\sim67$%) from the main survey host a KDC [@Emsellem_2007]. In the ATLAS^3D^ volume-limited sample of 260 ETGs, this fraction is 47% [@Krajnovic_2011]. The KDCs found in slow rotators are typically “old and large”, with stellar populations that show little or no age differences with their host galaxy (older than 8 Gyr) and sizes larger than 1 kpc [@McDermid_2006; @Kuntschner_2010].
KDCs are also detected in fast-rotating ETGs: 25% of fast rotators from the main SAURON survey host KDCs. This type of KDC is typically “young and compact”, with stellar populations younger than 5 Gyr and sizes of less than a few hundred parsecs [@McDermid_2006].
We note that these fractions establish a lower limit to the true fraction of ETGs with kinematically decoupled regions, considering projection effects, the fact that young and compact KDCs are subject to technical or observational biases [e.g. @McDermid_2006], and the fact that many ETGs with resolved KDCs in their centers are subject to different classifications throughout the literature [e.g. $2\sigma$-galaxies, see @Krajnovic_2011].
While a consensus has been reached about the prominent existence of KDCs in luminous ETGs, the physical processes and the rate at which they are formed are still poorly understood. Young and compact KDCs in fast rotators might have formed via in situ star formation. According to this scenario, the stellar component of the KDC is formed in initially kinematically misaligned gaseous regions, probably originating from externally accreted gas or unequal mass merging, where the orientation of the merging orbit defines the orientation of rotation of the resulting KDC. Following this line of thought, it has been suggested that counter-rotating cores can only result from retrograde mergers.
However, this scenario cannot hold for the large and old KDCs found in slow rotators, whose stellar population was probably formed at the same epoch as the main body of the galaxy. In this case, processes such as gas accretion or accretion of low-mass stellar systems are more likely to affect the outer parts of the galaxy and cannot be consistent with observations that show no color gradients between the KDC and the surrounding galaxy [@Carollo_1997].
The most plausible formation scenario that could explain the similarity of the stellar content of the KDC and the main body of the galaxy is major merging. This scenario has been confirmed in simulations [e.g. @Bois_2010; @Bois_2011], resulting in elliptical-like and slow-rotating merger remnants hosting KDCs only when the two progenitor galaxies were initially following retrograde merger orbits. However, observations indicate a lower limit to the true rate of occurrence of KDCs in ETGs which cannot be explained by retrograde mergers alone, pointing to the need for additional KDC formation scenarios.
Here we show that a KDC can also result from an initially prograde major merger. The kinematic decoupling in the center of the final elliptical-like merger remnant can result from a short-lived change of the orbital spin of the two progenitor galaxies right after their second encounter. This new channel for the formation of KDCs might serve as an additional mechanism that can help towards explaining their observed rate of occurrence in ETGs.
Simulation parameters
=====================
The simulation we use is described in [@Moster_2011]. It was performed using the TreeSPH-code GADGET-2 [@Springel_2005], including star formation and supernova feedback. The two progenitor disk galaxies are identical and they are composed of a cold gaseous disk, a stellar disk and a stellar bulge, which are embedded in a dark-matter and hot-gas halo.
The gaseous and the stellar disk of each progenitor galaxy have exponential surface brightness profiles and they are rotationally supported, while the spherical stellar bulge follows a [@Hernquist_1990] profile, and is initially non-rotating[^1]. The dark matter halo has a @Hernquist_1990 profile and a spin parameter consistent with cosmological simulations [@Maccio_2008]. The hot gaseous halo follows the $\beta$-profile [@Cavaliere_FuscoFemiano1976] and is rotating around the spin axis of the disk [see @Moster_2011 for a more detailed description of the galaxy model].
The stellar mass of each progenitor is $M_*=5 \times 10^{10}$ M$_\odot$ and the bulge-to-disk ratio was chosen to be $B/D=0.22$. The mass of the cold gaseous disk is $M_{g,cold}=1.2 \times 10^{10}$ M$_\odot$, such that the gas fraction in the disk is 23%. The virial mass of the dark matter halo is $M_{dm}=1.1 \times 10^{12}$ M$_\odot$, while the mass of the hot gaseous halo is $M_{g,hot}=1.1 \times 10^{11}$ M$_\odot$. The softening length is 100 pc for stellar, 400 pc for dark matter and 140 pc for gas particles.
The two progenitors are initially placed on a nearly unbound prograde parabolic orbit, with an eccentricity of $e=0.95$ and a pericentric distance of $r_{p_1}=13.6$ kpc. Such an orbit is representative of the most common major mergers in $\Lambda$CDM cosmology [@Khochfar_Burkert2006]. The two galaxies have an initial separation of $d_{start}$ = 250 kpc. The orbital and the rotation spin of the first galaxy are aligned, while the spin axis of the second galaxy is inclined by $\theta$=$30\,^{\circ}$ with respect to the orbital plane. The simulation lasts for 5 Gyr, such that the remnant elliptical galaxy is fully relaxed.
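As a rough consistency check of these initial conditions, the relative speed of the pair at the starting separation follows from two-body orbital mechanics. The sketch below is our own estimate; the total mass is read off from the component masses listed above and is not a value quoted by the authors:

```python
import math

G = 4.30091e-6                  # gravitational constant [kpc (km/s)^2 / M_sun]
M = 2.5e12                      # rough total mass of the pair from the components above [M_sun]
e, r_p, d = 0.95, 13.6, 250.0   # eccentricity, pericentric distance [kpc], separation [kpc]

a = r_p / (1 - e)                        # semi-major axis of the relative orbit [kpc]
v = math.sqrt(G * M * (2.0/d - 1.0/a))   # vis-viva: relative speed at separation d [km/s]
print(f"a = {a:.0f} kpc, v(250 kpc) = {v:.0f} km/s")
```

The resulting speed of a couple of hundred km/s illustrates why the orbit is described as "nearly unbound": the semi-major axis of 272 kpc barely exceeds the initial separation.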
Merger Remnant
==============
Structure of the merger remnant
-------------------------------
In order to connect the orbital and mass distribution of our simulated galaxy with observable properties, we create two-dimensional mock stellar mass maps as follows. Stellar particles are projected such that the galaxy is seen edge-on with respect to the initial orbital plane of the merger. Particles are then binned on a regular grid centered on the baryonic center of mass of the galaxy. We
---
abstract: 'Using a scenario of a hybridized mixture of localized bipolarons and conduction electrons, we demonstrate for the latter the simultaneous appearance of a pseudogap and of strong incoherent contributions to their quasi-particle spectrum which arise from phonon shake-off effects. This can be traced back to temporarily fluctuating local lattice deformations, giving rise to a double-peak structure in the pair distribution function, which should be a key feature in testing the origin of these incoherent contributions, recently seen in angle resolved photoemission spectroscopy ($ARPES$).'
address:
- |
$^{(a)}$ Centre de Recherches sur les Très Basses Températures, Laboratoire Associé á l’Université Joseph Fourier,\
Centre National de la Recherche Scientifique, BP 166, 38042, Grenoble Cédex 9, France
- |
$^{(b)}$ Dipartimento di Scienze Fisiche “E.R. Caianiello”, Università di Salerno, I-84081 Baronissi (Salerno), Italy\
Unità I.N.F.M. di Salerno
author:
- 'J. Ranninger$^{(a)}$ and A. Romano$^{(b)}$'
date: 'March 31, 1998'
title: '**Interrelation between the pseudogap and the incoherent quasi-particle features of high-$T_c$ superconductors**'
---
The appearance of a pseudogap, accompanied by a predominantly incoherent quasi-particle spectrum in certain parts of the Brillouin zone[@ARPES], is considered to be amongst the most significant signatures of high-$T_c$ superconductors ($HT_cSC$) which may contain the key of our understanding of these materials. As suggested earlier, the large incoherent part of the quasi-particle spectrum might come from a coupling of the electrons to collective modes such as spin fluctuations[@Schrieffer-97]. We shall discuss and defend in this Letter a similar point of view based on a scenario of a mixture of intrinsically localized bipolarons and coexisting itinerant electrons, hybridized with each other via charge exchange, permitting bipolarons to disintegrate into pairs of conduction electrons and vice-versa to reconstitute themselves in an inverse process. The location of the bipolarons in high-$T_c$ materials might be sought in the highly polarizable dielectric layers adjacent to the $CuO_2$ planes or possibly inside the polaronic stripes[@Bianconi-97] in those planes themselves - the remainder of those $CuO_2$ planes forming the subsystem housing the itinerant electrons. Taking the bipolarons as quasi-particles without any internal structure, such a scenario is described by the so called Boson-Fermion model ($BFM$) which has led us to the prediction of pseudogap features in the quasi-particle spectrum[@Ranninger-95], driven by strong local electron-pair correlations. In the present Letter we extend our previous studies by taking into account the internal polaronic structure of the bipolaronic Bosons, as being composed of charge and lattice vibrational degrees of freedom, locked together in a coherent quantum state. A bipolaronic Boson localized on a site $i$ is represented by $$b^{+}_i~e^{-\alpha(a_i-a^{+}_i)}|0\rangle | \Phi(X) ) =
b^{+}_i|0\rangle | \Phi(X-X_0) ). \quad \label{equ1}$$ where the phonon operators $a_i^{(+)}$ correspond to local lattice deformations. The hard-core Bose operators $b_i^{(+)}$ describe pairs of electrons which are self-trapped inside locally deformed clusters of atoms, characterized by deformed harmonic oscillator states $|\Phi(X-X_0))$ with equilibrium positions shifted by $X_0=2\alpha\sqrt{\hbar/2M\omega_0}$ ($\omega_0$ denotes the characteristic frequency and $M$ the mass of the oscillators). The strength of the coupling of the charge carriers to local lattice deformations, ultimately leading to bipolaron formation, is given by $\hbar \omega_0 \alpha$. Such physics is described in terms of the following generalization of the original $BFM$: $$\begin{aligned}
H & = & (D-\mu)\sum_{i,\sigma}c^+_{i\sigma}c_{i\sigma}
-t\sum_{\langle i\neq j\rangle,\sigma}c^+_{i\sigma}c_{j\sigma}
\nonumber \\
& + & (\Delta_B-2\mu) \sum_ib^+_ib_i
+v\sum_i [b^+_ic_{i\downarrow}c_{i\uparrow}
+c^+_{i\uparrow}c^+_{i\downarrow}b_i] \nonumber \\
& - & \hbar \omega_0 \alpha \sum_ib^+_ib_i(a_i+a_i^{+})
+\hbar \omega_0 \sum_i \left(a^{+}_i a_i +\frac{1}{2}\right).
\label{eq2}\end{aligned}$$ Here $c_{i\sigma}^{(+)}$ are Fermionic operators referring to itinerant electrons with spin $\sigma$. The bare hopping integral for the electrons is given by $t$, the bare Fermionic half band width by $D$, the Boson energy level by $\Delta_B$ and the Boson-Fermion pair-exchange coupling constant by $v$. The chemical potential $\mu$ is common to Fermions and Bosons. The indices $i$ denote effective sites involving molecular units made out of adjacent molecular clusters of the metallic Fermionic and dielectric Bosonic subsystems. Because of the small overlap of the oscillator wave functions at different sites we may, to within a first approximation, consider the Boson and Fermion operators as commuting with each other.
The original $BFM$, given by the first two lines in Eq.(2), has been investigated in great detail as far as the opening of the pseudogap is concerned and as far as this affects the thermodynamic, transport and magnetic properties[@Ranninger-95]. The opening of the pseudogap in the Fermionic density of states was shown to be driven by the onset of local electron pairing without any superconducting long range order. Even without treating the generalized $BFM$ within the self-consistent conserving approximation, used in those studies of the original $BFM$, we find that the atomic limit of this generalized $BFM$ already gives us clear indications on the interrelation between the opening of the pseudogap and the appearance of predominantly incoherent quasi-particle features, as seen in ARPES studies.
In order to set the scale of the various parameters in this model we measure them in units of $D$, which for typical $HT_cSC$ is of the order of $0.5~eV$. As in our previous calculations of the original $BFM$, we choose $v$ such that the pseudogap opens up at temperatures of the order of a hundred degrees $K$. We take $v=0.25$ for the present study. We furthermore choose $\alpha=2.5$ such that together with a typical local phonon frequency of the order of $\omega_0=0.1$ we have a reasonable bipolaron binding energy $\varepsilon_{BP}=\alpha^2 \hbar \omega_0$ which pins the chemical potential at about half the renormalized Bosonic level $\tilde\Delta_B= \Delta_B- \hbar \omega_0 \alpha^2$. We choose $\tilde\Delta_B$ to lie close to the band center such that the number of electrons is slightly below half-filling (typically around $0.75$ per site, which is the physically relevant regime of concentrations). For larger binding energies the bipolaronic level would drop below the band of the electrons leading to a situation of [*bipolaronic superconductivity*]{}, which is clearly not realized in $HT_cSC$ since they definitely show a Fermi surface.
The idea behind applying the Boson-Fermion scenario to $HT_cSC$ is that we are confronted with inhomogeneous systems consisting of highly polarizable substructures on which localized bipolarons are formed. These local substructures are embedded in the rest of the lattice[@Roehler-97] which is occupied by electrons having a large, either hole-like or electron-like, Fermi surface[@Ding-97; @Ino-98], depending on doping. In such a two-component scenario the electrons scatter in a resonant fashion in and out of the Bosonic bipolaronic states. This resonant scattering is at the origin of the opening of the pseudogap in the normal state of these materials, driven by a precursor of electron pairing[@Ranninger-95], rather than magnetic interactions [@Ding-97]. Generalizing this scenario in the way described above provides a mechanism by which the electrons acquire polaronic features (which, unlike for Bosons, are not of intrinsic nature) via the charge exchange term. This term thus not only controls the opening of the pseudogap as in the original $BFM$ but also the appearance of the strong incoherent contributions to the electron spectrum arising from phonon shake-off effects. Given the two-subsystem picture on which the Boson-Fermion model is based, doping leads primarily to the creation of localized bipolarons which beyond a certain critical concentration are exchanged with the itinerant electrons. For a system such as, for instance, $YBCO$ the number $n_B=\langle b_i^+ b_
---
abstract: 'We discuss the implications of the recent discovery of CP violation in two-body SCS $D$ decays by LHCb. We show that the result can be explained within the SM without the need for any large $SU(3)$ breaking effects. It further enables the determination of the imaginary part of the ratio of the $\Delta U=0$ over $\Delta U=1$ matrix elements in charm decays, which we find to be $(0.65\pm 0.12)$. Within the standard model, the result proves the non-perturbative nature of the penguin contraction of tree operators in charm decays, similar to the known non-perturbative enhancement of $\Delta I=1/2$ over $\Delta I=3/2$ matrix elements in kaon decays, that is, the $\Delta I=1/2$ rule. As a guideline for future measurements, we show how to completely solve the most general parametrization of the $D \to P^+P^-$ system.'
author:
- Yuval Grossman
- Stefan Schacht
bibliography:
- 'uspin-DeltaACP.bib'
title: 'The Emergence of the $\Delta U=0$ Rule in Charm Physics'
---
Introduction \[sec:intro\]
==========================
In a recent spectacular result, LHCb discovered direct CP violation in charm decays at 5.3$\sigma$ [@Aaij:2019kcg]. The new world average of the difference of CP asymmetries [@Aitala:1997ff; @Link:2000aw; @Csorna:2001ww; @Aubert:2007if; @Staric:2008rx; @Aaltonen:2011se; @Collaboration:2012qw; @Aaij:2011in; @Aaij:2013bra; @Aaij:2014gsa; @Aaij:2016cfh; @Aaij:2016dfb] $$\begin{aligned}
\Delta a_{CP}^{\mathrm{dir}} &\equiv
a_{CP}^{\mathrm{dir}}(D^0\rightarrow K^+K^-) - a_{CP}^{\mathrm{dir}}(D^0\rightarrow \pi^+\pi^-)\,, \end{aligned}$$ where $$\begin{aligned}
a_{CP}^{\mathrm{dir}}(f) &\equiv \frac{
\vert \mathcal{A} (D^0\to f)\vert^2 - \vert {\mathcal{A}}(\overline{D}^0\to f)\vert^2
}{
\vert \mathcal{A}(D^0\to f)\vert^2 + \vert {\mathcal{A}}(\overline{D}^0\to f)\vert^2
}\,, \end{aligned}$$ and which is provided by the Heavy Flavor Averaging Group (HFLAV) [@Amhis:2016xyh], is given as [@Carbone:2019] $$\begin{aligned}
\Delta a_{CP}^{\mathrm{dir}} &= -0.00164\pm 0.00028\,. \label{eq:HFLAVav} \end{aligned}$$ Our aim in this paper is to study the implications of this result. In particular, working within the Standard Model (SM) and using the known values of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements as input, we see how Eq. (\[eq:HFLAVav\]) can be employed in order to extract low energy QCD quantities, and learn from them about QCD.
The new measurement allows for the first time a determination of the CKM-suppressed amplitude of singly-Cabibbo-suppressed (SCS) charm decays, which contributes a weak phase difference relative to the CKM-leading part and thus leads to a non-vanishing CP asymmetry. More specifically, $\Delta a_{CP}^{\mathrm{dir}}$ allows us to determine the imaginary part of the ratio of the $\Delta U=0$ over $\Delta U=1$ matrix elements.
As we show, the data suggest the emergence of a $\Delta U=0$ rule, which has features that are similar to the known $\Delta I=1/2$ rule in kaon physics. This rule is the observation that in $K \to \pi\pi$ the amplitude into an $I=0$ final state is enhanced by a factor $\sim 20$ with respect to the one into an $I=2$ final state [@Tanabashi:2018oca; @GellMann:1955jx; @GellMann:1957wh; @Gaillard:1974nj; @Bardeen:1986vz; @Buras:2014maa; @Bai:2015nea; @Blum:2015ywa; @Boyle:2012ys; @Buras:2015yba; @Kitahara:2016nld]. This is explained by large non-perturbative rescattering effects. Analogous enhancements in charm decays have previously been discussed in Refs. [@Einhorn:1975fw; @Abbott:1979fw; @Golden:1989qx; @Brod:2012ud; @Grinstein:2014aza; @Bhattacharya:2012ah; @Franco:2012ck; @Hiller:2012xm]. For further recent theoretical work on charm CP violation see Refs. [@Nierste:2017cua; @Nierste:2015zra; @Muller:2015rna; @Grossman:2018ptn; @Buccella:1994nf; @Grossman:2006jg; @Artuso:2008vf; @Khodjamirian:2017zdu; @Buccella:2013tya; @Cheng:2012wr; @Feldmann:2012js; @Li:2012cfa; @Atwood:2012ac; @Grossman:2012ry; @Buccella:2019kpn; @Yu:2017oky; @Brod:2011re].
In Sec. \[sec:decomposition\] we review the completely general U-spin decomposition of the decays $D^0\rightarrow K^+K^-$, $D^0\rightarrow \pi^+\pi^-$ and $D^0\rightarrow K^{\pm}\pi^{\mp}$. After that, in Sec. \[sec:solving\] we show how to completely determine all U-spin parameters from data. Our numerical results which are based on the current measurements are given in Sec. \[sec:numerics\]. In Sec. \[sec:deltau0rule\] we interpret these as the emergence of a $\Delta U=0$ rule, and in Sec. \[sec:DeltaI12inKDB\] we compare it to the $\Delta I=1/2$ rules in $K$, $B$ and $D$ decays. The different effect of $\Delta U=0$ and $\Delta I=1/2$ rules on the phenomenology of charm and kaon decays, respectively, is discussed in Sec. \[sec:UandIrules\]. In Sec. \[sec:conclusions\] we conclude.
Most general amplitude decomposition \[sec:decomposition\]
==========================================================
The Hamiltonian of SCS decays can be written as the sum $$\begin{aligned}
\mathcal{H}_{\mathrm{eff}} \sim \Sigma (1,0) - \frac{\lambda_b}{2} (0,0)\,,\end{aligned}$$ where $(i,j) = \mathcal{O}^{\Delta U=i}_{\Delta U_3=j}$, and the combinations of CKM matrix elements that appear are $$\begin{aligned}
\Sigma &\equiv \frac{V_{cs}^* V_{us} - V_{cd}^* V_{ud}}{2}\,, \qquad
-\frac{\lambda_b}{2} \equiv -\frac{V_{cb}^* V_{ub}}{2} = \frac{V_{cs}^* V_{us} + V_{cd}^* V_{ud} }{2}\,, \end{aligned}$$ where numerically, $|\Sigma| \gg |\lambda_b|$. The corresponding amplitudes have the structure $$\begin{aligned}
\mathcal{A} = \Sigma ( A_{\Sigma}^s - A_{\Sigma}^d ) - \frac{\lambda_b}{2} A_b\,,\end{aligned}$$ where $A_{\Sigma}^s$, $A_{\Sigma}^d$ and $A_b$ contain only strong phases and we write also $A_{\Sigma}\equiv A_{\Sigma}^s - A_{\Sigma}^d$.
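To see how a non-vanishing asymmetry emerges from this structure, the definition of $a_{CP}^{\mathrm{dir}}$ can be evaluated numerically. In the sketch below, all magnitudes and strong phases are toy assumptions chosen only to land in the right ballpark; the CKM scale $|\lambda_b|\sim 10^{-4}$ with weak phase $\approx -\gamma$ is the only physics input:

```python
import numpy as np

# Toy evaluation of a_CP^dir from A = Sigma * A_Sigma - (lambda_b / 2) * A_b.
Sigma = 0.22                                      # CKM-leading factor, taken real here (assumption)
lam_b = 1.5e-4 * np.exp(-1j * np.deg2rad(65.0))   # |V_cb* V_ub| with weak phase ~ -gamma

A_Sigma = 1.0                         # hadronic matrix elements carry strong phases only
A_b = 1.5 * np.exp(1j * 1.0)          # assumed magnitude and relative strong phase of 1 rad

A    = Sigma * A_Sigma - 0.5 * lam_b * A_b             # D0 -> f
Abar = Sigma * A_Sigma - 0.5 * np.conj(lam_b) * A_b    # CP conjugate: weak phases conjugated

a_cp = (abs(A)**2 - abs(Abar)**2) / (abs(A)**2 + abs(Abar)**2)
print(f"a_CP^dir ~ {a_cp:.1e}")       # order 10^-4 -- 10^-3 for these inputs
```

The asymmetry vanishes if either the weak phase of $\lambda_b$ or the relative strong phase of $A_b$ is set to zero, which is the usual statement that direct CP violation needs both.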
For the amplitudes we use the notation $$\begin{aligned}
\mathcal{A}(K\pi) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow K^+\pi^-)\,, \\
\mathcal{A}(\pi\pi) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow \pi^+\pi^-)\,, \\
\mathcal{A}(KK) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow K^+K^-)\,, \\
\mathcal{A}(\pi K) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow \pi^+ K^-)\,.\end{aligned}$$ The U-spin related quartet of charm meson decays into charged final states can then be written as [@Brod:2012ud; @Muller:2015lua; @Muller:2015rna] $$\begin{aligned}
{\mathcal{A}}(K\pi) &= V_{cs} V_{ud}^* \left(t_0
---
abstract: 'In this work, we consider the inverse problem of reconstructing the internal structure of an object from limited x-ray projections. We use a Gaussian process prior to model the target function and estimate its (hyper)parameters from measured data. In contrast to other established methods, this comes with the advantage of not requiring any manual parameter tuning, which usually arises in classical regularization strategies. Our method uses a basis function expansion technique for the Gaussian process which significantly reduces the computational complexity and avoids the need for numerical integration. The approach also allows for the reformulation of some classical regularization methods, such as Laplacian and Tikhonov regularization, as Gaussian process regression, and hence provides an efficient algorithm and a principled means for their parameter tuning. Results from simulated and real data indicate that this approach is less sensitive to streak artifacts as compared to the commonly used method of filtered backprojection.'
address: |
$^1$Department of Electrical Engineering and Automation, Aalto University, Finland\
$^2$Department of Information Technology, Uppsala University, Sweden\
author:
- 'Zenith Purisha$^1$, Carl Jidling$^2$, Niklas Wahlstr[ö]{}m$^2$, Thomas B. Sch[ö]{}n$^2$, Simo S[ä]{}rkk[ä]{}$^1$'
bibliography:
- 'sample.bib'
title: 'Probabilistic approach to limited-data computed tomography reconstruction'
---
[*Keywords*]{}: computed tomography; limited data; probabilistic method; Gaussian process; Markov chain Monte Carlo
Introduction
============
X-ray computed tomography (CT) imaging is a non-invasive method to recover the internal structure of an object by collecting projection data from multiple angles. The projection data is recorded by a detector array and it represents the attenuation of the x-rays which are transmitted through the object. Since the 1960s, CT has been applied in a vast range of applications in medicine [@cormack1963representation; @cormack1964representation; @herman1979image; @kuchment2013radon; @national1996mathematics; @shepp1978computerized] and industry [@akin2003computed; @cartz1995nondestructive; @de2014industrial].
Currently, the so-called filtered back projection (FBP) is the reconstruction algorithm of choice because it is very fast [@avinash1988principles; @buzug2008computed]. This method requires dense sampling of the projection data to obtain a satisfactory image reconstruction. However, for some decades, the limited-data x-ray tomography problem has been a major concern in, for instance, the medical imaging community. The limited data case—also referred to as [*sparse projections*]{}—calls for a good solution for several important reasons, including:
- the need to examine a patient using low radiation doses to reduce the risk of malignancy, or to image [*in vivo*]{} samples while avoiding the modification of the properties of living tissues,
- geometric restrictions in the measurement setting make it difficult to acquire the complete data [@riis2018limited], such as in [*mammography*]{} [@niklason1997digital; @rantala2006wavelet; @wu2003tomographic; @zhang2006comparative] and electron imaging [@fanelli2008electron],
- the high demand to obtain the data using short acquisition times and to avoid massive memory storage, and
- the need to avoid—or at least minimize the impact of—motion artifacts during the acquisition.
Classical algorithms—such as FBP—fail to generate good image reconstructions when dense sampling is not possible and we only have access to limited data. The under-sampling of the projection data makes the image reconstruction (in classical terms) an [*ill-posed*]{} problem [@natterer1986mathematics]. In other words, the inverse problem is sensitive to measurement noise and modeling errors. Hence, alternative and more powerful methods are required. Statistical estimation methods play an important role in handling the ill-posedness of the problem by restating the inverse problem as a [*well-posed extension*]{} in a larger space of probability distributions [@kaipio2006statistical]. Over the years there has been a lot of work on tomographic reconstruction from limited data using statistical methods (see, e.g., [@rantala2006wavelet; @bouman1996unified; @haario2017shape; @kolehmainen2003statistical; @siltanen2003statistical; @sauer1994bayesian]). In the statistical approach, incorporation of [*a priori*]{} knowledge is a crucial part of improving the quality of the image reconstructed from limited projection data. This prior can be viewed as the counterpart of the regularization term in classical regularization methods. However, statistical methods, unlike classical regularization methods, also provide a principled means to estimate the parameters of the prior (i.e., the hyperparameters), which corresponds to automatic tuning of regularization parameters.
In our work we build the statistical model by using a Gaussian process model [@Rasmussen2006] with a hierarchical prior in which the (hyper)parameters in the prior become part of the inference problem. As this kind of hierarchical prior can be seen as an instance of a Gaussian process (GP) regression model, the computational methods developed for GP regression in the machine learning context [@Rasmussen2006] become applicable. It is worth noting that some work on employing GP methods for tomographic problems has appeared before. An iterative algorithm to compute a maximum likelihood point, in which the prior information is represented by a GP, was introduced in [@tarantola2005inverse]. In [@hendriks2018implementation; @jidling2018probabilistic], tomographic reconstruction using GPs to model the strain field from neutron Bragg-edge measurements has been studied. Tomographic inversion using GPs for plasma fusion and soft x-ray tomography has been carried out in [@li2013bayesian; @svensson2011non]. Nevertheless, the proposed approach is different from the existing work.
Our aim is to employ a hierarchical Gaussian process regression model to reconstruct the x-ray tomographic image from limited projection data. Due to the measurement model involving line integral computations, the direct GP approach does not allow for closed form expressions. The first contribution of this article is to overcome this issue by employing the basis function expansion method proposed in [@SolinSarkka2015], which makes the line integral computations tractable as it detaches the integrals from the model parameters. This approach can be directly used for common GP regression covariance functions such as Matérn or squared exponential. The second contribution of this article is to point out that we can also reformulate classical regularization, in particular Laplacian and Tikhonov regularization, as Gaussian process regression where only the spectral density of the process (although not the covariance function itself) is well defined. As the basis function expansion only requires the availability of the spectral density, we can build a hierarchical model off a classical regularization model as well and have a principled means to tune the regularization parameters. Finally, the third contribution is to present methods for hyperparameter estimation that arise from the machine learning literature and to apply the methodology to the tomographic reconstruction problem. In particular, the proposed methods are applied to simulated 2D chest phantom data available in <span style="font-variant:small-caps;">Matlab</span> and real carved cheese data measured with a $\mu$CT system. The results show that the reconstructed images created using the proposed GP method outperform the FBP reconstructions in terms of image quality, measured as relative error and as peak signal-to-noise ratio.
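To make the basis function expansion concrete, here is a minimal one-dimensional sketch in the spirit of [@SolinSarkka2015]; the hyperparameter values and the squared exponential covariance are illustrative assumptions, and the paper's actual implementation is two-dimensional and tied to the Radon geometry described below:

```python
import numpy as np

# Reduced-rank GP regression with Laplacian eigenfunctions on [-L, L].
L, m = 2.0, 32                    # domain half-width, number of basis functions
ell, sf, sn = 0.3, 1.0, 0.1       # length-scale, signal std, noise std (assumed values)

j = np.arange(1, m + 1)
w_j = np.pi * j / (2 * L)         # square roots of the Laplacian eigenvalues
S = sf**2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w_j)**2)  # SE spectral density

def Phi(x):
    """Eigenfunctions phi_j(x) = sin(w_j (x + L)) / sqrt(L), one row per input."""
    return np.sin(np.outer(x + L, w_j)) / np.sqrt(L)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = np.sin(3 * x) + sn * rng.standard_normal(x.size)    # toy data

# Posterior mean weights: (Phi^T Phi + sn^2 diag(1/S))^{-1} Phi^T y  -- an m x m solve.
A = Phi(x)
w = np.linalg.solve(A.T @ A + sn**2 * np.diag(1.0 / S), A.T @ y)
f_star = Phi(np.linspace(-1, 1, 200)) @ w               # posterior mean on a test grid
```

The point of the expansion is visible in the final solve: its cost scales with the number of basis functions $m$ rather than with the number of measurements, and only the spectral density $S$ of the covariance is needed.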
Constructing the model
======================
The tomographic measurement data
--------------------------------
Consider a physical domain $\Omega \subset {{\mathbb R}}^2$ and an attenuation function $f:\Omega\rightarrow{{\mathbb R}}$. The x-rays travel through $\Omega$ along straight lines and we assume that the initial intensity (photons) of the x-ray is $I_0$ and the exiting x-ray intensity is $I_d$. If we denote a ray through the object as a function $s \mapsto (x_1(s),x_2(s))$, then the intensity loss of the x-ray within a small distance $ds$ is given as:
$$\label{calibration1}
\frac{dI(s)}{I(s)}= -f(x_1(s),x_2(s)) ds,$$
and by integrating both sides of the previous equation, the following relationship is obtained $$\label{calibration2}
\int_{-R}^{R} f(x_1(s),x_2(s)) ds = \log\frac{I_0}{I_d},$$ where $R$ is the radius of the object or area being examined.
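As a minimal discrete illustration of the two relations above (a toy attenuation map probed by axis-aligned rays only; the measurement setting of the paper of course uses a full fan of angles), the data actually used for reconstruction are the line integrals $\log(I_0/I_d)$:

```python
import numpy as np

n, ds = 64, 0.1                               # grid size and pixel width (assumed)
f = np.zeros((n, n)); f[20:40, 25:35] = 0.5   # a block of attenuating material

I0 = 1.0
line_integrals = f.sum(axis=0) * ds           # integral of f along each vertical ray
I_d = I0 * np.exp(-line_integrals)            # Beer-Lambert attenuation of the intensities

p = np.log(I0 / I_d)                          # recovered projection data
assert np.allclose(p, line_integrals)
```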
In x-ray tomographic imaging, the aim is to reconstruct $f$ using measurement data collected from the intensities $I_d$ of x-rays for all lines through the object taken from different angles of view. The problem can be expressed using the Radon transform, defined as $$\label{Measurement Model}
\mathcal{R} f(r,\theta) = \int f(x_1,x_2) d{\mathbf x}_L,
$$ where $d{\mathbf x}_L$ denotes the $1$-dimensional Lebesgue measure along the line defined by $L=\{(x_1,x_2) \in {{\mathbb R}}^2 : x_1\cos \theta + x_2\sin\theta = r \}$, where $\theta\in[0,\pi)$ is the angle and $r\in{{\mathbb R}}$ is the distance of $L$ from the origin as shown in Figure \[Radon
---
author:
- 'E. Bravo'
bibliography:
- '../../ebg.bib'
date: 'Received ; accepted '
title: '[$^{16}$O$($p,$\alpha)^{13}$N ]{}makes explosive oxygen burning sensitive to the metallicity of the progenitors of type Ia supernovae'
---
Introduction
============
The nucleosynthesis resulting from type Ia supernovae (SNIa) reflects the thermodynamical history of the progenitor white dwarf (WD) during the explosion and its initial chemical composition. Thus, nucleosynthetic constraints coming from observations of supernovae and their remnants are an important source of knowledge of the conditions achieved during the explosion. The optical properties, spectra, and light curves of SNIa over a few weeks around maximum brightness have been used to infer the chemical profile of the ejecta [@2005sth; @2008maz; @2011tan; @2014sas; @2016ash]. However, the ability to constrain the nucleosynthetic products based on optical data is hampered by the complex physics that governs the formation of spectral features in the visible, ultraviolet, and infrared bands.
Observations of sufficiently close supernova remnants (SNRs) are an alternative to obtain information about the chemical composition of the ejecta [e.g. @1988ham; @1988fes]. Hundreds to a few thousand years after the explosion, the ejected elements emit strongly in the X-ray band due to shock heating, and their emission lines can be detected and measured by current X-ray observatories [e.g. @1995hug; @1995van; @2008bad; @2014yam; @2015yam]. Recently, the high spectral resolution of [*Suzaku*]{} has allowed the relative mass ratio of calcium to sulfur, $M_\mathrm{Ca}/M_\mathrm{S}$, to be measured in a few SNRs with a precision of $\sim5\%-16\%$ [@2017mar], with the result that this ratio spans the range $0.17 - 0.28$, with an uncertainty of 0.04 in both limits [for reference, this mass ratio is 0.177 in the solar system; @2003lod]. These results have been interpreted in terms of metallicity-dependent yields during explosive oxygen burning.
There are two effects to account for in relation with $\alpha$-rich oxygen burning: first, the strength of the enhancement of the yield of calcium at all metallicities, and second, the metallicity dependence of the mass ratio of calcium to sulfur, $M_\mathrm{Ca}/M_\mathrm{S}$, in the ejecta. Both calcium and sulfur are products of explosive oxygen burning, and they are synthesized in proportion to their ratio in conditions of quasi-statistical equilibrium, which depends on the quantity of $\alpha$ particles available: $M_\mathrm{Ca}/M_\mathrm{S}\propto X_\alpha^2$ [@2014de]. [@1973woo] studied the conditions under which explosive oxygen burning would reproduce the solar-system abundances. They explained that oxygen burning can proceed through two different branches: $\alpha$-poor and $\alpha$-rich. The $\alpha$-poor branch has the net effect that for every two [${}^{16}$O]{} nuclei destroyed, one [${}^{28}$Si]{} nucleus and one $\alpha$ particle are created. This branch proceeds mainly through the fusion reaction of two [${}^{16}$O]{} nuclei, but it is contributed as well by the chain [${}^{16}$O]{}$(\gamma,\alpha)$[${}^{12}$C]{}$($[${}^{16}$O]{}$,\gamma)$[${}^{28}$Si]{}. On the other hand, the $\alpha$-rich branch involves the photo-disintegration of two [${}^{16}$O]{} nuclei to give two [${}^{12}$C]{} plus two $\alpha$ particles, followed by the fusion reaction [${}^{12}$C]{}$($[${}^{12}$C]{}$,\alpha)$[${}^{20}$Ne]{}$(\gamma,\alpha)$[${}^{16}$O]{}, which releases a total of four $\alpha$ particles for each [${}^{16}$O]{} nucleus destroyed. [@1973woo] included the chain [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}in the $\alpha$-rich branch and listed these two reactions (and their inverses) among the most influential reactions for explosive oxygen burning. [@2012bra] found that the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate and its inverse are among the ones that most impact the abundance of [${}^{40}$Ca]{}, in agreement with [@1973woo].
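Spelling out the bookkeeping of the two branches just described (a restatement of the chains above, not additional input): the $\alpha$-poor branch is, in net form, $${}^{16}\mathrm{O}+{}^{16}\mathrm{O}\rightarrow{}^{28}\mathrm{Si}+\alpha\,,$$ that is, half an $\alpha$ particle per ${}^{16}$O destroyed, while the $\alpha$-rich branch $$2\,{}^{16}\mathrm{O}(\gamma,\alpha){}^{12}\mathrm{C}\,,\qquad {}^{12}\mathrm{C}({}^{12}\mathrm{C},\alpha){}^{20}\mathrm{Ne}\,,\qquad {}^{20}\mathrm{Ne}(\gamma,\alpha){}^{16}\mathrm{O}$$ has the net balance $2\,{}^{16}\mathrm{O}\rightarrow{}^{16}\mathrm{O}+4\alpha$: one ${}^{16}$O nucleus destroyed and four $\alpha$ particles released, an eightfold larger $\alpha$ yield per ${}^{16}$O destroyed.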
@2014de and @2016mil noticed that $M_\mathrm{Ca}/M_\mathrm{S}$ can be used to infer the metallicity, $Z$, of the progenitor of SNIa, but they did not identify the source of the metallicity dependence of the calcium and sulfur yields. Later, @2017mar used the measured $M_\mathrm{Ca}/M_\mathrm{S}$ in a few type Ia SNRs of the Milky Way and the LMC to determine the progenitor metallicity, and concluded that there had to be an unknown source of neutronization of the WD matter before the thermal runaway besides that produced during carbon simmering [@2008chm; @2008pir; @2016mar; @2017pie]. They also pointed out that SNIa models that used the standard set of reaction rates were unable to reproduce the high calcium-to-sulfur mass ratio measured in some remnants.
In the present work, it is shown that the origin of the metallicity dependence of $M_\mathrm{Ca}/M_\mathrm{S}$ has to be ascribed to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction. In the following section, the mechanisms by which the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction controls the $\alpha$ particle abundance as a function of the progenitor metallicity are explained. If the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction is switched off, the value of $M_\mathrm{Ca}/M_\mathrm{S}$ remains insensitive to metallicity. In Section \[s:limits\], the uncertainty of the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate is reported along with the limits to its value that can be obtained from the measured $M_\mathrm{Ca}/M_\mathrm{S}$ in SNRs. The conclusions of this work are presented in Section \[s:conclusions\].
[$^{16}$O$($p,$\alpha)^{13}$N ]{}and metallicity {#s:workings}
================================================
The chain $^{16}$O(p,$\alpha$)$^{13}$N($\gamma$,p)$^{12}$C provides an alternative route to [$^{16}$O$(\gamma,\alpha)^{12}$C ]{}for converting [${}^{16}$O]{} to [${}^{12}$C]{} and feeding the $\alpha$-rich branch of explosive oxygen burning [@1973woo]. The chain neither consumes nor produces protons; however, its rate depends on the abundance of free protons. In the shells that experience explosive oxygen burning in SNIa, the neutron excess is closely linked to the progenitor metallicity. At small neutron excess, hence low progenitor metallicity, there are enough protons to make [$^{16}$O$($p,$\alpha)^{13}$N ]{}operational. At large neutron excess, hence large progenitor metallicity, the presence of free neutrons neutralizes the protons and undermines the efficiency of the chain [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}. This is because in explosive oxygen burning, quasi-statistical equilibrium holds for the abundances of nuclei between silicon and calcium [@1970tru]. In quasi-statistical equilibrium, a large neutron-excess leads to a large abundance of neutronized intermediate-mass nuclei such as, for instance, $^{34}$S or $^{38}$Ar, which react much more efficiently with protons than the $\alpha$-nuclei such as $^{32}$S or $^{36}$Ar that are produced in low-neutron-excess conditions.
To illustrate the above ideas, Figs. \[f:1\]-\[f:3\] show the evolution of key quantities related to the branching of explosive oxygen burning into either the $\alpha$-rich or the $\alpha$-poor tracks. Specifically, the plots show the evolution of a mass shell reaching a peak temperature of $4\times10^9$ K in models 1p06\_Z2p25e-4\_$\xi_\mathrm{CO}$0p9 and 1p06\_Z2p25e-2\_$\xi_\mathrm{CO}$0p9, described in @2019bra. In short, both models simulate the detonation of a WD with mass $1.06$ M$_\odot$ made of carbon and oxygen, whose progenitor metallicities are respectively $Z=2.25\times10^{-4}$ (strongly sub-solar metallicity, hereafter the low-$Z$ case) and $Z=0.0225$ (about 1.6 times solar, hereafter the high-$Z$ case). In both models, the rate of the fusion reaction [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}has been scaled down by a factor 0.1 as suggested by @2017mar [see also Bravo et al. 2019].
A larger proton abundance in the low-$Z$ case at the same temperature and similar oxygen abundance as in the high-$Z
---
abstract: 'Effective bounds for the finite number of surjective holomorphic maps between canonically polarized compact complex manifolds of any dimension with fixed domain are proven. Both the case of a fixed target and the case of varying targets are treated. In the case of varying targets, bounds on the complexity of Chow varieties are used.'
address: |
Ruhr-Universität Bochum\
Fakultät für Mathematik\
D-44780 Bochum\
Germany
author:
- Gordon Heier
title: Effective finiteness theorems for maps between canonically polarized compact complex manifolds
---
Effective bounds for automorphism groups {#autsection}
========================================
Hurwitz proved the following effective finiteness theorem on Riemann surfaces.
\[Hurbound\] Let $X$ be a smooth compact complex curve of genus $g\geq 2$. Then the group $\operatorname{{Aut}}(X)$ of holomorphic automorphisms of $X$ satisfies $$\#\operatorname{{Aut}}(X)\leq84(g-1).$$
For many years after Hurwitz’s proof, this bound was known to be sharp only for $g=3$ and $g=7$, in which cases there exist, respectively, the classical examples of the Klein quartic in ${{\mathbb{P}}}^2$ given by the homogeneous equation $X^3Y+Y^3Z+Z^3X=0$ and the Fricke curve with automorphism group ${\rm PSL}(2,8)$. Using the theory of finite groups, it was established only in the 1960s by Macbeath that there are infinitely many $g$ for which the above bound is sharp (see [@Macbeath]).
Xiao was able to establish the following direct (and clearly sharp due to the above) generalization of Hurwitz’s theorem.
Let $X$ be a $2$-dimensional minimal compact complex manifold of general type. Then $$\#\operatorname{{Aut}}(X)\leq(42)^2K_X^2.$$
In arbitrary dimension, the automorphism group of a smooth compact complex manifold of general type is still known to be finite because of the finiteness theorem of Kobayashi-Ochiai ([@KobOch]), which we shall state in the next section. One is tempted to conjecture that in the case of the canonical line bundle being big and nef or even ample, there is an upper bound of the form $C_nK_X^n$. The preprint [@Ts] makes an attempt to prove this conjecture.
In the paper [@Sza], Szabó was able to establish the following effective polynomial upper bound in arbitrary dimension.
\[szabobound\] Let $X$ be an $n$-dimensional compact complex manifold whose canonical line bundle is big and nef. Then the number of birational automorphisms of $X$ is no more than $$(2(n+1)(n+2)!(n+2)K_X^n)^{16n3^n}.$$
The multiple $2(n+1)(n+2)!(n+2)K_X$ is large enough to give a birational morphism from $X$ to projective space. This is proven in [@CaSchn page 8], using results of Demailly [@DBound] and Kollár [@Koeffbase] on effective base point freeness of adjoint line bundles. The goal of [@CaSchn] is to obtain a polynomial bound for the special case of automorphism groups that are abelian.
In arbitrary dimension, effective pluricanonical (birational) embeddings are essential in proving finiteness statements of the type considered in this paper. They enable us to bring the problem into the context of projective varieties and to establish uniform boundedness. In the case of $K_X$ being ample, the following effective theorem on pluricanonical embeddings is available.
\[effpluri\] If $X$ is a compact complex manifold of complex dimension $n$ whose canonical line bundle $K_X$ is ample, then $mK_X$ is very ample for any integer $$\label{pluricaneffbound}
m\geq \left(e+\tfrac{1}{2}\right)n^{7/3}+\tfrac{1}{2} n^{5/3} + \left(e+\tfrac{1}{2}\right)n^{4/3} + 3n + \tfrac{1}{2} n^{2/3}+5,$$ where $e \approx 2.718$ is Euler’s number.
From now on, we will set $k=k(n)$ to be the round-up of the effective very ampleness bound displayed above.
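For concreteness, the first few values of $k(n)$ can be tabulated directly from this bound; the sketch below is our own reading of the displayed exponents and is only illustrative:

```python
import math

def k(n):
    """Round-up of the effective very ampleness bound for mK_X."""
    e = math.e
    m = ((e + 0.5) * n**(7/3) + 0.5 * n**(5/3)
         + (e + 0.5) * n**(4/3) + 3*n + 0.5 * n**(2/3) + 5)
    return math.ceil(m)

print([(n, k(n)) for n in (1, 2, 3)])   # e.g. k(2) = 38 with this reading of the bound
```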
To our knowledge, Szabó’s theorem provides the best bound at this point. However, its proof relies on several methods previously introduced by other authors (e.g., see [@HuckSauer]) and uses the classification of finite simple groups in an essential way. In light of this, the much more straightforward method of Howard and Sommese, introduced in [@HoSo], still deserves attention. Their method is applicable not only to automorphisms (see the next section), and it represents an instance of a proof based entirely on boundedness and rigidity, which, technically speaking, is the main focus of the present paper.
Howard and Sommese prove for the case of a canonically polarized manifold that $\#Aut(X)$ is bounded from above by a number which depends only on the Chern numbers of $X$. Based on their result, we now state the following effective finiteness theorem.
\[HoSoAutbound\] Let $X$ be a compact complex manifold of dimension $n$ whose canonical line bundle is ample. Then $$\#\operatorname{{Aut}}(X) \leq \left((n+1)^2k^nn!2^{n^2}(2k)^{\frac 1 2 n(n+1)}(1+2kn)^nK_X^n\right)^{(k^nK_X^n+n)^2-1} .$$
Before we prove this theorem, we need to prove two auxiliary propositions which make the method of Howard and Sommese entirely effective. The first proposition will be used to bound the dimension of the target projective space for the pluricanonical embedding given by $kK_X$. It is a standard argument.
\[proph0bound\] Let $X$ be an $n$-dimensional compact complex manifold and $L$ a very ample line bundle on $X$. Then
$$h^0(X,L)\leq L^n+n.$$
We proceed by induction.
The case $n=1$ follows immediately from the Riemann-Roch Theorem.
Let $D$ be an effective divisor on $X$ such that $\operatorname{{\mathcal O}}_X(D)=L$. One has the standard short exact sequence $$0\to\operatorname{{\mathcal O}}_X\to\operatorname{{\mathcal O}}_X(L)\to\operatorname{{\mathcal O}}_D(L)\to 0.$$ From this exact sequence, we obtain $$h^0(X,L)\leq h^0(X,\operatorname{{\mathcal O}}_X)+h^0(D,\operatorname{{\mathcal O}}_D(L)).$$ By induction, we find that $$h^0(X,L)\leq 1+(L_{|D})^{n-1}+n-1 = L^n+n.$$
Secondly, we use a result of Demailly, Peternell and Schneider in [@DPS] to compute a bound for the Chern class intersection numbers that occur in the well-known formula for the degree of the ($1$-codimensional) dual of a projective variety. Our effective result is the following.
\[chernintersection\] Let $X$ be a compact complex manifold of dimension $n$ whose canonical line bundle is ample. Let $k$ again denote the round-up of the constant defined above. Then the following holds for $i=1,\ldots,n$. $$|c_i(\Omega_X^1).K_X^{n-i}|\leq i!2^{in}(2k)^{\frac 1 2 i(i+1)}(1+2kn)^iK_X^n.$$
Recall that $k$ is such that $kK_X$ is very ample. It follows from the Castelnuovo-Mumford theory of regularity that $\Omega_X^1(2kK_X)$ is generated by global sections and therefore nef. We may thus apply [@DPS Corollary 2.6] to obtain $$\begin{aligned}
0&\leq&c_i(\Omega_X^1(2kK_X))K_X^{n-i}\nonumber\\
&\leq& (c_1(\Omega_X^1(2kK_X)))^iK_X^{n-i}\nonumber\\
&=&(c_1(\Omega_X^1)+2knK_X)^iK_X^{n-i}\nonumber\\
&=&(1+2kn)^iK_X^n\label{B}\end{aligned}$$ for $i=1,\ldots,n$.
In [@Ful page 56], one finds the formula $$\label{chernclassformula}
c_i(\Omega_X^1(2kK_X))=\sum_{\nu=0}^{i}{n-\nu\choose i-\nu}c_\nu(\Omega_X^1)(2kK_X)^{i-\nu},$$ which enables us to prove the Proposition by induction.
---
abstract: 'In the framework of the search for dark matter in galactic halos in the form of massive compact halo objects (MACHOs), we discuss the status of microlensing observations towards the Magellanic Clouds and the Andromeda galaxy, M31. The detection of a few microlensing events has been reported, but an unambiguous conclusion on the halo content in the form of MACHOs has not been reached yet. A more detailed modelling of the expected signal and a larger sample of observed events are mandatory in order to shed light on this important astrophysical issue.'
author:
- 'S. Calchi Novati'
title: Microlensing in Galactic Halos
---
Introduction
============
Gravitational microlensing, as first noted in [@ref:pacz86], is a very efficient tool for the detection and characterisation of massive astrophysical compact halo objects (MACHOs), a possible component of dark matter halos. Following the first exciting detections of microlensing events [@ref:macho93; @ref:eros93; @ref:ogle93], by now the detection of $\sim 30$ events has been reported towards the Magellanic Clouds and our nearby galaxy M31, and first interesting conclusions on this issue have been drawn (Section \[sec:LMC\] and Section \[sec:M31\]). Soon enough, however, the Galactic bulge proved to be an equally interesting target [@ref:pacz91], and indeed by now the number of observed microlensing events along this line of sight exceeds by two orders of magnitude that observed towards the Magellanic Clouds and M31. In that case the contribution from the dark matter halo is expected to be extremely small compared to that of either bulge or disc (faint) stars [@ref:griest91]. Microlensing searches towards the Galactic bulge are therefore important as they allow one to constrain the inner Galactic structure [@ref:pacz94]. Recently, the MACHO [@ref:popowski05], OGLE [@ref:sumi06] and EROS [@ref:hamadache06] collaborations presented the results of their observational campaigns towards this target. A remarkable conclusion is the agreement among these different searches as regards the observed value of the optical depth, and the agreement with the theoretical expectations [@ref:evans02; @ref:hangould03]. For a more recent discussion see also [@ref:novati07], where the issue of the bulge mass spectrum is treated.
The microlensing quantities {#sec:ml}
===========================
Microlensing events are due to a lensing object passing near the line of sight towards a background star. Because of the event configuration, the observable effect during a microlensing event is an apparent transient amplification of the star’s flux (for a review see e.g. [@ref:roulet97]).
The *optical depth* is the instantaneous probability that at a given time a given star is amplified so as to give rise to an observable event. This quantity is the probability of finding a lens within the “microlensing tube”, a tube around the line of sight of (variable) radius equal to the *Einstein radius*, $R_\mathrm{E}=\sqrt{\frac{4G\mu_l}{c^2}\,\frac{D_l D_{ls}}{D_s}}$, where $\mu_l$ is the lens mass, $D_l,\,D_s$ are the distances to the lens and to the source, respectively, and $D_{ls}=D_s-D_l$. The optical depth reads $$\label{eq:tau}
\tau = \frac{4\pi G D_s^2}{c^2}\int_{0}^{1} \mathrm{d}x\, \rho(x)\, x(1-x)\,,$$ where $\rho$ is the *mass* density distribution of lenses and $x\equiv D_l/D_s$. The optical depth provides valuable information on the overall density distribution of the lensing objects, but it cannot be used to further characterise the events; in particular, it does not depend on the lens mass. This is because lighter (heavier) objects are, for a given total mass of the lens population, more (less) numerous but their lensing cross section is smaller (larger), and the two effects cancel out. The optical depth turns out to be an extremely small quantity, of order of magnitude $\sim 10^{-6}$. This implies that one has to monitor extremely large sets of stars to achieve reasonable statistics.
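As a rough numerical illustration of Eq. (\[eq:tau\]) (not part of the original text), the sketch below evaluates the optical depth towards the LMC for a cored isothermal Galactic halo assumed to consist entirely of MACHOs; all parameter values (core radius, local halo density, LMC distance and coordinates) are our own illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Toy evaluation of the optical depth above, for a line of sight towards
# the LMC, assuming a cored isothermal Galactic halo made entirely of
# MACHOs. All numerical values are illustrative assumptions.
G = 4.302e-6                 # kpc (km/s)^2 / Msun
c2 = 2.998e5 ** 2            # (km/s)^2
D_s = 50.0                   # kpc, source (LMC) distance
R_0, a = 8.5, 5.0            # kpc: solar galactocentric radius, core radius
rho_0 = 0.008 * 1e9          # Msun/kpc^3, local halo density
b, l = np.radians(-32.9), np.radians(280.5)   # LMC galactic coordinates

def rho(x):
    d = x * D_s              # distance along the line of sight, x = D_l/D_s
    r2 = R_0**2 + d**2 - 2.0 * R_0 * d * np.cos(b) * np.cos(l)
    return rho_0 * (R_0**2 + a**2) / (r2 + a**2)

tau = 4.0 * np.pi * G * D_s**2 / c2 * quad(lambda x: rho(x) * x * (1.0 - x), 0.0, 1.0)[0]
print(f"tau ~ {tau:.1e}")    # ~5e-7: consistent with the ~1e-6 order quoted above
```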
The experiments measure the number of the events and their characteristics, in particular their durations. To evaluate these quantities one makes use of the microlensing *rate* that expresses the number of lenses that pass through the volume element of the microlensing tube $\mathrm{d}^3x$ in the time interval $\mathrm{d}t$ for a given lens number density distribution $n(\vec{x})$ and velocity distribution $f(\vec{v})$ $$\label{eq:rate}
\mathrm{d} \Gamma = \frac{n_l\,\mathrm{d}^3 x}{\mathrm{d}t}
\times f(\vec{v}_l) \mathrm{d}^3 v_l\,.$$ The volume element of the microlensing tube is $\mathrm{d}^3 x=(\vec{v}_{r\bot} \cdot \hat{\vec{n}}) \mathrm{d}t \mathrm{d}S$. Here $\mathrm{d}S=\mathrm{d}l\,\mathrm{d}D_l$ is the element of the tube outer surface, with $\mathrm{d}l=u_t R_\mathrm{E}\, \mathrm{d}\alpha$, where $u_t$ is the maximum impact parameter; $\vec{v}_{r}$ is the lens velocity relative to the microlensing tube and $\vec{v}_{r\bot}$ its component in the plane orthogonal to the line of sight; and $\hat{\vec{n}}$ is the unit vector normal to the tube inner surface at the point where the microlensing tube is crossed by the lens. The velocity of the lenses entering the tube is $\vec{v}_l=\vec{v}_r+\vec{v}_t$, where $\vec{v}_t$ is the tube velocity.
The differential rate is directly related to the number of expected microlensing events as $\mathrm{d}N=N_\mathrm{obs} T_\mathrm{obs} \mathrm{d}\Gamma$, where $N_\mathrm{obs}, T_\mathrm{obs}$ are the number of monitored sources and the whole observation time, respectively. Furthermore, the distribution of the duration of the microlensing events, the *Einstein time*, $t_\mathrm{E}=R_\mathrm{E}/v_{r\bot}$, can also be deduced from the differential microlensing rate, as $\mathrm{d}\Gamma/\mathrm{d}t_\mathrm{E}$. Besides the lens mass, the key quantity one is usually interested in, $t_\mathrm{E}$ depends also on other, usually unobservable, quantities. It is therefore necessary to observe a large enough number of events to be able to deal statistically with the degeneracies intrinsic to the parameter space of microlensing events.
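For orientation (our own numerical example, not taken from the text), the Einstein radius and Einstein time for a typical halo lens towards the LMC can be evaluated directly; the lens mass, distances and transverse velocity below are illustrative assumptions only.

```python
import numpy as np

# Order-of-magnitude Einstein radius and Einstein time for a halo lens
# towards the LMC; all input values are illustrative assumptions.
G, c2 = 4.302e-6, 2.998e5 ** 2      # kpc (km/s)^2/Msun ; (km/s)^2
M, D_l, D_s = 0.5, 25.0, 50.0       # Msun, kpc, kpc
D_ls = D_s - D_l
v_perp = 200.0                      # km/s, lens transverse velocity

R_E = np.sqrt(4.0 * G * M / c2 * D_l * D_ls / D_s)   # kpc
R_E_km = R_E * 3.086e16
t_E = R_E_km / v_perp / 86400.0                      # days

print(f"R_E ~ {R_E_km / 1.496e8:.1f} AU, t_E ~ {t_E:.0f} days")
# -> a few AU and roughly two months for a 0.5 Msun lens
```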
Finally, note that in calculating the microlensing quantities, the optical depth and the rate, one can also take into account the spatial and velocity distributions of the sources.
Microlensing towards the Magellanic Clouds {#sec:LMC}
==========================================
![Top left: projection on the sky plane of the column density of the LMC disc and bar. The numerical values on the contours are in $10^7~\mathrm{M}_\odot~\mathrm{kpc}^{-2}$ units. The three innermost contours correspond to 10, 20 and $30\times 10^7~\mathrm{M}_\odot~\mathrm{kpc}^{-2}$. The locations of the MACHO (black stars and empty diamonds) and EROS (triangles) microlensing candidates are shown. The $x-y$ axes are directed towards West and North respectively. From top right to bottom left: contour maps of the optical depth for lenses in the Galactic halo, LMC halo and self lensing, respectively. The numerical values are in $10^{-8}$ units. Also shown, the locations of the fields observed by the MACHO collaboration. (Figures adapted from [@ref:mancini04].)[]{data-label="fig:lmc-tau"}](lmc1 "fig:"){width="7cm"} ![](lmc2 "fig:"){width="7cm"}
---
abstract: 'We present a fair and optimistic [@ben:90; @aso:97] quantum contract signing protocol between two clients that requires no communication with a third trusted party during the exchange phase. We discuss its fairness and show that it is possible to design such a protocol for which the probability that a dishonest client can cheat becomes negligible, scaling as $N^{-1/2}$, where $N$ is the number of messages exchanged between the clients. Our protocol is not based on the exchange of [*signed*]{} messages: its fairness is based on the laws of quantum mechanics. Thus, it is abuse-free [@abuse-free_1], and the clients do not have to generate new keys for each message during the Exchange phase. We discuss a real-life scenario in which measurement errors and qubit-state corruption due to noisy channels occur, and argue that for realistic, sufficiently good measurement apparatus and transmission channels our protocol would still be fair. Our protocol could be implemented with today's technology, as it requires in essence the same type of apparatus as the one needed for the BB84 cryptographic protocol [@bb84]. Finally, we briefly discuss two alternative versions of the protocol, one that uses only two states (based on the B92 protocol [@b92]) and the other that uses entangled pairs (based on [@ekert:91]), and show that it is possible to generalize our protocol to an arbitrary number of clients.'
author:
- 'N. Paunković$^{1}$, J. Bouda$^{2}$ and P. Mateus$^{1}$'
title: Fair and optimistic quantum contract signing
---
I. Introduction {#sec:introduction}
===============
Contract signing [@Even.Yacobi:Relationsamongpublic-1980] is an important security task with many applications, for example to the stock market [@CSapplications]. It is a two-party protocol between Alice and Bob, who share a common contract and want to exchange each other's commitments to it, thus binding themselves to the terms of the contract. Usually, commitment is done by signing the contract on the spot.
With the development of technology, situations in which the parties involved are physically far apart become more relevant every day – distant people can communicate using ordinary mail or e-mail, the internet, etc. This poses new challenges to the problem. Forcing spatially distant parties to exchange signatures opens the possibility of a fraud: Bob may get the commitment from Alice (a copy of the contract with her signature on it) without committing himself, which creates an [*unfair situation*]{}. Indeed, having Alice's commitment enables Bob to appeal to a judge to bind the contract (i.e., to enforce it, declaring it valid), by showing Alice's commitment to the contract (together with his signature stamped on it). On the other hand, although Alice did commit, she cannot prove it (prove that she sent her commitment to Bob) and thus cannot appeal to a judge. Moreover, she cannot prove that she did not receive Bob's commitment, so he can safely claim that he behaved honestly and sent his commitment to Alice. Note that initially Bob did not commit, but having Alice's commitment puts him in a position to [*later in time*]{} choose whether to sign the contract or not, and thus bind it or not, while Alice has no power to do either of the two.
This situation is particularly relevant in a stock market, for example, where prices of stocks may vary over time, but agents must commit [*beforehand*]{} to sell/buy at a certain time in the [*future*]{}, for previously fixed prices. The unfairness allows Bob to make risky decisions in the stock market without being bound to them, unless he decides so. The problem when distant parties wish to commit to a common contract lies in the impossibility for an agent, say Alice, to prove whether she has indeed committed to it or not. Often, the contract signing problem is said to be a variant of the so-called [*return receipt*]{} (or [*certified mail*]{}) [*problem*]{}, in which the parties involved exchange mails, asking for proof confirming whether the other side received the message or not.
A simple solution to this unfair situation is to have a trusted third party (usually referred to as Trent) mediating the transaction: Alice and Bob send their commitments to Trent, who then returns the receipts to the senders, and performs the message exchange [*only*]{} upon receiving both of the commitments. However, Trent's time and resources are expensive and should be avoided as much as possible. Unfortunately, it has been shown that there is no fair and viable contract signing protocol [@Even.Yacobi:Relationsamongpublic-1980; @fis:lyn:pat:85], unless during the signature exchange phase the signing parties communicate with a common trusted agent, i.e., Trent. By a [*fair*]{} protocol we mean that either both parties get each other's commitment or neither does. By a [*viable*]{} protocol we mean that, if both parties behave honestly, they will both get each other's commitments.
The essence of the proof of the above impossibility result is rather simple, and is related to the impossibility of establishing distributed consensus in asynchronous networks [@fis:lyn:pat:85]. The simple assumption we need for the proof is the following: the protocol consists of a number of messages exchanged between the two parties, so that eventually, upon the termination of the protocol, both Alice and Bob acquire the signature of the other. This can be done either by sending pieces of signatures in each message or, in more sophisticated scenarios, by sending partial information needed to calculate, upon running a complex algorithm, the required signature [@even:82; @even:goldreich:lempel:85]. We see that if such a protocol existed, it would have a final step where one message is exchanged, say, from Bob to Alice. In that case, before sending his last message, Bob would already have all the information required for him to compute Alice's signature of the contract (and Alice does not). Therefore, if he does not send the last message, the protocol reaches an unfair state. Note the essential importance of the asynchronicity – it is not possible for [*distant*]{} parties to arrange in advance that messages are sent [*simultaneously*]{}.
One way to come around this difficulty is to consider [*optimistic*]{} protocols that do not require communication with Trent unless something goes wrong (some message is missing, etc.) [@aso:97]. In such protocols, the clients contact Trent regarding the given contract before the actual signing takes place. Trent registers the request and assigns the particular contract to the clients, this way [*initializing*]{} the signing protocol. After that, the clients [*exchange*]{} messages between each other such that, if the protocol is executed correctly by both sides, both will end up with signed messages. In case something goes wrong (a message that does not follow the protocol is sent, or communication is interrupted), Trent is contacted in order to [*bind*]{} the contract.
Another workaround is to relax the fairness condition probabilistically. [*Probabilistic fairness*]{} allows one agent to have at most $\varepsilon$ more probability of binding the contract than the other agent, at each step of the protocol. In this case, for an arbitrarily small $\varepsilon$, classical solutions have been found in which the number of messages exchanged between the agents is minimized [@rabin:83; @ben:90]. In addition to being (probabilistically) fair, in the protocol by Rabin [@rabin:83] the joint probability that one agent can bind the contract, while the other cannot, is also always smaller than a given $\varepsilon$. Moreover, the second protocol by Ben-Or [*et al.*]{} [@ben:90] satisfies an even stronger condition: the conditional probability that an agent cannot bind the contract, given that the other can, can be made arbitrarily small. Note that the two notions are not exclusive: the protocol [@ben:90] is both fair and optimistic.
In this paper, we present a (probabilistically) [*fair*]{} contract signing protocol in which [*no information*]{} is exchanged with a trusted third party (Trent) during the exchange phase. This way, it avoids possible communication bottlenecks that are otherwise inherent when involving Trent. Information exchange takes place during the initialization phase and possibly later during the (contract) binding phase (the protocol is [*optimistic*]{} [@aso:97]: Trent is rarely asked to bind the contract due to protocol fairness, as cheating does not pay off). Unlike previous classical proposals, in our quantum protocol the messages exchanged between the clients (Alice and Bob) during the exchange phase do [*not*]{} have to be [*signed*]{}. Consequently, our protocol is abuse-free [@abuse-free_1]: a client has no proof that (s)he communicated with the other client, attempting to sign a [*given*]{} contract. In our protocol only two signed messages are exchanged. This is very important when one wants to achieve unconditional security. In the case of classical protocols, digital pseudo-signatures [@Chaum.Roijakkers-Unconditionally-SecureDigitalSignatures-1991] should be used, where the key is single-use and expensive to generate. Finally, our protocol can be used in solving some purely quantum protocols involving timely decisions between spatially distant parties, such as the simultaneous dense coding and teleportation of [@daowen:11].
In classical cryptography the contract exchange is done in such a way that the respective participants learn some information (signed message, etc.) bit by bit, thus increasing their knowledge. In order to bind the contract they have to present the (complete) information to Trent. Our approach is somewhat different, and is based on the laws of quantum physics.
Quantum systems obey the laws of quantum physics, which exhibit some counterintuitive features that are quite distinct from those of classical physics. The principle of quantum superposition
---
address: |
DESY, Notkestr. 85, D-22603 Hamburg, Germany\
E-mail: fpschill@mail.desy.de
author:
- 'F.-P. Schilling \[H1 Collaboration\]'
title: |
Diffractive Jet Production in DIS\
– Testing QCD Factorisation
---
Overview
========
At HERA, colour singlet exchange or [*diffractive*]{} processes are studied in deep-inelastic $ep$ scattering (DIS), where the exchanged photon with virtuality $Q^2$ provides a probe to determine the QCD (i.e. quark and gluon) structure of diffractive exchange. In [@collins], it was proven that QCD hard scattering factorisation is valid in diffractive DIS, so that [*diffractive parton distributions*]{} $p_i^D$ in the proton can be defined as quasi-universal objects. The hypothesis of a factorising $x_{{I\!\!P}}$ dependence ([*Regge factorisation*]{}) is often used in addition.
Measurements of inclusive diffractive DIS in terms of the [ *diffractive structure function*]{} $F_2^{D(3)}(x_{{I\!\!P}},\beta,Q^2)$ mainly constrain the diffractive quark distribution. By contrast, diffractive dijet production is directly sensitive to the gluon distribution $g^D(z,\mu^2)$ (Fig. \[fig2\]), which can be inferred only indirectly from the scaling violations of $F_2^{D(3)}$. QCD factorisation can be tested by predicting the dijet cross sections using the pdf’s extracted from $F_2^{D(3)}$.
![Inclusive diffractive scattering at HERA [*(left)*]{} and diffractive dijet production [*(right)*]{}, viewed in a partonic picture.[]{data-label="fig2"}](djets-kine.grey.eps){height="3.0cm"}
Furthermore, the predictions of a variety of phenomenological models for diffractive DIS such as soft colour neutralisation or 2-gluon exchange can be confronted with the dijet cross sections.
Data Selection and Cross Section Measurement
============================================
The data sample corresponds to an integrated luminosity of $\mathcal{L}=18.0 \rm\ pb^{-1}$ and was obtained with the H1 detector at HERA. Dijet events were identified using the CDF cone algorithm and diffractive events were selected by the requirement of a large rapidity gap in the outgoing proton direction. The kinematic range of the measurement is $4<Q^2<80 \ \mathrm{GeV^2}$, $p^*_{T, jet}>4 \ \mathrm{GeV}$, $x_{{I\!\!P}}<0.05$, $M_Y<1.6 \ \mathrm{GeV}$ and $|t|<1.0 \ \mathrm{GeV^2}$. The cross sections were corrected for detector and QED radiative effects and the systematic uncertainties, which dominate the total errors, were carefully evaluated.
Diffractive Parton Distributions
================================
Parton distributions for the diffractive exchange[^1] were extracted from DGLAP QCD fits to $F_2^{D(3)}(x_{{I\!\!P}},\beta,Q^2)$ in [@h1f2d94]. The parton distributions were found to be dominated by gluons ($80-90\%$ of the exchange momentum).
If these parton distributions, which evolve according to the DGLAP equations, are used to predict the diffractive dijet cross sections, a very good agreement is obtained (Fig. \[fig5ab\]). Fig. \[fig7\] shows the measurement of the dijet cross section as a function of $z_{{I\!\!P}}^{(jets)}$, an estimator for the parton momentum fraction of the diffractive exchange which enters the hard process (Fig. \[fig2\] right). A very good agreement in shape and normalisation is obtained if the [ *fit 2*]{} parton distributions from [@h1f2d94] are used. The [*fit 3*]{} parameterisation, in which the gluon distribution is peaked towards high $z_{{I\!\!P}}$ values, is disfavoured[^2]. Using different factorisation scales ($\mu^2=Q^2+p_T^2$ in Fig. \[fig7\]a, $\mu^2=p_T^2$ in Fig. \[fig7\]b) or including a resolved virtual photon contribution (Fig. \[fig7\]a) in the model prediction does not alter these conclusions. The dijet data thus strongly support the validity of QCD factorisation in diffractive DIS and give tight constraints on the diffractive gluon distribution in both shape and normalisation.
The measured $z_{{I\!\!P}}^{(jets)}$ cross sections in bins of the scale $\mu^2=Q^2+p_T^2$ (Fig. \[fig8\]a) are in good agreement with the prediction based on a DGLAP evolution of the diffractive parton distributions. The $z_{{I\!\!P}}^{(jets)}$ cross sections in bins of $x_{{I\!\!P}}$ (Fig. \[fig8\]b) demonstrate consistency with Regge factorisation.
In a Regge framework, the energy dependence of the cross section is determined in terms of an effective [*pomeron intercept*]{} $\alpha_{{I\!\!P}}(0)=1.17\pm0.07$ (stat.+syst.) from the $x_{{I\!\!P}}$ cross section (Fig. \[fig6\]a), consistent with the result from [@h1f2d94]. The cross section as a function of $\beta$ is shown in Fig. \[fig6\]b.
Soft Colour Neutralisation Models
=================================
In Fig. \[fig10\], the cross sections are compared with models based on the ideas of soft colour neutralisation to produce large rapidity gaps. These are the original version of the ‘soft colour interactions’ model (SCI) [@sci], the improved version of SCI based on a generalised area law [@scinew] and the ‘semi-classical model’ [@semicl]. The original SCI and the semi-classical models give good descriptions of the differential distributions. However, none of these models is yet able to simultaneously reproduce shapes and normalisations of the dijet cross sections.
Colour Dipole and 2-gluon Exchange Models
=========================================
Models for diffractive DIS based on the diffractive scattering of $q\bar{q}$ or $q\bar{q}g$ photon fluctuations off the proton by 2-gluon exchange are confronted with the data in Fig. \[fig11\] for the limited kinematic range of $x_{{I\!\!P}}<0.01$, where contributions from quark exchange can be neglected. The ‘saturation model’ [@sat], which takes only $k_T$-ordered configurations of the final state partons into account, reproduces the shapes of the differential distributions, but underestimates the cross sections by a factor of two. The model of Bartels et al. [@bartels], in which also non-$k_T$-ordered configurations are taken into account, is found to be in reasonable agreement with the data if a free parameter $p_{T,g}^{cut}$ is fixed to $1.5 \rm\ GeV$[^3].
Conclusions
===========
Diffractive dijet production has been shown to be a powerful tool to gain insight into the underlying QCD dynamics of diffraction, in particular the role of gluons. Factorisable, gluon-dominated diffractive parton distributions successfully describe diffractive jet production and inclusive diffraction in DIS at the same time, in agreement with QCD factorisation.
[99]{}
H1 Collaboration, C. Adloff [*et al.*]{}, .
F.-P. Schilling, Ph.D. thesis, Univ. Heidelberg (2001), DESY-THESIS-2001-010.
J. Collins, , erratum ibid. [**D 61**]{} (2000) 019902.
H1 Collaboration, C. Adloff [*et al.*]{}, .
A. Edin, G. Ingelman, J. Rathsman, .
J. Rathsman, .
W. Buchmüller, T. Gehrmann, A. Hebecker, .
K. Golec-Biernat, M. Wüsthoff, .
J. Bartels, H. Lotter, M. Wüsthoff, ;\
J. Bartels, H. Jung, M. Wüsthoff, .
[^1]: The assumption of Regge factorisation was found to be compatible with the data.
[^2]: The corresponding gluon distributions are shown above the cross sections.
[^3]: $p_{T,g}^{cut}$ corresponds to the minimum $p_T$ of the final state gluon in the case of $q\bar{q}g$ production.
---
abstract: 'This article continues a discussion raised in previous publications (LANL preprint server, nucl-th/0202006 and nucl-th/0202020). I try to convince my opponents that general arguments are not “my case" and may be applied to their model.'
author:
- 'V. Yu. Ponomarev[@byline2]'
title: 'On “the authentic damping mechanism” of the phonon damping model. II [^1]'
---
To recall briefly a discussion which is already spread over several publications:
A damping mechanism of giant resonances (GR) is well established and represents now basic knowledge in nuclear structure physics. Calculations performed by many groups of authors within different microscopic approaches confirm that a spreading width (due to a coupling of collective modes, phonons, to complex configurations) is the main part of the total GR width in medium and heavy nuclei. In light nuclei, a coupling to continuum (an escape width) also plays an essential role.
The damping mechanism of GRs in the phenomenological phonon damping model (PDM), in its PDM-1 version, is different from that (see an important clarification in [@note0]). A collective phonon fragments within PDM-1 as a result of coupling to simple, and not to complex, configurations, i.e. only the so-called Landau damping mechanism is accounted for. The coupling strength is a phenomenological model parameter which is adjusted to reproduce the GR width known from experiment. The agreement with data provided by fits within the PDM ranges from very good to excellent.
In a recent article [@n9], which raised the present discussion, it has been concluded that this type of fit confirms “the [**authentic**]{} damping mechanism” of the PDM as “the result of coupling between collective phonon and non-collective $p$-$h$ configurations” (i.e. the well-established knowledge on the GR properties was put in doubt). This conclusion has been criticized in my article [@m]. It has been argued that this model has the Breit-Wigner (BW) form for the phonon distribution as an [*ad hoc*]{} input and thus even an excellent description of the available data is not surprising. The fruitfulness of drawing conclusions from fits in which model parameters are adjusted to describe physical observables has been put in doubt.
Although my evaluation of the PDM in [@m] was made from the point of view of general physical grounds, Dang [*et al.*]{} did not agree with me in the subsequent publication [@nn]. They claim that I consider some specific case (“his case") which cannot be attached to the PDM and that all my arguments “[*are either wrong or irrelevant*]{}". I cannot agree with their conclusion and present below additional arguments in a sequence following the paragraphs in [@nn]:
[**2.**]{} For the giant dipole resonance (GDR), the energy scale associated with variations in a coupling matrix between a phonon and uncorrelated $1p1h$ states is of the order of a few hundred keV. The width of the GDR strength function is of the order of a few MeV. So, I do not agree that the condition cited in [@nn] from [@Boh69] is satisfied in the GDR region: why are a few MeV small compared to a few hundred keV?
I know only one PDM-1 article [@n1] in which it is assumed that a phonon interacts 40 times more strongly with some specific configurations than with others (see more on this article in [**9.**]{} below). In all other PDM-1 papers, we find a single phonon which interacts equally with all $1p1h$ configurations. I do not want to discuss here the PDM fits at non-zero temperature. To keep on reproducing the data in hot nuclei, Dang [*et al.*]{} have to assume, for unclear reasons, that a phonon interacts with $1p1p$ and $1h1h$ configurations about 10 times more strongly than with $1p1h$ configurations. Again, as in the case of cold nuclei, the goal of providing the best fits is preferred to an understanding of the physics. I think this is a blind alley for theory.
It is true that PDM equations are presented in a general form in many papers by this group with different $V_{q_1 s_1}$. But the point is that they are never used in actual calculations in this form. For this reason, I prefer to discuss what is used in calculations rather than what is written and not used even by the PDM authors themselves.
[**3.**]{} It is very simple to transform Eq. (1) in [@nn] for $m_q^{(2)}$ into Eq. (1) in [@m] for $W_2$, although Dang [*et al.*]{} claim it is impossible. For that, one needs to switch off an additional PDM smearing, i.e., consider the limit $\varepsilon \to 0$. This brings one immediately to the first line of Eq. (2D-14) in [@Boh69]. Eq. (1) in [@m] (for a constant coupling strength) or its general form in [@Boh69]: $$\hspace*{60mm}W_2 = \sum_{a,\alpha} (V_{a \alpha})^2 \hspace*{60mm}
\mbox{(2D-14)}$$ for the second moment $W_2$ is relevant to the PDM as well as to any model which deals with interacting systems.
Of course, to perform this transformation one should use the PDM strength function introduced in Ref. [@o1]: $$S_q(E) = \frac{1}{\pi}
\frac{\gamma_q(E)}{\left(E-\omega_{q}-P_q(E)\right)^2+\gamma_q^2(E)}~
\label{e1}$$ where $\gamma_q(E)$ is the PDM damping, $P_q(E)$ is the polarization operator (see, e.g., Ref. [@o1] for definitions), and $\omega_{q}$ is a phonon energy, a model parameter. The strength function $S_q(E)$ presents fragmentation properties of a PDM phonon over eigen-states of the PDM Hamiltonian smeared with an additional parameter $\varepsilon$. Parameter $\varepsilon$ appears in $\delta(E) = \varepsilon/[\pi \cdot (E^2+ \varepsilon^2)]$ for $\delta$-functions in $\gamma_q(E)$.
I point this out because the strength function (\[e1\]) has been replaced in subsequent PDM articles [@n9; @nn; @n1; @n2; @n3; @n4; @n5; @n6; @n7; @n8; @n10; @n11; @n12; @n13] by its approximate form: $$S_q'(E) = \frac{1}{\pi}
\frac{\gamma_q(E)}{\left(E-E_{GDR}\right)^2+\gamma_q^2(E)}
\label{e2}$$ where $E_{GDR}$ should be taken as a solution of $$f(E) \equiv E-\omega_{q}-P_q(E)=0~.
\label{e3}$$ Eq. (\[e2\]) has been obtained from Eq. (\[e1\]) by expanding $P_q(E)$ near a solution of Eq. (\[e3\]), $E_{GDR}$, and then extrapolating the properties of this approximation far away from $E_{GDR}$. In the limit $\varepsilon \to 0$, Eq. (\[e3\]) has $N+1$ solutions corresponding to eigen-energies of the PDM Hamiltonian.
[**4.**]{} I never claimed that the BW form for the phonon distribution is assumed within the PDM. But it is indeed an [*ad hoc*]{} input for PDM calculations. I may refer again to [@Boh69] where we read that “[*the Breit-Wigner form for the strength function is an immediate consequence of the assumption of a constant coupling to the other degrees of freedom of the system*]{}". The BW under discussion has nothing to do with definition of the PDM strength function. Indeed, in the limit $\varepsilon \to 0$, $S_q(E)$ turns into a set of infinitely narrow lines while their envelope still remains the BW.
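The quoted statement is easy to check numerically. The following toy model (our own construction with arbitrary parameters, not a PDM calculation) couples one state $|q\rangle$ with a constant matrix element $V$ to a picket fence of equally spaced states and compares the resulting strength distribution with the BW envelope of width $\Gamma = 2\pi V^2/d$, $d$ being the level spacing:

```python
import numpy as np

# One state |q> at omega_q coupled with CONSTANT matrix element V to N
# equally spaced background states (spacing d). Diagonalizing and
# comparing the strengths |<q|k>|^2 at the eigenenergies E_k with a
# Breit-Wigner envelope of width Gamma = 2*pi*V^2/d.
omega_q, V, d, N = 0.0, 0.05, 0.01, 2001
E_bg = d * (np.arange(N) - N // 2)

H = np.diag(np.concatenate(([omega_q], E_bg)))
H[0, 1:] = H[1:, 0] = V
E_k, U = np.linalg.eigh(H)
S = U[0, :] ** 2                     # strength of |q> in eigenstate |k>

Gamma = 2.0 * np.pi * V**2 / d
BW = d * (Gamma / (2.0 * np.pi)) / ((E_k - omega_q)**2 + Gamma**2 / 4.0)
mask = np.abs(E_k - omega_q) < 5.0   # stay away from the spectrum edges
print("max |S - BW| near the centroid:", np.abs(S - BW)[mask].max())
```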
[**5.**]{} I do not agree that the calculation with random values of $E_{\alpha}$ in [@m] “[*no longer corresponds to the PDM*]{}". I have used the PDM Hamiltonian, and the details of the spectrum and model parameters are only technical details of a calculation. The purpose of my calculation is to demonstrate that “[*the crucial feature of the PDM is the use of realistic single-particle energies*]{}" [@nn] is of marginal importance when the configuration space is not small; everything is determined by the BW discussed above.
$E_0$ in Ref. [@nn] belongs to the Lorentz line in a hypothetical nucleus and not to my PDM fits. The eigenenergies in my calculation in Ref. [@m] were obtained from Eq. (\[e3\]) in the limit $\varepsilon \to 0$.
[**6.**]{} I agree that if something “[*is by no mean\[s\] obvious*]{}" it has to be checked. My experience with microscopic calculations tells me that an increase of collectivity tends to increase the coupling strength. Of course, it is not necessary that everybody should trust my experience. But then, there are no other alternatives: the one who puts it in doubt
---
abstract: 'We predict a non-monotonic temperature dependence of the persistent currents in a ballistic ring coupled strongly to a stub in the grand canonical as well as in the canonical case. We also show that such a non-monotonic temperature dependence can naturally lead to a $\phi_0/2$ periodicity of the persistent currents, where $\phi_0=h/e$. There is a crossover temperature $T^*$, below which persistent currents increase in amplitude with temperature while they decrease above this temperature. This is in contrast to persistent currents in rings, which are affected monotonically by temperature. $T^*$ is parameter-dependent but of the order of $\Delta_u/\pi^2k_B$, where $\Delta_u$ is the level spacing of the isolated ring. For the grand-canonical case $T^*$ is half of that for the canonical case.'
address: |
$^a$Fl.48, 93-a prospect Il’icha, 310020 Khar’kov, Ukraine\
$^b$S.N. Bose National Centre for Basic Sciences, JD Block, Sector 3, Salt Lake City, Calcutta 98, India.
author:
- 'M. V. Moskalets$^a$ and P. Singha Deo$^{b,}$[@eml]'
title: '**[Temperature enhanced persistent currents and “$\phi_0/2$ periodicity”]{}**'
---
Introduction
============
Although the magnitude of the persistent current amplitudes in metallic and semiconductor mesoscopic rings [@but] has received experimental attention [@exp], comparatively little attention has been given to qualitative features of the persistent current. Qualitative features reflect the underlying phenomena and are more important than the order of magnitude. Incidentally, the order of magnitude and sign of the persistent currents in metallic rings are still not understood.
With this background in mind, we study the temperature dependence of persistent currents in a ring strongly coupled to a stub [@buet]. We predict a non-monotonic temperature dependence of the amplitude of persistent currents in this geometry both for the grand-canonical and for the canonical case. We show that there is a crossover temperature ($T^*$) above which it decreases with temperature and below which it increases with temperature, and energy scales determining this crossover temperature are quantified. This is in contrast to the fact that in the ring, temperature monotonically affects the amplitude of persistent currents. However, so do dephasing and impurity scattering, which are again directly or indirectly temperature dependent [@but; @Cheung], except perhaps in very restrictive parameter regimes where it is possible to realize a Luttinger liquid in the ring in the presence of a potential barrier [@krive]. A recent study, however, shows that in the framework of a Luttinger liquid, a single potential barrier leads to a monotonic temperature dependence of the persistent currents for non-interacting as well as for interacting electrons [@mos99]. We also show a temperature-induced switch-over from $\phi_0$ periodicity to $\phi_0/2$ periodicity. This is a very non-trivial temperature dependence of the fundamental periodicity that cannot be obtained in the ring geometry.
There is also another motivation behind studying the temperature dependence of persistent currents in this ring-stub system. In the ring, the monotonic behavior of the persistent current amplitude with temperature stems from the fact that the states in a ring pierced by a magnetic flux exhibit a strong parity effect [@Cheung]. There are two ways of defining this parity effect in the single channel ring (multichannel rings can be treated using the same concepts, as mentioned briefly at the end of this paragraph). In the single-particle picture (possible only in the absence of electron-electron interaction), it can be defined as follows: states with an even number of nodes in the wave function carry diamagnetic currents (positive slope of the eigenenergy versus flux) while states with an odd number of nodes in the wave function carry paramagnetic currents (negative slope of the eigenenergy versus flux) [@Cheung]. In the many-body picture (without any electron-electron interaction), it can be defined as follows: if $N$ is the number of electrons (spinless) in the ring, the persistent current carried by the $N$-body state is diamagnetic if $N$ is odd and paramagnetic if $N$ is even [@Cheung]. Leggett conjectured [@leg] that this parity effect remains unchanged in the presence of electron-electron interaction and impurity scattering of any form. His arguments can be simplified to say that when electrons move in the ring, they pick up three different kinds of phases: 1) the Aharonov-Bohm phase due to the flux through the ring, 2) the statistical phase due to electrons being Fermions and 3) the phase due to the wave-like motion of electrons depending on their wave vector. The parity effect is due to competition between these three phases along with the constraint that the many-body wave function satisfy the periodic boundary condition (which means if one electron is taken around the ring with the other electrons fixed, the many-body wave function should pick up a phase of 2$\pi$ in all). Electron-electron interaction or simple potential scattering cannot introduce any additional phase, although it can change the kinetic energy or the wave vector and hence modify the third phase. Simple variational calculations showed that the parity effect still holds [@leg]. Multichannel rings can be understood by treating impurities as perturbations to decoupled multiple channels, which means small impurities just open up small gaps at level crossings within the Brillouin zone and keep all qualitative features of the parity effect unchanged. Strong impurity scattering in the multichannel ring can, however, introduce strong level correlations, which is an additional phenomenon. Whether and how the parity effect gets modified by these correlations is an interesting problem.
In a one-dimensional (1D) system where we have a stub of length $v$ strongly coupled to a ring of length $u$ (see the left bottom corner in Fig. 1), we can have a bunching of levels with the same sign of persistent currents, [@Deo95] i.e., many consecutive levels carry persistent currents of the same sign. This is essentially a breakdown of the parity effect. The parity effect breaks down in this single channel system because there is a new phase that does not belong to any of the three phases discussed by Leggett and mentioned in the preceding paragraph. This new phase cancels the statistical phase and so the N-body state and the (N+1)-body state behave in similar ways or carry persistent currents of the same sign [@deo96; @sre]. When the Fermi energy is above the value where we have a node at the foot of the stub (that results in a transmission zero in transport across the stub), there is an additional phase of $\pi$ arising due to a slip in the Bloch phase [@deo96] (the Bloch phase is the third kind of phase discussed above, but the extra phase $\pi$ due to slips in the Bloch phase is completely different from any of the three phases discussed above because this phase change of the wave function is not associated with a change in the group velocity or kinetic energy or the wave vector of the electron [@deo96; @sre]). The origin of this phase slip can be understood by studying the scattering properties of the stub structure. One can map the stub into a $\delta$-function potential of the form $k \cot (kv) \delta (x-x_0)$ [@deo96]. So one can see that the strength of the effective potential is $k \cot (kv)$ and is energy dependent. Also the strength of the effective potential is discontinuous at $kv=n \pi$. Infinitesimally above $\pi$ an electron faces a positive potential while infinitesimally below it faces a negative potential. As the effective potential is discontinuous as a function of energy, the scattering phase, which is otherwise a continuous function of energy, in this case turns out to be discontinuous as the Fermi energy sweeps across the point $kv=\pi$. As the scattering phase of the stub is discontinuous, the Bloch phase of the electron in the ring-stub system is also discontinuous. This is pictorially demonstrated in Figs. 2 and 3 of Ref. [@deo96]. In an energy scale $\Delta_u\propto 1/u$ (typical level spacing for the isolated ring of length $u$) if there are $n_b\sim\Delta_u/\Delta_v$ (where $\Delta_v\propto 1/v$, the typical level spacing of the isolated stub) such phase slips, then each phase slip gives rise to an additional state with the same slope and there are $n_b$ states of the same slope or the same parity bunching together with a phase slip of $\pi$ between each of them [@deo96]. The fact that there is a phase slip of $\pi$ between two states of the same parity was generalized later, arguing from the oscillation theorem, which is equivalent to Leggett’s conjecture for the parity effect [@lee]. Transmission zeros are an inherent property of Fano resonance generically occurring in mesoscopic systems and this phase slip is believed to have been observed [@the] in a transport measurement [@sch]. For an elaborate discussion on this, see Ref. [@tan]. A similar case was studied in Ref. [@wu], where they show the transmission zeros and abrupt phase changes arise due to degeneracy of “dot states” with states of the “complementary part” and hence these are also Fano-type resonances.
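The phase slip can be made explicit with a short calculation (our own illustration, not from the paper; units $\hbar^2/2m=1$ and $v=1$ are arbitrary choices). For a 1D $\delta$-potential $g\,\delta(x)$ the standard transmission amplitude is $t=2ik/(2ik-g)$; inserting the energy-dependent strength $g(k)=k\cot(kv)$ quoted above gives a transmission zero at $kv=n\pi$ with a phase slip of $\pi$ across it:

```python
import numpy as np

# Transmission across a delta potential g*delta(x) of energy-dependent
# strength g(k) = k*cot(kv): |t| -> 0 at kv = n*pi and arg(t) flips sign,
# approaching a pi phase slip at the zero, while k (and hence the group
# velocity of the electron) is essentially unchanged.
v = 1.0
for kv in (0.95 * np.pi, 0.99 * np.pi, 1.01 * np.pi, 1.05 * np.pi):
    k = kv / v
    g = k / np.tan(kv)                     # effective strength k*cot(kv)
    t = 2.0j * k / (2.0j * k - g)          # delta-barrier transmission amplitude
    print(f"kv/pi = {kv/np.pi:.2f}:  |t| = {abs(t):.3f},  arg t = {np.angle(t):+.3f} rad")
```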
The purpose of this work is to show a very non-trivial temperature dependence of persistent currents due to the breakdown of the parity effect. The temperature effects predicted here, if observed experimentally, will further confirm the
---
abstract: |
  Detailed Monte Carlo Inversion analysis of the spectral lines from three Lyman limit systems (LLS) \[$N$(H[i]{}) $\ga 1.0\times10^{17}$ [cm$^{-2}\,$]{}\] and nine lower $N$(H[i]{}) systems \[$2\times10^{14}$ [cm$^{-2}\,$]{} $\la N$(H[i]{}) $\la 2\times10^{16}$ [cm$^{-2}\,$]{}\] observed in the VLT/UVES spectra of Q0347–3819 (in the range $2.21 \leq z \leq 3.14$) and of APM BR J0307–4945 (at $z = 4.21$ and 4.81) is presented. Combined with the results from a previous work, the analyzed LLSs turn out to be a [*heterogeneous*]{} population originating in different environments. A functional dependence of the line-of-sight velocity dispersion $\sigma_{\rm v}$ on the absorber size $L$ is confirmed: the majority of the analyzed systems follow the scaling relation $\sigma_{\rm v} \sim (N_{\rm H}\,L)^{0.3}$ (with $N_{\rm H}$ being the total gas column density). This means that most absorbers may be related to virialized systems like galaxies or their halos. The previously noted enhancement of the metal content in small size systems is also confirmed: metallicities of $Z \sim (1/3-1/2)\,Z_\odot$ are found in systems with $L \la 0.4$ kpc, whereas we observe much lower metal abundances in systems with larger linear sizes. For the first time in LLSs, a pronounced \[$\alpha$-element/iron-peak\] enrichment is revealed: the absorber at [$z_{\rm abs}\,$]{}= 2.21 shows \[O/Fe\] = $0.65\pm0.11$, \[Si/Fe\] = $0.51\pm0.11$, and \[Mg/Fe\] = $0.38\pm0.11$. Several absorption systems exhibit characteristics which are very similar to those observed in high-velocity clouds in the Milky Way and may be considered as high-redshift counterparts of Galactic HVCs.
author:
- 'S. A. Levshakov, I. I. Agafonova, S. D’Odorico, A. M. Wolfe,'
- 'M. Dessauges-Zavadsky'
title: 'Metal abundances and kinematics of quasar absorbers – II. Absorption systems toward Q0347–3819 and APM BR J0307–4945 '
---
Introduction
============
With the present paper we continue to study the chemical composition and the kinematic characteristics of quasar absorption systems using a new computational procedure, the Monte Carlo Inversion algorithm (MCI), developed earlier in a series of papers \[see Levshakov, Agafonova & Kegel (2000); hereafter LAK\]. The MCI technique allows us to recover self-consistently the physical parameters of the intervening gas cloud (such as the average gas number density $n_0$, the column densities for different species $N_{\rm a}$, the kinetic temperature $T_{\rm kin}$, the metal abundances $Z_{\rm a}$, and the linear size $L$), the statistical characteristics of the underlying hydrodynamical fields (such as the line-of-sight velocity dispersion $\sigma_{\rm v}$, and the density dispersion $\sigma_{\rm y}$), and the line-of-sight density $n_{\rm H}(x)$ and velocity $v(x)$ distributions (here $x$ is the dimensionless coordinate in units of $L$). Having this comprehensive information we are able to classify the absorbers more reliably and hence to obtain important clues concerning the physical conditions in intervening galaxies, galactic halos and large scale structure objects at high redshifts. Besides, it will also be possible to constrain existing theories of galaxy formation, since the observed statistics of the damped Ly$\alpha$ (DLA) and Lyman limit (LLS) systems is believed to be a strong test of different cosmological models (e.g. Gardner et al. 2001; Prochaska & Wolfe 2001).
In the first part of our study (Levshakov et al. 2002a, hereafter Paper I) we reported results on the absorption systems at [$z_{\rm abs}\,$]{}= 1.87, 1.92 and 1.94 toward the HDF-South quasar J2233–606. These systems exhibit many metal lines with quite complex structures. It was found that all profiles can be well described with an assumption of a homogeneous metallicity and a unique photoionizing background. According to the estimated sizes, velocity dispersions and metal contents the absorbers at [$z_{\rm abs}\,$]{}= 1.92 and 1.87 were related to the galactic halos whereas the system at [$z_{\rm abs}\,$]{}= 1.94 was formed, more likely, in an irregular star-forming galaxy. It was also found, that the linear size and the line-of-sight velocity dispersion for all three absorbers obey a scaling relation of the same kind that can be expected for virialized systems.
The present paper deals with absorbers observed in the spectra of Q0347–3819 ($z_{\rm em} = 3.23$) and APM BR J0307–4945 ($z_{\rm em} = 4.75$, see § 2.1). Both spectra include several dozen systems containing metals, but most of them are weak and severely blended and hence do not allow us to estimate the underlying physical parameters with reasonable accuracy. After preliminary analysis only 12 systems were chosen for the inversion with the MCI and their properties are described below.
The structure of the paper is as follows. § 2 describes the data sets. In § 3 our model assumptions and basic equations are specified. The estimated parameters for individual systems are given in § 4. The implications of the obtained results for theories of LLS origin are discussed in § 5 and our conclusions are reported in § 6. The Appendix contains a table with typical parameters of different absorbers which are referred to in the present study.
Observations
============
The spectroscopic observations of Q0347–3819 and APM BR J0307–4945 obtained with the UV-Visual Echelle Spectrograph UVES (D’Odorico et al. 2000) on the VLT/Kueyen 8.2 m telescope are described in detail by D’Odorico, Dessauges-Zavadsky & Molaro (2001) and by Dessauges-Zavadsky et al. (2001), respectively. Both spectra were observed with the spectral resolution FWHM $\simeq 7$ [km s$^{-1}\,$]{}. For the analysis of metal systems from the Q0347–3819 spectrum with lines in the range 4880 – 6730 Å which was not covered by the VLT observations, we used a portion of the Q0347–3819 spectrum obtained with the High-Resolution Echelle Spectrograph HIRES (Vogt et al. 1994) on the 10 m W. M. Keck I telescope (Prochaska & Wolfe 1999). The spectral resolution in this case was about 8 [km s$^{-1}\,$]{}. The VLT/UVES data are now available for public use in the VLT data archive.
The majority of the metal systems in the spectrum of Q0347–3819 were identified in Levshakov et al. (2002b), whereas the [$z_{\rm abs}\,$]{}= 4.21 system toward APM BR J0307–4945 was distinguished by Dessauges-Zavadsky et al.[^1] (2001) as consisting of two sub-systems: one at [$z_{\rm abs}\,$]{}= 4.211 and the other at [$z_{\rm abs}\,$]{}= 4.218. A new system at [$z_{\rm abs}\,$]{}= 4.81 is analyzed here for the first time.
Emission redshift of APM BR J0307–4945
--------------------------------------
The emission redshift of this distant quasar, $z_{\rm em} = 4.728\pm0.015$, was previously measured by Péroux et al. (2001) from the Si[iv]{}+O[iv]{}\] $\lambda1400.0$ and C[iv]{} $\lambda1549.1$ lines observed in the $\sim 5$ Å resolution spectrum obtained with the 4 m Cerro Tololo Inter-American Observatory telescope.
In our VLT/UVES spectrum of this quasar a few additional lines can be identified which are useful for the redshift measurements. The most important of them is the weak O[i]{} $\lambda1304$ line. From earlier studies (see, e.g., Tytler & Fan 1992 and references cited therein) it is known that ‘low-ionization’ lines such as O[i]{} $\lambda1304$ are systematically redshifted and narrower than ‘high-ionization’ lines such as C[iv]{} and Ly$\alpha$.
In Fig. 1 we compare the O[i]{} profile with those of the higher-ionization lines and of the wide Ly$\alpha$ emission blend. All these lines are shown on the same velocity scale, which is defined by the O[i]{} $\lambda1304$ center corresponding to $z_{\rm em} = 4.7525$. This line is redshifted with respect to the $z_{\rm em}$ value deduced by Péroux et al. from the measurements of the Si[iv]{}+O[iv]{}\] and C[iv]{} profiles. Because the Ly$\alpha$ emission line is blended with other emission
---
abstract: 'In this short article we develop recent proposals to relate Yang-Baxter sigma-models and non-abelian T-duality. We demonstrate explicitly that the holographic space-times associated to both (multi-parameter)-$\beta$-deformations and non-commutative deformations of ${\cal N}=4$ super Yang-Mills gauge theory including the RR fluxes can be obtained via the machinery of non-abelian T-duality in Type II supergravity.'
---
[**Marginal and non-commutative deformations\
via non-abelian T-duality**]{}
[Ben Hoare$^{a}$ and Daniel C. Thompson$^{b}$]{}
[*$^{a}$ Institut für Theoretische Physik, ETH Zürich,\
Wolfgang-Pauli-Strasse 27, 8093 Zürich, Switzerland.*]{}
[*$^{b}$ Theoretische Natuurkunde, Vrije Universiteit Brussel & The International Solvay Institutes,\
Pleinlaan 2, B-1050 Brussels, Belgium.*]{}
[*E-mail: *]{} [<bhoare@ethz.ch>, <Daniel.Thompson@vub.ac.be>]{}
Introduction {#sec:intro}
============
There is a rich interplay between the three ideas of T-duality, integrability and holography. Perhaps the most well studied example of this is the use of the TsT transformation to ascertain the gravitational dual space-times to certain marginal deformations of ${\cal N}=4$ super Yang-Mills gauge theory [@Lunin:2005jy]. Whilst this employs familiar T-dualities of $U(1)$ isometries in space-time, T-duality can be extended to both non-abelian isometry groups and to fermionic directions in superspace. Such generalised T-dualities also have applications to holography. Fermionic T-duality [@Berkovits:2008ic; @Beisert:2008iq] was critical in understanding the scattering amplitude/Wilson loop duality at strong coupling. T-duality of non-abelian isometries has been employed as a solution generating technique in Type II supergravity [@Sfetsos:2010uq], relating for instance $AdS_5\times S^5$ to (a limit[^1] of) the space-times corresponding to ${\cal N}=2$ non-Lagrangian gauge theories. Developing the recent results of [@Hoare:2016wsk; @Borsato:2016pas] this note will investigate further the role generalised notions of T-duality can play in holography.
A new perspective on deformations of the $AdS_5 \times S^5$ superstring has come from the study of Yang-Baxter deformations of string $\sigma$-models [@Klimcik:2002zj; @Klimcik:2008eq; @Klimcik:2014bta; @Delduc:2013fga; @Delduc:2013qra]. These are integrable algebraic constructions which deform the target space of the $\sigma$-model through the specification of an antisymmetric $r$-matrix solving the (modified) classical Yang-Baxter equation ((m)cYBE).
If the $r$-matrix solves the mcYBE then, applied to the supercoset formulation of strings in $AdS_5\times S^5$ [@Metsaev:1998it; @Berkovits:1999zq], these give rise to $\eta$-deformed space-times which are conjectured to encode a quantum group $q$-deformation of ${\cal N}=4$ super Yang-Mills with a deformation parameter $q \in \mathbb{R}$ [@Delduc:2014kha; @Arutyunov:2013ega; @Arutyunov:2015qva]. However the $\eta$-deformed worldsheet theory appears to be only globally scale invariant [@Hoare:2015gda; @Hoare:2015wia], the target space-time does not solve exactly the Type II supergravity equations [@Arutyunov:2015qva] but rather a generalisation thereof [@Arutyunov:2015mqj]. Classically $\eta$-deformations are related via a generalised Poisson-Lie T-duality [@Vicedo:2015pna; @Hoare:2015gda; @Sfetsos:2015nya; @Klimcik:2015gba; @Klimcik:2016rov; @Delduc:2016ihq] to a class of integrable deformation of (gauged) WZW models known as $\lambda$-deformations [@Sfetsos:2013wia; @Hollowood:2014rla; @Hollowood:2014qma], which do however have target space-times solving the usual supergravity equations of motion [@Sfetsos:2014cea; @Demulder:2015lva; @Borsato:2016zcf; @Chervonyi:2016ajp]. There is also evidence that the latter class corresponds to a quantum group deformation of the gauge theory, but with $q$ a root of unity [@Hollowood:2015dpa].
If instead the $r$-matrix solves the unmodified cYBE (a homogeneous $r$-matrix), first considered in [@Kawaguchi:2014qwa], the YB $\sigma$-models have been demonstrated to give a wide variety of integrable target space-times including those generated by TsT transformations [@Matsumoto:2014nra; @Matsumoto:2015uja; @Matsumoto:2014gwa; @Matsumoto:2015jja; @vanTongeren:2015soa; @Kyono:2016jqy; @Osten:2016dvf]. For these models the corresponding dual theory can be understood in terms of a non-commutative $\mathcal{N} = 4$ super Yang-Mills with the non-commutativity governed by the $r$-matrix and the corresponding Drinfel’d twist [@vanTongeren:2015uha; @vanTongeren:2016eeb]. Recently it has been shown that such YB $\sigma$-models can also be understood in terms of non-abelian T-duality: given an $r$-matrix one can specify a (potentially non-abelian) group of isometries of the target space with respect to which one should T-dualise [@Hoare:2016wsk]. The deformation parameter appears by first centrally extending this isometry group and then T-dualising. Following a Buscher-type procedure, the Lagrange multiplier corresponding to the central extension is non-dynamical. In particular it is frozen to a constant value and thereby plays the role of the deformation parameter. This conjecture was proven in the NS sector in [@Borsato:2016pas], where a slightly different perspective was also given. If one integrates out only the central extension, the procedure above can be seen to be equivalent to adding a total derivative $B$-field constructed from a 2-cocycle on the isometry group with respect to which we dualise and then dualising.
In this note we develop this line of reasoning. We begin by outlining the essential features of Yang-Baxter $\sigma$-models and the technology of non-abelian T-duality in Type II supergravity. After demonstrating that a centrally-extended T-duality can be reinterpreted as a non-abelian T-duality of a coset based on the Heisenberg algebra, we show how the machinery of non-abelian T-duality developed for Type II backgrounds can be readily applied to the construction of [@Hoare:2016wsk; @Borsato:2016pas]. We confirm that the centrally-extended non-abelian T-duals produce the full Type II supergravity backgrounds corresponding to $\beta$-deformations (when the duality takes place in the $S^5$ factor of $AdS_5\times S^5$), non-commutative deformations (when performed in the Poincaré patch of $AdS_5$) and dipole deformations (when performed in both the $S^{5}$ and $AdS_5$ simultaneously). In appendices \[app:sugra\] and \[app:algconv\] we outline our conventions for supergravity and certain relevant algebras respectively. As a third appendix \[app:furtherexamples\] we include some additional worked examples including one for which the non-abelian T-duality is anomalous and the target space solves the generalised supergravity equations.
The supergravity backgrounds in this note have appeared in the literature in the past, but the derivation and technique presented here are novel and simple and, we hope, may have utility in the construction of more general supergravity backgrounds.
Yang Baxter sigma-models {#sec:yangbaxter}
========================
Given a semi-simple Lie algebra $\mathfrak{f}$ (and corresponding group $F$) we define an antisymmetric operator $R$ obeying $$\label{eq:cybe}
[R X , R Y] - R\left([R X, Y]+ [X,RY] \right) = c [ X, Y] \ , \quad X,Y \in \mathfrak{f} \ ,$$ where the cases $c=\pm 1$ and $c=0$ are known as the modified classical and classical Yang-Baxter equations (mcYBE and cYBE) respectively. We adopt some notation $X\wedge Y = X\otimes Y - Y \otimes X$ and define
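As a quick sanity check of (\[eq:cybe\]) in the homogeneous case, the following numerical sketch (our own toy example, not one of the $r$-matrices considered in this note) verifies that the jordanian $r$-matrix $r = h \wedge e$ of $\mathfrak{sl}(2,\mathbb{R})$, acting as $R(X)=h\,{\rm tr}(eX)-e\,{\rm tr}(hX)$ in the fundamental representation, solves the cYBE with $c=0$:

```python
import numpy as np

# Toy check that the jordanian r-matrix r = h ^ e of sl(2,R) solves the
# homogeneous (c = 0) classical Yang-Baxter equation, with the operator
# R(X) = h*tr(eX) - e*tr(hX) in the fundamental representation.
h = np.array([[1.0, 0.0], [0.0, -1.0]])
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])

def R(X):
    return h * np.trace(e @ X) - e * np.trace(h @ X)

def brk(A, B):
    return A @ B - B @ A

for X in (h, e, f):
    for Y in (h, e, f):
        lhs = brk(R(X), R(Y)) - R(brk(R(X), Y) + brk(X, R(Y)))
        assert np.allclose(lhs, 0.0), "cYBE violated"
print("r = h ^ e solves the classical Yang-Baxter equation with c = 0")
```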
---
abstract: 'The dependence of the Lyapunov exponent on the closeness parameter, $\epsilon$, in tangent bifurcation systems is investigated. We study and illustrate two averaging procedures for defining Lyapunov exponents in such systems. First, we develop theoretical expressions for an isolated tangency channel in which the Lyapunov exponent is defined on single channel passes. Numerical simulations were done to compare theory to measurement across a range of $\epsilon$ values. Next, as an illustration of defining the Lyapunov exponent on many channel passes, a simulation of the intermittent transition in the logistic map is described. The modified theory for the channels is explained and a simple model for the gate entrance rates is constructed. An important correction due to the discrete nature of the iterative flow is identified and incorporated in an improved model. Realistic fits to the data were made for the Lyapunov exponents from the logistic gate and from the full simulation. A number of additional corrections which could improve the treatment of the gates are identified and briefly discussed.'
---
[**[Lyapunov Exponents for the Intermittent Transition to Chaos]{}**]{}
and [**Walter Wilcox**]{}$^{a,b}$\
$^{a}$Department of Physics, Baylor University, Waco, TX 76798\
$^{b}$Department of Physics, University of Kentucky, Lexington, KY 40506
Chaos is the study of dynamical systems that have a sensitive dependence on initial conditions. Much attention has been paid to the two main routes to chaos: pitchfork bifurcation and tangent bifurcation. If we consider the general difference equation mapping, $$\begin{aligned}
x_{n+1}=F(x_{n}),\label{recur}\end{aligned}$$ then tangent bifurcation, also called type I intermittency \[Pomeau & Manneville, 1980\], occurs when a tangency develops in iterates of $F(x_{n})$ across the $x_{n}=x_{n+1}$ reflection line. (Pitchfork bifurcations occur when iterates of $F(x_{n})$ possess perpendicular crossings of this line.) Just before the tangency occurs (characterized by the closeness parameter, $\epsilon$, being small), the map is almost tangent to the reflection line and a long channel is formed. When the iterations enter the channel, a long laminar-like flow is established, with nearly periodic behavior. Once the iterations leave the channel, they behave chaotically, then re-enter the channel. The result is a long region of laminar flow that is intermittently interrupted by chaotic intervals. This occurs when $\epsilon$ is near zero and tangency is about to occur, hence the two names: intermittent chaos and tangent bifurcation. Experimentally, type I intermittency has been observed in turbulent fluids \[Bergé [*et al.*]{}, 1980\], nonlinear oscillators \[Jeffries & Perez, 1982\], chemical reactions \[Pomeau [*et al.*]{}, 1981\], and Josephson junctions \[Yeh & Kao, 1983\]. An excellent introduction to the intermittency route to chaos is given in Schuster \[1995\].
In the pioneering studies \[Manneville & Pomeau, 1979\] and \[Pomeau & Manneville, 1980\], it was found that the number of iterations followed an $\epsilon^{-1/2}$ dependence and that the Lyapunov exponent varied as $\epsilon^{1/2}$ for a logistic mapping ($z=2$). In the work by \[Hirsch [*et al.*]{}, 1982\], an expression for the number of iterations spent inside the channel was developed. The equation for the third iterate, i.e. $F(F(F(x)))$ or $F^{(3)}(x)$ where $F(x)=Rx(1-x)$, was expanded in a Taylor series about one of the tangency points for $R_{c}=1+\sqrt{8}$. In the case of the logistic map, we get $$\begin{aligned}
F^{(3)}(x)= x_{c}+(x-x_{c})+a_{c}(x-x_{c})^{2}+b_{c}(R_{c}-R),\label{f3}\end{aligned}$$ where $x_{c}$ is one of the three contact points. After a transformation that centers and rescales the system around $x_{c}$ ($y_{n}\equiv
\frac{x_{n}-x_{c}}{b_{c}}$), the recursion relation can be put into the form $$\begin{aligned}
y_{n+1}=ay_{n}^{2}+y_{n}+\epsilon,\label{what}\end{aligned}$$ where $\epsilon\equiv R_{c}-R>0$ and $a\equiv a_{c}b_{c}$. The more general case can be studied as a first, second, or any iterate instead of just the third iterate, as long as a tangency develops. To derive an analytic description of the trajectory, \[Hirsch [*et al.*]{}, 1982\] switched from a difference equation to a differential equation. Thus, they considered $$\begin{aligned}
\frac{dy}{dn}=ay^{2}+\epsilon.\label{diff}\end{aligned}$$ This approximation is justified as long as the number of iterations in the channel is large enough or, alternatively, as long as the step size between iterations is small compared to the channel length. This is an easy differential equation to solve. One obtains $$\begin{aligned}
n(y_{in})=\frac{1}{\sqrt{a\epsilon}}\left[
\tan^{-1}\left( y_{out}\sqrt{\frac{a}{\epsilon}}\,\right) -\tan^{-1}\left( y_{in}
\sqrt{\frac{a}{\epsilon}}\,\right) \right].\label{firstn}\end{aligned}$$ $y_{in}$ is the entrance to the tangency channel and $y_{out}$ is the exit value and one has that $$\begin{aligned}
-y_{out} \le y_{in} \le y_{out}.\label{limits}\end{aligned}$$ \[Hirsch [*et al.*]{}, 1982\] observed that the entrance points for the logistic map, $y_{in}$ ($R_{c}\ge R$), had a probability distribution that was roughly uniform. Given this distribution, the average number of iterations to travel the length of the channel is given as $$\begin{aligned}
<n>\equiv
\frac{1}{2y_{out}}\int_{-y_{out}}^{y_{out}}n(y_{in})dy_{in}
=\frac{1}{\sqrt{a\epsilon}}\,\tan^{-1}\left(
y_{out}\sqrt{\frac{a}{\epsilon}}\,\right).\label{it}\end{aligned}$$ \[Hirsch [*et al.*]{}, 1982\] also derived a form for the average number of iterations for an arbitrary universality class. The universality class, $z$, is given by the lowest non-vanishing power of $(x-x_{c})$ in the expansion around the tangency point. For tangency to develop, $z$ must always be an even number: $$\begin{aligned}
y_{n+1}=ay_{n}^{z}+y_{n}+\epsilon.\label{genz}\end{aligned}$$ This leads to the differential equation, $$\begin{aligned}
\frac{dy}{dn}=ay^{z}+\epsilon,\label{genzdiff}\end{aligned}$$ and to the number of iterations, $$\begin{aligned}
n(y_{in})=a^{-1/z}\epsilon^{-1+1/z}\int\limits_{y_{in}
\sqrt[z]{\frac{a}{\epsilon}}}^{y_{out}\sqrt[z]{\frac{a}{\epsilon}}}\frac{d{\bar
y}}{{\bar y}^{z}+1}.\label{n}\end{aligned}$$ The average number of iterations is given by $$\begin{aligned}
<n>=\frac{1}{2}a^{-1/z}\epsilon^{-1+1/z}\int\limits_{-y_{out}
\sqrt[z]{\frac{a}{\epsilon}}}^{y_{out}\sqrt[z]{\frac{a}{\epsilon}}}\frac{d{\bar
y}}{{\bar y}^{z}+1},\label{avgn}\end{aligned}$$ when the entrance distribution is again uniform. The numerical simulations in \[Hirsch [*et al.*]{}, 1982\] agreed quite well with the predicted values.
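These single-pass results are easy to verify directly. The sketch below (our own illustration; the values of $a$, $\epsilon$ and $y_{out}$ are arbitrary) iterates the $z=2$ channel map and compares the measured mean passage length with the closed form for $<n>$ above:

```python
import numpy as np

a, eps, y_out = 1.0, 1.0e-4, 0.1     # illustrative channel parameters, z = 2

def channel_pass(y):
    """Iterate y_{n+1} = a*y**2 + y + eps and count steps to traverse the channel."""
    n = 0
    while y <= y_out:
        y = a * y * y + y + eps
        n += 1
    return n

rng = np.random.default_rng(1)
y_in = rng.uniform(-y_out, y_out, size=2000)   # roughly uniform entrance points
n_measured = np.mean([channel_pass(y) for y in y_in])
n_predicted = np.arctan(y_out * np.sqrt(a / eps)) / np.sqrt(a * eps)
print(f"<n> measured = {n_measured:.1f}, predicted = {n_predicted:.1f}")
```

For these parameters the continuum prediction is $<n>\approx 147$, and the discrete iteration reproduces it up to small corrections of the kind discussed later in the paper.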
There are two ways in which Lyapunov exponents may be defined in a simulation with many trajectories. One may define a procedure which measures the Lyapunov exponent on a given trajectory, for example a single channel pass, and then averages over these trajectories. Another possibility is to measure the exponent across many trajectories or channel passes, using a binning procedure to define variances. We will use both procedures here to illustrate the theory. The first procedure will be termed a [*single pass*]{} measurement, the second a [*many pass*]{} measurement. We will develop the theory for the first procedure in the next Section, which will then be illustrated in Section 3 by a simulation in an isolated tangency channel for general $z$. As an illustration of a many pass measurement, a simulation of the intermittent transition in the logistic map will be described in Section 4. The modified theory will be motivated and a simple phenomenological model of the data will then be given in Section 5. In Section 6 an improved expression for the inverse number density $\frac{dy}{dn}$, due to the discrete nature of the iterative flow, will be developed. This will improve the comparison of the model to measurement. Finally, we will summarize our findings and make suggestions for further improvements in the model in the final Section.
Our analysis of
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'Monitoring the optical phase change in a fiber enables a wide range of applications where fast phase variations are induced by acoustic signals or vibrations in general. However, the quality of the estimated fiber response strongly depends on the method used to modulate the light sent to the fiber and capture the variations of the optical field. In this paper, we show that distributed optical fiber sensing systems can advantageously exploit techniques from the telecommunication domain, as those used in coherent optical transmission, to enhance their performance in detecting mechanical events, while jointly offering a simpler setup than widespread pulse-cloning or spectral-sweep based schemes with acousto-optic modulators. We periodically capture an overall fiber Jones matrix estimate thanks to a novel probing technique using two mutually orthogonal complementary (Golay) pairs of binary sequences applied simultaneously in phase and quadrature on two orthogonal polarization states. A perfect channel response estimation of the sensor array is achieved, subject to conditions detailed in the paper, thus enhancing the sensitivity and bandwidth of coherent $\phi$-OTDR systems. High sensitivity, linear response, and bandwidth coverage up to $18~\mathrm{kHz}$ are demonstrated with a sensor array composed of 10 fiber Bragg gratings (FBGs).'
address: |
Nokia Bell Labs Paris-Saclay, 1 route de Villejust, 91620 Nozay, FRANCE\
christian.dorize@nokia-bell-labs.com\
elie.awwad@nokia-bell-labs.com
author:
- Christian Dorize and Elie Awwad
title: Enhancing performance of coherent OTDR systems with polarization diversity complementary codes
---
[99]{}
A. Masoudi and T. P. Newson, “Contributed Review: Distributed optical fibre dynamic strain sensing,” Review of Scientific Instruments **87**(1), 011501 (2016).
L. Palmieri and L. Schenato, “Distributed optical fiber sensing based on Rayleigh scattering,” The Open Optics Journal **7**(1), 104–127 (2013).
Y. Shi, H. Feng and Z. Zeng, “A long distance phase-sensitive optical time domain reflectometer with simple structure and high locating accuracy,” Sensors **15**(9), 21957–21970 (2015).
G. Yang, X. Fan, S. Wang, B. Wang, Q. Liu and Z. He, “Long-Range Distributed Vibration Sensing Based on Phase Extraction From Phase-Sensitive OTDR,” IEEE Photonics Journal **8**(3), 1–12 (2016).
X. Fan, G. Yang, S. Wang, Q. Liu and Z. He, “Distributed Fiber-Optic Vibration Sensing Based on Phase Extraction From Optical Reflectometry,” J. Lightw. Technol. **35**(16), 3281–3288 (2017).
D. Chen, Q. Liu, X. Fan and Z. He, “Distributed fiber-optic acoustic sensor with enhanced response bandwidth and high signal-to-noise ratio,” J. Lightw. Technol. **35**(10), 2037–2043 (2017).
H. F. Martins, K. Shi, B. C. Thomsen, S. M.-Lopez, M. G.-Herraez and S. J. Savory, “Real time dynamic strain monitoring of optical links using the backreflection of live PSK data,” Opt. Express **24**(19), 22303–22318 (2016).
Q. Yan, M. Tian, X. Li, Q. Yang and Y. Xu, “Coherent $\phi$-OTDR based on polarization-diversity integrated coherent receiver and heterodyne detection,” in IEEE 25th Optical Fiber Sensors Conference (OFS), 1–4 (2017).
K. Kikuchi, “Fundamentals of coherent optical fiber communications,” J. Lightw. Technol. **34**(1), 157–179 (2016).
F. Zhu, Y. Zhang, L. Xia, X. Wu and X. Zhang, “Improved $\phi$-OTDR sensing system for high-precision dynamic strain measurement based on ultra-weak fiber Bragg grating array,” J. Lightw. Technol. **33**(23), 4775–4780 (2015).
F.A.Q. Sun, W. Zhang, T. Liu, Z. Yan and D. Liu, “Wideband fully-distributed vibration sensing by using UWFBG based coherent OTDR,” in IEEE/OSA Optical Fiber Communications Conference and Exhibition (OFC), 1–3 (2017).
M. Golay, “Complementary series,” IRE Transactions on Information Theory **7**(2), 82–87 (1961).
M. Nazarathy, S.A. Newton, R.P. Giffard, D.S. Moberly, F. Sischka, W.R. Trutna and S. Foster, “Real-time long range complementary correlation optical time domain reflectometer,” J. Lightw. Technol. **7**(1), 24–38 (1989).
X. Huang, “Complementary Properties of Hadamard Matrices,” in International Conference on Communications, Circuits and Systems, 588–592 (2006).
R. Posey, G. A. Johnson and S. T. Vohra, “Strain sensing based on coherent Rayleigh scattering in an optical fibre,” Electronics Letters **36**(20), 1688–1689 (2000).
Introduction
============
Fiber optic sensors, being intrinsically immune to electromagnetic interference and fairly resistant to harsh environments, are attracting growing interest in monitoring applications (structural health monitoring, railway surveillance, pipeline monitoring...). Distributed fiber optic sensors based on optical reflectometry make use of a variety of light scattering effects occurring in the fiber, such as Raman, Brillouin, and Rayleigh backscattering, to measure temperature (with any of the three effects) or mechanical variations such as strains (only with the latter two) [@Mas16]. Optical fiber sensors may also be customized or enhanced by periodically inscribing fiber Bragg gratings (FBGs) to amplify the backscattered optical field [@Mas16], resulting in a quasi-distributed system with a resolution fixed by the distance between gratings. The main characteristics of a distributed sensor are its sensitivity, spatial resolution and maximum reach. Another important feature for distributed sensing of dynamic phenomena is the bandwidth of the mechanical events that the sensor is able to detect, which is closely related to the targeted sensitivity and the sensor length.
*Detecting* and *quantifying* sound waves and vibrations, known as distributed acoustic sensing (DAS) or distributed vibration sensing (DVS), is critical in areas of geophysical sciences and surveillance of sensitive sites or infrastructures. Phase and coherent optical-time-domain (resp. optical-frequency-domain) reflectometry (OTDR, resp. OFDR) systems are usually based on an interrogator sending one or more short light pulses or frequency sweeps [@Pal13; @Shi15; @Yan16; @Fan17]. The detector consists of a simple photodiode if, for instance, two pulses at slightly different frequencies are separately launched in the sensing fiber [@Mas16]. In case single pulses are sent, an imbalanced Mach-Zehnder interferometer and a phase detector, or a balanced coherent detector that mixes the backscattered pulse with a local oscillator, are used at the receiver side to detect relative phase changes in the Rayleigh backscattered optical field [@Mas16; @Shi15; @Yan16; @Fan17]. The main limitations of these phase-OTDR systems are, firstly, a trade-off between the spatial resolution and the maximum reach, given that a high spatial resolution forces the use of short pulses resulting in a low signal-to-noise ratio, and, secondly, a trade-off between the maximum reach and the covered mechanical bandwidth, the latter being equal to half of the scanning rate of the pulses. A reflectometry scheme based on the injection of several linear-frequency-modulated probe pulses was suggested in [@Che17] to relax these two trade-offs, showcasing a $9~\mathrm{kHz}$ bandwidth with a $10~\mathrm{m}$ resolution over a $24.7~\mathrm{km}$-long fiber. However, the interrogators in these schemes all rely on individual probing pulses generated by acousto-optic modulators or even more complex structures. They are also vulnerable to polarization fading effects, given that the Rayleigh backscattered light is polarization dependent. A dual-polarization coherent receiver that detects all the backscattered information by projecting the received optical field over two orthogonal polarization states can fix this problem, as shown in recent works [@Mar16; @Yan17]. In order to further relax the reach-spatial resolution trade-off, our approach in this paper consists of continuously probing the sensor with a training sequence that modulates the optical carrier injected in the fiber, as done in [@Mar16]. While in [@Mar16] random binary sequences modulate two polarization states to probe a $500~\mathrm{m}$-long sensor and detect a sinusoidal strain of $500~\mathrm{Hz}$, a perfect optical channel estimation can only be reached asymptotically, for very long sequences. Hence, we design in this work optimized probing sequences of finite length that extend the covered bandwidth. The proposed DAS scheme consists of transmitting polarization-multiplexed coded sequences designed from complementary Golay pairs, and detecting the backscattered optical signal using a polarization-diversity coherent receiver typically used in optical fiber transmission systems [@Kik16], followed by correlation-based post-processing to extract the channel response. As is well known
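The property exploited in our scheme is that the aperiodic autocorrelations of a complementary (Golay) pair sum to a perfect delta, so a correlation receiver sees no sidelobes. A minimal sketch using the standard recursive construction (the sequence length is an arbitrary choice of ours):

```python
import numpy as np

def golay_pair(m):
    """Recursive construction of a binary Golay complementary pair of length 2**m."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(8)                          # N = 256
N = a.size
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
delta = np.zeros(2 * N - 1)
delta[N - 1] = 2 * N                          # central peak of height 2N
assert np.allclose(r, delta)                  # sidelobe-free sum of autocorrelations
```

This zero-sidelobe property is what allows a finite-length probing sequence to yield an exact channel estimate, instead of the asymptotic estimate obtained with random sequences.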
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'Let $M$ denote the moduli space of stable vector bundles of rank $n$ and fixed determinant of degree coprime to $n$ on a non-singular projective curve $X$ of genus $g \geq 2$. Denote by ${\mathcal{U}}$ a universal bundle on $X \times M$. We show that, for $x,y \in X,\; x \neq y$, the restrictions ${\mathcal{U}}|\{x\} \times M$ and ${\mathcal{U}}|\{y\} \times M$ are stable and non-isomorphic when considered as bundles on $X$.'
address:
- |
H. Lange\
Mathematisches Institut\
Universität Erlangen-Nürnberg\
Bismarckstraße $1\frac{ 1}{2}$\
D-$91054$ Erlangen\
Germany
- |
P.E. Newstead\
Department of Mathematical Sciences\
University of Liverpool\
Peach Street, Liverpool L69 7ZL, UK
author:
- 'H. Lange'
- 'P. E. Newstead'
title: On Poincaré bundles of vector bundles on curves
---
[^1]
Introduction
============
Let $X$ be a non-singular projective curve of genus $g \geq 2$ over the field of complex numbers. We denote by $M = M(n,L)$ the moduli space of stable vector bundles of rank $n$ with determinant $L$ of degree $d$ on $X$, where gcd$(n,d) = 1$. We denote by ${\mathcal{U}}$ a universal bundle on $X \times M$. For any $x \in X$ we denote by ${\mathcal{U}}_x$ the bundle ${\mathcal{U}}|\{x\} \times M$ considered as a bundle on $M$.
In a paper of M. S. Narasimhan and S. Ramanan [@nr] it was shown that ${\mathcal{U}}_x$ is a simple bundle and that the infinitesimal deformation map $$\label{eq1}
T_{X,x} {\rightarrow}H^1(M, \mbox{End} ({\mathcal{U}}_x))$$ is bijective for all $x \in X$. In [@bbn Proposition 2.4] it is shown that ${\mathcal{U}}_x$ is semistable with respect to the unique polarization of $M$. In fact, ${\mathcal{U}}_x$ is stable; since we could not locate a proof of this in the literature, we include one here.
Let ${\mathcal{M}}$ denote the moduli space of stable bundles on $M$ having the same Hilbert polynomial as ${\mathcal{U}}_x$. Then (\[eq1\]) implies that the natural morphism $$X {\rightarrow}{\mathcal{M}}$$ is étale and surjective onto a component ${\mathcal{M}}_0$ of ${\mathcal{M}}$.
It is stated in [@nr] that it can be easily deduced from the results of that paper that the map $X {\rightarrow}{\mathcal{M}}_0$ is also injective. This would imply that the curve $X$ can be identified with ${\mathcal{M}}_0$. However, no proof of this fact seems to be given. There is a proof in a paper of A. N. Tyurin [@tyu Theorem 2], but this seems to us to be incomplete. We offer here a proof which is in the spirit of [@tyu]. To be more precise, our main result is the following theorem.\
[**Theorem**]{} [*Let $X$ be a non-singular projective curve of genus $g \geq 2$. If $x,y \in X, \; x \neq y$, then ${\mathcal{U}}_x \not\simeq {\mathcal{U}}_y$.*]{}\
Note that if $X$ is a general curve of genus $g \geq 3$ or any curve of genus 2, then $X$ does not admit étale coverings $X {\rightarrow}{\mathcal{M}}_0$ of degree $>1$. So for such curves the theorem is immediate. For the proof we can therefore assume that $g \geq 3$. In fact, our proof fails for $g=2$.
In Section 2 we prove the stability of ${\mathcal{U}}_x$. In Sections 3 and 4 we make some cohomological computations, from which a family of stable bundles on $X$ can be constructed. This construction is carried out in Section 5, where we also use the morphism to $M$ given by this family in order to prove the theorem.
Stability of ${\mathcal{U}}_x$
==============================
Let $X$ be a non-singular projective curve of genus $g \geq 2$. Let $n \geq 2$ and $d$ be integers with gcd$(n,d) = 1$. There are uniquely determined integers $l$ and $e$ with $0<l<n$ and $0 \leq e<d$ such that $$\label{eq2}
ld-en = 1.$$ The bundles ${\mathcal{U}}_x$ were shown to be semistable in [@bbn Proposition 2.4], but the proof does not seem to imply stability directly, even though we also know from [@nr] that ${\mathcal{U}}_x$ is simple.
\[propos2.1\] For all $x \in X$, the vector bundle ${\mathcal{U}}_x$ is stable with respect to the unique polarization of $M$.
By [@bbn Proposition 2.4] the bundle ${\mathcal{U}}_x$ is semistable. By [@ram Remark 2.9] and possibly after tensoring ${\mathcal{U}}$ by a line bundle on $M$, $$c_1({\mathcal{U}}_x) = l \alpha,$$ where $\alpha$ is the positive generator of $H^2(M)$. By (\[eq2\]), $l$ and $n$ are coprime. It follows that ${\mathcal{U}}_x$ is stable: a destabilizing subsheaf would have to have the same slope as ${\mathcal{U}}_x$, which is impossible for a subsheaf of rank $n'$ with $0 < n' < n$ when the degree $l$ and the rank $n$ are coprime.
Cohomological constructions
===========================
Let $l$ and $n$ be as in (\[eq2\]). Let $V$ be a semistable vector bundle of rank $l$ and degree $l(n-l)+e$ and $W$ a semistable bundle of rank $n-l$ and degree $d-e-l(n-l)$ on $X$. Then $$\deg (W^* \otimes V) = nl(n-l) -1.$$ Let $q_i, \; i=1,2,$ denote the projections of $X \times X$ on the two factors, $\Delta$ the diagonal of $X \times X$ and write for brevity $$U = q_1^*(W^* \otimes V).$$
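For the reader's convenience, this degree count is just the standard formula $\deg(A \otimes B) = \mbox{rk}(B)\deg(A) + \mbox{rk}(A)\deg(B)$ combined with (\[eq2\]): $$\begin{array}{ll}
\deg (W^* \otimes V) &= l\left(e + l(n-l) - d\right) + (n-l)\left(l(n-l)+e\right)\\
&= nl(n-l) - (ld - en) = nl(n-l) - 1.
\end{array}$$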
\[lem2.1\] For $ n \geq 2$ and $1 \leq i \leq n$,\
[*(a)*]{} $h^0(U(-i\Delta)|\Delta) = (n+(2i-1)(g-1))l(n-l) -1$;\
[*(b)*]{} $h^1(U(-i\Delta)|\Delta) = 0$.
Identifying $\Delta$ with $X$, we have $U(-i\Delta)|\Delta = W^* \otimes V \otimes K_X^i$. Since $$\deg (W^* \otimes V \otimes K_X^i) = (n+(2g-2)i)l(n-l) - 1 > l(n-l)(2g-2)$$ and $W^* \otimes V$ is semistable, (b) holds and Riemann-Roch gives (a).
\[lem2.2\] For $n \geq 2$, $$h^1(U(-n\Delta)) = gh^0(W^* \otimes V) + l(n-l)(n-1)(g(n-1) + 1) -(n-1).$$
For $0 \leq i \leq n$, consider the exact sequence $$\label{eqn12}
0 {\rightarrow}U(-(i+1)\Delta) {\rightarrow}U(-i\Delta) {\rightarrow}U(-i\Delta)|\Delta {\rightarrow}0$$ on $X \times X$. For $i=0$, this sequence gives $$0 {\rightarrow}H^1(U(-\Delta)) {\rightarrow}H^1(U) \stackrel{\psi}{{\rightarrow}} H^1(U|\Delta),$$ since the restriction map $H^0(U) {\rightarrow}H^0(U|\Delta)$ is an isomorphism. The map $\psi$ is surjective, since its restriction to the Künneth component $H^1(W^* \otimes V) \otimes H^0({\mathcal{O}}) \subset H^1(U)$ is an isomorphism. Hence $$\begin{array}{ll}
h^1(U(-\Delta)) &= h^1(U) - h^1(U|\Delta)\\
&= h^1(W^* \otimes V) h^0({\mathcal{O}}) + h^0(W^* \otimes V)h^1({\mathcal{O}}) - h^1(W^* \otimes V)\\
&= g\cdot h^0(W^* \otimes V).
\end{array}$$ For $1 \leq i \leq n-1$, the sequence (\[eqn12\]) gives, by Lemma \[
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'In this paper the stability of a closed-loop cascade control system in the trajectory tracking task is addressed. The considered plant consists of underlying second-order fully actuated perturbed dynamics and a first-order system which describes the dynamics of the input. The main theoretical result presented in the paper concerns stability conditions, formulated based on Lyapunov analysis, for the cascade control structure taking advantage of the active disturbance rejection approach. In particular, limitations imposed on the feasible set of observer bandwidths are discussed. In order to illustrate the characteristics of the closed-loop control system, simulation results are presented. Furthermore, the controller is verified experimentally using a two-axis telescope mount. The obtained results confirm that the considered control strategy can be efficiently applied to mechanical systems when a high tracking precision is required.'
author:
- |
Rados[ł]{}aw Patelski, Dariusz Pazderski\
Poznań University of Technology\
Institute of Automation and Robotics\
ul. Piotrowo 3a 60-965 Poznań, Poland
bibliography:
- 'bibDP.bib'
title: 'Tracking control for a cascade perturbed control system using active disturbance rejection paradigm[^1]'
---
Introduction
============
Set-point regulation and trajectory tracking constitute elementary tasks in control theory. It is well known that a fundamental method of stabilisation by means of a smooth static state feedback has significant limitations, which come, among other things, from the inability to measure the state, as well as from parametric and structural model uncertainties. For these reasons, various adaptive and robust control techniques are required to improve the performance of the closed-loop system. In particular, algorithms used for state and disturbance estimation are of great importance here.
The use of high gain observers (HGOs) is well motivated in the theory of linear dynamic systems, where it is commonly assumed that the state estimation dynamics are negligible with respect to the dominant dynamics of the closed-loop system. A similar approach can be employed successfully for a certain class of nonlinear systems, where establishing fast convergence of the estimation errors may be sufficient to ensure stability, [@KhP:2014]. In a natural way, the HGO is a basic tool to support control feedback when a plant model is only roughly known. Here one can mention the model-free control paradigm introduced by Fliess and others, [@Fliess:2009; @FlJ:2013], as well as the active disturbance rejection control (ADRC) proposed by Han and Gao, [@Han:1998; @Gao:2002; @Gao:2006; @Han:2009].
It turns out that the above-mentioned control methodology can be highly competitive with respect to the classic PID technique in many industrial applications, [@SiGao:2005; @WCW:2007; @MiGao:2005; @CZG:2007; @MiH:2015; @NSKCFL:2018]. Furthermore, it can be regarded as an alternative control approach in comparison to the sliding mode control technique proposed by Utkin and others, [@Utk:77; @Bartol:2008], where bounded matched disturbances are rejected due to fast switching discontinuous controls. Thus, it is possible to stabilise the closed-loop control system, in the sense of Filippov, on a prescribed, possibly time-varying, sliding surface, [@Bart:96; @NVMPB:2012]. Currently, second- and higher-order sliding techniques for control and state estimation are also being explored, [@Levant:1993; @Levant:1998; @Bartol:1998; @Cast:2016]. It is worth recalling a recent control algorithm based on higher-order sliding modes to solve the tracking problem in finite time for a class of uncertain mechanical systems in robotics, [@Gal:2015; @Gal:2016]. From a theoretical point of view, some questions arise regarding the conditions of application of control techniques based on a disturbance observer, with particular emphasis on maintaining the stability of the closed-loop system. Recently, new results concerning this issue have been reported for ADRC controllers, [@SiGao:2017; @ACSA:2017]. In this paper we further study the ADRC methodology taking into account a particular structure of the perturbed plant. Basically, we deal with a cascade control system which is composed of two parts. The first component is represented by second-order dynamics which constitute the essential part of the plant. It is assumed that the system is fully actuated and subject to matched-type disturbances with bounded partial derivatives. The second component is defined by an elementary first-order linear system which describes the input dynamics of the entire plant. Simultaneously, it is supposed that the state and control input of the second-order dynamics are not fully available.
It can be seen that the considered plant corresponds well to a class of mechanical systems equipped with local feedback applied at the level of the actuators. As a result of the additional dynamics, the real control forces are not directly accessible, which may deteriorate the stability of the closed-loop system.
In order to analyse the closed-loop system we take advantage of Lyapunov tools. Basically, we investigate how an extended state observer (ESO) affects the stability when additional input dynamics are considered. Further, we formulate stability conditions and estimate error bounds. In particular, we show that the observer gains cannot be made arbitrarily large, as is commonly recommended in the ADRC paradigm. This obstruction results from the presence of input dynamics which are not explicitly taken into account in the feedback design procedure.
To the best of the authors’ knowledge, the Lyapunov stability analysis for the considered control structure taking advantage of the ADRC approach has not been addressed in the literature so far.
Theoretical results are illustrated by numerical simulations and experiments. The experimental validation is conducted on a real two-axis telescope mount driven by synchronous gearless motors, [@KPKJPKBJN:2019]. Here we show that the considered methods provide the high tracking accuracy required in such an application. Additionally, we compare the efficiency of compensation terms computed from the reference trajectory and from on-line estimates in order to improve the tracking performance.
The paper is organised as follows. In Section 2 the model of a cascade control process is introduced. Then a preliminary feedback is designed and a corresponding extended state observer is proposed. The stability of the closed-loop system is studied using Lyapunov tools and stability conditions with respect to the considered control structure are formulated. Simulation results are presented in Section 3 in order to illustrate the performance of the controller. In Section 4 extensive experimental results are discussed. Section 5 concludes the paper.
Controller and observer design
==============================
Dynamics of a perturbed cascaded system
---------------------------------------
Consider a second order fully actuated control system defined as follows $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =Bu+h(x_{1},x_{2})+q(x_{1},x_{2},u,t),
\end{array}\right.\label{eq:general:nominal system}$$ where $x_{1},\,x_{2}\in\mathbb{R}^{n}$ are state variables, $B\in\mathbb{R}^{n\times n}$ is a non-singular input matrix, while $u\in\mathbb{R}^{n}$ stands for an input. The functions $h:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n}$ and $q:\mathbb{R}^{2n}\times\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ denote known and unknown components of the dynamics, respectively. Next, it is assumed that the input $u$ is not directly accessible for control purposes; however, it is governed by the following first-order dynamics $$\dot{u}=T^{-1}\left(-u+v\right),\label{eq:general:input dynamics}$$ where $v\in\mathbb{R}^{n}$ is regarded as the real input and $T\in\mathbb{R}^{n\times n}$ is a diagonal matrix of positive time constants. In fact, both dynamics constitute a cascaded third-order plant, in which the underlying component is represented by the second-order dynamics, while the first-order equation corresponds to stable input dynamics.
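Before designing the controller, it is instructive to see the cascade in a minimal numerical experiment. The sketch below simulates a scalar instance with a standard third-order linear extended state observer and a PD-type outer loop; all values (gains, observer bandwidth, disturbance, time constant) are illustrative choices of ours and not the design developed below. In particular, the observer acts as if $v$ entered the plant directly, which is precisely the mismatch analysed in this paper:

```python
import numpy as np

# Scalar instance of the cascade: x1' = x2, x2' = b*u + q(t), u' = (-u + v)/T.
b, T, wo = 1.0, 0.01, 20.0           # input gain, input time constant, ESO bandwidth
kp, kd = 100.0, 20.0                 # outer-loop PD gains
dt, steps = 1e-4, 200_000            # 20 s of simulated time

x1 = x2 = u = 0.0
z = np.zeros(3)                      # ESO estimates of (x1, x2, total disturbance)
for k in range(steps):
    t = k * dt
    xd, dxd, ddxd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    q = 0.5 * np.sin(3.0 * t)                          # matched disturbance
    # ADRC-style feedback built on the ESO estimates.
    v = (ddxd + kp * (xd - z[0]) + kd * (dxd - z[1]) - z[2]) / b
    # Linear ESO with gains (3*wo, 3*wo**2, wo**3); note that it assumes u = v,
    # i.e. it ignores the input dynamics.
    e = x1 - z[0]
    z = z + dt * np.array([z[1] + 3*wo*e, z[2] + b*v + 3*wo**2*e, wo**3*e])
    # Plant integration (explicit Euler), including the input dynamics.
    x1, x2 = x1 + dt*x2, x2 + dt*(b*u + q)
    u = u + dt*(-u + v)/T

print(f"final tracking error: {abs(np.sin(steps*dt) - x1):.2e}")
```

Raising the observer bandwidth in this sketch towards the input-dynamics pole $1/T$ degrades and eventually destroys the tracking, which is the qualitative phenomenon made precise in the stability analysis below.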
Control system design
---------------------
The control task investigated in this paper deals with tracking of a reference trajectory specified for the output of the cascade system, which is determined by $y:=x_1$. Simultaneously, it is assumed that variables $x_2$ and $u$ are unavailable for measurement and the only information is provided by the output.
To be more precise, we define an at least $C^3$-continuous reference trajectory $x_{d}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ and consider the output tracking error $\tilde{y}:=x_d-x_1$. Additionally, to quantify the difference between $u$ and $v$, we introduce the error $\tilde{u}:=v-u$. Since $v$ is viewed as an alternative input of the plant, one can rewrite the dynamics as $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =Bv-B\tilde{u}+h+q.
\end{array}\right.\label{eq:general:nominal_system_input_v}$$ For control design purposes, the tracking error will be considered with respect to the full state of the system. Consequently, one defines $$e = \begin{bmatrix}e_1\\ e_2\end{bmatrix}:=\begin{bmatrix}\tilde{
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
author:
-
title: 'DeepBall: Deep Neural-Network Ball Detector'
---
Introduction {#sec:introduction}
An ability to accurately detect and track the ball in a video sequence is a core capability of any system aiming to automate the analysis of football matches or players’ progress. Our method aims to solve the problem of fast and accurate ball detection. It is developed as a part of a computer system for football clubs and academies to track and analyze player performance during both training sessions and regular games. The system is intended to help professional football analysts to evaluate the players’ performance by allowing automatic indexing and retrieval of interesting events.
Detecting the ball from long-shot video footage of a football game is not trivial to automate. The object of interest (the ball) is very small compared to other objects visible in the observed scene. Due to the perspective projection, its size varies depending on the position on the play field. The shape is not always circular. When a ball is kicked and moves at high velocity, its image becomes blurry and elliptical. The perceived colour of the ball changes due to shadows and lighting variation. The colour is usually similar to the colour of the white lines on the pitch and sometimes to players’ jerseys. Other objects with an appearance similar to the ball can be visible, such as small regions near the pitch lines and regions of players’ bodies, such as a head. Situations when the ball is in a player’s possession or partially occluded are especially difficult. Figure \[jk:fig:ball\_images\] shows exemplary image patches illustrating the high variance in the ball appearance and the difficulty of the ball detection task.
Traditional ball detection methods, e.g. based on variants of the circular Hough transform, deal well with situations where the ball is visible as a single object, separated from the player’s body. They have problems detecting the ball when it is possessed or partially occluded by a player. But for player performance analysis purposes, the most informative frames are those showing players in close contact with the ball. In this paper we present a ball detection method expanding upon state-of-the-art deep convolutional object detection networks. The method operates on a single video frame and is intended as the first stage in the ball tracking pipeline. Our method does not have the limitations associated with earlier methods based on a circular Hough transform. It can deal with situations where the perceived ball shape is not circular due to motion blur. It detects the ball when it is in close contact with, or partially occluded by, a player’s body. It can detect multiple balls, located relatively close to each other, in the same image. Another benefit of the proposed method is its flexibility. Due to its fully convolutional design it can operate on images of any size and produces a ball confidence map of a size proportional to the input image. The detection network is designed with performance in mind. Evaluation performed in Section \[jk:ev\_results\] proves that our method can efficiently process high definition video input in real time.
{#section}
The first step in traditional ball detection methods is usually background subtraction. It prevents ball detection algorithms from producing false detections on static parts of the image, such as stadium advertisements. The most commonly used background subtraction approaches are based on chromatic features [@Gong95; @Ali12; @Kia16] or motion detection [@DOr02; @DOr04; @Leo08; @Mazz12]. Segmentation methods based on chromatic features use domain knowledge about the visible scene: the football pitch is mostly green and the ball mostly white. The colour of the pitch is usually modelled using a Gaussian Mixture Model and either hardcoded in the system or learned. When the video comes from a static camera, motion-based segmentation is often used. For computational performance reasons, a simple approach is usually applied, based on the absolute difference between consecutive frames or the difference between the current frame and the mean or median image obtained from a few previously processed frames [@High16].
After the background segmentation, heuristic criteria based on chromatic or morphological features are applied to the resulting blobs to locate the ball. These criteria include blob size, colour and shape (circularity, eccentricity) [@Gong95]. Variants of the Circle Hough Transform [@Yuen90], modified to detect spherical rather than circular objects, may be used to verify if a blob contains the ball [@DOr02; @DOr04; @Leo08; @Popp10; @Halb15]. A two-stage approach may be employed to achieve real-time performance and high detection accuracy [@DOr02; @Leo08; @Mazz12]. In this scenario, regions that probably contain the ball are found first (*ball candidates extraction*). Then, the candidates are validated (*ball candidate validation*). In [@Ali12] straight lines are detected using a kernel-based Hough transform and removed from the foreground image to overcome the problem of the ball blending with the white lines on the pitch. A very similar method is proposed in [@Rao15]. [@Gong95; @Pall08; @Halb15] use multiple successive frames to improve the detection accuracy. In [@Gong95], a detection is confirmed by searching a neighbourhood area of each ball candidate in the successive frame. If a white area with similar size and circularity is found in the next frame, the ball candidate is validated. In [@Pall08] the authors extract ball candidate positions using morphological features (shape and size of the ball). Then, a directed weighted graph is constructed from ball candidates in successive frames. The vertices of the graph correspond to candidate ball positions and edges link candidates found in consecutive frames. The longest path in the graph is computed to give the ball trajectory.
Ball detection methods using morphological features to analyze the shape of blobs produced by background segmentation fail if the ball is touching a player. See the bottom row of Fig. \[jk:fig:ball\_images\] for exemplary images where these methods are likely to fail. [@Halb15] addresses this limitation by using a two-stage approach. First, the ball is detected in non-occluded situations, where it appears as a single object. This is done by applying background subtraction to filter out the temporally static part of the image. Then, foreground blobs are filtered by size and shape to produce ball candidates. Ball candidates are verified by examining a few successive frames and detecting robust partial ball trajectories (tracklets). When the first-stage detector is not able to locate the ball, a second-stage detector specialized for partially occluded situations is used. Ball candidates are found using a Hough circle detector. Foreground object contours are extracted and their Freeman chain code is examined. If a ball candidate corresponds to a ’bump’ in the foreground object silhouette, it is retained as a true match. In recent years significant progress has been made in the area of neural-network based object detection. The deep neural-network based YOLO detector [@Redm16] achieves 63.4 mean Average Precision (mAP) on the PASCAL VOC 2007 dataset, whereas the traditional Deformable Parts Model (DPM) detector [@Felz10] scores only 30.4. Current state-of-the-art object detectors can be categorized as one-stage or two-stage. In a two-stage detector, such as Fast R-CNN [@Girs15] or Faster R-CNN [@Ren15], the first stage generates a sparse set of candidate object locations (region proposals). The second stage uses a deep convolutional neural network to classify each candidate location as one of the foreground classes or as background. One-stage detectors, such as RetinaNet [@Lin17], SSD [@Liu16] or YOLO [@Redm16], do not include a separate region-proposal generation step. A single detector based on a deep convolutional neural network is applied instead. [@Spec17] uses convolutional neural networks (CNNs) to localize the ball under varying environmental conditions. The first part of the network consists of multiple convolution and max-pooling layers which are trained on a standard object classification task. The output of this part is processed by fully connected layers regressing the ball location as a probability distribution along the x- and y-axes. The network is trained on a large dataset of images with annotated ground truth ball positions. The network is reported to have 87% detection accuracy on a custom-made dataset. The limitation of this method is that it fails if more than one ball, or an object very similar to the ball, is present in the image. Our method does not have this limitation.
[@Reno18] presents a deep neural network classifier, consisting of convolutional feature extraction layers followed by a fully connected classification layer. It is trained to classify small, rectangular image patches as ball or no-ball. The classifier is used in a sliding-window manner to generate a probability map of the ball occurrence. The method has two drawbacks. First, the set of negative training examples (patches without the ball) must be carefully chosen to include sufficiently hard examples. Second, the rectangular patch size must be manually selected to take into account all the possible ways the ball appears in the scene: big or small due to the perspective, sharp or blurred due to its speed. The method is also not optimal from a performance perspective. Each rectangular image patch is separately processed by the neural network using a sliding-window approach, and the individual results are then combined to produce a final ball probability map. Our method, in contrast, requires only a single pass of the entire image through a fully convolutional detection network.
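To make the efficiency argument concrete, the toy sketch below shows a fully convolutional network that maps an image of arbitrary size to a downsampled confidence map in one forward pass. The architecture is our own minimal illustration (layer counts and channel widths are arbitrary), not the actual DeepBall network described in the next section:

```python
import torch
import torch.nn as nn

class ToyBallDetector(nn.Module):
    """Minimal fully convolutional detector: image -> ball confidence map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 1, 3, padding=1),   # 1-channel confidence logits
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))

net = ToyBallDetector().eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 720, 1280)   # any input size works
    conf = net(frame)                     # (1, 1, 180, 320) confidence map
print(conf.shape)
```

Because there are no fully connected layers, the same weights are shared across all spatial positions, so one forward pass replaces thousands of overlapping sliding-window evaluations.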
{#section-1}
The method presented in this paper, called *DeepBall*, is inspired by recent advances in single-pass deep neural network based object detection methods, such as SSD
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
author:
- 'Simon Caron-Huot,'
- 'Einan Gardi,'
- 'Joscha Reichel,'
- Leonardo Vernazza
bibliography:
- 'main.bib'
title: 'Two-parton scattering amplitudes in the Regge limit to high loop orders'
---
Introduction {#intro}
============
The study of QCD scattering in the Regge limit has been an active area of research for over half a century, e.g. [@Kuraev:1977fs; @Balitsky:1978ic; @Lipatov:1985uk; @Mueller:1993rr; @Mueller:1994jq; @Brower:2006ea; @Moult:2017xpp]. While the general problem of high-energy scattering is non-perturbative, in the regime where the exchanged momentum $-t$ is high enough, i.e. $s\gg-t\gg\Lambda_{\rm QCD}^2$ (see figure \[setup\_fig\]), perturbation theory offers systematic tools to analyse this limit. Central to this is the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation [@Kuraev:1977fs; @Balitsky:1978ic], which provides a systematic theoretical framework to resum high-energy (or rapidity) logarithms, $\ln (s/(-t))$, to all orders in perturbation theory. This approach was used extensively to study a range of physical phenomena including the small-$x$ behaviour of deep-inelastic structure functions and parton densities, and jet production with large rapidity gaps. Furthermore, non-linear generalisations of BFKL, known as the Balitsky-JIMWLK equation [@Balitsky:1995ub; @Balitsky:1998kc; @Kovchegov:1999yj; @JalilianMarian:1996xn; @JalilianMarian:1997gr; @Iancu:2001ad], are today a main tool in the theoretical description of dense states of nuclear matter, notably in the context of heavy-ion collisions.
While many applications of rapidity evolution equations to phenomenology require the scattering particles to be colour-singlet objects, in the present paper we are concerned with the more theoretical problem of understanding *partonic* scattering amplitudes in the high-energy limit, similarly to refs. [@Sotiropoulos:1993rd; @Korchemsky:1993hr; @Korchemskaya:1996je; @Korchemskaya:1994qp; @DelDuca:2001gu; @DelDuca:2013ara; @DelDuca:2014cya; @Bret:2011xm; @DelDuca:2011ae; @Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo]. This is part of a more general programme of understanding the structure of gauge-theory amplitudes and the underlying physical and mathematical principles governing this structure. The basic observation is that gauge dynamics drastically simplifies in the high-energy limit, which renders the amplitudes computable to all orders in perturbation theory, to a given logarithmic accuracy.
The present paper continues our recent study [@Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo] of $2\to 2$ partonic amplitudes ($qq\to qq$, $gg\to gg$, $qg\to qg$) in QCD and related gauge theories.
![The $t$-channel exchange dominating the high-energy limit, $s\gg -t>0$. The figure also defines our conventions for momenta assignment and Mandelstam invariants. We shall assume that particles 2 and 3 (1 and 4) are of the same type and have the same helicity.[]{data-label="setup_fig"}](./img/setup_fig-crop.pdf)
A key ingredient in these studies is provided once again by rapidity evolution equations, BFKL and its generalisations, which are used to compute high-energy logarithms in these amplitudes order-by-order in perturbation theory.
Scattering amplitudes of quarks and gluons are dominated at high energies by the $t$-channel exchange (figure \[setup\_fig\]) of effective degrees of freedom called *Reggeized gluons*. $2\to 2$ amplitudes are conveniently decomposed into *odd* and *even* signature characterising their symmetry properties under $s\leftrightarrow u$ interchange, or crossing symmetry: $$\label{Odd-Even-Amp-Def}
{\cal M}^{(\pm)}(s,t) = \tfrac12\Big( {\cal M}(s,t) \pm {\cal M}(-s-t,t) \Big)\,,$$ where odd (even) amplitudes ${\cal M}^{(-)}$ (${\cal M}^{(+)}$) are governed by the exchange of an odd (even) number of Reggeized gluons. Furthermore, as shown in ref. [@Caron-Huot:2017fxr], these have respectively *real* and *imaginary* coefficients, when expressed in terms of the natural signature-even combination of logarithms, $$\label{L-def}
\frac12\left(\log\frac{-s-i0}{-t}+\log\frac{-u-i0}{-t}\right)
\simeq \log\left|\frac{s}{t}\right| -i\frac{\pi}{2} \equiv L\,.$$
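As a quick numerical check of the $i0$ prescriptions in (\[L-def\]) (the test values below are arbitrary choices of ours):

```python
import numpy as np

s, t = 1.0e6, -1.0          # arbitrary physical-region values, s >> -t > 0
u = -s - t
i0 = 1e-30j                 # infinitesimal imaginary part
L_exact = 0.5 * (np.log((-s - i0) / -t) + np.log((-u - i0) / -t))
L_approx = np.log(abs(s / t)) - 1j * np.pi / 2
print(L_exact, L_approx)    # agree up to O(t/s) corrections
```

Only the first logarithm picks up the $-i\pi$ from crossing the branch cut, so the signature-even combination carries exactly $-i\pi/2$.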
The real part of the amplitude, ${\cal M}^{(-)}$, is governed, at leading logarithmic (LL) accuracy, by the exchange of a single Reggeized gluon in the $t$ channel. To this accuracy, high-energy logarithms admit a simple exponentiation pattern, namely $$\label{Mreal}
{\cal M}^{(-)}_{\rm LL} = (s/(-t))^{\alpha_g(t)} \times {\cal M}^{\rm tree}$$ where the exponent is the *gluon Regge trajectory* (corresponding to a Regge pole in the complex angular momentum plane), $\alpha_g(t)=\frac{\alpha_s}{\pi} C_A
\alpha_g^{(1)}(t)+{\cal O}(\alpha_s^2)$, whose leading order coefficient $\alpha_g^{(1)}(t)$ is infrared singular, $\alpha_g^{(1)}(t)\sim \frac{1}{2\epsilon}$ in dimensional regularization with $d=4-2\epsilon$ (see eq. (\[alphag1\]) below). Infrared singularities are well known to exponentiate, independently of the high-energy limit. Importantly, however, eq. (\[Mreal\]) illustrates the fact that the exponentiation of high-energy logarithms must be compatible with that of infrared singularities, which is a nontrivial constraint on both. This observation and its extension to higher logarithmic accuracy underpin a long line of investigation in refs. [@Sotiropoulos:1993rd; @Korchemsky:1993hr; @Korchemskaya:1996je; @Korchemskaya:1994qp; @DelDuca:2001gu; @DelDuca:2013ara; @DelDuca:2014cya; @Bret:2011xm; @DelDuca:2011ae; @Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo].
The key property of the Reggeized gluon being signature-odd greatly constrains the structure of higher-order corrections. For the real part of the amplitude, the simple exponentiation pattern generated by a single Reggeized gluon is preserved at next-to-leading logarithmic (NLL) accuracy, except that it requires ${\cal O}(\alpha_s^2)$ corrections to the trajectory and also the introduction of ($s$-independent) impact factors. This simple picture only breaks down when three Reggeized gluons can be exchanged, which first occurs at NNLL accuracy and leads to Regge cuts. This contribution was computed in ref. [@Caron-Huot:2017fxr] through three loops, by constructing an iterative solution of the non-linear Balitsky-JIMWLK equation which tested the mixing between one and three Reggeized gluons.
In this paper we focus on the imaginary part of the amplitude, ${\cal M}^{(+)}$, extending our work [@Caron-Huot:2017zfo]. Here the leading tower of logarithms, in which we are interested, is generated by the exchange of *two* Reggeized gluons, starting with a non-logarithmic term at one loop: $$\label{MevenOneloop}
{\cal M}^{(+)}_{\rm NLL}\simeq i\pi
\left[\frac{1}{2\epsilon} \frac{\alpha_s}{\pi}
+{\cal O}\left(\alpha_s^{2} L\right)\right]
{\mathbf T}^2_{s-u} {\cal M}^{\rm tree}\,.$$ Here we suppressed subleading terms in $\epsilon$ as well as multiloop corrections, which take the form $\alpha_s^{\ell}
L^{\ell-1}$ at $\ell$ loops; because the power of the energy logarithm $L$ is one less than that of the coupling, these are formally next-to-leading logarith
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
author:
- 'Ari Belenkiy$^a$, Steve Shnider$^a$, Lawrence Horwitz$^{b,c}$'
date: '[a) Department of Mathematics, Bar-Ilan University, Ramat Gan, Israel; b) School of Physics, Tel Aviv University, Ramat Aviv, Israel; c) Department of Physics, College of Judea and Samaria, Ariel, Israel]{}'
title: The Geometry of Stochastic Reduction of an Entangled System
---
[**PACS**]{}: 02.40.Dr, 04.60.Pp, 75.10.Dg
[**Keywords**]{}: stochastic reduction, disentanglement, geometric quantum mechanics, projective geometry of states.
Introduction
============
A pure quantum state of a system is a vector in a Hilbert space, which may be represented as a linear combination of a basis of eigenstates of an observable (self-adjoint operator) or of several commuting observables. Let us suppose that the eigenvalues corresponding to the eigenstates of the Hamiltonian operator of a system are the physical quantities measured in an experiment. If the action of the experiment is modelled by a dynamical interaction induced by a term in the Hamiltonian of the system, and its effect is computed by means of the standard evolution according to the Schrödinger equation, the final state would retain the structure of the original linear superposition. One observes, however, that the experiment provides a final state that is one of the basis eigenstates and the superposition has been destroyed. The resulting process is called reduction or collapse of the wave function. The history of attempts to find a systematic framework for the description of this process goes back very far in the development of quantum theory (e.g., the problem of Schrödinger’s cat [@sch]). In recent years significant progress has been made. Rather than invoking some random interaction with the environment and attributing the observed decoherence, i.e. collapsing of a linear superposition, to the onset of some uncontrollable phase relation, more rigorous methods have been developed, which add to the Schrödinger equation stochastic terms corresponding to Brownian fluctuations of the wave function. Since a pure quantum state of a system corresponds to an equivalence class of vectors modulo scaling by a non-zero complex number, corresponding to the norm and an overall phase factor [@wig; @mac], it is natural to develop models for collapse in the setting of a projective space [@k; @bh1]. Associated to an $N$-dimensional complex Hilbert space, we have the projective space ${\bf CP}^{N-1}$ equipped with the canonical Fubini-Study metric.
In this paper, we shall apply some of these methods of state reduction to the phenomena considered in the famous paper by Einstein, Podolsky and Rosen [@epr], explored experimentally by Aspect [@asp], and analyzed by Bell for its profound implications in quantum theory [@be1; @be2]. The system to be studied consists of a two particle quantum state, where each particle has spin $\frac12$. The two body state of total spin zero has the special property known as “entanglement”, whereby a determination of the state of one particle implies with certainty the state of the second. The problems recognized by EPR and studied extensively by Bell arise when the two entangled particles are very far apart.
The states of the two particle system which we shall consider are the equivalence classes of vectors in the tensor product of two spin $\frac12$ representation spaces ${\cal H}\otimes {\cal H}$, where ${\cal H}$ corresponds to the states of one of the constituents. We shall describe the experimental detection of the entangled states in terms of mathematical models recently developed for describing the reduction, or collapse, of the wave function. One begins with an entangled state, corresponding to the $1$-dimensional spin $0$ representation with basis vector the linear superposition: $$\label{1}
|s=0\rangle:=\frac1{\sqrt 2}(|\uparrow \rangle_1\otimes|\downarrow \rangle_2
-|\downarrow \rangle_1\otimes |\uparrow\rangle_2).$$ Here $1,2$ refer to the two spin $\frac12$ representations, each one with a basis $\{|\uparrow\rangle$, $|\downarrow\rangle\}$, corresponding to spin up and spin down, resp., relative to an arbitrary but fixed direction. The full tensor product representation is a sum of this spin $0$ representation and a complementary spin $1$ representation.
The first stage of reduction, using the stochastic evolution model developed by Diosi, Ghirardi, Pearle, Rimini, Brody and Hughston [@bh1; @di; @hu; @gpr], and references therein, gives rise to a density matrix, a linear combination of projections on disentangled states with Born probability coefficients. The second stage of reduction is the detection of the configuration of disentangled states, which we will not discuss in detail here. Assume that one initially has an entangled spin $0$ state of a two particle system and then by some physical process the two particles become separated and far apart. Measurement of the first particle in the spin down state then implies with certainty that the second particle is in the spin up state, measured in the same direction. For the spin $0$ state this direction is arbitrary. The question is often raised as to how the state of the second particle can respond to the arbitrary choice of direction in the measurement of the first. This question is dealt with here by the addition of a term to the Hamiltonian, which we attribute to the presence of the measurement apparatus. On this basis, we shall attempt here to give a mathematical description of the process underlying such a measurement.
The state $|s=0\rangle$ is represented in equation (\[1\]) as a linear superposition. As noted above, recently developed methods for describing state reduction can account for a reduction of this superposition to one or the other of the product states occurring on the right hand side of eq. (\[1\]) in a simple way if these states are eigenstates of the self-adjoint infinitesimal generator (Hamiltonian) of the evolution.
Suppose, for example, that the Hamiltonian has the form, $$\label{2}
H=H_0+H_1$$ where $H_0$ contains the spin-independent kinetic energy of the two particles, $$\label{3}
H_0= p_1^2/2m_1 + p_2^2/2m_2,$$ describing the free motion, but $H_1$ has the special form $$\begin{aligned}
\label{4}
H_1&=&\sum \lambda_{i,j} P_{i,j}\\
&=&\sum \lambda_{i,j}(|v_i\rangle_1\otimes |v_j\rangle_2)\otimes
(_1\langle v_i|\otimes _2\langle v_j|),\nonumber\end{aligned}$$ where the sum is over $i,j=1,2$ and $v_1=\uparrow, v_2=\downarrow$. We show in the next section that, applying the method of adding a Brownian term to the Schrödinger equation, [@hu; @gpr; @di], causes the system to evolve into one or the other of the eigenstates $|v_i\rangle_1\otimes |v_j\rangle_2$ with the correct Born [*a priori*]{} probabilities [@ah; @gpr]. In the case of an initial state of the form (\[1\]), the resulting asymptotic state is either $|\uparrow\rangle_1\otimes |\downarrow\rangle_2)$ or $|\downarrow\rangle_1\otimes |\uparrow\rangle_2)$, each with probablity $\frac12$. Such a configuration is called a mixed state.
We should remark that if the two particles correspond to identical fermions, then indices $1,2$ are basically indistinguishable and the two states $|\uparrow\rangle_1\otimes |\downarrow\rangle_2)$ and
$|\downarrow\rangle_1\otimes |\uparrow\rangle_2)$ should appear with equal weights. However, since the particles are located far apart when the measurement takes place, there is no overlap of the one particle wave functions, and the Fermi antisymmetry is not required. Thus the presence of two widely separated detectors can split the degeneracy into distinct states, which can, in fact, imply that $\lambda_{1,2}\neq \lambda_{2,1}$.
The second stage of reduction, as pointed out above, corresponds to the destruction of the two body state by one-particle filters. The state actually measured is a “separated system" of two particles. We assume that the two filters, which we denote $M_u$ and $M_d$ have the property that if the state has the form $|\uparrow\rangle_1\otimes |\downarrow\rangle_2$, then $M_u$ applied to particle $1$ and $M_d$ applied to particle $2$ succeed with certainty. We shall not discuss the extensive literature dealing with the problem of representing separated systems [@ae; @pi]. We take as our primary task the description of the first stage of this reduction process.
In the application of the technique of state reduction, it is usually assumed that the evolution is governed by the physical nature of the system before the measurement process. However, in an undisturbed quantum system the linear supposition of states evolves according to a one parameter group of unitary operators which preserves the superposition and for which there is no collapse. One may understand the Brownian fluctuations leading to collapse as induced by the presence of measurement apparatus. In the same way, the component $H_1$ of the Hamiltonian may be thought of as induced by the measurement apparatus, which, in our formulation of the problem, disentangles the states, even to
---
abstract: 'Kinetic Inductance Detectors (KIDs) are superconductive low–temperature detectors useful for astrophysics and particle physics. We have developed arrays of lumped element KIDs (LEKIDs) sensitive to microwave photons, optimized for the four horn–coupled focal planes of the OLIMPO balloon–borne telescope, working in the spectral bands centered at , , , and . This is aimed at measuring the spectrum of the Sunyaev–Zel’dovich effect for a number of galaxy clusters, and will validate LEKID technology in a space–like environment. Our detectors are optimized for an intermediate background level, due to the residual atmosphere and the room–temperature optical system, and they operate at a temperature of . The LEKID planar superconducting circuits are designed to resonate between 100 and , and to match the impedance of the feeding waveguides; the measured quality factors of the resonators are in the $10^{4}-10^{5}$ range, and they have been tuned to obtain the needed dynamic range. The readout electronics is composed of a *cold part*, which includes a low noise amplifier, a dc–block, coaxial cables, and power attenuators; and a *room–temperature part*, FPGA–based, including up and down-conversion microwave components (IQ modulator, IQ demodulator, amplifiers, bias tees, attenuators). In this contribution, we describe the optimization, fabrication, characterization and validation of the OLIMPO detector system.'
address:
- '$^1$ Dipartimento di Fisica, *Sapienza* Università di Roma, P.le A. Moro 2, 00185 Roma, Italy'
- '$^2$ Istituto Nazionale di Fisica Nucleare, Sezione di Roma, P.le A. Moro 2, 00185 Roma, Italy'
- '$^3$ Istituto di Fotonica e Nanotecnologie - CNR, Via Cineto Romano 42, 00156 Roma, Italy'
- '$^4$ School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA'
- '$^5$ Department of Physics, Arizona State University, Tempe, AZ 85257, USA'
author:
- |
A Paiella$^{1,2}$, E S Battistelli$^{1,2}$, M G Castellano$^3$, I Colantoni$^{3,}$,\
F Columbro$^{1,2}$, A Coppolecchia$^{1,2}$, G D’Alessandro$^{1,2}$,\
P de Bernardis$^{1,2}$, S Gordon$^4$, L Lamagna$^{1,2}$, H Mani$^4$, S Masi$^{1,2}$,\
P Mauskopf$\:^{4,5}$, G Pettinari$^3$, F Piacentini$^{1,2}$ and G Presta$^1$
bibliography:
- 'bib\_abbr.bib'
title: Kinetic Inductance Detectors and readout electronics for the OLIMPO experiment
---
Introduction
============
In the last thirty years, precision cosmology has achieved important goals through measurements of the Cosmic Microwave Background (CMB) radiation, such as its spectrum [@0004-637X-473-2-576], the anisotropies [@refId01], the E–mode component of the polarization [@refId0], and the B–mode component of the polarization due to gravitational lensing from dark matter structure [@0004-637X-833-2-228]. Yet the B–mode power spectrum from inflation, the spectral distortions, and the spectroscopic measurement of the Sunyaev–Zel’dovich (SZ) effect all remain elusive.
The OLIMPO experiment [@Coppolecchia2013] is aimed at measuring the SZ effect, which is a CMB anisotropy in the direction of galaxy clusters, due to the inverse Compton scattering of low energy CMB photons by the high energy electrons of the hot gas present in the intra–cluster medium. SZ effect measurements represent an interesting tool to study the morphological and dynamical state of clusters, to probe the CMB temperature evolution with redshift, to constrain cosmological parameters, and to search for previously unknown clusters by looking at their SZ signature in the microwave sky [@1475-7516-2018-04-020; @1475-7516-2018-04-019].
OLIMPO measures SZ signals with a technique so far unattempted in this kind of observations: it performs a spectroscopic map of the SZ effect with a differential interferometric instrument, working above the atmosphere, and provides efficient and unbiased decontamination of the SZ and CMB signals from all the foregrounds along the same line of sight [@deBernardis], thus increasing the accuracy of the estimate of the astrophysical quantities involved in the physics of the effect.
The OLIMPO experiment has been, therefore, designed as a large balloon–borne mm–wave observatory, with a aperture telescope, equipped with a room–temperature differential Fourier transform spectrometer (DFTS) [@schillaci2014], and four low–temperature detector arrays, centered at 150, 250, 350, and , exploring the negative, zero, and positive regions of the SZ spectrum. The detector arrays, consisting of horn–coupled lumped element kinetic inductance detectors (LEKIDs), are cooled to about by a $^{3}$He fridge, accommodated inside a wet N$_2$ plus $^{4}$He cryostat. The detector arrays are fed and read out by means of two independent bias–readout lines and two FPGA–based electronics.
Kinetic inductance detectors are superconductive photon detectors, where the radiation is detected by sensing changes of the kinetic inductance. A superconductor, cooled below its critical temperature $T_{c}$, presents two populations of electrons: quasiparticles and Cooper pairs, bound states of two electrons with binding energy $2\Delta_{0}=3.528\:k_{B}T_{c}$. If pair-breaking radiation ($h\nu>2\Delta_{0}$) is absorbed in a superconducting film, it breaks Cooper pairs, producing a change in the population relative densities, and thus in the kinetic inductance. For these reasons, in the lumped element configuration, a superconducting strip is properly shaped and sized in order to perform like a radiation absorber, and this structure, which is an inductor as well, is coupled to a capacitor to form a superconductive high quality factor resonator. In this way, the change in kinetic inductance, due to the incident radiation, produces a change in the resonant frequency and in the quality factor, which can be sensed by measuring the change in the amplitude and phase of the bias signal of the resonator, transmitted past the resonator through a feedline.
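As a quick sanity check on the pair-breaking condition, the cutoff frequency implied by $2\Delta_{0}=3.528\:k_{B}T_{c}$ can be computed directly. A minimal sketch, assuming a thin-film aluminum $T_c$ of about 1.4 K (film values typically run higher than the 1.2 K bulk value):

```python
import scipy.constants as const

def pair_breaking_cutoff(Tc):
    """Minimum photon frequency able to break Cooper pairs, from
    h*nu > 2*Delta_0 with 2*Delta_0 = 3.528 * k_B * Tc."""
    return 3.528 * const.k * Tc / const.h

# assumed thin-film aluminum critical temperature (K)
print(pair_breaking_cutoff(1.4) / 1e9, "GHz")   # ~ 103 GHz
```

A cutoff near 100 GHz is consistent with aluminum LEKIDs being usable in millimeter-wave bands such as those of OLIMPO.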
The KID design and readout scheme are intrinsically multiplexable for large–format arrays, provided that the resonant frequencies of the individual resonators coupled to the same feedline are adjusted to unique values, for instance by changing the capacitor size. In this way, entire arrays can be fed and read out thanks to an electronics chain made of *cold components*, including low noise amplifiers (LNAs), dc–blocks, coaxial cables, and power attenuators; and a *room–temperature stage*, where an FPGA-based electronics, coupled to an ADC/DAC board, is used to generate one bias tone per resonator. This solution allows one to feed and monitor the amplitude and phase of the bias signals of all the resonators at the same time, while physically connecting the cold stage to the room–temperature stage with one cable only.
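The response of one tone in such a comb can be illustrated with the standard notch-type resonator transmission model. The sketch below uses hypothetical tone frequencies, a 2 kHz resonance shift standing in for a photon hit, and quality factors chosen inside the $10^4$–$10^5$ range quoted above; none of these are the actual OLIMPO values.

```python
import numpy as np

def s21(f, f0, Q=5e4, Qc=1e5):
    """Transmission past a shunt-coupled resonator on a feedline."""
    return 1.0 - (Q / Qc) / (1.0 + 2j * Q * (f - f0) / f0)

tones = np.array([150e6, 170e6, 190e6])   # hypothetical bias comb (Hz)
f0 = tones.copy()                         # tones tuned on resonance

for shift in (0.0, -2e3):                 # photon absorption: f0 drops ~2 kHz
    t = s21(tones, f0 + shift)
    print(np.round(np.abs(t), 3), np.round(np.angle(t), 3))
```

The printed amplitude and phase changes are exactly the observables that the IQ demodulation chain tracks for each pixel.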
KID technology has already been proven in ground–based experiments [@Ritacco] and, given its features, seems to be the optimal solution for next–generation space–borne CMB experiments [@1475-7516-2018-04-014; @1475-7516-2018-04-015], but it still needs to be demonstrated in a representative environment for space applications. OLIMPO, which was operated from the stratosphere, is therefore a natural testbed for KIDs in space–like conditions.
Detectors and *cold electronics*
================================
The first constraint in the optimization process of a detector system is always the target science for which it will be built. In the OLIMPO case, moreover, it has to fit an already developed cryogenic and optical system. This implies that the first step is the choice of the material of the superconducting film and the dielectric substrate, the size of the detector arrays, the geometry and size of the absorbers, the geometry and size of the radiation couplers, the number of detectors per array, and the illumination configuration. These steps have been performed through optical simulations.
The second step concerns the optimization of the readout scheme: the geometry and size of the feedline; the geometry and size of the capacitors, on which the resonant frequencies of the resonators depend; and the coupling between the resonators and the feedline. This optimization has been done through electrical simulations.
The last step regards the optimization of the *cold electronics*: the choice of the material and size of the coaxial cables; the magnitude of the power attenuators; the gain, noise, and operation temperature of the cryogenic amplifier.
KID optimization, fabrication and results
-----------------------------------------
The detailed description of the optimization and fabrication of the OLIMPO detector systems and the measurement results can be found in [@Paiella2017; @Paiella2018].
All four arrays are fabricated in a thick aluminum film deposited on silicon substrates of different thickness, depending on the observed radiation frequency. The substrate also acts as a backshort, since
---
abstract: 'High-resolution simulations of supermassive black holes in isolated galaxies have suggested the importance of short ($\sim$10 Myr) episodes of rapid accretion caused by interactions between the black hole and massive dense clouds within the host. Accretion of such clouds could potentially provide the dominant source for black hole growth in high-z galaxies, but it remains unresolved in cosmological simulations. Using a stochastic subgrid model calibrated by high-resolution isolated galaxy simulations, we investigate the impact that variability in black hole accretion rates has on black hole growth and the evolution of the host galaxy. We find this clumpy accretion to more efficiently fuel high-redshift black hole growth. This increased mass allows for more rapid accretion even in the absence of high-density clumps, compounding the effect and resulting in substantially faster overall black hole growth. This increased growth allows the black hole to efficiently evacuate gas from the central region of the galaxy, driving strong winds up to $\sim$2500 km/s, producing outflows $\sim$10x stronger than the smooth accretion case, suppressing the inflow of gas onto the host galaxy, and suppressing the star formation within the galaxy by as much as a factor of two. This suggests that the proper incorporation of variability is a key factor in the co-evolution between black holes and their hosts.'
author:
- |
C. DeGraf$^{1}$ A. Dekel$^{1}$, J. Gabor$^{2}$, F. Bournaud$^{2}$\
[1]{} [Center for Astrophysics and Planetary Science, Racah Institute of Physics, The Hebrew University, Jerusalem 91904 Israel]{}\
[2]{} [CEA-Saclay, 91190 Gif-sur-Yvette, France]{}
bibliography:
- 'astrobibl.bib'
date: Submitted to MNRAS
title: Black hole growth and AGN feedback under clumpy accretion
---
quasars: general — galaxies: active — black hole physics — methods: numerical — galaxies: haloes
Introduction {#sec:Introduction}
============
Observations suggest that supermassive black holes are to be found at the centers of most galaxies [@KormendyRichstone1995], and properties of the black hole and the host galaxies are strongly correlated [@Magorrian1998; @FerrareseMerritt2000; @Gebhardt2000; @Tremaine2002; @Novak2006; @GrahamDriver2007; @Cattaneo2009; @KormendyHo2013; @McConnellMa2012]. These correlations suggest that the growth of a black hole and the evolution of its host galaxy influence one another. As such, black holes provide a means to better understand the evolution of galaxies, and may provide a key aspect to this evolution. One of the most common explanations for this correlation is that quasar feedback from the central black hole may influence the host galaxy [e.g. @BurkertSilk2001; @Granato2004; @Sazonov2004; @Springel2005; @Churazov2005; @KawataGibson2005; @DiMatteo2005; @Bower2006; @Begelman2006; @Croton2006; @Malbon2007; @CiottiOstriker2007; @Sijacki2007; @Hopkins2007; @Sijacki2009; @DiMatteo2012; @DeGraf2012; @Dubois2013a; @Dubois2013b]. This feedback energy may be sufficient to unbind gas within the galaxy, driving strong outflows [@SilkRees1998; @WyitheLoeb2003]. Observations of galactic-scale outflows have been made [e.g. @Fabian2006; @Spoon2013; @Veilleux2013; @Cicone2014], showing that such outflows certainly exist. Furthermore, there is evidence that the strongest velocities are located in the central-most region of the galaxy [@Rupke2005; @RupkeVeilleux2011], possibly suggesting that the driving force behind them is indeed a centrally-located AGN rather than more widely-distributed feedback sources such as stars and supernovae.
Driving these large-scale outflows necessarily requires a large energy output from the AGN, which in turn requires a significant source of gas which can reach the black hole at the galactic center. The angular momentum loss required for this infall can pose a challenge. One of the more commonly-posed explanations is that a gas-rich merger can drive gas toward the black hole. Theoretical work suggests that mergers should drive significant AGN activity [e.g. @Hernquist1989; @DiMatteo2005; @Hopkins2005d; @Hopkins2005b; @Hopkins2008; @Johansson2009; @Debuhr2010; @Debuhr2011] and some observations support this [@Ellison2011]. However, there have also been many studies which find that, although mergers may drive some AGN activity, the majority of AGN are found in isolated galaxies [@Schmitt2001; @ColdwellLambas2006; @Grogin2005; @Georgakakis2009; @Gabor2009; @Cisternas2011; @Kocevski2012], suggesting that an alternate, secular mechanism may be the primary driving force in AGN activity. Theoretical work has suggested that in high-z, gas-rich galaxies, violent disk instabilities can drive gas inflow and produce dense clumps of gas which can be driven in toward the galactic center [@Dekel2009b; @Ceverino2010; @Bournaud2011; @Mandelker2014], which may be a primary cause of AGN activity [@Bournaud2012].
In a companion paper, @GaborBournaud2013 used high (6 pc) resolution simulations to show that accretion onto black holes in gas-rich galaxies can be highly variable, with strong bursts of accretion caused by dense infalling gas clouds. These accretion events were found to generate strong outflows, but without significant effect on the host galaxy [@GaborBournaud2014], at least over short ($\sim 100$ Myr) timescales and in the absence of cosmological gas flows and mergers. In this paper we investigate the impact of periodic bursts of accretion on the growth of black holes and the corresponding effect they have on the host galaxy in a cosmological context, in which the black holes grow by several orders of magnitude (spanning both quiescent AGN phases and stronger quasar phases of extended Eddington growth). We use zoom-in simulations to achieve $\sim 100$ pc resolution for galaxies in a cosmological environment, utilizing a stochastic subgrid model to incorporate the accretion of unresolved high-density gas clouds. We investigate how, in the context of cosmological gas inflow and galaxy mergers, the inclusion of periodic, high-accretion events affects black hole growth, and the impact this has on the host galaxy morphology and star formation rate, and on galactic gas inflow and outflow.
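The qualitative effect of such burst-driven growth is easy to see in a toy integration. The following is a minimal sketch of a stochastic burst model, not the calibrated subgrid prescription used in the simulations: bursts arrive at an assumed Poisson rate, hold the black hole at the Eddington rate for $\sim$10 Myr, and are compared against a smooth low-level accretion baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

T, dt = 500.0, 0.05     # total time and step, Myr (illustrative)
t_salp = 45.0           # Salpeter e-folding time (Myr), eps ~ 0.1

def grow(clumpy, m0=1e5, f_smooth=0.02, burst_rate=0.02, t_burst=10.0):
    """BH mass after T Myr. A 'burst' (unresolved dense-clump accretion)
    holds the hole at the Eddington rate for ~t_burst Myr; otherwise it
    accretes at a smooth fraction f_smooth of Eddington."""
    m, burst_left = m0, 0.0
    for _ in range(int(T / dt)):
        if clumpy and burst_left <= 0 and rng.random() < burst_rate * dt:
            burst_left = t_burst
        f = 1.0 if burst_left > 0 else f_smooth
        m *= np.exp(f * dt / t_salp)          # Eddington-limited growth
        burst_left -= dt
    return m

print(grow(False), grow(True))   # clumpy accretion wins by a large factor
```

Because Eddington-limited growth is exponential in the accumulated high-accretion time, a handful of bursts compounds into a large mass difference, which is the effect investigated in this paper.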
The paper is organized as follows: In Section \[sec:Method\] we describe the simulations used and detail the subgrid model for the periodic accretion bursts. In Section \[sec:bhgrowth\] we investigate the impact of these periodic accretion bursts on black hole growth. In Section \[sec:host\] we show how AGN feedback from these accretion bursts can affect the host, specifically host morphology (\[sec:hostmorphology\]), gas properties of the host (\[sec:gas\_impact\]), and gas inflows/outflows (\[sec:inflow\_outflow\]). In Section \[sec:earlytime\] we compare the impact at earlier times, providing a more direct comparison to the high-resolution isolated galaxy run. Finally, we summarize our results in Section \[sec:Conclusions\].
Method {#sec:Method}
======
RAMSES Code {#sec:RAMSES}
-----------
For this work we ran cosmological zoom-in simulations using the Adaptive Mesh Refinement (AMR) code RAMSES [@Teyssier2002], which uses particles (acting as a collisionless fluid) to model dark matter and stars, while gas is modeled by solving the hydrodynamic equations on a cubic grid of cells which vary in size. This code incorporates cooling, star formation, stellar feedback, and black holes. Cooling is performed as a sink term in the thermal energy of the gas. We allow gas to cool to a minimum temperature floor of $10^4$ K, together with a density-dependent temperature floor requiring that the local Jeans length always be resolved by at least 4 grid cells [see, e.g., @Truelove1997].
Star formation is performed in gas cells above the critical density $n_H > 0.1 \: \rm{cm}^{-3}$. The star formation rate is $\dot{\rho} = \epsilon_* \rho_{\rm{gas}}/t_{ff}$, where $\rho_{\rm{gas}}$ is the gas density in the cell, $t_{ff} = (3 \pi/(32G\rho_{\rm{gas}}))^{1/2}$ is the local free-fall time of the gas, and $\epsilon_* = 0.01$ is the star formation efficiency [@Kennicutt1998; @KrumholzTan2007]. New star particles are then formed stochastically according to the star formation rate of the cell [@RaseraTeyssier2006], initially given the position and velocity of the host cell, but uncoupled from the cell. Supernova feedback is modeled by depositing $20\%$ of a star particle's initial mass into the local cell 10 Myr after formation. The energy released is $10^{50} \rm{erg}/M_\odot$, which is deposited thermally onto the gas.
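As an illustration of this recipe, the expected stellar mass formed in a cell and the stochastic draw of new star particles fit in a few lines. This is a sketch under assumed cell parameters; in particular, the star-particle mass below is hypothetical, not the value used in the simulations.

```python
import numpy as np

G = 6.674e-8                          # cgs
rng = np.random.default_rng(2)

def star_formation_step(rho_gas, cell_vol, dt, eps=0.01,
                        n_thresh=0.1, mu_mH=1.66e-24, m_star=100 * 2.0e33):
    """Number of star particles spawned in one cell over one step,
    following rho_dot = eps * rho_gas / t_ff above the density threshold."""
    if rho_gas / mu_mH < n_thresh:    # n_H < 0.1 cm^-3: no star formation
        return 0
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho_gas))
    m_expected = eps * rho_gas / t_ff * cell_vol * dt
    return rng.poisson(m_expected / m_star)   # stochastic spawning

pc, Myr = 3.086e18, 3.156e13
# a 100 pc cell at n_H = 10 cm^-3, evolved for 1 Myr:
print(star_formation_step(10 * 1.66e-24, (100 * pc) ** 3, Myr))
```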
We use the same supermassive black hole prescription as @
---
abstract: 'We present a study of the Galactic Center region as a possible source of both secondary gamma-ray and neutrino fluxes from annihilating dark matter. We have studied the gamma-ray flux observed by the High Energy Stereoscopic System (HESS) from the J1745-290 Galactic Center source. The data are well fitted as annihilating dark matter in combination with an astrophysical background. The analysis was performed by means of simulated gamma spectra produced by Monte Carlo event generator packages. We analyze the differences in the spectra obtained by the various Monte Carlo codes developed so far in particle physics. We show that, within some uncertainty, the HESS data can be fitted as a signal from a heavy dark matter density distribution peaked at the Galactic Center, with a power-law background whose spectral index is compatible with the Fermi-Large Area Telescope (LAT) data from the same region. If this kind of dark matter distribution generates the gamma-ray flux observed by HESS, we also expect to observe a neutrino flux. We show prospective results for the observation of secondary neutrinos with the Astronomy with a Neutrino Telescope and Abyss environmental RESearch project (ANTARES), the IceCube Neutrino Observatory (IceCube) and the Cubic Kilometer Neutrino Telescope (KM3NeT). Prospects solely depend on the device resolution angle when its effective area and the minimum energy threshold are fixed.'
address:
- 'Departamento de Física Teórica I, Universidad Complutense de Madrid, E-28040 Madrid, Spain'
- 'Instituto de Física Corpuscular (CSIC-Universitat de València), Apdo. 22085, E-46071 Valencia, Spain'
author:
- 'V. Gammaldi[^1]'
- 'J. A. R. Cembranos'
- 'A. de la Cruz-Dombriz'
- 'R. A. Lineros'
- 'A. L. Maroto'
title: |
Gamma-ray and neutrino fluxes from Heavy Dark Matter\
in the Galactic Center
---
Dark Matter ,Galactic Center ,gamma rays ,neutrinos ,Monte Carlo phenomenology
Introduction {#1 .unnumbered}
============
Astrophysical evidence for Dark Matter (DM) exists from galactic to cosmological scales, but the interactions with ordinary matter have not been probed beyond gravitational effects. In this sense, both direct and indirect DM searches are fundamental to explore particle models of DM. If DM annihilates or decays into Standard Model (SM) particles, we may indirectly detect the secondary products of such processes in astrophysical sources where the DM density is dominant. In this context, the observation of secondary particles is highly affected by astrophysical uncertainties, such as the DM densities and distribution in the Galaxy and the astrophysical backgrounds. In particular, the Galactic Center (GC) represents an interesting source due to its closeness to the Earth, but also a complex region because of the large number of sources present. In this work, we review the analysis of the data collected by the HESS collaboration during the years 2004, 2005, and 2006 associated with the HESS J1745-290 GC gamma-ray source as a combination of a DM signal with an astrophysical power-law background. The best fits are obtained for the $u\bar u$ and $d\bar d$ quark channels and for the $W^+W^-$ and $ZZ$ gauge bosons with large astrophysical factors $\approx 10^3$ [@Cembranos:2012nj; @HESS]. Such a parameter is affected not only by the astrophysical uncertainty, but also by the error introduced by the use of differential fluxes simulated by means of Monte Carlo event generator software. The exact estimation of the last effect depends on several factors, such as the annihilation channel, the energy of the process and the energy range of interest [@MC]. In this contribution we focus on the $W^+W^-$ annihilation channel. In addition to the gamma-ray study, we present some predictions for the prospective neutrino flux that may originate from the same source.\
This work is organized as follows. In the first section we revisit the equations describing both the gamma-ray and neutrino fluxes from Galactic sources. The second section focuses on gamma-ray phenomenology. There we show the fit of the HESS data for the $W^+W^-$ annihilation channel. Although the analysis is model independent, this annihilation channel is of particular interest for heavy dark matter models [@WIMPs], such as branons among others [@branons]. In order to give an estimation of the error introduced by the Monte Carlo simulations, we analyze the case of photon spectra generated by both the [[PYTHIA]{}]{} and [[HERWIG]{}]{} packages, in both Fortran and C++. In particular we show results for $2$ TeV center-of-mass events in the $W^+W^-$ channel (see [@MC] for more cases). In section 3, we consider the expected neutrino signal from the annihilation of the heavy DM required to produce the HESS gamma-ray signal.\
Astrophysical flux {#2}
==================
In general, both the gamma-ray and the neutrino flux for one particular annihilation channel can be described by the equation for uncharged particles that travel without deviation due to galactic magnetic fields: $$\left(\frac{{\rm d}\Phi}{{\rm d}E}\right)_j^i\,=\,\frac{\langle\sigma_i v \rangle}{8\pi M_{{\rm DM}_i}^2}\left( \frac{{\rm d}N}{{\rm d}E}\right)_j^i\times \langle J\rangle^i_{\Delta\Omega_j}\,\, {\rm GeV}^{-1}{\rm cm}^{-2}{\rm s}^{-1}{\rm sr}^{-1}\,,
\label{nuflux}$$ where $j=\gamma,\nu_k$ is the secondary uncharged particle. When $j=\nu_k$, $k=\mu,\tau,e$ is the neutrino flavor. The DM annihilation channel is labelled by the $i$-th SM particle. Because we performed single-channel, model-independent fits, the astrophysical factor depends on the annihilation channel. Here, we present the results for the $i=W^{\pm}$ boson channel. The differential number of particles ${\rm d}N/{\rm d}E$ is simulated by means of the Monte Carlo event generator software, as discussed in section $2.1$. Unlike gamma rays, the composition of the neutrino flux produced at the source can differ from that detected on the Earth because of the combination of different flavors produced by oscillations [@Neutrinos].\
Gamma-ray flux {#2}
==============
As introduced before, the gamma rays signal observed by HESS between $200$ GeV and $10$ TeV from the GC direction may be a combination of a DM signal with a simple power-law background. The total fitting function for the observed differential gamma ray flux is:
$$\frac{{\rm d}\Phi_{\gamma-Tot}}{{\rm d}E}=\frac{{\rm d}\Phi_{\gamma-Bg}}{{\rm d}E}+\frac{{\rm d}\Phi_{\gamma-DM}}{{\rm d}E}=B^2\cdot \left(\frac{E}{\text{GeV}}\right)^{-\Gamma}+ A_i^2 \cdot \frac{{\rm d}N^i_{\gamma}}{{\rm d}E}\,,
\label{gen}$$
where $$\label{A}
A_i^2=\frac{\langle \sigma_i v \rangle\, \Delta\Omega_\gamma^{{\rm HESS}}\, \langle J \rangle_{\Delta\Omega_\gamma^{{\rm HESS}}}}{8\pi M_{\rm DM}^2}$$ needs to be fitted together with the DM particle mass $M_{\rm DM}$, the background amplitude $B$ and spectral index $\Gamma$. By means of the fit of the parameters $A_i$, the astrophysical factor $$\begin{aligned}
{\langle J \rangle}^i_{\Delta\Omega}\,=\, \frac{1}{\Delta\Omega}\int_{\Delta\Omega}\text{d}\Omega\int_0^{l_{max}(\Psi)} \rho^2 [r(l)] \,{\rm d}l(\Psi)\,,
\label{J}\end{aligned}$$ is also indirectly fitted. In the previous expression, $l$ holds for the distance from the Sun to any point in the halo. It is related to the radial distance $r$ from the GC as $r^2 = l^2 + D_\odot^2 -2D_\odot l \cos \Psi$, where $D_\odot \simeq 8.5$ kpc is the distance from the Sun to the center of the Galaxy. The maximum distance from the Sun to the edge of the halo in the direction $\Psi$ is $l_{max} = D_\odot \cos\Psi+ \sqrt{r^2-D_\odot^2 \sin^2 \Psi}$. Moreover, the photon flux must be averaged over the solid angle of the detector. For the HESS telescope observing gamma rays in the TeV energy scale, it is of order $\Delta \Omega_\gamma^{\rm HESS} = 2 \pi ( 1 - \cos\theta) \simeq 10^{-5}$. The DM density distribution in the Galaxy halo is usually modeled by the NFW profile [@Navarro:1996gj]: $$\rho(r)\equiv\frac{N}{r(r+r_s)^2}\;,
\label{NFW}$$ where $N$ is the overall normalization and $r_s$ the scale radius. This profile is in good
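A direct numerical evaluation of the line-of-sight integral (\[J\]) for this profile is straightforward. Below is a minimal sketch with assumed, Milky-Way-like halo parameters; the normalization, scale radius and halo edge are illustrative stand-ins, not the fitted values of this analysis.

```python
import numpy as np
from scipy.integrate import quad

D_SUN, R_S, RHO_S, R_MAX = 8.5, 20.0, 0.3, 100.0  # kpc, kpc, GeV/cm^3, kpc

def rho_nfw(r):
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(psi):
    """Line-of-sight integral of rho^2 toward angle psi (rad) from the GC,
    in GeV^2 cm^-6 kpc; not averaged over the detector solid angle."""
    l_max = D_SUN * np.cos(psi) + np.sqrt(R_MAX**2 - D_SUN**2 * np.sin(psi)**2)
    r_of_l = lambda l: np.sqrt(l**2 + D_SUN**2 - 2.0 * D_SUN * l * np.cos(psi))
    val, _ = quad(lambda l: rho_nfw(r_of_l(l)) ** 2, 0.0, l_max,
                  points=[D_SUN * np.cos(psi)], limit=200)
    return val

for deg in (0.1, 0.5, 1.0):
    print(deg, j_factor(np.radians(deg)))   # steep rise toward the GC
```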
---
abstract: 'We study the geometrical conditions for stabilizing magnetic skyrmions in cylindrical nanostrips and nanotubes of ferromagnetic materials with Dzyaloshinskii-Moriya interactions. We obtain the ground state of the system implementing a simulated annealing technique for a classical spin Hamiltonian with competing isotropic exchange and chiral interactions, radial anisotropy and an external field. We address the impact of surface curvature on the formation, the shape and the size of magnetic skyrmions. We demonstrate that the evolution of the skyrmion phase with the curvature of the nanoshell is controlled by the competition between two characteristic lengths, namely the curvature radius, $R$ (geometrical length) and the skyrmion radius, $R_{Sk}$ (physical length). In narrow nanotubes ($R<R_{Sk}$) the skyrmion phase evolves to a stripe phase, while in wide nanotubes ($R>R_{Sk}$) a mixed skyrmion-stripe phase emerges. Most interestingly, the mixed phase is characterized by skyrmions spatially separated from stripes, owing to the direction of the applied field relative to the surface normal. Below the instability region ($R \lesssim R_{Sk}$) skyrmions remain circular and preserve their size as a consequence of their topological protection. Zero-field skyrmions are shown to be stable on curved nanoelements with free boundaries within the same stability region ($R\gtrsim R_{Sk}$). The experimental and technological perspectives from the stability of skyrmions on cylindrical surfaces are discussed.'
author:
- 'D. Kechrakos'
- 'A. Patsopoulos'
- 'L. Tzannetou'
title: Magnetic skyrmions in cylindrical ferromagnetic nanostructures with chiral interactions
---
Introduction
============
Magnetic skyrmions are self-localized vortex-like spin structures with axial symmetry [@bog94a]. They have been mainly studied in noncentrosymmetric bulk crystals and their thin films [@muhl09; @pap09; @yux10], as well as in ultrathin ferromagnetic (FM) films on heavy metal (HM) substrates [@hein11; @rom13], in which a sizable Dzyaloshinskii-Moriya interaction (DMI) [@dzi58; @mor60] leads to their formation. From the technological point of view, two-dimensional magnetic skyrmions formed in ferromagnetic-heavy metal interfaces have potential for a variety of innovative robust and high-density spintronics applications due to their protected topology and nanoscale size [@fer13]. In particular, they can be driven by lateral spin currents [@fer13; @samp13; @nag13], produced by electrical currents with five to six orders of magnitude smaller current density than those needed for domain wall motion [@rom13], thus pointing to energy efficient [@fer13] skyrmion-based racetrack-type memory devices [@par08]. However, current-driven skyrmions will drift from the racetrack direction due to the presence of the Magnus force [@iwa13; @yux12], if the velocity is high enough. This phenomenon, known as the skyrmion Hall effect (SkHE), leads to their annihilation at the racetrack edge and the loss of stored information. An approach for limiting the SkHE is through spin-wave driven skyrmion motion [@zha15; @sch15]. Skyrmions can be displaced by magnons induced by thermal gradients in insulating chiral ferromagnets [@kon13], while the SkHE deviation vanishes for high energy magnons [@gar15]. However, compared with the current-driven skyrmion motion, it is difficult to generate spin waves in a nanometre-size nanotrack with appropriate spectral properties for driving the motion of a skyrmion. It is also difficult to realize a skyrmion nanocircuit based on thermal gradients. Consequently, the current-driven skyrmion motion is the most promising method and as such it attracts a great deal of research effort. To this end, various potential barriers have been proposed to confine skyrmions in the central region of the racetrack so that the annihilation at the racetrack edge is avoided [@zha16; @bar16; @pur16; @lai17; @foo15]. A suggested method is by tuning the perpendicular [@foo15] or the crystalline [@lai17] magnetic anisotropy. As a result, a path of lower resistance is created at the racetrack center, allowing the skyrmions to pass the racetrack without annihilation. Another approach is to tune the height of the ferromagnetic layers, creating a rectangular groove on the center of the racetrack. As a result, a curb structure is formed, which functions to confine the skyrmion within the groove [@pur16]. Furthermore, the damping constant of the racetrack can be tuned in either the transverse or the longitudinal direction in different regions of the racetrack [@liu16], so that the deviations of the skyrmions are in opposite directions and cancel each other out. Therefore, the skyrmions can be efficiently confined in the racetrack center and the SkHE is avoided. Another aspect hampering the use of magnetic skyrmions in racetrack memory applications is their uncontrollable excitation at the edges of magnetic nanostrips and thin films [@ran17], leading to erroneous writing events. This phenomenon is known as the edge effect.
In addition, skyrmion motion, including the oscillating motion and the gyration [@gar16], is affected by the edges in confined geometries due to their potential force [@nav16; @gar16] acting on skyrmions. From the aforementioned works, it appears that the generation and manipulation of magnetic skyrmions on boundary-free samples is a desirable direction of research, and curved nanostructures, such as magnetic nanotubes, constitute a promising option.
The study of magnetic structure and solitonic excitation on curved surfaces has recently attracted intensive interest as curvature was shown to control physical properties of the system [@streu16]. The curvilinear geometry of bent and curved ferromagnetic wires and surfaces [@pyl15; @gai15; @car15] introduces effective chiral interactions and curvature-induced anisotropy [@streu16]. As a consequence, curvature-driven effects emerge, such as magnetochiral effects [@kra12; @ota12] and topologically induced magnetization patterning [@kra12; @pyl15], resulting in high domain wall velocities [@yan12] and chirality symmetry breaking [@pyl15]. Despite the fact that recent works have focused on the impact of surface curvature on the emerging chiral properties and related magnetic order of otherwise achiral ferromagnetic materials [@streu16], to the best of our knowledge, the conditions for skyrmion formation on chiral curved surfaces have not been addressed yet. We anticipate, on physical grounds, that the skyrmion phase supported on a planar nanostructure, such as a FM/HM interface, will be driven to instability under curving.
It is the main aim of the present work to investigate the ground state properties of curved ferromagnetic nanostructures with chiral interactions (DMI) and examine the conditions under which curvature-driven skyrmion instability occurs. Our structural model accounts for the directional modulation of the DMI vector induced by the curvature of the nanostructure under consideration, thus providing a more realistic description of the interplay between isotropic exchange (Heisenberg) and chiral interactions on curved surfaces. We focus on cylindrical nanoelements and nanotubes. Our results demonstrate the feasibility of skyrmion formation on the ridge of a nanotube, where the external field remains almost normal to the surface, provided that the radius of the nanotube remains at least comparable to the skyrmion radius $(R_{tube} \ge R_{Sk})$. The same geometrical criterion ensures the stability of skyrmions without an external magnetic field on curved nanoelements.
Micromagnetic Model and Simulation Method
=========================================
We consider a thin ferromagnetic cylindrical nanostrip along the z-axis with length $L_z$, width $L_y$, inner radius $R$ and thickness $t\ll R$ (Fig.\[fig:sketch\]).
![(Color online) Cylindrical nanostrip extended along the $z$-axis with width $L_y$, thickness $t$, curvature radius $R$ and curvature angle $\phi_0$, used as our model system. []{data-label="fig:sketch"}](fig1.jpg){width="0.40\linewidth"}
The central angle of the curved nanostrip is defined as $\phi_0=L_y/R$. A planar nanostrip ($R\rightarrow\infty,\phi_0=0 $) and a cylindrical nanotube ($R\ne0,\phi_0=360^0$) naturally occur as limiting cases of the curved nanostrip.
The micromagnetic energy of the system as a functional of the continuous magnetization field $\textbf{m}(\textbf{r})=\textbf{M}(\textbf{r})/M_s$ reads $$\begin{aligned}
E[\textbf{m}]=\int d^3\textbf{r}~
\{
A |\nabla\textbf{m}|^2
-K_u (\textbf{m}\cdot \textbf{e}_\rho)^2
\nonumber \\
-M_s\textbf{m}\cdot\textbf{B}
+w_{DM}
\}
\label{eq:microm}\end{aligned}$$ where the integral runs over the volume of the nanostructure, $A$ is the exchange constant and $K_u$ is the radial anisotropy density, which we adopt here as a generalization of the perpendicular anisotropy observed in thin ferromagnetic films on a heavy
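Ground states of the lattice version of this functional can be located with the simulated annealing procedure mentioned in the abstract. Below is a minimal Metropolis sketch in the planar limit of the strip, where the radial anisotropy axis $\textbf{e}_\rho$ becomes the film normal $\hat z$; the lattice size, couplings and cooling schedule are illustrative assumptions, not the values used in this work.

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, D, K, B = 24, 1.0, 0.5, 0.1, 0.15    # illustrative couplings
Z = np.array([0.0, 0.0, 1.0])

S = rng.normal(size=(L, L, 3))
S /= np.linalg.norm(S, axis=2, keepdims=True)

def linear_field(i, j):
    """Exchange + interfacial DMI + Zeeman field on spin (i,j); the
    anisotropy is quadratic in S and handled separately below."""
    h = B * Z
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = S[(i + di) % L, (j + dj) % L]
        d_vec = D * np.cross(Z, [di, dj, 0.0])   # D_ij = D (z x r_ij)
        h = h + J * nb + np.cross(nb, d_vec)
    return h

def delta_E(i, j, new):
    old = S[i, j]
    return (-(new - old) @ linear_field(i, j)
            - K * (new[2] ** 2 - old[2] ** 2))

for T in np.geomspace(2.0, 0.01, 40):            # cooling schedule
    for _ in range(4000):
        i, j = rng.integers(L, size=2)
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        if (dE := delta_E(i, j, new)) < 0 or rng.random() < np.exp(-dE / T):
            S[i, j] = new

print("mean S_z:", S[..., 2].mean())   # S_z reversals signal skyrmion/stripe textures
```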
---
abstract: 'We show that there exists a universal positive constant $\varepsilon_0 > 0$ with the following property: Let $g$ be a positive Einstein metric on $S^4$. If the Yamabe constant of the conformal class $[g]$ satisfies $$Y(S^4, [g]) >\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}]) - \varepsilon_0\,,$$ where $g_{\mathbb S}$ denotes the standard round metric on $S^4$, then, up to rescaling, $g$ is isometric to $g_{\mathbb S}$. This is an extension of Gursky’s gap theorem for positive Einstein metrics on the four-sphere.'
address:
- 'Department of Mathematics, Tokyo Institute of Technology, Tokyo 152-8551, Japan'
- 'Department of Mathematics, Tokyo Institute of Technology, Tokyo 152-8551, Japan'
- 'Mathematics Department, Indian Institute of Science, 560012 Bangalore, India'
author:
- 'Kazuo Akutagawa${}^*$'
- 'Hisaaki Endo${}^{**}$'
- Harish Seshadri
date: 'January, 2018; February, 2018 (revised version).'
title: |
A gap theorem for positive Einstein metrics\
on the four-sphere
---
Introduction and main results
=============================
A smooth Riemannian metric $g$ is said to be [*Einstein*]{} if its Ricci tensor ${\rm Ric}_g$ is a constant multiple $\lambda$ of $g$: $${\rm Ric}_g = \lambda\,g\,.$$ When such a metric exists, it is natural to ask whether it is unique. However, in dimension $n \geq 5$, there exist many examples of closed $n$-manifolds, each of which has infinitely many non-homothetic Einstein metrics (cf. [@Besse]). In fact, there exist infinitely many non-homothetic Einstein metrics of positive scalar curvature ([*positive Einstein*]{} for brevity) on $S^n$ when $5 \le n \le 9$ [@Bohm] (cf. [@Jensen], [@B-K]). There are no non-existence or uniqueness results known when $n \geq 5$.
When $n = 4$, there are necessary topological conditions for a closed $4$-manifold $M$ to admit an Einstein metric [@Thorpe], [@Hitchin-1], [@LeBrun-2]. Uniqueness is known in some special cases: when $M$ is a smooth compact quotient of real hyperbolic $4$-space ([resp.]{} complex-hyperbolic $4$-space), the standard negative Einstein metric is the unique Einstein metric (up to rescaling and isometry) [@BCG] ([resp.]{} [@LeBrun-1]). In the positive case, there are some partial rigidity results on the $4$-sphere $S^4$ and the complex projective plane $\mathbb{CP}^2$ [@GL], [@G], [@Y]. When $M = S^4$, the standard round metric $g_{\mathbb{S}}$ of constant curvature $1$ is, to date, the only known Einstein metric (up to rescaling and isometry). In this connection we have the following gap theorem due to M. Gursky (see [@ABKS] for the significance of the constant $\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$):
\[Gursky\] Let $g$ be a positive Einstein metric on $S^4$. If its Yamabe constant $Y(S^4, [g])$ satisfies the following inequality $$Y(S^4, [g]) \geq \frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$$ then, up to rescaling, $g$ is isometric to $g_{\mathbb{S}}$. Here, $[g]$ denotes the conformal class of $g$.
Note that $Y(S^4, [h]) \leq Y(S^4, [g_{\mathbb{S}}]) = 8\sqrt{6}\pi$ for any Riemannian metric $h$ and that $Y(S^4, [g]) = R_g \sqrt{V_g}$ for any Einstein metric $g$, where $R_g$ and $V_g = {\rm Vol}(S^4, g)$ denote respectively the scalar curvature of $g$ and the volume of $(S^4, g)$.
Our main result in this paper is an extension of Theorem\[Gursky\]:
\[MainThm1\] There exists a universal positive constant $\varepsilon_0 > 0$ with the following property$:$ If $g$ is a positive Einstein metric on $S^4$ with Yamabe constant $$Y(S^4, [g]) >\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}]) - \varepsilon_0,$$ then, up to rescaling, $g$ is isometric to $g_{\mathbb{S}}$.
This result can be restated in terms of the [*Weyl constant*]{} of $[g]$ (cf.[@ABKS]). Indeed, the Chern-Gauss-Bonnet theorem (see Remark\[ALE\]-(1)) implies that the lower bound on the Yamabe constant is equivalent to the following upper bound on the Weyl constant: $\int_M |W_g|^2 d \mu_g < \frac{32}{3} \pi^2 + \widetilde{\varepsilon}_0$, where $\widetilde{\varepsilon}_0 := \frac{\varepsilon_0}{24}(16\sqrt{2}\pi - \varepsilon_0) > 0$.
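For the reader's convenience, here is the short computation behind this equivalence (a sketch, using the Chern-Gauss-Bonnet theorem in the norm convention implicit in the remark; for an Einstein metric the traceless Ricci term vanishes): $$16\pi^2 = 8\pi^2\chi(S^4) = \int_{S^4}\Big(|W_g|^2 + \tfrac{R_g^2}{24}\Big)\,d\mu_g\,, \qquad \int_{S^4} R_g^2\,d\mu_g = R_g^2 V_g = Y(S^4,[g])^2\,,$$ so that $\int_{S^4}|W_g|^2\,d\mu_g = 16\pi^2 - \tfrac{1}{24}\,Y(S^4,[g])^2$. Substituting $Y(S^4,[g]) = 8\sqrt{2}\pi - \varepsilon_0$ gives $$16\pi^2 - \tfrac{1}{24}\big(8\sqrt{2}\pi - \varepsilon_0\big)^2 = \tfrac{32}{3}\pi^2 + \tfrac{\varepsilon_0}{24}\big(16\sqrt{2}\pi - \varepsilon_0\big) = \tfrac{32}{3}\pi^2 + \widetilde{\varepsilon}_0\,.$$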
More generally, we obtain the following (note that $8\sqrt{2}\pi = \frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$):
\[MainThm2\] For $c > 0$, let $\mathcal{E}_{\geq c}(S^4)$ denote the space of all unit-volume positive Einstein metrics $g$ on $S^4$ with $c \leq Y(S^4, [g]) < 8\sqrt{2}\pi$. Then the number of connected components of the moduli space $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is finite. In particular, $\{ Y(S^4, [g]) \in [c, 8\sqrt{2}\pi)\ |\ g \in \mathcal{E}_{\geq c}(S^4) \}$ is a finite set $($possibly empty$)$.
Here $ \mathcal{M}_1(S^4)/{\rm Diff}(S^4)$ has the $C^\infty$-topology and $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is endowed with the subspace topology.
These theorems follow from the following crucial result:
\[MainProp\] Let $\{g_i\}$ be a sequence in $ \mathcal{E}_{\geq c}(S^4)$ for some positive constant $c > 0$. Then there exists a subsequence $\{j\} \subset \{i\}$, $\{\phi_j\} \subset {\rm Diff}(S^4)$ and a unit-volume positive Einstein metric $g_{\infty}$ on $S^4$ such that $\phi_j^*g_j$ converges to $g_{\infty}$ with respect to the $C^{\infty}$-topology on $\mathcal{M}_1(S^4)$.
[**Remark:**]{} Theorem D of [@Anderson] states that the same conclusion as the one in Proposition \[MainProp\] holds for any sequence $\{g_i\} \subset \mathcal{E}_{\geq c}(M)$ on any closed $4$-manifold $M$ with $1 \leq \chi(M) \leq 3$, where $\chi(M)$ denotes the Euler characteristic of $M$. Unfortunately, the proof appears to be incorrect. Specifically, Theorem D is based on Lemma 6.3, which asserts that a Ricci-flat ALE 4-space $X$ with $\chi(X)=1$ is necessarily isometric to the Euclidean $4$-space $({\mathbb R}^4, g_{\mathbb{E}})$. This is not true: the Ricci-flat ALE 4-space $X_1$ constructed by Eguchi-Hanson [@EH] has a free, isometric ${\mathbb Z}_2$-action whose quotient $X_2 = X_1/{\mathbb Z}_2$ is a Ricci-flat ALE $4$-space with $\chi(X_2)=1$. Note that $X_2$ is nonorientable. Even if we assume that $X$ is orientable in Lemma 6.3, the topological argument in the proof still contains some gaps. Proposition 3.10 of [@Anderson-GAFA] corrects a minor inaccuracy of Lemma 6.3. However, the proof also contains some gaps in the topological argument (see Remark \[Counter\] in $\S$4 for details).
Gursky’s proof of Theorem\[Gursky\] involves a sophisticated Bochner technique, a modified scalar curvature and a conformal rescaling argument. The proof of Proposition1.4 is based on topological results about $S^3$-quotients embedded in $S^4$ and the convergence theory of Einstein metrics in four-dimensions. Given this proposition, we invoke Gursky’s result to prove Theorems\[MainTh
---
abstract: 'Estimation of 3D human pose from monocular image has gained considerable attention, as a key step to several human-centric applications. However, generalizability of human pose estimation models developed using supervision on large-scale in-studio datasets remains questionable, as these models often perform unsatisfactorily on unseen in-the-wild environments. Though weakly-supervised models have been proposed to address this shortcoming, performance of such models relies on availability of paired supervision on some related tasks, such as 2D pose or multi-view image pairs. In contrast, we propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework[^1], which is not restrained by any paired or unpaired weak supervisions. Our pose estimation framework relies on a minimal set of prior knowledge that defines the underlying kinematic 3D structure, such as skeletal joint connectivity information with bone-length ratios in a fixed canonical scale. The proposed model employs three consecutive differentiable transformations named as forward-kinematics, camera-projection and spatial-map transformation. This design not only acts as a suitable bottleneck stimulating effective pose disentanglement, but also yields interpretable latent pose representations avoiding training of an explicit latent embedding to pose mapper. Furthermore, devoid of unstable adversarial setup, we re-utilize the decoder to formalize an energy-based loss, which enables us to learn from in-the-wild videos, beyond laboratory settings. Comprehensive experiments demonstrate our state-of-the-art unsupervised and weakly-supervised pose estimation performance on both Human3.6M and MPI-INF-3DHP datasets. Qualitative results on unseen environments further establish our superior generalization ability.'
author:
- |
Jogendra Nath Kundu[^2], Siddharth Seth, Rahul M V, Mugalodi Rakesh\
**R. Venkatesh Babu, Anirban Chakraborty\
Indian Institute of Science, Bangalore, India\
[{jogendrak, siddharthseth}@iisc.ac.in, rmvenkat@andrew.cmu.edu, rakeshramesha@gmail.com, {venky, anirban}@iisc.ac.in]{}**
bibliography:
- 'ms.bib'
title: |
Kinematic-Structure-Preserved Representation for\
Unsupervised 3D Human Pose Estimation
---
\[tab:char\]
Introduction
============
Building general intelligent systems, capable of understanding the inherent 3D structure and pose of non-rigid humans from monocular RGB images, remains an elusive goal in the vision community. In recent years, researchers have aimed to solve this problem by leveraging the advances in two key aspects: a) improved architecture design [@newell2016stacked; @chu2017multi] and b) an increasing collection of diverse annotated samples to fuel the supervised learning paradigm [@VNect_SIGGRAPH2017]. However, obtaining 3D pose ground-truth for non-rigid human bodies is a highly inconvenient process. Available motion capture systems, such as body-worn sensors (IMUs) or multi-camera structure-from-motion (SFM), require careful pre-calibration, and hence are usually used in a pre-setup laboratory environment [@ionescu2013human3; @zhang2017martial]. This often restricts diversity in the collected dataset, which in turn hampers generalization of the supervised models trained on such data. For instance, the widely used Human3.6M [@ionescu2013human3] dataset captures 3D pose using 4 fixed cameras (only 4 background scenes), 11 actors (limited apparel variations), and 17 action categories (limited pose diversity). A model trained on this dataset delivers impressive results when tested on samples from the same dataset, but does not generalize to an unknown deployed environment, thereby yielding a non-transferability issue.
To deal with this problem, researchers have started exploring innovative techniques to reduce dependency on annotated real samples. Aiming to enhance appearance diversity on known 3D pose samples (CMU-MoCap), synthetic datasets have been proposed, by compositing a diverse set of human template foregrounds with random backgrounds [@varol2017learning]. However, models trained on such samples do not generalize to a new motion (e.g. a particular dance form), apparel, or environment much different from the training samples, as a result of large domain shift. Following a different direction, several recent works propose weakly-supervised approaches [@zhou2017towards], where they consider access to a large-scale dataset with paired supervision on some related-tasks other than the task in focus (3D pose estimation). Particularly, they access multiple cues for weak supervision, such as, a) paired 2D ground-truth, b) unpaired 3D ground-truth (3D pose without the corresponding image), c) multi-view image pair ([Rhodin et al. [-@rhodin2018unsupervised]]{}), d) camera parameters in a multi-view setup etc. (see Table \[tab:char\] for a detailed analysis).
While accessing such weak paired-supervisions, the general approach is to formalize a self-supervised consistency loop, such as 2D$\rightarrow$3D$\rightarrow$2D [@tung2017adversarial], view-1$\rightarrow$3D$\rightarrow$view-2 [@kocabas2019self], etc. However, the limitations of domain shift still persist as a result of using annotated data (2D ground-truth or multi-view camera extrinsics). To this end, without accessing such paired samples, [@jakab2019learning] proposed to leverage unpaired samples to model the natural distribution of the expected representations (2D or 3D pose) using adversarial learning. Obtaining such samples, however, requires access to a 2D or 3D pose dataset and hence the learning process is still biased towards the action categories present in that dataset. One cannot expect to have access to any of the above discussed paired or unpaired weak supervisory signals for an unknown deployed environment (e.g. frames of a dance-show where the actor is wearing a rare traditional costume). This motivates us to formalize a fully-unsupervised framework for monocular 3D pose estimation, where the pose representation can be adapted to the deployed environment by accessing only the RGB video frames, devoid of dependency on any explicit supervisory signal.
**Our contributions.** We propose a novel unsupervised 3D pose estimation framework, relying on a carefully designed kinematic structure preservation pipeline. Here, we constrain the latent pose embedding to form an interpretable 3D pose representation, thus avoiding the need for an explicit latent to 3D pose mapper. Several recent approaches aim to learn a prior characterizing kinematically plausible 3D human poses using available MoCap datasets ([Kundu et al. ]{}[-@kundu2019bihmp]). In contrast, we plan to utilize minimal kinematic prior information, adhering to the restriction of not using any external unpaired supervision. This involves: a) access to the knowledge of hierarchical limb connectivity, b) a vector of allowed bone length ratios, and c) a set of 20 synthetically rendered images with diverse background and pose (a minimal dataset with paired supervision to standardize the model towards the intended 2D or 3D pose conventions). The aforementioned prior information is very minimal in comparison to the pose-conditioned limits formalized by ([Akhter et al. ]{}[-@akhter2015pose]) in terms of both dataset size and the parameters needed to define the constraints.
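To make the role of this prior concrete, here is a minimal numpy sketch of how fixed bone-length ratios and joint connectivity turn interpretable latent directions into image-plane keypoints (forward kinematics followed by camera projection); the kinematic tree, ratios and camera parameters below are illustrative stand-ins, not the values used in this work.

```python
import numpy as np

# Assumed minimal kinematic prior: parent of each joint and bone-length
# ratios in a canonical scale (hypothetical 4-joint chain).
PARENT = {1: 0, 2: 1, 3: 2}              # root -> hip -> knee -> ankle
BONE_RATIO = {1: 0.5, 2: 0.45, 3: 0.4}

def forward_kinematics(unit_dirs, root=np.zeros(3)):
    """Map per-joint unit direction vectors (the interpretable latent pose)
    to 3D joint locations by walking the kinematic tree."""
    joints = {0: root}
    for j, p in PARENT.items():          # parents precede children
        d = unit_dirs[j] / np.linalg.norm(unit_dirs[j])
        joints[j] = joints[p] + BONE_RATIO[j] * d
    return joints

def camera_projection(joints, f=1.0, cam_z=3.0):
    """Pinhole projection of the 3D joints to the image plane."""
    return {j: f * x[:2] / (x[2] + cam_z) for j, x in joints.items()}

dirs = {1: np.array([0.0, -1.0, 0.1]),
        2: np.array([0.2, -1.0, 0.0]),
        3: np.array([0.0, -1.0, 0.3])}
print(camera_projection(forward_kinematics(dirs)))
```

Both maps are differentiable in the latent directions, which is what allows them to act as a bottleneck inside an end-to-end trained network.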
In the absence of multi-view or depth information, we infer 3D structure directly from the video samples for the unsupervised 3D pose estimation task. One can easily segment moving objects from a video in the absence of any background (BG) motion. However, this is only applicable to in-studio static camera feeds. Aiming to work on in-the-wild YouTube videos, we formalize separate unsupervised learning schemes for videos with both static and dynamic BG. In the absence of background motion, we form pairs of video frames with a rough estimate of the corresponding BG image, following a training scheme to disentangle foreground-apparel and the associated 3D pose. However, in the presence of BG motion, we cannot form such consistent pairs, and thus devise a novel energy-based loss on the disentangled pose and appearance representations. In summary,
- We formalize a novel collection of three differentiable transformations, which not only acts as a bottleneck stimulating effective pose disentanglement but also yields interpretable latent pose representations avoiding training of an explicit latent-to-pose mapper.
- The proposed energy-based loss, not only enables us to learn from in-the-wild videos, but also improves generalizability of the model as a result of training on diverse scenarios, without ignoring any individual image sample.
- We demonstrate *state-of-the-art* unsupervised and weakly-supervised 3D pose estimation performance on both Human3.6M and MPI-INF-3DHP datasets.
Related Works {#sec:related-works}
=============
**3D human pose estimation.** There is a plethora of fully-supervised 3D pose estimation works [@fang2018learning; @mehta2017monocular; @VNect_SIGGRAPH2017], where the performance is benchmarked on the same dataset that is used for training. Such approaches do not generalize under even minimal domain shifts beyond the laboratory environment. In absence of large-scale diverse outdoor datasets with 3D pose annotations, datasets with 2D pose annotations is used as a weak supervisory signal for transfer learning using various
---
abstract: 'Convergence of stochastic processes with jumps to diffusion processes is investigated in the case when the limit process has discontinuous coefficients. An example is given in which the diffusion approximation of a queueing model yields a diffusion process with discontinuous diffusion and drift coefficients.'
address:
- 'School of Mathematics, University of Minnesota, Minneapolis, MN, 55455, USA'
- 'Department of Electrical Engineering-Systems, Tel Aviv University, 69978 Tel Aviv, Israel'
author:
- 'N. V. Krylov'
- 'R. Liptser'
title: On diffusion approximation with discontinuous coefficients
---
Introduction {#sec-1}
============
Suppose that we are given a sequence of semimartingales $(x^n_t)_{t\ge 0}$, $n=1,2,...$, with paths in the Skorokhod space ${\mathcal{D}}=
{\mathcal{D}}([0,\infty),{\mathbb{R}}^d)$ of ${\mathbb{R}}^{d}$-valued right-continuous functions on $[0,\infty)$ having left limits on $(0,\infty)$. If one can prove that the sequence of distributions ${\mathbb{Q}}^{n}$ of $x^{n}_{\cdot}$ on ${\mathcal{D}}$ weakly converges to the distribution ${\mathbb{Q}}$ of a diffusion process $(x_t)_{t\ge 0}$, then one says that the sequence of $(x^n_t)_{t\ge 0}$ admits a diffusion approximation. In this article by diffusion processes we mean solutions of Itô equations of the form $$x_t=x_0+\int_0^tb(s,x_s)\,ds+\int_0^t\sqrt{a(s,x_s)}\,dw_s,$$ with $w_{t}$ being a vector-valued Wiener process. Usually, to investigate whether a diffusion approximation holds in a particular situation, one uses the general framework of convergence of semimartingales as developed, for instance, in §3, Ch. 8 of [@LS] (see also the references in this book).
The problem of diffusion approximation attracted attention of many researchers who obtained many deep and important results. The reason for this is that diffusion approximation is a quite efficient tool in stochastic systems theory (see [@Ku'84], [@Ku'90]), in asymptotic analysis of queueing models under heavy traffic and bottleneck regimes (see [@KL]), in finding asymptotically optimal filters (see [@KuRu], [@LR]), in asymptotical optimization in stochastic control problems (see [@KuRu0], [@LRT]), and in many other issues.
In all the above-mentioned references the coefficients $a(t,x)$ and $b(t,x)$ of the limit diffusion process are continuous in $x$. In part, this is dictated by the approach developed in §3, Ch. 8 of [@LS]. On the other hand, there are quite a few situations in which the limit process should have discontinuous coefficients. One such situation is presented in [@FS], where a queueing model is considered. It was not possible to apply standard results and the authors only conjectured that the diffusion approximation should be a process with natural coefficients. Later this conjecture was rigorously proved in [@Ch]. In [@Ch] and [@FS] only the drift term is discontinuous. Another example of a limit diffusion with both drift and diffusion coefficients discontinuous is given in the article [@KhasKryl] on the averaging principle for diffusion processes with a null-recurrent fast component.
The idea to circumvent the discontinuity of $a$ and $b$ is to try to show that the time spent by $(t,x_{t})$ in the set $G$ of their discontinuity in $x$ is zero. This turns out to be enough if outside of $G$ the “coefficients” of $x^{n}_{t}$ converge “uniformly” to the coefficients of $x_{t}$. By the way, even if all these hold, still the functionals $$\int_{0}^{t}a(s,y_{s})\,ds,\quad
\int_{0}^{t}b(s,y_{s})\,ds,\quad y_{\cdot}\in{\mathcal{D}}$$ need not be continuous on the support of ${\mathbb{Q}}$. This closes the route of “trivially” generalizing the result from §3, Ch. 8 of [@LS].
To estimate the time spent by $x_{t}$ we use an inequality similar to the following one $$\label{*}
E\int_{0}^{T}f(t,x_{t})\,dt \leq
N\Bigg(\int_{0}^{T}\int_{\mathbb{R}^d}
f^{d+1}(t,x)\,dxdt\Bigg)^{1/(d+1)},$$ which is obtained in [@Kr74] for nonnegative Borel $f$. Then upon assuming that $G\subset(0,\infty)\times{\mathbb{R}}^{d}$ has $(d+1)$-dimensional Lebesgue measure zero and substituting $I_{G}$ in place of $f$ in (\[\*\]) we get that indeed the time spent by $(t,x_{t})$ in $G$ is zero. However, for (\[\*\]) to hold we need the process $x_{t}$ to be uniformly nondegenerate, which may not be convenient in some applications. Therefore, in Sec. \[section 3.14.1\] we prove a version of (\[\*\]), which allows us to get the conclusion about the time spent in $G$ assuming that the process is nondegenerate only on $G$. In essence, our approach to diffusion approximation with discontinuous coefficients is close to the one from [@Ch]. However, details are quite different and we get more general results under less restrictive assumptions. In particular, we do not impose the linear growth condition. Neither do we assume that the second moments of $x^{n}_{0}$ are bounded. The weak limits of processes with jumps appear in many other settings, in particular, in Markov chain approximations in the theory of controlled diffusion processes, where, generally, the coefficients of $x^{n}_{t}$ are not supposed to converge to anything in any sense and yet the processes converge weakly to a process of diffusion type.
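The phenomenon behind this estimate is easy to observe numerically: for a one-dimensional Itô process whose coefficients jump at a point, the expected occupation time of a shrinking neighborhood of the jump goes to zero, so the discontinuity set itself carries no time. A minimal Euler-scheme sketch with illustrative coefficients (not taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def occupation_near_zero(delta, T=1.0, dt=1e-4, n_paths=2000):
    """Euler scheme for dX = b(X)dt + s(X)dW with coefficients jumping at
    x = 0; estimates the expected time spent within delta of the jump."""
    b = lambda x: np.where(x > 0, -1.0, 1.0)   # discontinuous drift
    s = lambda x: np.where(x > 0, 1.0, 0.5)    # discontinuous, nondegenerate
    x = np.zeros(n_paths)
    occ = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        occ += dt * (np.abs(x) < delta)
        x += b(x) * dt + s(x) * np.sqrt(dt) * rng.normal(size=n_paths)
    return occ.mean()

for d in (0.2, 0.1, 0.05):
    print(d, occupation_near_zero(d))   # shrinks roughly linearly in delta
```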
We mention here that Theorem 5.3 in Ch. 10 of [@KD] also bears on this matter in the particular case of Markov chain approximations in the theory of controlled diffusion processes. Clearly, there is no way to specify precisely the coefficients of all limit points in the general problem. Still one can obtain some nontrivial information and one may wonder if one can get anything from general results when we are additionally given that the coefficients do converge on the major part of the space. In Remarks \[remark 10.13.1\] and \[remark 10.13.2\] we show that this is not the case in what concerns Theorem 5.3 in Ch. 10 of [@KD].
Above we alluded to the “coefficients” of $x^{n}_{t}$. By them we actually mean the local drift and the matrix of quadratic variation. We do not use any additional structure of $x^{n}_{t}$; in particular, the quadratic variation is just the sum of two terms: one coming from the diffusion and another from the jumps. Therefore, unlike [@KP], we do not use any stochastic equations for $x^{n}_{t}$. This allows us to neither introduce nor use any assumptions on the martingales driving these equations and their (usual) coefficients, thus making the presentation simpler and more general. On the other hand, it is worth noting that the methods of [@KP] may be more useful in other problems. Our intention was not to cover all aspects of diffusion approximation but rather to give a new method allowing us to treat discontinuous coefficients. In particular, we do not discuss uniqueness of solutions to the limit equation. This is a separate issue belonging to the theory of diffusion processes and we only mention article [@KhasKryl], where the reader can find a discussion of it.
The paper is organized as follows. In Section \[section 4.14.1\] we prove our main results, Theorems \[theorem 3.8.1\] and \[theorem 4.25.1\], about diffusion approximation. Their proofs rely on the estimate proved in Sec. \[section 3.14.1\] discussed above. But even if the set $G$ is empty, the results which we prove are the first ones of their kind.
In Theorems \[theorem 3.8.1\] and \[theorem 4.25.1\] there is no assumption about any control of $\sqrt{a(t,x)}$ and $b(t,x)$ as $|x|\to\infty$; instead we assume that ${\mathbb{Q}}^{n}$ converge weakly to ${\mathbb{Q}}$. Therefore, in Sec. \[section 4.14.2\] we give a sufficient condition for precompactness of a sequence of distributions on Skorokhod space. Interestingly enough, this condition is different from those which one gets from [@JS] and [@LS] and again does not involve the usual growth conditions. Sec. \[section 4.18.1\] contains an example of application of our results to a queueing model close to the one from [@Ch], [@FS]. We slightly modify the model from [@Ch], [@FS] and obtain a diffusion approximation with discontinuous [*drift*]{} and [*diffusion*]{} coefficients. To the best of our knowledge this is the first example in which the diffusion approximation leads to discontinuous diffusion coefficients.
The authors are sincerely grateful to the referees for many useful suggestions.
The main results {#section 4.14.1}
================
---
abstract: 'The Infrared Space Observatory [*(ISO)*]{} is used to carry out mid-IR (7 and 15 $\mu$m) and far-IR (90 $\mu$m) observations of a sample of star-forming sub-mJy radio sources. By selecting the sample at radio wavelengths, one avoids biases due to dust obscuration. It is found that the mid-IR luminosities, covering the PAH features, measure the star formation rate for galaxies with $P_{1.4\,{\rm GHz}} < 10^{23}$ W Hz$^{-1}$. This is further confirmed using the H$\alpha$ luminosities. The far-IR emission is also found to trace the SFR over the whole range of radio and H$\alpha$ luminosities. The implication of the mid-IR measurements in estimating the SFRs from the future infrared space missions (SIRTF and ASTRO-F) is discussed.'
author:
- Bahram Mobasher
- José Afonso
- Lawrence Cram
title: 'ISO Observations of Star-forming Galaxies'
---
Introduction
============
There now exist several measurements of the star formation rate (SFR) at different redshifts, based on UV [@1; @2; @3] and Balmer-line [@4; @5; @6; @7] studies, with the latter yielding estimates a factor of 2–3 times higher than the former, presumably because of differential dust extinction. These disagreements impede progress in understanding the evolution with redshift of the rates of star formation and heavy element production [@8]. The problem becomes more serious at high redshifts due to changes in the dust content of galaxies with look-back time. In particular, optically selected samples are likely to be biased against actively star-forming and dusty galaxies, leading to an underestimation of the SFR from these samples. Indeed, it has been shown that a large fraction of the bolometric luminosity emerges at far-IR wavelengths, with recent observations with the Infrared Space Observatory (ISO) showing that the contribution to the cosmic infrared background is dominated by infrared-luminous galaxies. This confirms that most of the star formation, especially at high redshifts, is hidden in dusty environments. It has also been shown that different star-formation diagnostics give different SFRs even for the same galaxy. Therefore, to accurately trace the SFR, one needs to use as many [*independent*]{} star-formation diagnostics as possible.
In this study, the sensitivity of the mid-IR fluxes (7–15 $\mu$m), covering the PAH features, to the star-formation activity in galaxies will be studied, using an unbiased sample of star-forming galaxies. The potential of this technique for measuring SFRs at $z\sim 2$ is then discussed.
Sample Selection
================
The sample for this study consists of sub-mJy radio sources, selected at radio (1.4 GHz) wavelengths [@9] and hence free from dust-induced selection biases. A total of 400 of these galaxies were then spectroscopically observed, with their redshifts measured and spectral features (H$\alpha$, MgII, etc.) identified [@10]. A sample of 65 radio sources was then observed with ISOCAM (7 and 15 $\mu$m) and ISOPHOT (90 $\mu$m) (Afonso et al. 2001, [*in preparation*]{}). The objects adopted for [*ISO*]{} observations are chosen to be sub-mJy radio sources showing evidence for star-formation activity in their spectra and sufficiently bright at mid- to far-IR wavelengths (as predicted from their SEDs) to allow detection at these wavelengths. The number of radio sources in the [*ISO*]{} survey region, together with the number of galaxies with detections at the three [*ISO*]{} wavelengths, are listed in Table \[tab1\]. The ISOCAM pointed survey also resulted in the serendipitous detection of 26 sources for which no radio counterpart was found. These objects will not be discussed here. Details of the [*ISO*]{} observations and data reduction will be presented in a future paper (Afonso et al. 2001, [*in preparation*]{}).
  ----------- ------- -------
  Band        $N_s$   $N_d$
  7 $\mu$m    146     16
  15 $\mu$m   146     15
  90 $\mu$m   44      9
  ----------- ------- -------

  : Number of sources in the areas covered by both the [*ISO*]{} and radio surveys. $N_s$ and $N_d$ denote, respectively, the number of radio sources over the area covered by the [*ISO*]{} observations (65 pointings for ISOCAM, and 44 for ISOPHOT) and the number of [*ISO*]{}-detected sources.

\[tab1\]
Results
=======
The intrinsic luminosities at the [*ISO*]{} and radio wavelengths are estimated assuming $H_0 =65$ km s$^{-1}$ Mpc$^{-1}$. The K-corrections are applied assuming a flat spectrum at the 7 and 15 $\mu$m wavelengths. For the 90 $\mu$m and 1.4 GHz fluxes, a power-law SED of the form $f_\nu \propto \nu^{n}$ is assumed, with spectral indices of $n=-2$ and $-0.7$, respectively.
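A minimal sketch of this luminosity computation is given below; the low-redshift approximation to the luminosity distance, and the placeholder flux and redshift, are illustrative assumptions and are not values from the present sample.

```python
import numpy as np

C_KMS = 2.998e5          # speed of light [km/s]
H0 = 65.0                # [km/s/Mpc], as adopted in the text
MPC_CM = 3.086e24        # cm per Mpc

def lum_distance_mpc(z):
    # Crude low-z approximation d_L ~ (c z / H0)(1 + z/2); adequate for z << 1.
    return (C_KMS * z / H0) * (1.0 + 0.5 * z)

def k_corrected_lum(s_jy, z, n):
    # L_nu = 4 pi d_L^2 S_nu / (1+z)^(1+n) for a power-law SED f_nu ~ nu^n;
    # a flat spectrum (n = 0) reduces to the usual (1+z) factor.
    d_cm = lum_distance_mpc(z) * MPC_CM
    s_cgs = s_jy * 1e-23                       # Jy -> erg s^-1 cm^-2 Hz^-1
    return 4 * np.pi * d_cm**2 * s_cgs / (1 + z) ** (1 + n)

# Placeholder source: S_1.4GHz = 1 mJy at z = 0.2, radio spectral index -0.7
print(k_corrected_lum(1e-3, 0.2, n=-0.7), "erg/s/Hz  (divide by 1e7 for W/Hz)")
```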
The ratio of the [*ISO*]{} (7, 15, 90 $\mu$m) to radio power as a function of the radio power for galaxies in the present sample is shown in Fig. \[fig1\]. Both the detections and upper limits are included in this diagram. Figure \[fig1\] is significant in that the lack of a trend indicates that the radio and the mid-/far-IR luminosities measure the [*same*]{} quantity (i.e. star formation), whereas the presence of a trend implies that they are sensitive to [*different*]{} physical processes.
![Ratio of the [*ISO*]{} luminosities to the radio power as a function of the radio (1.4 GHz) power. $L_{7\,\mu{\rm m}}$, $L_{15\,\mu{\rm m}}$ and $L_{90\,\mu{\rm m}}$ are defined as $\nu P_\nu$ at the respective rest-frame wavelength and are given in units of $L_\odot$.[]{data-label="fig1"}](PLOTratio7_15_90vs14all.eps){width=".9\textwidth"}
The $L_{7\,\mu{\rm m}}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ and $L_{15\,\mu{\rm m}}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ relations both show a slight trend for $P_{1.4\,{\rm GHz}} < 10^{23}$ W/Hz, followed by a steep slope at $P_{1.4\,{\rm GHz}} > 10^{23}$ W/Hz. The value of $10^{23}$ W/Hz corresponds to the characteristic radio power of the sub-mJy sources, where a change of slope is also found in the 1.4 GHz luminosity function of star-forming galaxies [@13]. Assuming that the radio emission from galaxies is a measure of the synchrotron radiation due to relativistic electrons produced by supernova remnants, and hence of their SFR [@11; @12], one concludes that for $P_{1.4\,{\rm GHz}} < 10^{23}$ W/Hz the mid-IR (7 and 15 $\mu$m) luminosity is sensitive to the star-formation activity. However, for objects with $P_{1.4\,{\rm GHz}} > 10^{23}$ W/Hz, the PAH molecules are destroyed due to the strength of the photon field, resulting in a decrease in the mid-IR flux from galaxies. At the far-IR 90 $\mu$m wavelength, there is no significant trend on the $L_{90\,\mu{\rm m}}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ diagram, confirming that both the far-IR and radio luminosities measure the same quantity (i.e. SFR). These results are obtained using both the detections and upper limits. Using only the detections, the trend in the relation disappears at 15 $\mu$m while it remains the same at 7 $\mu$m.
The above results are confirmed using the H$\alpha$ line luminosity (Figure \[fig2\]), which is a more direct measure of the star formation in galaxies. While there is a small trend on the $L_{7\,\mu{\rm m}}/L_{H\alpha} - L_{H\alpha}$ relation for $L_{H\alpha} > 10^{34.8}$ W, the trend almost disappears for $L_{15\,\mu{\rm m}}/L_{H\alpha} - L_{H\alpha}$ and is entirely absent on the $L_{90\,\mu{\rm m}}/L_{H\alpha} - L_{H\alpha}$ relation. This implies an increase in the sensitivity to star formation from 7 to 15 and 90 $\mu$m wavelengths, in agreement with the results from Figure \[fig1\].
![Ratio of the [*ISO*]{} mid and far-IR luminosities to H$\alpha$ luminosity
---
abstract: 'A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the $i$th bit of an $n$-bit database replicated among two servers (which do not communicate) while not revealing any information about $i$ to either server. In this work we construct a 1-round 2-server PIR with total communication cost $n^{O({\sqrt{\log\log n/\log n}})}$. This improves over the currently known 2-server protocols which require $O(n^{1/3})$ communication and matches the communication cost of known $3$-server PIR schemes. Our improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.'
author:
- 'Zeev Dvir[^1]'
- 'Sivakanth Gopi[^2]'
bibliography:
- 'bibliography.bib'
title: '2-Server PIR with sub-polynomial communication'
---
Introduction
============
Private Information Retrieval (PIR) was first introduced by Chor, Goldreich, Kushilevitz and Sudan [@ChorKGS98]. In a $k$-server PIR scheme, a user can retrieve the $i$th bit $a_i$ of an $n$-bit database $\ba=({a_1,\cdots,a_n})\in{{\{0,1\}}}^n$ replicated among $k$ servers (which do not communicate) while giving no information about $i$ to any server. The goal is to design PIR schemes that minimize the communication cost, which is the worst-case number of bits transferred between the user and the servers in the protocol. The trivial solution, which works even with one server, is to ask a server to send the entire database $\ba$; this has communication cost $n$.
When $k=1$ the trivial solution cannot be improved [@ChorKGS98]. But when $k\ge 2$, the communication cost can be brought down significantly. In [@ChorKGS98], a 2-server PIR scheme with communication cost $O(n^{1/3})$ and a $k$-server PIR scheme with cost ${O\left(k^2\log k \cdot n^{1/k}\right)}$ were presented. The $k$-server PIR schemes were improved further in subsequent papers [@Ambainis97; @BeimelI01; @BeimelIKR02]. In [@BeimelIKR02], a $k$-server PIR scheme with cost $n^{{O\left(\frac{\log\log k}{k\log k}\right)}}$ was obtained. This was the best for a long time, until the breakthrough result of Yekhanin [@Yekhanin08], who gave the first $3$-server scheme with sub-polynomial communication (assuming a number-theoretic conjecture). Later, Efremenko [@Efremenko09] gave an unconditional $k$-server PIR scheme with sub-polynomial cost for $k\ge 3$, which was slightly improved in [@ItohS10] and [@CheeFLWZ13]. These new PIR schemes follow from the constructions of constant-query smooth Locally Decodable Codes (LDCs) of sub-exponential length called Matching Vector Codes (MVCs) [@DvirGY10]. A $k$-query LDC [@KT00] is an error-correcting code which allows the receiver of a corrupted encoding of a message to recover the $i$th bit of the message using only $k$ (random) queries. In a [*smooth*]{} LDC, each query of the reconstruction algorithm is uniformly distributed among the codeword symbols. Given a $k$-query smooth LDC, one can construct a $k$-server PIR scheme by letting each server simulate one of the queries. Despite the advances in $3$-server PIR schemes, the 2-server PIR case is still stuck at $O(n^{1/3})$, since 2-query LDCs provably require exponential-size encoding [@KerenidisW03] (which translates to $\Omega(n)$ communication cost in the corresponding PIR scheme). For more information on the relation between PIR and LDCs and the constructions of sub-exponential LDCs and sub-polynomial-cost PIR schemes with more than 2 servers we refer to the survey [@Yekhanin12].
On the lower bounds side, very little is known. The best known lower bound for the communication cost of a 2-server PIR is $5\log n$ [@WehnerW05], whereas the trivial lower bound is $\log n$. In [@ChorKGS98], a lower bound of $\Omega(n^{1/3})$ is conjectured. In [@RazborovY06], an $\Omega(n^{1/3})$ lower bound was proved for a restricted model of 2-server PIR called bilinear group-based PIR. This model encompasses all the previously known constructions which achieve $O(n^{1/3})$ cost for 2-server PIR. We elaborate more on the relation between this model and our construction after we present our results below.
PIR is extensively studied and there are several variants of PIR in the literature. The most important variant, with cryptographic applications, is called Computationally Private Information Retrieval (CPIR). In CPIR, the privacy guarantee is based on the computational hardness of certain functions, i.e., a computationally bounded server cannot gain any information about the user’s query. In this case, non-trivial schemes exist even in the case of one server under some cryptographic hardness assumptions. For more information on these variants of PIR see [@Gasarch; @Gasarch04; @Lipmaa]. In this paper, we are only concerned with information-theoretic privacy, i.e., even a computationally unbounded server cannot gain any information about the user’s query; this is the strongest form of privacy.
Our Results
-----------
We start with a formal definition of a 2-server PIR scheme. A 2-server PIR scheme involves two servers $\cS_1$ and $\cS_2$ and a user $\cU$. A database $\ba=({a_1,\cdots,a_n})\in {{\{0,1\}}}^n$ is replicated between the servers $\cS_1$ and $\cS_2$. We assume that the servers cannot communicate with each other. The user $\cU$ wants to retrieve the $i$th bit of the database $a_i$ without revealing any information about $i$ to either server. The following definition is from [@ChorKGS98]:
A 2-server PIR protocol is a triplet of algorithms $\cP=(\cQ,\cA,\cR)$. At the beginning, the user $\cU$ obtains a random string $r$. Next $\cU$ invokes $\cQ(i,r)$ to generate a pair of queries $(q_1,q_2)$. $\cU$ sends $q_1$ to $\cS_1$ and $q_2$ to $\cS_2$. Each server $\cS_j$ responds with an answer $ans_j=\cA(j,\ba,q_j)$. Finally, $\cU$ computes its output by applying the recovery algorithm $\cR(ans_1,ans_2,i,r)$. The protocol should satisfy the following conditions:
- For any $n$, $\ba\in {{\{0,1\}}}^n$ and $i\in [n]$, the user outputs the correct value of $a_i$ with probability 1 (where the probability is over the random strings $r$), i.e., $\cR(ans_1,ans_2,i,r)=a_i$
- Each server individually learns no information about $i$, i.e., for any fixed database $\ba$ and for $j=1,2$, the distributions of $q_j(i_1,r)$ and $q_j(i_2,r)$ are identical for all $i_1,i_2\in [n]$ when $r$ is chosen at random.
The communication cost of the protocol is the total number of bits exchanged between the user and the servers in the worst case.
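For concreteness, the classical linear scheme sketched below satisfies this definition with perfect privacy but with $\Theta(n)$ communication, far above the schemes discussed in this paper; it is only a minimal illustration of the definition, not our construction.

```python
import secrets

def query(i, n):
    # User: sample a uniformly random subset S of [n] (as a 0/1 mask), send
    # S to server 1 and S xor {i} to server 2. Each query in isolation is a
    # uniformly random subset, so neither server learns anything about i.
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[i] ^= 1
    return q1, q2

def answer(db, q):
    # Server: XOR of the database bits selected by the query mask.
    return sum(bit & sel for bit, sel in zip(db, q)) % 2

def recover(ans1, ans2):
    # User: the two selected subsets differ exactly in position i,
    # so a_i = ans1 xor ans2.
    return ans1 ^ ans2

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = query(i, len(db))
assert recover(answer(db, q1), answer(db, q2)) == db[i]
```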
$k$-server PIR is defined similarly, with the database replicated among $k$ servers which cannot communicate among themselves. We have only defined 1-round PIR, i.e., there is only one round of interaction between the user and the servers. All known constructions of PIR schemes are 1-round, and it is an interesting open problem to determine whether interaction helps. We now state our main theorem:
\[mainthm\] There exists a 2-server PIR scheme with communication cost $n^{{O\left(\sqrt{\frac{\log\log n}{\log n}}\right)}}$.
The definition of a 2-server PIR scheme can be generalized in an obvious manner to any number of servers. In [@Efremenko09] a $2^r$-server PIR scheme was given with $n^{{O\left(({\log\log n}/{\log n})^{1-1/r}\right)}}$ communication cost for any $r\ge 2$. Using our techniques, we can reduce the number of servers in this scheme by a factor of two. That is, we prove the following stronger form of Theorem
---
abstract: 'The universal three-body dynamics in ultra-cold binary gases confined to one-dimensional motion are studied. The three-body binding energies and the (2 + 1)-scattering lengths are calculated for two identical particles of mass $m$ and a different one of mass $m_1$, whose interactions are described in the low-energy limit by zero-range potentials. The critical values of the mass ratio $m/m_1$, at which the three-body states arise and the (2 + 1)-scattering length equals zero, are determined both for zero and infinite interaction strength $\lambda_1$ of the identical particles. A number of exact results are listed and asymptotic dependences both for $m/m_1 \to \infty$ and $\lambda_1 \to -\infty$ are derived. Combining the numerical and analytical results, a schematic diagram showing the number of the three-body bound states and the sign of the (2 + 1)-scattering length in the plane of the mass ratio and interaction-strength ratio is deduced. The results provide a description of the homogeneous and mixed phases of atoms and molecules in dilute binary quantum gases.'
author:
- 'O. I. Kartavtsev'
- 'A. V. Malykh'
- 'S. A. Sofianos'
bibliography:
- 'onedim.bib'
title: 'Bound states and scattering lengths of three two-component particles with zero-range interactions under one-dimensional confinement'
---
Introduction {#Introduction}
============
The dynamics of few particles confined in low dimensions is of interest in connection with numerous investigations ranging from atoms in ultra-cold gases [@Gorlitz01; @Rychtarik04; @Petrov00; @Mora04; @Mora05; @Yurovsky06; @Rizzi08] to nanostructures [@Johnson04; @Slachmuylders07; @Olendski08]. Experiments with ultra-cold gases in one-dimensional (1D) and quasi-1D traps have recently been performed [@Gorlitz01; @Moritz05; @Sadler06; @Ospelkaus06], amid the rapidly growing interest in the investigation of mixtures of ultra-cold gases [@Karpiuk05; @Shin06; @Chevy06; @Deh08; @Taglieber08; @Capponi08; @Zollner08]. Different aspects of the three-body dynamics in 1D have been analyzed in a number of recent papers, e.g., the bound-state spectrum of a two-component compound in [@Cornean06], low-energy three-body recombination in [@Mehta07], application of the integral equations in [@Mehta05], and variants of the hyperradial expansion in [@Amaya-Tapia98; @Amaya-Tapia04; @Kartavtsev06].
It is necessary to emphasize that exact solutions are known for an arbitrary number of identical particles in 1D with contact interactions [@McGuire64; @Lieb63]; in particular, it was found that the ground-state energy $E_N$ of $N$ attractive particles scales as $E_N/E_{N=2} = N (N^2 - 1)/6$. There is a vast literature in which the exact solution is used to analyze different properties of few- and many-body systems; a few examples of this approach can be found in Refs. [@Li03; @Girardeau07; @Zvonarev07; @Guan07].
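A trivial, purely illustrative check of the quoted scaling:

```python
# Ground-state energy of N attractive 1D particles with contact interactions,
# relative to the two-body energy: E_N / E_2 = N (N^2 - 1) / 6  (McGuire).
for N in range(2, 7):
    print(N, N * (N**2 - 1) // 6)   # -> 1, 4, 10, 20, 35
```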
The main parameters characterizing the multi-component ultracold gases, i.e., the masses and interaction strengths, can be easily tuned within wide ranges in modern experiments, which handle different compounds of ultracold atoms and adjust the two-body scattering lengths to arbitrary values by using the Feshbach-resonance and confinement-resonance techniques [@Olshanii98]. With properly chosen scales, all the properties of the system depend on two dimensionless parameters, viz., the mass ratio and the interaction-strength ratio, the most important characteristics being the bound-state energies and the (2 + 1)-scattering lengths. In particular, knowledge of these characteristics is essential for the description of the concentration dependence and phase transitions in dilute two-component mixtures of ultra-cold gases.
In the present paper, the two-component three-body system consisting of a particle of mass $m_1$ and two identical particles of mass $m$ interacting via contact ($\delta$-function) inter-particle potentials is studied. In the low-energy limit, the contact potential is a good approximation for any short-range interaction and its usage provides a universal, i.e., independent of the potential form, description of the dynamics [@Demkov88; @Wodkiewicz91; @Mehta05; @Kartavtsev99; @Kartavtsev06; @Kartavtsev07]. More specifically, it is assumed that one particle interacts with the other two via an attractive contact interaction of strength $\lambda < 0$, while the sign of the interaction strength $\lambda_1$ for the identical particles is arbitrary. This choice of the parameters is conditioned by the intention to consider a sufficiently rich three-body dynamics, since the three-body bound states exist only if $\lambda < 0$.
Most of the numerical and analytical results can be obtained by solving a system of hyper-radial equations (HREs) [@Macek68]. It is of importance that all the terms in the HREs are derived analytically; the method of derivation and the analytical expressions are similar to those obtained for a number of problems with zero-range interactions [@Kartavtsev99; @Kartavtsev06; @Kartavtsev07]. To describe the dependence of the three-body binding energies and the (2 + 1)-scattering length on the mass ratio and interaction-strength ratio, the two limiting cases $\lambda_1 = 0$ and $\lambda_1 \to \infty$ are considered and the precise critical values of $m/m_1$, for which the three-body bound states arise and the (2 + 1)-scattering length becomes zero, are determined. Combining the numerical calculations, exact analytical results, qualitative considerations, and the deduced asymptotic dependencies, one produces a schematic “phase” diagram, which shows the number of the three-body bound states and the sign of the (2 + 1)-scattering lengths in the plane of the parameters $m/m_1$ and $\lambda_1/|\lambda|$. This sign is important in studying the stability of mixtures containing both atoms and diatomic molecules.
The paper is organized in the following way. In Sect. \[Outline\] the problem is formulated, the relevant notations are introduced, and the method of “surface” functions is described; the analytical solutions, numerical results and asymptotic dependencies are presented and discussed in Sect. \[Results\]; the conclusions are summarized in Sect. \[Conclusion\].
General outline and method {#Outline}
==========================
The Hamiltonian of three particles confined in 1D, interacting through the pairwise contact potentials with strengths $\lambda_i$, reads $$\label{ham}
H = -\sum_{i} \frac{\hbar^2}{2m_i}\frac{\partial^2}{\partial x_i^2}
+ \sum_{i} \lambda_{i}\delta(x_{jk}) \ ,$$ where $x_i$ and $m_i$ are the coordinate and mass of the $i$th particle, $x_{jk} = x_j - x_k$, and $ \{ ijk \} $ is a permutation of $ \{ 123 \} $. In order to study the aforementioned two-component three-body systems, one assumes that particle 1 interacts with the two identical particles 2 and 3 through attractive potentials and denotes for simplicity $m_2 = m_3 = m$ and $\lambda_{2} = \lambda_{3} \equiv \lambda<0$. The corresponding solutions are classified by their parity and are symmetric or antisymmetric under the permutation of the identical particles, depending on whether these particles are bosons or fermions. The even (odd) parity solutions will be denoted by $P = 0$ ($P = 1$).
In the following, the dependence of the three-body bound-state energies and the (2 + 1)-scattering lengths on the two dimensionless parameters $m/m_1$ and $\lambda_1/|\lambda|$ will be investigated. Hereafter, one sets $\hbar = |\lambda| = m = 1$, so that $m \lambda^2/\hbar^2$ and $\hbar^2/(m |\lambda|)$ are the units of energy and length. Furthermore, one denotes by $A$ and $A_1$ the scattering lengths for the collision of the third particle off the bound pair of different and identical particles, respectively. The scattering length is considered at the lowest two-body threshold, which corresponds to the determination of $A$ if $\lambda_1/|\lambda| >
-\sqrt{2/(1 + m/m_1)}$ and of $A_1$ otherwise. With the chosen units, $E_\mathrm{th} = -1/[2(1 + m/m_1)]$ and $E'_\mathrm{th} = -\lambda_1^2/4$ are the two-body thresholds, i.e., the bound-state energies of two different and two identical particles, respectively.
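In these units the two thresholds and the selection of the relevant scattering length can be encoded directly; the following minimal sketch simply restates the formulas above (the sample values are arbitrary):

```python
import math

# Two-body thresholds in the units hbar = |lambda| = m = 1 used in the text:
#   E_th  = -1/[2(1 + m/m1)]   (pair of different particles)
#   E'_th = -lambda1^2/4       (pair of identical particles; requires lambda1 < 0)
def thresholds(mass_ratio, lam1):
    e_th = -1.0 / (2.0 * (1.0 + mass_ratio))
    e_th_prime = -lam1**2 / 4.0 if lam1 < 0 else None  # no bound pair otherwise
    return e_th, e_th_prime

def lowest_threshold_is_mixed_pair(mass_ratio, lam1):
    # A is defined at the lowest threshold, which is the mixed pair
    # iff lambda1/|lambda| > -sqrt(2/(1 + m/m1)); otherwise A_1 is relevant.
    return lam1 > -math.sqrt(2.0 / (1.0 + mass_ratio))

print(thresholds(1.0, -0.5), lowest_threshold_is_mixed_pair(1.0, -0.5))
# -> (-0.25, -0.0625) True : the mixed pair lies lower, so A is determined.
```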
The binding energy and the scattering length are monotonic functions of the
---
abstract: 'This paper introduces an application of Reflexive Game Theory to multistage decision making processes. The idea behind this is that each decision making session has certain parameters, like “when the session is taking place", “who are the group members to make the decision", “how group members influence each other", etc. This study illustrates a consecutive, or sequential, decision making process, which consists of two stages. During stage 1, decisions about the parameters of the ultimate decision making are made. Stage 2 is then the implementation of the ultimate decision making itself. During stage 1 there can be multiple decision sessions; in such a case it takes more than two sessions to make the ultimate (final) decision. The overall process of ultimate decision making therefore becomes a multistage decision making process consisting of consecutive decision making sessions.'
author:
- Sergey Tarasenko
title: Modeling multistage decision processes with Reflexive Game Theory
---
Introduction
============
The Reflexive Game Theory (RGT) [@lef2; @lef5] makes it possible to predict the choices of subjects in a group. To do so, information about the group structure and the mutual influences between subjects is needed. The formulation and development of RGT was possible due to fundamental psychological research in the field of reflexion, which had been conducted by Vladimir Lefebvre [@lef4].
The group structure means the set of pair-wise relationships between subjects in the group. These relationships can be either of alliance or conflict type. The mutual influences are formulated in terms of elements of a Boolean algebra, which is built upon the set of universal actions. The elements of the Boolean algebra represent all possible choices. The mutual influences are presented in the form of an Influence matrix.
In general, RGT inference can be presented as a sequence of the following steps [@lef2; @lef5]:
1\) formalize choices in terms of elements of Boolean algebra of alternatives;
2\) presentation of a group in the form of a fully connected relationship graph, where solid-line and dashed-line ribs (edges) represent alliance and conflict relationships, respectively;
3\) if relationship graph is decomposable, then it is represented in the form of polynomial: alliance and conflict are denoted by conjunction ($\cdot$) and disjunction (+) operations;
4\) diagonal form transformation (build diagonal form on the basis of the polynomial and fold this diagonal form);
5\) deduce the decision equations;
6\) input influence values into the decision equations for each subject.
Let us call the process of decision making in a group a session. A single RGT inference therefore models a single session.
Model of two-stage decision making: formation of points of view
===============================================================
This study is dedicated to the matter of setting mutual influences in a group by means of *reflexive control* [@lef1]. The influences which subjects make on each other can be considered as the result of a decision making session previous to the *ultimate decision making (final session)*. We will call the influences obtained as a result of a previous session(s) the *set-up influences*. The set-up influences are an intermediate result of the overall decision making process. The term set-up influences refers only to the influences which are used during the final session.
Consequently, the overall decision making process can be segregated into two stages. Let the result of such a discussion (decision making) be a particular decision regarding the matter under consideration. We assume the actual decision making regarding the matter (final session - Stage 2) is preceded by a preliminary session (Stage 1). Stage 1 is a decision making about the influences (points of view) which each subject will support during the final session. We call such an overall decision making process a *two-stage decision making process*. The general schema of two-stage decision making is presented in Fig.\[twostage\].
![The general schema of the two-stage decision making.[]{data-label="twostage"}](twostages.png){height="2cm"}
To illustrate such a model we consider a simple example.
*Example 1.* Let the director of some company have a meeting with his advisors. The goal of this meeting is to make a decision about the marketing policy for the next half a year. The background analysis and predictions of experts suggest three distinct strategies: an aggressive (action $\alpha$), a moderate (action $\beta$) and a soft (action $\gamma$) strategy. The points of view of the director and his advisors are formulated in terms of the Boolean algebra of alternatives. The term point of view implies that a subject makes the same influences on the others. The director supports the moderate strategy ($\{\beta\}$), the 1st and the 2nd advisors support the aggressive strategy ($\{\alpha\}$), and the 3rd advisor defends the idea of the soft strategy ($\{\gamma\}$). The matrix of initial influences is presented in Table \[infMtx\].
[|c|c|c|c|c|]{} &a&b&c&d\
a&a&$\{\alpha\}$&$\{\alpha\}$&$\{\alpha\}$\
b&$\{\alpha\}$&b&$\{\alpha\}$&$\{\alpha\}$\
c&$\{\beta\}$&$\{\beta\}$&c&$\{\beta\}$\
d&$\{\gamma\}$&$\{\gamma\}$&$\{\gamma\}$&d\
\[infMtx\]
Let the director be in conflict with all his advisors, while the advisors are in alliance with each other. Variable $c$ represents the director; variables $a$, $b$ and $d$ correspond to the 1st, the 2nd and the 3rd advisor, respectively.
The relationship graph is presented in Fig.\[polyn1\]. Polynomial $abd+c$ corresponds to this graph.
![Relationship graph for a director-advisors group.[]{data-label="polyn1"}](polyn1.png){height="2cm"}
After diagonal form transformation the polynomial does not change:
$$\begin{array}{*{20}{c}}
{} & {} & {[a][b][d]} & {} & {} & {} & {} & {} \\
{} & {[abd] } & {} &{+[c]} & {} & {} & {1 + [c]} & {} \\
{[abd+c]} & {} & {} & {} & = & {[abd+c]} & {} & { = abd+c.} \\
\end{array}$$
Then we obtain four decision equations and their solutions (decision intervals) (Table \[decInt\]).
[|c|c|c|]{} &[Decision Equations]{}&[Decision Intervals]{}\
a&$a=(bd+c)a+c\overline{a}$&$(bd+c)\supseteq a \supseteq c$\
b&$b=(ad+c)b+c\overline{b}$&$(ad+c)\supseteq b \supseteq c$\
c&$c=c+abd\overline{c}$&$1\supseteq c \supseteq abd$\
d&$d=(ab+c)d+c\overline{d}$&$(ab+c)\supseteq d \supseteq c$\
\[decInt\]
Next we calculate the decision intervals by using information from the influence matrix:
subject a: $(bd+c)\supseteq a \supseteq c$ $\Rightarrow$ $(\{\alpha\} \{\gamma\}+\{\beta\})\supseteq a \supseteq \{\beta\}$ $\Rightarrow$ $a=\{\beta\}$;
subject b: $(ad+c)\supseteq b \supseteq c$ $\Rightarrow$ $(\{\alpha\} \{\gamma\}+\{\beta\})\supseteq b \supseteq \{\beta\}$ $\Rightarrow$ $b=\{\beta\}$;
subject c: $1\supseteq c \supseteq abd$$\Rightarrow$ $1\supseteq c \supseteq \{\alpha\}\{\alpha\}\{\gamma\} $ $\Rightarrow$ $1\supseteq c \supseteq 0 $ $\Rightarrow$ $c = c$;
subject d: $(ab+c)\supseteq d \supseteq c$ $\Rightarrow$ $(\{\alpha\}\{\alpha\}+\{\beta\})\supseteq d \supseteq \{\beta\}$ $\Rightarrow$ $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$.
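These interval computations can be mechanized by encoding alternatives as sets, with the conjunction of influences as set intersection and the disjunction (+) as set union; the following minimal sketch (the string labels are merely a convenient encoding) reproduces the reductions above:

```python
# Set-up influences from the influence matrix of Example 1.
ALPHA, BETA, GAMMA = {'alpha'}, {'beta'}, {'gamma'}
a, b, d = ALPHA, ALPHA, GAMMA   # influences of the 1st, 2nd and 3rd advisors
c = BETA                        # influence of the director

# subject a: (bd + c) >= a >= c
print((b & d) | c, c)           # {'beta'} {'beta'}      -> a = {beta}
# subject c: 1 >= c >= abd
print(a & b & d)                # set() (i.e. 0)         -> c is free
# subject d: (ab + c) >= d >= c
print((a & b) | c, c)           # {'alpha','beta'} >= d >= {'beta'}
```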
Therefore, after the preliminary session, the points of view of the subjects have changed. The director has obtained freedom of choice, since he can choose any alternative: $1 \supseteq c \supseteq 0$ $\Rightarrow$ $c = c$. At the same time, the 1st and the 2nd advisors support the moderate strategy ($a$ = $b$ = $\{\beta\}$). Finally, the 3rd advisor can now choose between the points of view $\{\alpha,\beta\}$ (aggressive or moderate strategy) and $\{\beta\}$ (moderate strategy): $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$.
Thus, the points of view of the 1st and the 2nd advisors are strictly determined, while the point of view of the 3rd advisor is probabilistic.
Next we calculate the choice of each subject during the final session, considering the influences resulting from the preliminary session. The matrix of set-up influences is presented in Table \[setupInf\]. The intervals in the matrix imply that a subject can choose either of the alternatives from the given interval as a point of
---
abstract: 'We present an analysis of the photometry and spectroscopy of the host galaxy of [*Swift*]{}-detected GRB080517. From our optical spectroscopy, we identify a redshift of $z=0.089\pm0.003$, based on strong emission lines, making this a rare example of a very local, low luminosity, long gamma ray burst. The galaxy is detected in the radio with a flux density of $S_{4.5\,\rm GHz}=0.22\pm0.04$ mJy - one of relatively few known GRB hosts with a securely measured radio flux. Both the optical emission lines and a strong detection at 22 $\mu$m suggest that the host galaxy is forming stars rapidly, with an inferred star formation rate of $\sim16$ M$_\odot$ yr$^{-1}$ and a high dust obscuration (E$(B-V)>1$, based on sight-lines to the nebular emission regions). The presence of a companion galaxy within a projected distance of 25 kpc, and almost identical in redshift, suggests that star formation may have been triggered by galaxy-galaxy interaction. However, fitting of the remarkably flat spectral energy distribution from the ultraviolet through to the infrared suggests that an older, 500 Myr post-starburst stellar population is present along with the ongoing star formation. We conclude that the host galaxy of GRB080517 is a valuable addition to the still very small sample of well-studied local gamma-ray burst hosts.'
author:
- |
Elizabeth R. Stanway$^{1}$[^1], Andrew J. Levan$^{1}$, Nial Tanvir$^{2}$, Klaas Wiersema$^{2}$, Alexander van der Horst$^3$, Carole G. Mundell$^4$, Cristiano Guidorzi$^5$\
$^{1}$Department of Physics, University of Warwick, Gibbet Hill Road, Coventry, CV4 7AL, UK\
$^{2}$Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH, UK\
$^{3}$Anton Pannekoek Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands\
$^{4}$Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF, UK\
$^{5}$Department of Physics and Earth Sciences, University of Ferrara, via Saragat 1, I-44122, Ferrara, Italy\
date: 'Accepted 2014 October 28. Received 2014 October 23; in original form 2014 September 19'
title: 'GRB 080517: A local, low luminosity GRB in a dusty galaxy at z=0.09'
---
\[firstpage\]
gamma-ray burst:individual:080517 – galaxies:star formation – galaxies:structure – galaxies: distances and redshifts
Introduction
============
Long Gamma Ray Bursts (GRBs) are intense, relativistically beamed bursts of radiation, likely emitted during the collapse of a massive star at the end of its life [@2006ApJ...637..914W]. As well as constraining the end stages of evolution for massive stars, they also mark out star formation in the distant Universe, in galaxies often too small to observe directly through their stellar emission or molecular gas [e.g. @2012ApJ...754...46T]. However, extrapolating from the detection of a single stellar event (the burst) to the wider environment, and to the contribution of GRB hosts to the volume-averaged cosmic star formation rate [e.g. @2012ApJ...744...95R], is challenging. Doing so relies on a good understanding of the stellar populations and physical conditions that give rise to GRB events.
This understanding has improved significantly over recent years. A number of studies now constrain the stellar properties of typical GRB hosts [e.g. @2009ApJ...691..182S; @2010MNRAS.tmp..479S; @2012ApJ...756..187H], their radio properties [e.g. @2012ApJ...755...85M; @2010MNRAS.409L..74S; @radiopaper; @2014arXiv1407.4456P] and their behaviour in the far-infrared [@2014arXiv1402.4006H; @2014arXiv1406.2599S]. However, these studies have also demonstrated diversity within the population. GRB host galaxies range from low-mass, metal-poor galaxies forming stars at a moderate rate [e.g. @2010AJ....140.1557L], to more massive, moderately dusty but not extreme (SMG-like) starbursts such as the ‘dark’ burst population [@2013ApJ...778..128P; @2013ApJ...778..172P].
The challenge of understanding these sources has been complicated by the high redshifts at which they typically occur. The long GRB redshift distribution peaks beyond $z=1$ [@2012ApJ...752...62J], tracing both the rise in the volume-averaged star formation rate and the decrease in typical metallicity - which may favour the formation of GRB progenitors [see e.g. @2012ApJ...744...95R and references therein]; local examples which can be studied in detail are rare. Of the long duration ($>$2s) bursts in the official [*Swift Space Telescope*]{} GRB catalogue table[^2], only three are listed as having $z<0.1$. A few other (pre-[*Swift*]{}) bursts are also known at low redshifts [e.g. GRB980425 at $z=0.009$ @1998Natur.395..670G], but these were detected by instruments with quite different systematics and tend to be unusual systems. One of the most recent studies, which exploited ALMA data, identified the host of GRB980425 as a dwarf system with low dust content and suggested that this is typical of GRB hosts as a whole. However, each low redshift host investigated in detail has informed our understanding of the population as a whole and proven to differ from the others [e.g. @2011ApJ...741...58W; @2011MNRAS.411.2792S]. Low redshift bursts include several which are sub-luminous, such as GRBs 980425 and 031203 [@1998Natur.395..670G; @2004ApJ...609L...5M; @2004Natur.430..648S], and others, such as GRBs 060505 and 060614, that were long bursts without associated supernovae [@2006Natur.444.1047F; @2006Natur.444.1050D]. Cross-correlation with local galaxy surveys (at $z<0.037$) has suggested that some low redshift GRBs in the existing burst catalogues have yet to be identified as such [@2007MNRAS.382L..21C], and hence opportunities to study their properties in detail have been missed. Given the very small sample, and the variation within it, it is important that we continue to follow up the hosts of low redshift bursts and do not allow a few examples to skew our perception of the population.
We have acquired new evidence suggesting that a previously overlooked burst, GRB080517, and its host galaxy might prove a valuable addition to the study of local gamma ray bursts. The WISE all-sky survey [@2010AJ....140.1868W], publicly released in 2012, maps the sky at 3-22$\mu$m. While the observations are relatively shallow and most GRB hosts remain undetected or confused, we have identified the host of GRB080517 as anomalous. Not only is an infrared-bright source clearly detected coincident with the burst location, but it has a sharply rising spectrum and is extremely luminous in the 22$\mu$m W4 band, suggesting that it is a rather dusty galaxy, likely at low redshift.
In this paper, we present new photometry and spectroscopy of the host of GRB080517, identifying its redshift as $z=0.09$. Compiling archival data, we consider the spectral energy distribution (SED) of the host galaxy, and also its larger scale environment, evaluating the source as a low redshift example of a dusty GRB host galaxy. In section \[sec:initial\] we discuss the initial identification of this GRB and its properties. In section \[sec:data\] we present new data on the host galaxy of this source. We present our optical photometry and spectroscopy of the GRB host and a neighbouring companion in section \[sec:spec\] and report a detection of the GRB host at radio frequencies in section \[sec:radio\]. In section \[sec:reassess\] we reassess the initial burst properties and its early evolution in the light of our new redshift information. In section \[sec:sed\] we compile new and archival photometry to secure an analysis of the spectral energy distribution, and in section \[sec:sfr\] report constraints on the host galaxy’s star formation rate. In section \[sec:disc\] we discuss the properties of the host galaxy in the context of other galaxy populations before presenting our conclusions in section \[sec:conc\].
Throughout, magnitudes are presented in the AB system [@1983ApJ...266..713O] and fluxes in $\mu$Jy unless otherwise specified. Where necessary, we use a standard cosmology with $H_0$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$ and $\Omega_\Lambda$=0.7.
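For reference, the luminosity distance implied by this cosmology at $z=0.089$, and the corresponding 4.5 GHz luminosity from the flux density quoted in the abstract, can be sketched as follows (the small K-correction is ignored in this illustration):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import numpy as np

# Cosmology adopted in the text: H0 = 70 km/s/Mpc, Om = 0.3, flat.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z = 0.089
d_l = cosmo.luminosity_distance(z)                 # ~ 400 Mpc
s_nu = 0.22 * u.mJy                                # S_4.5GHz from the abstract
p_nu = (4 * np.pi * d_l**2 * s_nu).to(u.W / u.Hz)  # monochromatic radio power
print(d_l, p_nu)                                   # well below 10^23 W/Hz
```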
Initial Observations {#sec:initial}
====================
GRB080517 triggered the [*Swift*]{} Burst Alert Telescope (BAT) at 21:22:51 UT on 17th May 2008 as a flare with a measured T
---
abstract: 'This paper examines the problem of rate allocation for multicasting over slow Rayleigh fading channels using network coding. In the proposed model, the network is treated as a collection of Rayleigh fading multiple access channels. In this model, a rate allocation scheme that is based solely on the statistics of the channels is presented. The rate allocation scheme is aimed at minimizing the outage probability. An upper bound is presented for the probability of outage in the fading multiple access channel. A suboptimal solution based on this bound is given. A distributed primal-dual gradient algorithm is derived to solve the rate allocation problem.'
author:
- Avi Zanko
- Amir Leshem
- Ephraim Zehavi
title: Topology management and outage optimization for multicasting over slowly fading multiple access networks
---
Network coding for multicasting, wireless networks, outage capacity, Rayleigh fading, multiple access channels
Introduction {#sec:introduction}
============
Network coding extends the functionality of intermediate nodes from storing/forwarding packets to performing algebraic operations on received data. If network coding is permitted, the multicast capacity of a network with a single source has been shown to be equal to the minimal min-cut between the source and each of its destinations [@Ahlswede_Network_2000]. In the past decade, the concept of combining data by network coding has been extensively developed, e.g. in [@Li_Linear_2003; @Jaggi_Low_2003; @Barbero_Heuristic_2006], and it is well known that in order to achieve the multicast rate, a linear combination over a finite field suffices if the field size is larger than the number of destinations. Moreover, centralized linear network coding can be designed in polynomial time [@Jaggi_Polynomial_2005]. Decentralized linear network coding can be implemented using a random code approach [@Ho_A_random_2006]. A comprehensive survey of network coding can be found in, e.g., [@Fragouli_Network_2007; @Ho_Network_2008].
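The canonical illustration is the butterfly network, where a single XOR at the bottleneck node lets two sinks each recover both source bits, achieving the min-cut bound of 2; a minimal sketch (the routing of the side branches is implicit in the comments):

```python
# Multicast rate 2 on the classical butterfly network with XOR coding:
# the source emits bits b1, b2 on disjoint branches; the bottleneck edge
# carries b1 XOR b2, and each sink decodes both bits from its two inputs.
def butterfly(b1, b2):
    coded = b1 ^ b2             # network coding at the bottleneck node
    sink1 = (b1, coded ^ b1)    # sink 1 sees b1 directly plus the coded bit
    sink2 = (coded ^ b2, b2)    # sink 2 sees b2 directly plus the coded bit
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
```

With plain routing the bottleneck edge could carry only one of the two bits at a time, so one sink would fall short of rate 2; the single algebraic operation is what closes the gap to the min-cut.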
Many network resource allocation problems can be formulated as a constrained maximization of a certain utility function. The problem of network utility maximization has been explored extensively in the past few decades [@Palomar_A_Tutorial_2006; @Chiang_Layering_2007]. We briefly introduce related work on topology management and rate allocation for network coding in multicast over wireless networks. The problem of finding a minimum-cost scheme (while maintaining a certain multicast rate) in coded networks was studied by Lun et al. [@Lun_Network_2004; @Lun_Minimum_2006]. They showed that there is no loss of optimality when the problem is decoupled into: finding the optimal coding rate allocation vector (also known as subgraph selection) and designing the code that is applied over the optimal subgraph. Moreover, in many cases, optimal subgraphs can be found in polynomial time. If in addition the cost function is also convex and separable, the solution can be found in a decentralized manner, where message passing is required solely between directly connected nodes. This decentralized solution, if coupled with random network coding (e.g. [@Lun_On_2008; @Chou_Practical_2003]) provides a fully distributed scheme for multicast in coded wireline networks. This has prompted many researchers to develop different algorithms that find minimum-cost rate allocation solutions distributively; e.g. [@Cui_Optimal_2004; @Bhadra_Min_2006; @Wu_Distributed_2006; @Xi_Distributed_2010].
When addressing the problem of rate allocation for multicast with network coding in wireless networks, Lun et al. [@Lun_Minimum_2006; @Lun_Achieving_2005] tackled the problem through the so-called *wireless multicast advantage* phenomenon. This phenomenon simply comes down to the fact that when interference is avoided in the network (e.g., by avoiding simultaneous transmissions), communication between any two nodes is overheard by their nearby nodes due to the broadcast nature of the wireless medium. In [@Lun_Achieving_2005], the wireless multicast advantage was used to reduce the transmission energy of the multicast scheme (since when two nodes communicate, some of their nearby nodes get the packet for “free”). Therefore, their wireline minimum-cost optimization problem was updated accordingly [see @Lun_Achieving_2005 eq.(1) and (40)]. In [@Xi_Distributed_2010] interference is allowed but is assumed to be limited. Joint optimal power control, network coding and congestion control is presented for the case of very high SINR (signal to noise plus interference ratio). This interference assumption implies that there are some limitations on simultaneous transmissions, and this is taken into account in the optimization problem. In [@Yuan_A_Cross_2006] the problem of joint power control, network coding and rate allocation was studied. They showed that the throughput maximization problem can be decomposed into two parts: subgraph selection at the network layer and power control at the physical layer. A primal-dual algorithm was given that converges to the optimal solution provided that the capacity region is convex with respect to the power control variables (i.e., when interference is ignored). On the other hand, to take interference into account, a game-theoretic method was derived to approximately characterize the capacity region.
In wireless networks, it is reasonable to assume that there is no simultaneous packet transmission or reception by any transceiver. These properties of the wireless medium introduce new cross-layer interactions that may not exist in wired networks. Sagduyu et al. [@Sagduyu_On_Joint_2007] analyzed and designed wireless network codes in conjunction with conflict-free transmission schedules in wireless ad hoc networks. They studied the cross-layer design possibilities of joint medium access control and network coding. It was shown that when certain objectives such as throughput or delay efficiency are considered, network codes must be designed jointly with medium access control. The joint design of medium access control and network coding in [@Sagduyu_On_Joint_2007] was formulated as a nonlinear optimization problem. In [@Niati_Throughput_2012] the work reported in [@Sagduyu_On_Joint_2007] was extended and a linear formulation was derived.
However, there are certain other considerations that must be taken into account in the search for a rate allocation vector in wireless networks. The wireless medium varies over time and suffers from fading channels due to multipath or shadowing, for example. In [@Ozarow_Information_1994] the block fading model was introduced. In this model the channel gain is assumed to be constant over each coherence time interval. Typically, fading models are classified as fast fading or slow fading. In fast fading, the coherence time of the channel is small relative to a code block length and as a consequence the channel is ergodic with a well-defined Shannon capacity (also known as the ergodic capacity [@Goldsmith_Capacity_1997]). In slow fading, the code block length and the coherence time of the channel are of the same order. Hence, the channel is not ergodic and the Shannon capacity is not usually a good measure of performance. The notion of outage capacity was introduced in [@Ozarow_Information_1994] for transmitting over fading channels when the channel gain is available only at the receiver. In this approach, transmission takes place at a certain rate and tolerates some information loss when an outage event occurs. An outage event occurs whenever the transmitted rate is not supported by the instantaneous channel gain; i.e., when the channel gain is too low for successful decoding of the transmitted message. It is assumed that outage events occur with sufficiently low probability that reliable communication is available most of the time. A different strategy to deal with slow fading is the broadcast channel approach [@Shamai_A_broadcast_1997]. In this approach different states of the channel are treated as channels toward different receivers (a receiver for each state). Hence, the same strategy as used for sending common and private messages to different users on the Gaussian broadcast channel can be applied here. When the channel gain is also available at the encoder, the encoder can adapt the power and the transmission rate as a function of the instantaneous state of the channel and thus can achieve a higher rate on average. Moreover, as regards the outage capacity, the transmitter can use power control to conserve power by not transmitting at all during designated outage periods.
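For orientation, in the simplest point-to-point analogue of our setting, a slow Rayleigh fading link with unit-mean exponential power gain, the outage probability has the well-known closed form $P_{out}=1-\exp\left(-(2^R-1)/\mathrm{SNR}\right)$; the sketch below simply evaluates it (the SNR and rates are arbitrary sample values):

```python
import numpy as np

# Point-to-point slow Rayleigh fading outage probability:
# with |h|^2 ~ Exp(1), an outage occurs when log2(1 + |h|^2 SNR) < R, so
# P_out = P(|h|^2 < (2^R - 1)/SNR) = 1 - exp(-(2^R - 1)/SNR).
def p_out(rate_bits, snr_linear):
    return 1.0 - np.exp(-(2.0**rate_bits - 1.0) / snr_linear)

snr = 10.0**(20.0 / 10.0)     # 20 dB
for R in (1.0, 2.0, 4.0):
    print(R, p_out(R, snr))   # outage grows steeply with the attempted rate
```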
When dealing with the outage capacity of a fading MAC, the common outage has a definition similar to the outage event in the point-to-point case. A common outage event is declared whenever we transmit with rates that are not supported by the instantaneous channel gains. If the channel gains are available at both the decoder and the encoders, additional notions of capacities for the fading MAC need to be taken into account. The throughput capacity region for the Gaussian fading MAC was introduced in [@Tse_Multiaccess_1998]. In a nutshell, this is the Shannon capacity region where the codewords can be chosen as a function of the realization of the fading with arbitrarily long coding delays. However, as for the point-to-point case, this approach is not realistic in slow fading cases since it requires a very long delay to average out the fading effect. [@Hanly_Multiaccess_1998] derived the delay-limited capacity for the Gaussian fading MAC (also known as the zero-outage capacity). In the delay-limited capacity, unlike the throughput capacity, the chosen coding delay has to work uniformly for all fading processes with a given stationary distribution. However, the delay-limited capacity is somewhat pessimistic due to the demand to maintain a constant rate under any fading condition. The outage capacity region and the optimal power allocation for a fading MAC were described in
---
abstract: 'We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.'
author:
- |
David P. Helmbold\
Department of Computer Science\
University of California, Santa Cruz\
Santa Cruz, CA 95064, USA\
`dph@soe.ucsc.edu`\
- |
Philip M. Long\
Google\
`plong@google.com`\
bibliography:
- 'general.bib'
title: |
**Surprising properties of dropout\
in deep networks\
**
---
Properties of the dropout penalty {#s:dropout.penalty}
=================================
Acknowledgments {#acknowledgments .unnumbered}
===============
We are very grateful to Peter Bartlett, Seshadhri Comandur, and anonymous reviewers for valuable communications.
---
abstract: '[Considering the strong field approximation, we compute the hard thermal loop pressure of hot and dense deconfined QCD matter at finite temperature and chemical potential in the lowest Landau level at one-loop order. In the presence of a strong magnetic field we consider the anisotropic pressure, [*[i.e.]{}*]{}, the longitudinal and transverse pressures parallel and perpendicular to the magnetic field direction. As a first effort we compute and discuss the anisotropic quark number susceptibility of deconfined QCD matter in the lowest Landau level. The longitudinal quark number susceptibility is found to increase with temperature, whereas the transverse one decreases with temperature. We also compute the quark number susceptibility in the weak field approximation. The thermomagnetic correction is very marginal in the weak field approximation. ]{}'
author:
- Bithika Karmakar
- Najmul Haque
- Munshi G Mustafa
bibliography:
- 'ref.bib'
title: 'Second-order quark number susceptibility of deconfined QCD matter in presence of magnetic field'
---
Introduction
============
Fluctuations of conserved quantum numbers like the baryon number, electric charge and strangeness have been proposed as probes of the hot and dense matter created in high energy heavy-ion collisions. However, if one collects all the charged particles in a heavy-ion collision then the net charge will be conserved and there will be no fluctuation. But not all the particles can be collected by any detector [@2000PhRvL..85.2076J]. One should consider a grand canonical ensemble for the case of a real detector. An isolated system does not fluctuate because it is in the thermodynamic limit. But if we consider a portion of a system which is small enough that the rest of the system can be treated as a bath, and large enough that quantum fluctuations can be ignored, then one can calculate the fluctuation of conserved quantities like the baryon number using the grand canonical ensemble [@Asakawa:2000wh]. These fluctuations can be measured experimentally [@2000PhRvL..85.2076J; @Asakawa:2000wh; @Koch:2001zn]. There are several lattice calculations of the fluctuations and correlations of the conserved quantities [@PhysRevD.92.114505; @PhysRevLett.111.062005; @BORSANYI2013270c; @Ding:2015fca; @PhysRevD.73.014004]. The fluctuations of the conserved quantum numbers can be used to determine the degrees of freedom of the system [@Asakawa:2000wh]. Second- and fourth-order quark number susceptibilities in a thermal medium have been calculated using the Hard Thermal Loop (HTL) approximation [@Haque:2014rua; @Haque:2013sja; @Haque:2013qta; @Chakraborty:2003uw; @Chakraborty:2001kx; @BLAIZOT2001143] and pQCD [@Vuorinen:2002ue; @Toimela:1984xy; @PhysRevD.68.054017]. Ref. [@Haque:2018eph] calculates the second-order quark number susceptibility (QNS) considering different quark masses for the $u$, $d$ and $s$ quarks.
On the other hand, recent findings show that a magnetic field of the order of $10^{18}$ Gauss can be created at the center of the fireball by the charged spectator particles in non-central heavy-ion collisions [@SKOKOV_2009; @Kharzeev_2008]. The time varying magnetic field is created in a direction perpendicular to the reaction plane [@Shovkovy:2012zn; @DElia:2012ems; @Fukushima:2012vr; @Mueller:2014tea; @Miransky:2015ava] and its strength depends on the impact parameter. The strength of the magnetic field decreases after a few fm$/c$ of the collision [@SKOKOV_2009]. Several activities are under way to study the properties of strongly interacting matter in the presence of a magnetic field. Effects like magnetic catalysis [@Shovkovy:2012zn; @Gusynin:1994xp; @Gusynin:1995gt], inverse magnetic catalysis [@Bali:2011qj; @AYALA201699; @PhysRevD.90.036001; @PhysRevD.91.016002] and the chiral magnetic effect [@Fukushima:2008xe; @Kharzeev:2013ffa] in non-central heavy-ion collisions have been reported. Furthermore, various thermodynamic quantities [@Karmakar:2019tdp; @Bandyopadhyay:2017cle], transport coefficients [@Kurian:2018dbn; @Kurian:2017yxj], the dilepton production rate [@Das:2019nzv; @Bandyopadhyay:2016fyd; @Bandyopadhyay_2017; @Chyi_2000; @PhysRevC.88.024910; @Ghosh:2018xhh], the photon production rate [@PhysRevLett.110.192301; @PhysRevLett.109.202303] and the damping of photons [@Ghosh:2019kmf] in magnetised QCD matter have been obtained.
Here, for simplicity, we consider strong ($gT<T<\sqrt{|eB|}$) and weak ($\sqrt{|q_fB|}<m_{th}\sim gT<T $) magnetic fields with two different scale hierarchies. As a first effort, in this article we use the one-loop HTL pressure of quarks and gluons at finite quark chemical potential in the presence of a magnetic field to calculate the second-order QNS of deconfined QCD matter in these two scale hierarchies.
The paper is organized as follows: in Sec. \[setup\] we present the setup to calculate the second-order QNS. In Subsec. \[quark\_f\], the one-loop HTL free-energy of quarks in the presence of a strong magnetic field at finite temperature and chemical potential is calculated. The gauge boson free-energy in the presence of a strong magnetic field is obtained in Subsec. \[gauge\_boson\]. We discuss in Subsec. \[pressure\] the anisotropic pressure and the second-order QNS of QCD matter in the strong field approximation. Considering the one-loop HTL pressure of the quark-gluon plasma in the weak field approximation [@Bandyopadhyay:2017cle], we also calculate and discuss the second-order QNS in the presence of a weak magnetic field in Sec. \[wfa\]. We conclude in Sec. \[conclusion\].
Setup
=====
Here we consider the deconfined QCD matter as a grand canonical ensemble. The free-energy of the system can be written as $$F(T,V,\mu) = u - Ts - \mu n,$$ where $\mu$ is the quark chemical potential, $n$ is the number density and $s$ is the entropy density. The pressure of the system is given as $$P = -F.$$ However, we consider the system to be anisotropic in the presence of a strong magnetic field, and the free-energy of the system is defined in Eq. .
The second-order QNS is defined as $$\chi = -\left.\frac{\partial^2 F}{\partial \mu^2}\right|_{\mu=0} = \left.\frac{\partial^2 P}{\partial \mu^2}\right|_{\mu=0} = \left.\frac{\partial n}{\partial \mu}\right|_{\mu=0}, \label{chi_def}$$ which measures the variance, i.e. the fluctuation, of the net quark number. One can find the covariance of two conserved quantities when the quark flavors have different chemical potentials. Alternatively, one can work in another basis adapted to the system, *e.g.*, net baryon number $\mathcal B$, net charge $\mathcal Q$ and strangeness number $\mathcal S$, or $\mathcal B$, $\mathcal Q$ and the third component of isospin $\mathcal I_3$. In our case we take the strangeness and charge chemical potentials to be zero. Moreover, we have considered the same chemical potential for all flavors, which results in vanishing off-diagonal quark number susceptibilities. Thus the net second-order baryon number susceptibility is related to the second-order QNS as $\chi_B=\frac{1}{3}\chi$.
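As a quick numerical illustration of Eq. \[chi\_def\] (a minimal sketch, not the HTL calculation carried out in this paper), one can recover the free-limit susceptibility $\chi/T^2 = N_c/3$ per massless flavor by differentiating the standard ideal-gas pressure with a finite difference:

```python
import numpy as np

NC = 3.0  # number of colors

def pressure_free_quark(T, mu):
    # Ideal massless quark gas pressure for a single flavor (textbook
    # free-limit formula, used here only to illustrate the definition).
    return NC * (7*np.pi**2/180*T**4 + mu**2*T**2/6 + mu**4/(12*np.pi**2))

def chi2(T, h=1e-4):
    # Second-order QNS via a central finite difference at mu = 0.
    return (pressure_free_quark(T, h) - 2*pressure_free_quark(T, 0.0)
            + pressure_free_quark(T, -h)) / h**2

T = 0.3  # GeV
print(chi2(T) / T**2)  # ~1.0, the free (Stefan-Boltzmann) limit N_c/3
```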
The strength of the magnetic field produced in non-central heavy-ion collisions can be up to $(10-20)m_\pi^2$ at the time of collision [@Bzdak:2011yy]. However, it decreases very fast, being inversely proportional to the square of the time [@PhysRevLett.110.192301; @McLerran:2013hla]. If one considers the finite electric conductivity of the medium, however, the magnetic field strength does not die out as fast [@Tuchin:2013bda; @Tuchin:2012mf; @Tuchin:2013ie]. We consider the two cases of strong and weak magnetic fields in this article.
Strong magnetic field {#sfa}
=====================
In this section we consider the strong field scale hierarchy $gT < T < \sqrt{eB}$. In the presence of a magnetic field, the energy of a charged fermion becomes $E_n=\sqrt{k_3^2+m_f^2+2n q_fB}$, where $k_3$ is the momentum of the fermion along the magnetic field direction, $m_f$ is the mass of the fermion and the Landau level $n$ can vary from 0 to $\infty$. The transverse momentum of the fermion is quantised. It can be shown that at very high magnetic fields, the contribution from all the Landau levels except the lowest one can be ignored [@Bandyopadhyay:2016fyd]. Consequently, the dynamics becomes $(1+1)$ dimensional when one considers only the lowest Landau level (LLL). The general structures of the quark and gluon self-energies in the presence of a magnetic field have been formulated in Ref. [@Karmakar:2019tdp] at finite temperature but for zero quark chemical potential. Here
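To see numerically why the LLL dominates in this hierarchy, consider the Boltzmann suppression of the higher levels (a minimal sketch with hypothetical, purely illustrative scales in GeV units):

```python
import numpy as np

# Landau-level dispersion E_n = sqrt(k3^2 + m_f^2 + 2 n q_f B) with
# illustrative scales chosen so that sqrt(q_f B) >> T (strong field).
m_f, qfB, k3, T = 0.005, 0.5, 0.0, 0.2   # GeV, GeV^2, GeV, GeV

for n in range(4):
    E_n = np.sqrt(k3**2 + m_f**2 + 2*n*qfB)
    print(n, E_n, np.exp(-E_n / T))  # higher levels are exponentially suppressed
```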
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'We calculate the most massive object in the Universe, finding it to be a cluster of galaxies with total mass $M_{200}=3.8\times10^{15}\,M_{\odot}$ at $z=0.22$, with the $1\sigma$ marginalized regions being $3.3\times10^{15}\,M_{\odot}<M_{200}<4.4\times10^{15}\,M_{\odot}$ and $0.12<z<0.36$. We restrict ourselves to self-gravitating bound objects, and base our results on halo mass functions derived from N-body simulations. Since we consider the very highest mass objects, the number of candidates is expected to be small, and therefore each candidate can be extensively observed and characterized. If objects are found with excessively large masses, or insufficient objects are found near the maximum expected mass, this would be a strong indication of the failure of $\Lambda$CDM. The expected range of the highest masses is very sensitive to redshift, providing an additional evolutionary probe of $\Lambda$CDM. We find that the three most massive clusters in the recent SPT $178\,\mbox{deg}^2$ catalog match predictions, while XMMU J2235.3–2557 is roughly $3\sigma$ inconsistent with $\Lambda$CDM. We discuss Abell 2163 and Abell 370 as candidates for the most massive cluster in the Universe, although uncertainties in their masses preclude definitive comparisons with theory. Our findings motivate further observations of the highest mass end of the mass function. Future surveys will explore larger volumes, and the most massive object in the Universe may be identified within the next decade. The mass distribution of the largest objects in the Universe is a potentially powerful test of $\Lambda$CDM, probing non-Gaussianity and the behavior of gravity on large scales.'
author:
- 'Daniel E. Holz$^1$ and Saul Perlmutter$^{2,3}$'
bibliography:
- 'references.bib'
title: The most massive objects in the Universe
---
[*Introduction*]{}—Our Universe has a finite observable volume, and therefore within our Universe there is a unique most massive object. This object will be a supercluster of galaxies. Theoretical studies of the growth of structure have now matured, and the mass of the most massive objects can be robustly predicted to the level of a few percent. Furthermore, we are in the midst of a revolution in our ability to conduct volume-limited surveys of high-mass clusters, with Sunyaev-Zel’dovich (SZ) and X-ray surveys able to provide complete samples at mass $>5\times10^{14}\,M_{\odot}$ out to $z>1$. The masses of the most massive clusters in the Universe are therefore a robust prediction of $\Lambda$CDM models, as well as a direct observable of our Universe.
The cluster mass function is already being utilized as a probe of cosmology, and in particular, of the dark energy equation-of-state [@2001ApJ...560L.111H; @2001ApJ...553..545H; @2002PhRvL..88w1301W; @2003ApJ...585..603M; @2006PhRvD..74b3512K; @2006astro.ph..9591A; @2008MNRAS.385.2025C]. What additional value is there in singling out the very tail end of the mass function, representing the most massive clusters in the Universe, for special treatment? First, we note that these systems are in many ways the easiest to find, as they are among the largest and brightest objects. They thus avoid many selection effects which might plague lower mass cuts. In addition, these systems constitute a very small sample (ideally, just one compelling candidate), and it is possible to devote significant observational resources to studying them. One might imagine coupled S-Z, X-ray, and weak lensing measurements, and thus the masses of these systems will be among the best constrained of any systems. The mass-observable relation for clusters is an essential component in using the cluster mass function to measure properties of the dark energy, and therefore there is a tremendous amount of ongoing work to characterize the masses of these objects [@2003ApJ...585..603M; @2005PhRvD..72d3006L; @2005ApJ...623L..63M; @2006ApJ...650..538N; @2006ApJ...650..128K; @2008ApJ...672...19R; @2009arXiv0910.3668W]. Finally, because we are probing far down the exponential tail of the mass function, these objects offer an unusually powerful constraint. If the most massive object is found to have too large a mass (or especially, as explained below, too small a mass), this [*single object*]{} will provide a strong indication of non-Gaussianity or modified gravity [@1998ApJ...494..479C]. An excellent example of this is the high-redshift cluster XMMU J2235.3–2557 (hereafter XMM2235) [@2005ApJ...623L..85M], which has been argued to be a few sigma inconsistent with $\Lambda$CDM [@2009ApJ...704..672J; @2009PhRvD..80l7302J; @2010arXiv1003.0841S]. A similar approach based on strong lensing has been presented in [@2009MNRAS.392..930O], which considers the distribution of the largest Einstein radii in the Universe as a probe of $\Lambda$CDM. Although much work has focused on using halo statistics as a probe of cosmology, here we focus on using the high-mass tails of precision mass functions to make explicit predictions for current and future observations.
A critical question in one’s attempt to determine the most massive object is to define precisely what is meant by “object”. The largest structure in the Universe detected to date is the Sloan Great Wall [@2005ApJ...624..463G], but the identification of this wall as a unique object is sensitive to a (completely arbitrary) density threshold. For our purposes we define an object as a gravitationally self-bound, virialized mass aggregation. These objects have decoupled from the Hubble flow, and represent large local matter overdensities. This definition has the convenience of robustly identifying objects (both in theory and observation).

[*Mass function*]{}—Recent years have shown tremendous progress in characterizing the mass function of dark matter halos in cosmological N-body simulations. We have now established, to better than 5%, the expected number density of dark matter halos as a function of mass and redshift [@2006ApJ...646..881W; @2007MNRAS.374....2R; @2008ApJ...688..709T]. In the simulations underlying these precise mass function expressions, the halos at the high-mass end are resolved by millions of particles, lending particular confidence and robustness to the mass function in this regime. The simulations are pure dark matter, and neglect the influence of baryons. At smaller scales baryons could play a major role in the density profile of the dark matter halos, and could potentially impact the mass function of the objects themselves. At the large scales being considered in this paper, the effects of baryons are expected to be negligible. This is particularly true as our interest is in the mass function, and hence the number density of these halos, not their density profiles. An important issue is the process by which a dark matter halo is identified and characterized in a dark matter simulation. There are two dominant approaches: friends-of-friends (FOF) and spherical overdensity (SO). FOF defines a halo by contours of constant density, while SO defines halos by the overdensity (compared to the mean or critical density) within a spherical region. It has been argued that the mass associated with SO can be most closely tied to observations of clusters [@2008ApJ...688..709T]. On the other hand, using an FOF with a linking length of 0.2 corresponds closely to contours of density 200 times the background density, which from spherical collapse models is a natural proxy for the virial mass. Because of the steep exponential in the mass function, our results are essentially independent of these differences (see Fig. \[fig:fig3\]).
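To make the logic of the prediction concrete, here is a minimal sketch with a hypothetical toy cumulative mass function standing in for the precision N-body fits cited above: since halo counts are Poisson distributed, the probability that no halo above $M$ exists in a volume $V$ is $\exp[-n(>M)\,V]$, which is precisely the cumulative distribution of the mass of the most massive object.

```python
import numpy as np

def n_gt(M, A=1e-5, M_star=2e14):
    # Hypothetical toy cumulative mass function n(>M) in (Mpc/h)^-3,
    # keeping only an exponential tail; a stand-in for a fitted
    # Tinker-type mass function. M is in units of Msun/h.
    return A * np.exp(-M / M_star)

def cdf_max(M, V):
    # Poisson statistics: P(no halo above M in volume V) = exp(-n(>M) V),
    # i.e. the CDF of the most massive object's mass.
    return np.exp(-n_gt(M) * V)

V = 3000.0**3                      # survey volume in (Mpc/h)^3
M = np.logspace(14.5, 16.5, 400)   # trial masses in Msun/h
cdf = cdf_max(M, V)
median = M[np.searchsorted(cdf, 0.5)]
lo, hi = M[np.searchsorted(cdf, 0.16)], M[np.searchsorted(cdf, 0.84)]
print(f"most massive halo: {median:.2e} (+{hi - median:.1e}/-{median - lo:.1e}) Msun/h")
```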
The halo mass function depends sensitively on cosmological parameters, including $\Omega_m$, $\Omega_\Lambda$, and the equation-of-state of the dark energy. For our purposes, one of the most important cosmological parameters is the amplitude of the initial density fluctuations, characterized by $\sigma_8$, the RMS variance of the linear density field, smoothed on scales of $8\,\mbox{Mpc}$. Uncertainty in this quantity translates directly into uncertainty in the amplitude of the mass function. We utilize the latest value from [*WMAP*]{}, which provides a $\sim4\%$ measurement of $\sigma_8$ [@2010arXiv1001.4538K]. For reference, a 5% error on $\sigma_8$ shifts the contours in Figure \[fig:fig2\] by less than $1\sigma$ in mass for a full-sky survey, and considerably less for smaller surveys. Since the value of $\sigma_8$ is a major source of uncertainty in the use of the cluster mass function to constrain cosmology, there is great interest in improving its measurement. In addition, the mass function also depends implicitly on the Hubble constant, $h$, which can be seen by expressing it in units of $\mbox{\# of halos}/(\mbox{Mpc}/h)^3$ (observations naturally measure volume in these units). For simplicity we have explicitly put in the [*WMAP*]{}7 value ($h=0.710$), but it is straightforward to re-express all of our
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'The majority of the highest energy cosmic rays are thought to be electrically charged: protons or nuclei. Charged particles experience angular deflections as they pass through galactic and extra-galactic magnetic fields. As a consequence, correlation of cosmic ray arrival directions with potential sources has proved to be difficult. This situation is not helped by current data samples, where the number of cosmic rays/source is typically $\leq O(1)$. Progress will be made when there are significantly larger data samples and perhaps with better catalogs of candidate sources. This paper reports a search for correlations between the RXTE catalog of nearby active galactic nuclei, AGNs, and the published list of ultra-high energy cosmic rays from the AGASA experiment. Although no statistically significant correlations were found, two correlations were observed between AGASA events and the most inclusive category of RXTE AGNs.'
address: 'University of New Mexico, Department of Physics and Astronomy, Albuquerque, New Mexico, USA'
author:
- 'J. D. Hague'
- 'J.A.J. Matthews'
- 'B. R. Becker'
- 'M. S. Gold'
title: 'Search for Correlations between Nearby AGNs and Ultra-high Energy Cosmic Rays'
---
highest energy cosmic rays – AGNs as sources – search for correlations
Introduction
============
Perhaps the primary goal of all experiments studying the highest energy cosmic rays is to find the source of these particles. While circumstantial evidence may favor one type of source over another, demonstration of a clear correlation between the direction of cosmic rays and their sources is arguably essential. Unfortunately for electrically charged cosmic rays, galactic magnetic fields, and for the highest energy cosmic rays extra-galactic magnetic fields, cause angular deflections that can blur the correlation between cosmic ray arrival direction and source direction. If the sources as viewed from the earth are extended[@waxman; @cuoco], the problem is even more difficult. Unless otherwise noted, for this paper we assume compact (point-like) sources for the highest energy cosmic rays.
If the angular blurring from magnetic fields is small[@dolag] ([*i.e.*]{} not significantly greater than the experimental angular resolution) and/or for neutral primaries, then experiments should observe cosmic rays that cluster in arrival direction[@agasa_cluster; @tinyakov], and/or that correlate with potential astronomical ([*e.g.*]{} BL Lac) sources[@tinyakov_BLLac; @gorbunov_BLLac0; @gorbunov_BLLac1; @gorbunov_BLLac2; @hires_BLLac]. For nearby sources, where experiments should detect multiple cosmic rays/source, event clusters provide bounds on the cosmic ray source density[@dubovsky; @blasi; @kachelriess], potentially favoring one type of source for the highest energy cosmic rays over another. However, at this time the situation is less than clear, as some results[@finley; @hires_cluster] question the significance of the reported clusters and/or some of the BL Lac correlations[@hires_BLLac; @not_BLLacs].
If deflections of charged cosmic rays by extra-galactic magnetic fields are not small[@sigl], then lower energy, $E$, cosmic rays should experience the greatest angular deflections. Unfortunately, small experimental data samples and a cosmic ray flux $\propto E^{-3}$ have often caused studies to retain cosmic rays down to energies, $E_{thresh}$, well below GZK[@gzk] energies[@agasa_cluster]. Furthermore, deflections of the highest energy cosmic rays even by our galactic magnetic field can be substantial[@tinyakov_Bfield; @kachelriess_Bfield; @tanco]. As magnetic deflections scale in proportion to the charge of the primary cosmic ray, nuclei in the cosmic rays may have significant deflections. Although most searches have looked for clustering and/or source correlations on small angular scales, studies at larger angular scales have also found evidence for clustering and/or source correlations[@kachelriess_clusters; @smialkowski; @singh]. Certainly the angular scale of cosmic ray clusters, and the magnitude, and thus relevance, of the deflections of ultra-high energy cosmic rays by magnetic fields, are not universally agreed upon at this time.
In the future, significantly larger data samples will allow analyses to increase $E_{thresh}$ while retaining a number of observed cosmic rays/source (for nearby sources) $\geq O(1)$. Another possibility, however, is to exploit catalogs of candidate sources. With a catalog of source directions, cosmic rays can be effectively correlated with sources even if magnetic field deflections are “not small” and/or when the number of observed cosmic rays per source is $<1$, allowing searches with existing data samples. That said, catalog based studies are limited by the completeness of the source catalog and the relevance (or not) of that class of astronomical source to the production of the highest energy cosmic rays. Often conjectured astrophysical sources include gamma ray bursts, GRBs, and/or active galactic nuclei, AGNs[@BGG2002].
This paper reports a search for correlations between a catalog of nearby AGNs[@rxte_catalog] and the published list of ultra-high energy cosmic rays from AGASA[@agasa_cluster]. The components of our analysis are listed in Section\[section:components\]. Issues that relate to data and AGN selection are given in Section\[section:selection\]. The cosmic ray–AGN comparison results are given in Section\[section:comparison\]. Section\[section:summary\] summarizes this study.
Analysis Components {#section:components}
===================
Our comparison of ultra-high energy cosmic rays and a catalog of AGNs includes three components: the RXTE catalog of AGNs, the AGASA list of cosmic rays, and a Monte Carlo sample of uniformly distributed cosmic rays generated to match the experimental acceptance of AGASA.
The catalog of nearby AGNs[@rxte_catalog] results from the Rossi X-ray Timing Explorer, RXTE, all-sky slew survey[@rxte], sensitive to sources of hard X-rays (3-20 keV). The survey excluded the galactic plane ($|b|\leq10^{\circ}$) but covered $\sim 90$% of the remaining sky. X-ray sources were located to better than $1^{\circ}$ and then correlated with known astronomical objects. The efficiency for AGN identification was estimated to be $\sim 70\%$, with somewhat higher efficiency for northern AGNs ($\sim 87\%$) and somewhat lower efficiency for southern AGNs ($\sim 60\%$)[@rxte_catalog]. The resulting catalog provides source directions, probable source distances and intrinsic X-ray luminosities, $L_{3-20}$. The catalog is best for nearby AGNs, as RXTE signal thresholds significantly reduced the efficiency for detecting distant sources; additional details are given below.
The list of ultra-high energy cosmic rays comes from published AGASA data [@agasa_cluster].
The Monte Carlo sample of uniformly distributed cosmic rays was generated according to a $\cos(\theta)\sin(\theta)$ distribution in local zenith angle, $\theta\leq45^{\circ}$, and uniform in local azimuth. Events were then transformed to celestial right ascension and declination assuming constant detector aperture with time.
Correlations between the AGASA events and the catalog of AGNs from RXTE would appear as an excess at small angular separations in comparison to the Monte Carlo sample of simulated cosmic rays. To be clear, define unit vectors in the directions of cosmic rays, û$_i$, AGNs, v̂$_j$, and Monte Carlo simulated cosmic rays, ŵ$_k$. A correlation [*signal*]{} should then appear [*near 1.0*]{} in the distribution of [*dot*]{}-products û$_i \cdot $v̂$_j$ (if magnetic field deflections are modest). The index “$i$” runs over the cosmic rays in the data sample. For each value of “$i$”, only the AGN catalog source (index “$j$”) giving the maximum value of û$_i \cdot $v̂$_j$ contributes to the distribution[^1]. The simulated distribution of [*random background*]{} comes from the analogous distribution of ŵ$_k \cdot $v̂$_j$, where index “$k$” now runs over the sample of Monte Carlo simulated cosmic rays. As with the cosmic ray events, only the AGN catalog source (index “$j$”) giving the maximum value of ŵ$_k \cdot $v̂$_j$ contributes to the distribution.
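A minimal numerical sketch of this construction (with hypothetical, isotropically drawn AGN directions standing in for the RXTE catalog, and skipping the local-to-celestial transformation described above):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_vectors(theta, phi):
    # Cartesian unit vectors from zenith angle theta and azimuth phi.
    return np.stack([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)], axis=-1)

# Zenith angles theta <= 45 deg with density ~ cos(theta) sin(theta):
# the CDF is sin^2(theta)/sin^2(45 deg), inverted with
# theta = arcsin(sin(45 deg) * sqrt(u)) for uniform u.
n_mc = 10000
theta = np.arcsin(np.sin(np.radians(45.0)) * np.sqrt(rng.random(n_mc)))
phi = 2*np.pi*rng.random(n_mc)
w_hat = unit_vectors(theta, phi)      # simulated cosmic rays (local frame)

# Hypothetical AGN directions, drawn isotropically as catalog stand-ins.
n_agn = 50
v_hat = unit_vectors(np.arccos(1 - 2*rng.random(n_agn)), 2*np.pi*rng.random(n_agn))

# For each simulated ray keep only max_j (w_k . v_j); a data excess near 1.0
# relative to this background histogram would signal a correlation.
background = (w_hat @ v_hat.T).max(axis=1)
print(np.histogram(background, bins=10, range=(0.9, 1.0))[0])
```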
Cosmic Ray and AGN Selection {#section:selection}
============================
A few choices have been made in the comparison of the AGASA data and the catalog of AGNs from RXTE. These are described here.
The AGASA data have energies $E > 40$ EeV and populate declinations $-10^{\circ} \leq Dec \leq 80^{\circ}$. As noted above, the steep cosmic-ray spectrum, $\propto E^{-3}$, and the modest number of events: 57 with $E>40$ EeV and 29 (just over half) with $E>53$ EeV, led us to consider three (overlapping) bins in energy: $E\geq40$ EeV, $E\geq53$ EeV and $E\geq100$ EeV. The last was to see if there are any correlations with the AGASA super-GZK events. Except for the $E\geq100$ EeV selection, most of the cosmic
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'We study octet-octet baryon ($J^P = {\textstyle \frac12}^+$) contact interactions in SU(3) chiral effective field theory by using large-$N_c$ operator analysis. Applying the $1/N_c$ expansion of the Hartree Hamiltonian, we find 15 operators in the octet-octet baryon potential, with 4 operators at leading order (LO) and 11 at next-to-next-to-leading order (NNLO). The large-$N_c$ operator analysis of the octet-octet baryon matrix elements reduces the number of free parameters from 15 to 6 at LO of the $1/N_c$ expansion. The application of large-$N_c$ sum rules to the Jülich model of hyperon-nucleon (YN) interactions at LO of the chiral expansion reduces the number of model parameters from 5 to 3 at LO of the $1/N_c$ expansion. We find that the values of the LECs fitted to YN scattering data in Ref. [@Li:2016paq] in the relativistic covariant ChEFT (EG) approach are more consistent with the predictions of large-$N_c$ than those in the heavy baryon (HB) formalism.'
author:
- Xuyang Liu
- Viroj Limkaisang
- Daris Samart
- Yupeng Yan
title: |
Large-$N_c$ operator analysis of hyperon-nucleon interactions\
in SU(3) chiral effective field theory
---
Introduction
============
Chiral effective field theory (ChEFT) [@Weinberg:1978kz; @Gasser:1983yg], based on the approximately and spontaneously broken chiral symmetry of QCD, allows for a systematic way of calculating low-energy hadronic observables. It is very efficient and convenient to use hadrons as the basic degrees of freedom, rather than quarks and gluons, in the ChEFT. The chiral Lagrangian is required to include all possible interactions between hadrons that are consistent with the relevant symmetries of QCD [@Scherer:2012xha]. A number of low-energy properties of the strong interaction are very successfully described by the ChEFT. The ChEFT is also utilized to shed light on the study of nuclear forces (see [@Epelbaum:2008ga; @Machleidt:2011zz] for reviews). It was demonstrated by Weinberg’s seminal works [@Weinberg:1990rz; @Weinberg:1991um] that one can calculate the nuclear forces systematically by using an appropriate power counting scheme. Therefore, loop corrections and higher order terms can be included to improve the accuracy of the calculations. Nucleon-nucleon (NN) forces derived in the ChEFT successfully describe a huge body of NN experimental data. The NN potentials are composed of long and short range interactions, where the long range NN force is dominated by pion exchange while the short range part is encoded in contact NN interactions with unknown low-energy constants (LECs) to be fitted to experimental data. The higher order contact terms of the NN potentials have been constructed in Refs. [@Ordonez:1993tn; @Ordonez:1996] at next-to-leading order (NLO) and in Refs. [@Epelbaum:2004fk; @Entem:2003ft] at next-to-next-to-next-to-leading order (N$^3$LO) in the chiral expansion.
On the other hand, hyperon-nucleon (YN) and hyperon-hyperon (YY) forces have been less studied than the NN forces. YN interactions are key to understanding hyper-nuclei and neutron stars [@Nogga:2001ef; @Lonardoni:2014bwa]. The contact and meson exchange terms of the YN interactions in the ChEFT were constructed by using SU(3) flavor symmetry in Ref. [@Polinder:2006zh] at leading order (LO) and extended to NLO in Ref. [@Haidenbauer:2013oca]. The most general SU(3) chiral Lagrangians of the octet-octet baryon contact interactions have been worked out in Ref. [@Petschauer:2013uua]. The study of the YY interactions was performed in Refs. [@Polinder:2007mp; @Haidenbauer:2015zqb; @Haidenbauer:2009qn]. At LO of the YN interactions [@Polinder:2006zh; @Li:2016paq], the SU(3) chiral Lagrangian has 15 free parameters (LECs), and the partial-wave expansion analysis leads to 5 LECs which are fixed with YN data. In this work, we will use large-$N_c$ operator analysis to explore the $N_c$ scales and reduce the number of unknown LECs in the SU(3) chiral Lagrangians and in the LO YN potential [@Polinder:2006zh; @Li:2016paq]. Large-$N_c$ is an approximate framework of QCD that is very useful in the study of hadrons at low energies. The basic idea is that one considers the number of colors ($N_c$) to be large and expands in powers of $1/N_c$ [@'tHooft:1973jz; @Witten:1979kh]. In this framework, a number of simplifications of QCD occur in the large-$N_c$ limit (see Refs. [@Jenkins:1998wy; @Matagne:2014lla] for reviews). The $1/N_c$ expansion of QCD for baryons [@Dashen:1993jt; @Dashen:1994qi; @Luty:1993fu] has been applied to the NN potential in [@Kaplan:1995yg; @Kaplan:1996rk; @Banerjee:2001js] and to the three-nucleon potential in [@Phillips:2013rsa]. Moreover, the $1/N_c$ expansion has been used to study parity-violating NN potentials [@Phillips:2014kna; @Schindler:2015nga] as well as time-reversal violating NN potentials [@Samart:2016ufg]. The large-$N_c$ analysis of the NN system provides an understanding of the $N_c$ scales of the LECs in the NN forces. In addition, the $1/N_c$ expansion also helps to reduce the number of independent LECs [@Schindler:2015nga]. However, the octet-octet baryon interactions with SU(3) flavor symmetry have not been investigated in the large-$N_c$ approach. In this work, we extend the large-$N_c$ operator analysis of Refs. [@Kaplan:1996rk; @Phillips:2013rsa] to the SU(3) chiral Lagrangian of Refs. [@Polinder:2006zh; @Li:2016paq]. The large-$N_c$ octet-octet baryon potential is constructed up to NNLO in the $1/N_c$ expansion. We apply large-$N_c$ sum rules to the YN interactions at LO, which have recently been investigated in Ref. [@Li:2016paq]. Moreover, the results can be applied to the YN interactions at NLO and to the YY sector.
We outline this work as follows: In section 2 we set up the matrix elements of the octet-octet baryon potential from the SU(3) chiral Lagrangian. In the next section, the potential in the $1/N_c$ expansion is constructed up to NNLO and large-$N_c$ sum rules for the LECs are derived. In section 4, we apply the results of the large-$N_c$ sum rules to the LO YN potential. In the last section, we give the conclusions of this work.
The potential of the SU(3) octet-octet baryon contact term interactions
=======================================================================
We start with the SU(3) chiral Lagrangian of the octet-octet baryon interactions proposed in Ref. [@Polinder:2006zh]. SU(3)-flavor symmetry is imposed, the chiral Lagrangian is Hermitian and invariant under Lorentz transformations, and the discrete CPT symmetry is implied. The minimal SU(3) invariant chiral Lagrangian with non-derivative terms is given by, $$\begin{aligned}
\label{chi-L}
{\mathcal L}^{(1)} &=& C^{(1)}_i \left<\bar{B}_1\bar{B}_2\left(\Gamma_i B\right)_2\left(\Gamma_i B\right)_1\right>\ , \nonumber \\
{\mathcal L}^{(2)} &=& C^{(2)}_i \left<\bar{B}_1\left(\Gamma_i B\right)_1\bar{B}_2\left(\Gamma_i B\right)_2\right>\ , \nonumber \\
{\mathcal L}^{(3)} &=& C^{(3)}_i \left<\bar{B}_1\left(\Gamma_i B\right)_1\right>\left<\bar{B}_2\left(\Gamma_i B\right)_2\right>\ .\end{aligned}$$ Here $1$ and $2$ label the particles in the scattering process, and $B$ is the usual irreducible octet representation of SU(3), given by $$\begin{aligned}
B&=& \frac{1}{\sqrt 2}\sum_{a=1}^8 \lambda^a B^a =
\left(
\begin{array}{ccc
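The matrix display above is truncated in the source. As a self-contained illustration of the construction $B=\frac{1}{\sqrt 2}\sum_{a=1}^8 \lambda^a B^a$, a minimal sympy sketch (assuming only the standard Gell-Mann matrices; the symbol names are ours):

```python
import sympy as sp

# The eight Gell-Mann matrices lambda^1..lambda^8 (standard SU(3) generators).
lam = [sp.Matrix(m) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -sp.I, 0], [sp.I, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -sp.I], [0, 0, 0], [sp.I, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -sp.I], [0, sp.I, 0]],
)]
lam.append(sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, -2]]) / sp.sqrt(3))

Ba = sp.symbols('B1:9')  # field components B^1..B^8
# Octet matrix B = (1/sqrt(2)) sum_a lambda^a B^a, cf. the displayed formula.
B = sum((l * b for l, b in zip(lam, Ba)), sp.zeros(3, 3)) / sp.sqrt(2)
sp.pprint(sp.simplify(B))  # 3x3 octet matrix; the diagonal carries B3 and B8
```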
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: Correct quantization of the free electromagnetic field is proposed.
author:
- 'D.Yearchuck'
- 'Y.Yerchak'
- 'A.Alexandrov'
title: To Quantization of Free Electromagnetic Field
---
In 1873 “A Treatise on Electricity and Magnetism” by Maxwell [@Maxwell] was published, in which the discovery of the system of electrodynamics equations was reported. The equations are in fact symmetry expressions for the experimental laws established by Faraday, and, consequently, they are a mathematical mapping of the experimentally founded symmetry of the EM-field. This means in turn that if new experimental data indicate that the symmetry of the EM-field is higher, then the Maxwell equations have to be generalized. That is the reason why the symmetry study of the Maxwell equations has been the subject of much research in field theory up to now. Heaviside [@Heaviside], twenty years after Maxwell's discovery, was the first to pay attention to the symmetry between electrical and magnetic quantities in the Maxwell equations. The mathematical formulation of this symmetry, consisting in the invariance of the Maxwell equations for the free EM-field under the duality transformations $$\label{eq1d}
\vec {E} \rightarrow \pm\vec {H}, \vec {H} \rightarrow \mp\vec {E},$$ was given by Larmor [@Larmor]. The duality transformations (\[eq1d\]) are a special case of the more general dual transformations established by Rainich [@Rainich]. Dual transformations form a one-parameter abelian group $U_1$ of chiral transformations; they are $$\label{eq2d}
\begin{split}
\raisetag{40pt}
\vec {E} \rightarrow \vec {E} \cos\theta + \vec {H} \sin\theta\\
\vec {H} \rightarrow \vec {H} \cos\theta - \vec {E} \sin\theta.
\end{split}$$ This symmetry indicates that both constituents $\vec {E}$ and $\vec {H}$ of the EM-field enter on an equal footing; in particular, they both have to consist of components with different parity. Subsequent extension of the dual symmetry to the EM-field with sources leads to the requirement of two types of charges. Examples of the display of dual symmetry are, for instance, the equality of the magnetic and electric energy values in an LC-tank or in a free electromagnetic wave. Recently, concrete experimental results have been obtained concerning the dual symmetry of the EM-field in matter. Two new physical phenomena, ferroelectric [@Yearchuck_Yerchak] and antiferroelectric [@Yearchuck_PL] spin wave resonances, have been observed. They were predicted on the basis of the model [@Yearchuck_Doklady] for a chain of electrical “spin” moments, that is, intrinsic electrical moments of (quasi)particles. It is especially interesting that in [@Yearchuck_PL] it was experimentally proved that a purely imaginary electrical “spin” moment, in full correspondence with Dirac's prediction [@Dirac], is responsible for the phenomenon observed. Earlier, ferromagnetic spin wave resonance had been registered on the same samples [@Ertchak_J_Physics_Condensed_Matter].
The values of the splitting parameters $\mathfrak{A}^E$ and $\mathfrak{A}^H$ in the ferroelectric and ferromagnetic spin wave resonance spectra allowed us to find the ratio $J_{E }/J_{H}$ of exchange constants in the range $(1.2 - 1.6)\times10^{4}$. This result seems to be direct proof that the charge, that is, the function which is invariant under gauge transformations, is a two-component function. The ratio of the imaginary $e_{H} \equiv g$ to the real $e_{E}\equiv e $ component of the complex charge is $\frac{g}{e} \sim \sqrt{J_{E }/J_{H}} \approx (1.1 - 1.3)\times10^{2}$. At the same time, in both classical and quantum theory the dual symmetry of the Maxwell equations is not taken into consideration. Moreover, the known solutions of the Maxwell equations do not reveal this symmetry even for the free EM-field, see for instance [@Scully], although it is clear that the general solutions have to possess the same symmetry.
The aim of this work is to find the cause of the symmetry difference between the Maxwell equations and their solutions and to propose correct field functions for the classical and quantized EM-field. Consider the EM-field in a rectangular cavity. Suppose also that the field polarization is linear in the z-direction. Then the electrical component can be represented in the form $$E_x(z,t) = \sum_{\alpha=1}^{\infty}A_{\alpha}q_{\alpha}(t)\sin(k_{\alpha}z),$$ where $q_{\alpha}(t)$ is the amplitude of the $\alpha$-th normal mode of the cavity, $\alpha \in N$, $k_{\alpha} = \alpha\pi/L$, $A_{\alpha}=\sqrt{2 \nu_{\alpha}^2m_{\alpha}/(V\epsilon_0)}$, $\nu_{\alpha} = \alpha\pi c/L$, $L$ is the cavity length along the z-axis, $V$ is the cavity volume, and $m_{\alpha}$ is a parameter introduced to obtain the analogy with a mechanical harmonic oscillator. Using the equation $$\epsilon_0\partial_t \vec{E}(z,t) = \left[ \nabla\times\vec{H}(z,t)\right]$$ we obtain, assuming a transverse EM-field, the expression for the magnetic field $${H}_y(z,t) = \sum_{\alpha=1}^{\infty}\epsilon_0\frac{A_{\alpha}}{k_{\alpha}}\frac{dq_{\alpha}}{dt}\cos(k_{\alpha}z) + H_{y0}(t),$$ where $H_{y0} = \sum_{\alpha=1}^{\infty} f_{\alpha}(t)$ and $\{f_{\alpha}(t)\}$, $\alpha \in N$, is a set of arbitrary functions of time. Usually the partial solution is used in which the function $H_{y0}(t)$ is identically zero. The field Hamiltonian $\mathcal{H}^{[1]}(t)$ corresponding to this partial solution is $$\begin{split}
&\mathcal{H}^{[1]}(t) = \frac{1}{2}\iiint\limits_{(V)}\left[\epsilon_0E_x^2(z,t)+\mu_0H_y^2(z,t)\right]dxdydz\\
&= \frac{1}{2}\sum_{\alpha=1}^{\infty}\left[m_{\alpha}\nu_{\alpha}^2q_{\alpha}^2(t) + \frac{p_{\alpha}^2(t)}{m_{\alpha}} \right],
\end{split}$$ where $$p_{\alpha} = m_{\alpha} \frac{dq_{\alpha}(t)}{dt}.$$ Then, using the equation $$\left[ \nabla\times\vec{E}\right] = -\frac{\partial \vec{B}}{\partial t} = -\mu_0 \frac{\partial \vec{H}}{\partial t}$$ it is easy to find the field functions $\{q_{\alpha}(t)\}$. They satisfy the differential equation $$\frac{d^2q_{\alpha}(t)}{dt^2}+\frac{k_{\alpha}^2}{\mu_0\epsilon_0}q_{\alpha}(t)=0.$$ Consequently, taking into account $\mu_0\epsilon_0 = 1/c^2$, we have $$q_{\alpha}(t) = C_1e^{i\nu_{\alpha}t}+C_2e^{-i\nu_{\alpha}t}.$$
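A minimal numerical cross-check of this mode equation (assumed units $c=1$, $L=1$; an illustration only, not part of the derivation):

```python
import numpy as np

# Integrate q'' + nu_alpha^2 q = 0 by leapfrog and compare with the quoted
# solution q(t) = C1 e^{i nu t} + C2 e^{-i nu t}; the real initial data
# below (C1 = C2 = 1/2) give q(t) = cos(nu t).
L, alpha = 1.0, 3
nu = alpha * np.pi / L               # nu_alpha = k_alpha c with c = 1

dt, steps = 1e-4, 100_000
q, p = 1.0, 0.0                      # q(0) = 1, q'(0) = 0
for _ in range(steps):               # kick-drift-kick leapfrog
    p -= 0.5 * dt * nu**2 * q
    q += dt * p
    p -= 0.5 * dt * nu**2 * q

t = steps * dt                       # t = 10
print(q, np.cos(nu * t))             # agree to ~1e-5
```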
Thus, the real free Maxwell field equations result in a situation well known in the theory of differential equations: the solutions are complex-valued functions. It means that, in general, the field functions for the free Maxwell field span a complex space.
From the general expression for the field $\vec{H}(\vec{r},t)$ $$\vec{H}(\vec{r},t) = \left[\sum_{\alpha=1}^{\infty}A_{\alpha}\frac{\epsilon_0}{k_{\alpha}}\frac{dq_{\alpha}(t)}{dt}\cos(k_{\alpha}z) + f_{\alpha}(t)\right]\vec{e}_y$$ it is easy to obtain the differential equation for $f_{\alpha}(t)$ $$\begin{split}
&\frac{d f_{\alpha}(t)}{dt} + A_{\alpha}\frac{\epsilon_0}{k_{\alpha}}\frac{\partial^2q_{\alpha}(t)}{\partial t^2}\cos(k_{\alpha}z) \\
&- \frac {1}{\mu_0} A_{\alpha}k_{\alpha}q_{\alpha}(t)\cos(k_{\alpha}z) = 0.
\end{split}$$ Its solution in the general case is $$f_{\alpha}(t) = \int A_{\alpha} \cos(k_{\alpha}z)\left[q_{\alpha}(t)\frac{k_{\alpha}}{\mu_0}-\frac{d^2q_{\alpha}(t)}{dt^2}\frac{\epsilon_0}{k_{\alpha}}\right]
dt +C_{\alpha}$$ Then we have another solution of the Maxwell equations $$\vec{H}(\vec{r},t) = \frac{1}{\mu_0}\left\{\sum_{\alpha=1}^{\infty}k_{\alpha}A_{\alpha} \cos(k_{\alpha}z) q_{\alpha}'(t)\right\}\vec{e}_y,$$ $$\vec{
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'Most directly imaged giant exoplanets are fainter than brown dwarfs with similar spectra. To explain their relative underluminosity, unusually cloudy atmospheres have been proposed. However, with multiple parameters varying between any two objects, it remained difficult to observationally test this idea. We present a new method, sensitive time-resolved Hubble Space Telescope near-infrared spectroscopy, to study two rotating L/T transition brown dwarfs (2M2139 and SIMP0136). The observations provide spatially and spectrally resolved mapping of the cloud decks of the brown dwarfs. The data allow the study of cloud structure variations while other parameters are unchanged. We find that both brown dwarfs display variations of identical nature: J- and H-band brightness variations with minimal color and spectral changes. Our light curve models show that even the simplest surface brightness distributions require at least three elliptical spots. We show that for each source the spectral changes can be reproduced with a linear combination of only two different spectra, i.e. the entire surface is covered by two distinct types of regions. Modeling the color changes and spectral variations together reveals patchy cloud covers consisting of a spatially heterogeneous mix of low-brightness, low-temperature thick clouds and brighter, thin and warm clouds. We show that the same thick cloud patches seen in our varying brown dwarf targets, if extended to the entire photosphere, predict near-infrared colors/magnitudes matching the range occupied by the directly imaged exoplanets that are cooler and less luminous than brown dwarfs with similar spectral types. This supports the models in which thick clouds are responsible for the near-infrared properties of these “underluminous” exoplanets.'
author:
- Dániel Apai
- Jacqueline Radigan
- Esther Buenzli
- Adam Burrows
- Iain Neill Reid
- Ray Jayawardhana
bibliography:
- 'bdrefs.bib'
title: 'HST Spectral Mapping of L/T Transition Brown Dwarfs Reveals Cloud Thickness Variations'
---
Introduction
============
With masses between cool stars and giant exoplanets and effective temperatures comparable to those of directly imaged exoplanets [e.g. @Chauvin2005; @Marois2008; @Lafreniere2008; @Marois2010; @Lagrange2010; @Skemer2011], L and T-type brown dwarfs provide the critical reference points for understanding the atmospheres of exoplanets [e.g. @Burrows2001; @Kirkpatrick2005; @Marley2007]. Because the observations of brown dwarfs are not limited by the extreme star-to-planet contrasts exoplanet observations pose, much more detailed studies can be carried out. In particular, brown dwarfs provide an opportunity to solve the puzzling observation that most directly imaged giant planets appear to be redder and up to 4–10 times fainter than typical brown dwarfs with the same spectral type [e.g. @Barman2011_HR8799; @Skemer2012], often referred to as the [*under-luminosity problem*]{}. Particularly interesting, well-studied examples are seen in Ross 458C [@Burgasser2010; @Burningham2011; @Morley2012] and 2M1207b [@Chauvin2005; @Mohanty2007; @Patience2010; @Barman2011_2M1207; @Skemer2011]. Although an obscuring edge-on disk with grey extinction has been proposed as a solution, in light of additional observations and analysis this solution appears very unlikely [@Skemer2011]. More likely, the fainter and redder near-infrared emission is due to a property intrinsic to the atmospheres of these exoplanets. This possibility is further supported by the fact that similar underluminosity has also been reported for a handful of field brown dwarfs (e.g. [@Metchev2006; @Luhman2007; @Looper2008]) and young brown dwarfs in clusters [@Lucas2001; @Allers2006]. The different models proposed to explain the lower near-infrared luminosity of exoplanets and brown dwarfs invoke differences in elemental abundances, surface gravity, evolutionary state, chemical equilibrium/non-equilibrium, or cloud structure, or some combination of these. However, because several of these parameters may change between any two brown dwarfs or exoplanets, it remained difficult to isolate the effect of these variables. Possible differences in the structure of condensate clouds have, in particular, received much attention in models (e.g. @AckermanMarley2001 [@Burgasser2002; @Skemer2012; @Barman2011_HR8799; @Barman2011_2M1207; @Burrows2006; @Madhu2011; @Marley2010]), and progress has been made in spectroscopic modeling to separate or constrain the impact of cloud structure from other parameters [e.g. @Cruz2007; @Folkes2007; @Looper2008; @Burgasser2008; @Radigan2008; @Cushing2010]. Yet, this problem remains a challenging aspect of ultracool atmospheres and one which will benefit from observational data probing cloud properties more directly.
We present here high-cadence, high-precision time-resolved HST spectroscopy of two rotating early T-type brown dwarfs that reveals highly heterogeneous cloud covers across their photospheres. These observations allow us to separate the effects of different cloud structures from variations in surface gravity, elemental abundances, age and evolutionary state. We show that the observed variations are well reproduced by models with large cloud scale height variations (thin and thick clouds) across the surfaces. When thick clouds rotate into the visible hemisphere, both targets fade in the near-infrared and display changes consistent with the colors and brightness of “underluminous” directly imaged exoplanets. The similarity of the changes observed provides strong support to models that invoke atmospheres with high dust scale heights to explain the photometry of directly imaged exoplanets.
Observations and Data Reduction
===============================
Observations and Targets
------------------------
We used the Hubble Space Telescope (HST) to obtain near-infrared grism spectroscopy of two L/T transition brown dwarfs as part of a larger campaign (Programs 12314, 12551, PI: Apai). The data were acquired with the sensitive Wide Field Camera 3 instrument [@MacKenty2010] by obtaining 256$\times$256 pixel images of the targets’ spectra dispersed by the G141 grism in six consecutive HST orbits. Table \[ObsLog\] provides a log of the observations. In short, we obtained 660 spectra for 2M2139 and 495 spectra for SIMP0136, each with 22.34 s integration time. In the analysis that follows we averaged sets of 10 spectra for 2M2139 and sets of 5 spectra for SIMP0136, giving an effective temporal resolution of 223 s for 2M2139 and 112 s for SIMP0136. Our targets are relatively bright (J=13.5 mag for SIMP0136 and J=15.3 mag for 2M2139), resulting in very high signal-to-noise spectra (see Sect. \[Uncertainties\] for a detailed assessment).
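As a concrete illustration of this binning step (a minimal sketch; `spectra` is a hypothetical array standing in for the extracted WFC3 exposures, not the actual reduction pipeline):

```python
import numpy as np

def bin_spectra(spectra, n_bin):
    """Average consecutive groups of n_bin exposures of shape (n_exp, n_wave)."""
    n = (len(spectra) // n_bin) * n_bin
    return spectra[:n].reshape(-1, n_bin, spectra.shape[1]).mean(axis=1)

rng = np.random.default_rng(1)
spectra = rng.normal(1.0, 0.01, size=(660, 256))   # stand-in for the 2M2139 data
binned = bin_spectra(spectra, 10)                  # 10 exposures per bin
print(binned.shape)   # (66, 256); per-bin noise drops by ~sqrt(10)
```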
At the beginning of each orbit a direct image was obtained to accurately determine the position of the source on the detector, as required for precise wavelength calibration. Cross-correlation of the images and on-source centroid positions revealed positional differences of less than 0.1 pixel (0.01") between images taken at different orbits. No dithering was applied, in order to stabilize source positions and improve the accuracy of relative measurements.
| Target Name | Date | Time Per Int. | # of Int. / Orbit | # Orbits | Total # Spectra | Bin Size | Noise per Int. |
|-------------|------|---------------|-------------------|----------|-----------------|----------|----------------|
| 2M2139 | 2010/10/21 | 22.34 s | 11 | 6 | 660 | 10 | 0.27% |
| SIMP0136 | 2011/10/10 | 22.34 s | 16 or 17 | 6 | 495 | 5 | 0.11% |
![Systematic effects observed and corrected in the WFC3 data: flux loss (a) and ramp (b). Both effects are well fitted and removed by simple analytical functions. In (a) sources are shown with counts $<$0.6 (blue) or $>$0.6 (red) of the maximum count in the spectrum. In (b) blue symbols are ramp in orbits 2-6 and red symbols are the ramp in orbit 1. Here the source is a non-variable field star. \[FigCorrections\]](Fig_S1_Corrections.pdf)
The observations presented here focus on two L/T transition brown dwarfs. Target 2M2139 (or 2MASS J21392676+0220226) has been classified as a T0 dwarf based on its red optical spectrum [@Reid2008] and as a peculiar T2.5$\pm$1 dwarf based on a 0.8–2.5 $\mu$m spectrum [@Burgasser2006]. More recently, [@Burgasser2010] found that the spectrum of 2M2139 is better fit by a composite spectrum of an earlier (L8.5) and a later type (T3.5) dwarf than by any single template brown dwarf. It was recently found to show impressive periodic photometric variability with a peak-to-peak amplitude of $\simeq$27% [@Radigan2012]. Ground-based photometry of 2M2139 argues for a period of $7.721\pm0.005$ hr, but also leaves open the possibility of
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'Expansion dynamics of single-species, non-neutral clouds, such as electron bunches used in ultrafast electron microscopy, show novel behavior due to high acceleration of particles in the cloud interior. This often leads to electron bunching and dynamical formation of a density shock in the outer regions of the bunch. We develop analytic fluid models to capture these effects, and the analytic predictions are validated by PIC and N-particle simulations. In the space-charge dominated regime, two and three dimensional systems with Gaussian initial densities show bunching and a strong shock response, while one dimensional systems do not; moreover these effects can be tuned using the initial particle density profile and velocity chirp.'
author:
- 'B. S. Zerbe'
- 'X. Xiang'
- 'C.-Y. Ruan'
- 'S. M. Lund'
- 'P. M. Duxbury'
bibliography:
- 'CoulombDynamics.bib'
title: Dynamical bunching and density peaks in expanding Coulomb clouds
---
Introduction
============
Non-neutral plasma systems arise in a variety of physical contexts ranging from astrophysics[@Arbanil:2014_charged_star_review; @Maurya:2015_charged_sphere; @Yousefi:2014_dust_aggregates]; accelerator technologies [@Bacci:2014_plasma_acceleration; @Boine:2015_intense_beams; @Whelan:2016_MRI; @Bernal:2016_recirculator]; ion and neutron production [@Bulanov:2002_charged_beam_generation; @Fukuda:2009_species_generation; @Esirkepov:2004_highly_efficient_ion_generation; @Kaplan:2015_preprint; @Parks:2001_neutron_production; @Bychenkov_2015_review]; sources for electron and ion microscopy[@Murphy:2014_cold_ions; @Gahlmann:2008_ultrashort]; to high power vacuum electronics[@Booske:2011_vacuum_review; @Liu:2015_maximal_charge; @Zhang:2016_review]. Understanding the dynamics of spreading of such systems is critical to the design of next-generation technologies, and simple analytic models are particularly helpful for instrument design. As a result, substantial theoretical efforts have already been made in this vein[@Jansen:1988_book; @Reiser:1994_book; @Batygin:2001_self; @Bychenkov:2005_coulomb_explosion; @Grech:2011_coulomb_explosion; @Kaplan:2003_shock; @Kovalev:2005_kinetic_spherically_coulomb_explosion; @Last:1997_analytic_coulomb_explosion; @Eloy:2001_coulomb_explosion; @Krainov:2001_ce_dynamics; @Morrison:2015_slow_down_dynamics; @Boella:2016_multiple_species]. Specifically, the free expansion of clouds of charged single-species particles starting from rest has been well studied both analytically and computationally[@Last:1997_analytic_coulomb_explosion; @Eloy:2001_coulomb_explosion; @Grech:2011_coulomb_explosion; @Batygin:2001_self; @Degtyareva:1998_gaussian_pileup; @Siwick:2002_mean_field; @Qian:2002_fluid_flow; @Reed:2006_short_pulse_theory; @Collin:2005_broadening; @Gahlmann:2008_ultrashort; @Tao:2012_space_charge; @Portman:2013_computational_characterization; @Portman:2014_image_charge; @Michalik:2006_analytic_gaussian], and a number of studies have found evidence of the formation of a region of high density, often termed a “shock”, on the periphery of the clouds under certain conditions[@Grech:2011_coulomb_explosion; @Kaplan:2003_shock; @Kovalev:2005_kinetic_spherically_coulomb_explosion; @Last:1997_analytic_coulomb_explosion; @Murphy:2014_cold_ions; @Reed:2006_short_pulse_theory; @Degtyareva:1998_gaussian_pileup].
One application of these theories that is of particular current interest is to high-density electron clouds used in next-generation ultrafast electron microscopy (UEM) development[@King:2005_review; @Hall:2014_report; @Williams:2017_longitudinal_emittance]. Researchers in the UEM and ultrafast electron diffraction (UED) communities have conducted substantial theoretical treatments of initially extremely short bunches of thousands to, ultimately, hundreds of millions of electrons that operate in a regime dominated by a virtual cathode (VC) limit[@Valfells:2002_vc_limit; @Luiten:2004_uniform_ellipsoidal; @King:2005_review; @Miller:2014_science_review; @Tao:2012_space_charge], which is akin to the Child-Langmuir current limit for beams generated under steady-state conditions[@Zhang:2016_review]. These short bunches are often generated by photoemission, and such bunches inherit an initial profile similar to that of the driving laser pulse. Typically, the laser pulse has an in-plane, “transverse” extent of order one hundred microns and a duration of the order of fifty femtoseconds, and these parameters translate into an initial electron bunch with a similar transverse extent and sub-micron width[@King:2005_review]. After photoemission, the electrons are extracted longitudinally using either a DC or an AC field, typically in the 1-10 MV/m[@Srinivasan:2003_UED; @Ruan:2009_nanocrystallography; @van_Oudheusden:2010_rf_compression_experiment; @Sciaini:2011_review] through tens of MV/m[@Musumeci:2010_single_shot; @Weathersby:2015_slac; @Murooka:2011_TED] ranges, respectively. However, the theoretical treatments of such “pancake-like” electron bunch evolution have largely focused on the longitudinal dimension[@Luiten:2004_uniform_ellipsoidal; @Siwick:2002_mean_field; @Qian:2002_fluid_flow; @Reed:2006_short_pulse_theory; @Collin:2005_broadening], and the few studies looking at transverse dynamics have either assumed a uniform transverse distribution[@Collin:2005_broadening] or have looked at the effect of a smooth Gaussian-to-uniform evolution of the transverse profile on the evolution of the pulse in the longitudinal direction[@Reed:2006_short_pulse_theory; @Portman:2013_computational_characterization]. Of specific note, only one analytic study found any indication, a weak longitudinal signal, of a shock[@Reed:2006_short_pulse_theory].
On the other hand, an attractive theoretical observation is that an ellipsoidal cloud of cool, uniformly distributed charged particles has a linear electric field within the ellipsoid, which results in maintenance of the uniform charge density as the cloud spreads [@Grech:2011_coulomb_explosion]. In the accelerator community, such a uniform distribution is a prerequisite for employing techniques such as emittance compensation[@Rosenzweig:2006_emittance_compensation], as well as forming the basis of other theoretical analyses. It has long been proposed that such a uniform ellipsoid may be generated through proper control of the transverse profile of a short charged-particle bunch emitted from a source into vacuum[@Luiten:2004_uniform_ellipsoidal], and experimental results have shown that an electron cloud emitted from a photocathode and rapidly accelerated into the highly-relativistic regime can develop into a final ellipsoidal profile characteristic of a uniform charge distribution[@Musucemi:2008_generate_uniform_ellipsoid]. Contrary to expectations from the free expansion work, but consistent with the longitudinal analyses, the measured shadow of this profile lacks any indication of a peripheral region of high-density shocks. However, recent work has indicated that a substantial high-density region may indeed form in the transverse direction[@Williams:2017_transverse_emittance], and N-particle simulation results, as demonstrated in Fig. (\[fig:distribution substructure\]), show a rapidly developing, substantial ring-like shock circumscribing the median of the bunch when the bunch starts from sufficient density. Moreover, this shock corresponds to a region of exceedingly low brightness, or conversely high local temperature, and experiments show that removal of this region results in a dramatic increase in the bunch brightness[@Williams:2017_transverse_emittance]. We term this effect “Coulomb cooling”, as it is similar to evaporative cooling in that the “hottest” charged particles are removed from the distribution’s edge, leaving behind a higher-quality, cooler bunch.
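A minimal N-particle sketch of this transverse bunching effect (a dimensionless 2D cartoon with softened pairwise repulsion; not the simulation codes cited in this paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2D Coulomb explosion in dimensionless units (charges, masses and the
# Coulomb constant all set to 1): a cold Gaussian disk integrated with
# kick-drift-kick leapfrog. With a Gaussian initial profile the interior is
# accelerated outward faster than the rim, so particles can pile up at
# large radius -- a cartoon of the transverse ring discussed above.
N = 300
r = rng.normal(0.0, 1.0, size=(N, 2))    # initial Gaussian density
v = np.zeros_like(r)                      # cold start (no velocity chirp)

def acc(r):
    d = r[:, None, :] - r[None, :, :]             # r_i - r_j
    dist3 = (np.sum(d**2, axis=-1) + 1e-2)**1.5   # softened |r_ij|^3
    return np.sum(d / dist3[..., None], axis=1)   # repulsive sum over j

dt = 5e-4
a = acc(r)
for _ in range(1500):
    v += 0.5 * dt * a
    r += dt * v
    a = acc(r)
    v += 0.5 * dt * a

rad = np.linalg.norm(r, axis=1)
hist, edges = np.histogram(rad, bins=20)
print(hist)   # a count peak away from the center indicates edge bunching
```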
To understand Coulomb cooling, we first investigate this transverse shock. Here we demonstrate the formation of a ring-like shock within N-particle simulations [@Berz:1987_cosy; @Zhang:
|
{
"pile_set_name": "ArXiv"
}
| null | null |
---
abstract: 'One studies Cremona monomial maps by combinatorial means. Among the results is a simple integer matrix theoretic proof that the inverse of a Cremona monomial map is also defined by monomials of fixed degree, and moreover, the set of monomials defining the inverse can be obtained explicitly in terms of the initial data. A neat consequence is drawn for the plane Cremona monomial group, in particular the known result saying that a plane Cremona (monomial) map and its inverse have the same degree. Included is a discussion about the computational side and/or implementation of the combinatorial invariants stemming from these questions.'
address:
- |
Departamento de Matemática\
Universidade Federal de Pernambuco\
50740-540 Recife\
Pe\
Brazil
- |
Departamento de Matemáticas\
Centro de Investigación y de Estudios Avanzados del IPN\
Apartado Postal 14–740\
07000 Mexico City, D.F.
author:
- Aron Simis
- 'Rafael H. Villarreal'
title: Combinatorics of Cremona monomial maps
---
[^1]
Introduction
============
The expression “birational combinatorics” has been introduced in [@birational-linear] to mean the combinatorial theory of rational maps ${\mathbb{P}}^{n-1}\dasharrow {\mathbb{P}}^{m-1}$ defined by monomials, along with natural integer arithmetic criteria for such maps to be birational onto their image varieties. As claimed there, both the theory and the criteria were intended to be a simple transcription of the initial geometric data. Yet another goal is to write characteristic-free results. Thus, here too one works over an arbitrary field in order that the theory be essentially independent of the nature of the field of coefficients, especially when dealing with squarefree monomials.
In this paper, we stick to the case where $m=n$ and deal with Cremona maps. An important step has been silently taken for granted in the background of [@birational-linear Section 5.1.2], namely, that the inverse of a Cremona monomial map is also defined by monomials. To be fair this result can be obtained via the method of [@birational-linear Section 3] together with the criterion of [@bir2003]; however the latter gives no hint on how to derive explicit data from the given ones.
Here we add a few steps to the theory, by setting up a direct way to convert geometric results into numeric or combinatorial data regardless of the nature of the ground field. The conversion allows for an incursion into some of the details of the theory of plane Cremona maps defined by monomials. In particular, it is shown that the group of such maps under composition is completely understood without resorting to the known results about general plane Cremona maps. Thus, one shows that this group is generated by two basic monomial quadratic maps, up to reordering of variables in the source and the target. The result is not a trivial consequence of Noether’s theorem since the latter requires composing with projective transformations, which is out of the picture here. Moreover, the known proofs of Noether’s theorem (see, e.g., [@alberich]) reduce to various special situations, passing through the celebrated de Jonquières maps, which are rarely monomial.
The well-known result that a plane Cremona map and its inverse have the same degree is shown here for such monomial maps by an easy numerical counting. The argument for general plane Cremona maps is not difficult but requires quite a bit of geometric insight and preparation (see, e.g., [@alberich Proposition 2.1.12]).
Monomial Cremona maps have been dealt with in [@pan] and in [@Kor], but the methods and some of the goals are different and have not been drawn upon here.
Tools of integer linear algebra
===============================
Recall that if $a=(a_1,\ldots,a_n)\in {\mathbb R}^n$, its [*support*]{} is defined as ${\rm supp}(a)=\{i\, |\, a_i\neq 0\}$. Note that we can write $a=a^+-a^-$, where $a^+$ and $a^-$ are two non-negative vectors with disjoint support. The vectors $a^+$ and $a^-$ are called the positive and negative part of $a$ respectively. Following a familiar notation we write $|a|=a_1+\cdots+a_n$.
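These notational conventions translate directly into code; as a small sketch (with 0-based indices, unlike the paper's 1-based convention), the support, the decomposition $a=a^+-a^-$ and $|a|$ can be computed as follows.

```python
import numpy as np

def supp(a):
    """Support of a vector (0-based indices of nonzero entries)."""
    return {i for i, ai in enumerate(a) if ai != 0}

def pos_neg_parts(a):
    """Write a = a_plus - a_minus with nonnegative parts of disjoint support."""
    a = np.asarray(a)
    return np.maximum(a, 0), np.maximum(-a, 0)

a = [3, 0, -2, 1]
ap, am = pos_neg_parts(a)
print(supp(a), ap, am, sum(a))   # {0, 2, 3} [3 0 0 1] [0 0 2 0] |a| = 2
```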
The following result is essentially embodied in the treatment given in the preliminaries of [@birational-linear]. For easy reference we chose to isolate it along with its complete proof.
\[starbucks-upstairs\] Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm
d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then $A^{-1}(e_i-e_j)\in\mathbb{Z}^n$ for all $i,j$.
Fixing indices $i,j$, there are $\lambda_1,\ldots,\lambda_n$ in $\mathbb{Q}$ such that $A^{-1}(e_i-e_j)=\sum_{k=1}^n\lambda_ke_k$. Notice that $A^{-1}(e_i)$ is the $i$[*th*]{} column of $A^{-1}$. Set $\mathbf{1}=(1,\ldots,1)$. Since $\mathbf{1}A=d\mathbf{1}$, we get $\mathbf{1}/d=\mathbf{1}A^{-1}$. Therefore $|A^{-1}(e_i)|=|A^{-1}(e_j)|=1/d$ and $\sum_k\lambda_k=0$. Then we can write $$A^{-1}(e_i-e_j)=\sum_{k=2}^n\lambda_k(e_k-e_1)\ \Longrightarrow\
e_i-e_j=\sum_{k=2}^n\lambda_k(v_k-v_1).$$ Thus there is $0\neq s\in\mathbb{N}$ such that $s(e_i-e_j)$ belongs to $\mathbb{Z}\{v_1-v_k\}_{k=2}^n$, the subgroup of $\mathbb{Z}^n$ generated by $\{v_1-v_k\}_{k=2}^n$. By [@birational-linear Lemma 2.2 and Theorem 2.6], the quotient group $\mathbb{Z}^n/\mathbb{Z}\{v_1-v_k\}_{k=2}^n$ is free, in particular it has no nonzero torsion elements. Then we can write $$e_i-e_j=\eta_2(v_2-v_1)+\cdots+\eta_n(v_n-v_1),$$ for some $\eta_i$’s in $\mathbb{Z}$. Since $\mathbb{Z}\{v_1-v_k\}_{k=2}^n$ is also free (of rank $n-1$), the vectors $v_2-v_1,\ldots,v_n-v_1$ are linearly independent. Thus $\lambda_k=\eta_k\in \mathbb{Z}$ for all $k\geq 2$, hence ultimately $A^{-1}(e_i-e_j)\in\mathbb{Z}^n$.
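As a quick computational illustration of the lemma (not a substitute for the proof), one can verify the integrality of $A^{-1}(e_i-e_j)$ on the exponent matrix of the standard quadratic plane Cremona map $(x,y,z)\dashrightarrow (xy,xz,yz)$, where $d=2$:

```python
from sympy import Matrix, eye

# Exponent matrix of the quadratic map (x, y, z) -> (xy, xz, yz):
# columns v_1, v_2, v_3 with |v_i| = d = 2 and det(A) = -2.
A = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])
assert abs(A.det()) == 2

Ainv, n = A.inv(), A.rows
for i in range(n):
    for j in range(n):
        w = Ainv * (eye(n).col(i) - eye(n).col(j))
        assert all(entry.is_integer for entry in w), (i, j, w)
print("A^(-1)(e_i - e_j) is integral for all i, j")
```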
Next we state our main result of integer linear algebra nature. Its geometric translation and applications will be given in Section \[cremona\].
\[ipn-ufpe\] Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm
d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then there are unique vectors $\beta_1,\ldots,\beta_n,\gamma \in \mathbb{N}^n$ such that the following two conditions hold[:]{}
1. $A\beta_i=\gamma+e_i$ for all $i$, where $\beta_i,\gamma$ and $e_i$ are regarded as column vectors$\,$[;]{}
2. The matrix $B$ whose columns are $\beta_1,\ldots,\beta_n$ has at least one zero entry in every row.
Moreover, $\det(B)=\pm (|\gamma|+1)/d=\pm |\beta_i|$ for all $i$.
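Before the proof, here is a computational check of the statement on the same quadratic example; the candidate vector $\gamma=(1,1,1)$ is an assumption verified by the assertions below, rather than derived by the procedure implicit in the proof.

```python
from sympy import Matrix, eye, ones

A = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])                  # exponents of (x,y,z) -> (xy, xz, yz), d = 2
d, gamma = 2, Matrix([1, 1, 1])          # gamma is a candidate, checked below

B = A.inv() * (gamma * ones(1, 3) + eye(3))   # columns beta_i = A^(-1)(gamma + e_i)

assert all(x.is_integer and x >= 0 for x in B)     # beta_i lies in N^n (condition 1)
assert all(min(B.row(i)) == 0 for i in range(3))   # a zero entry in every row (condition 2)
assert abs(B.det()) == (sum(gamma) + 1) / d        # det(B) = +-(|gamma|+1)/d
assert all(sum(B.col(i)) == abs(B.det()) for i in range(3))   # |beta_i| = |det(B)|
print(B == A)   # True: this quadratic map is its own inverse, sigma o sigma = xyz * id
```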
First we show the uniqueness. Assume that $\beta_1',\ldots,\beta_n',\gamma'$ is a set of vectors in $\mathbb{N}^n$ such that: (a’) $A\beta_i'=\gamma'+e_i$ for all $i$, and (b’) The matrix $B'$ whose column vectors are $\beta_1',\ldots,\beta_n'$ has at least one zero entry in every row. Let $\Delta=(\Delta_i)$ and $\Delta'=(\Delta_i
---
abstract: 'Solid-state qubits hold the promise to achieve unmatched combination of sensitivity and spatial resolution. To achieve their potential, the qubits need however to be shielded from the deleterious effects of the environment. While dynamical decoupling techniques can improve the coherence time, they impose a compromise between sensitivity and bandwidth, since to higher decoupling power correspond higher frequencies of the field to be measured. Moreover, the performance of pulse sequences is ultimately limited by control bounds and errors. Here we analyze a versatile alternative based on continuous driving. We find that continuous dynamical decoupling schemes can be used for AC magnetometry, providing similar frequency constraints on the AC field and improved sensitivity for some noise regimes. In addition, the flexibility of phase and amplitude modulation could yield superior robustness to driving errors and a better adaptability to external experimental scenarios.'
author:
- Masashi Hirose
- 'Clarice D. Aiello'
- Paola Cappellaro
bibliography:
- '../../Biblio.bib'
title: Continuous dynamical decoupling magnetometry
---
Solid-state qubits have emerged as promising quantum sensors, as they can be fabricated in small volumes and brought close to the field to be detected. Notably, Nitrogen-Vacancy (NV) centers in nano-crystals of diamond [@Jelezko02] have been applied for high-sensitivity detection of magnetic [@Taylor08; @Maze08; @Balasubramanian08] and electric fields [@Dolde11] and could be used either as nano-scale scanning tips [@Maletinsky12] or even in vivo due to their small dimensions and low cytotoxicity [@McGuinness11]. Unfortunately, solid-state qubits are also sensitive probes of their environment [@Bar-Gill12; @Bylander11] and this leads to rapid signal decay, which limits the sensor interrogation time and thus its sensitivity. Dynamical decoupling (DD) methods [@Carr54; @Viola99b; @Uhrig07; @Khodjasteh07; @Biercuk11] have been adopted to prolong the coherence time of the sensor qubits [@Taylor08; @deLange11; @Bar-Gill12; @Pham12]. Although DD techniques prevent measuring constant (DC) fields, they provide superior sensitivity to oscillating AC fields, as they can increase the sensor coherence time by orders of magnitude. The sensitivity is maximized by carefully matching the decoupling period to the AC field; conversely, one can study the response of a decoupling scheme to fields of various frequencies, thus mapping out its bandwidth. Still, the refocusing power of pulsed DD techniques is ultimately limited by pulse errors and bounds in the driving power. Here we investigate an alternative strategy, based on continuous dynamical decoupling (CoDD), that has the potential to overcome these limitations.
We consider the problem of measuring a small external field, coupled to the sensor by a Hamiltonian: ${{\mathcal{H}}}_{b}=\gamma b(t) S_{z}$, where $S_{z}$ is the spin operator of the quantum sensor. For example, $b(t)$ can be an external magnetic field and $\gamma$ the spin’s gyromagnetic ratio. The figure of merit for a quantum sensor is the smallest field $\delta b_{min}$ that can be read out during a total time $\mathbf{t}$, that is, the sensitivity $\eta=\delta b_{min}\sqrt{\mathbf{t}}$. We use this metric to compare pulsed and continuous DD schemes and show how CoDD can offer an advantage for some noise regimes.
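To make the metric concrete, the following Python sketch estimates $\eta$ from a signal model by a finite-difference derivative; the Ramsey-like signal, the single-shot projection noise, and the NV-like gyromagnetic ratio are illustrative assumptions, and decoherence (which sets the optimal interrogation time) is deliberately omitted.

```python
import numpy as np

gamma = 2 * np.pi * 28e9      # assumed NV-like gyromagnetic ratio [rad s^-1 T^-1]

def sensitivity(signal, t, b0=0.0, db=1e-12):
    """eta = DeltaS / |dS/db| * sqrt(t), with single-shot projection noise for DeltaS."""
    S = signal(b0, t)
    dSdb = (signal(b0 + db, t) - signal(b0 - db, t)) / (2 * db)  # finite difference
    return np.sqrt(S * (1 - S)) / abs(dSdb) * np.sqrt(t)

def S_ramsey(b, t):
    """Assumed idealized fringe, biased to the point of maximum slope; no decoherence."""
    return 0.5 * (1 + np.sin(gamma * b * t))

for t in (1e-6, 1e-5, 1e-4):
    print(f"t = {t:.0e} s   eta = {sensitivity(S_ramsey, t):.2e} T/sqrt(Hz)")
# eta scales as 1/(gamma*sqrt(t)); in practice decoherence and the readout
# inefficiency C (introduced below) cut off the gain at long t.
```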
The principle of DD schemes rests on the spin echo sequence, which refocuses unwanted phase accumulation due to a slow bath by reversing the system evolution with control pulses. More complex DD sequences can in principle extend the coherence time indefinitely, by increasing the number of pulses. In practice, however, a large number of imperfect, finite-width pulses leads to an accumulation of errors and degrades DD performance [@Khodjasteh07; @Khodjasteh05; @Wang12]. CoDD was first introduced in the context of NMR to mitigate pulse errors [@Burum81; @Boutis03] and has since led to many schemes, such as composite pulses [@Shaka83b; @Levitt86], dynamically corrected gates [@Khodjasteh09] and optimized modulations [@Jones12]. In general, phase and amplitude modulation of the continuous driving allows great flexibility and CoDD can achieve high decoupling power. Here we consider only two schemes, constant continuous driving (C) and Rotary Echo (RE) [@Solomon57; @Aiello12; @Laraoui11], as their periodicity allows an easier use for AC magnetometry (see Fig. \[fig:Sequence\]); we will compare these schemes to the simplest pulsed DD scheme, periodic dynamical decoupling (PDD).
![Pulse sequences for four AC magnetometry schemes: PDD (P), constant driving (C), RE with optimal frequency (R$_k^{\text{opt}}$) and spin-locking (S). Blue boxes represent microwave driving, with phase (x and y) as indicated.[]{data-label="fig:Sequence"}](Sequence2){width="45.00000%"}
As an example, we compute the signal and sensitivity of AC magnetometry under RE, but similar derivations apply for the other schemes. The RE sequence consists of a continuous on-resonance driving field of constant amplitude $\Omega$ and phase inverted at periodic intervals (see Fig. \[fig:Sequence\]). RE is parametrized by the angle $\theta=\Omega T/2$, where $T$ is the sequence period. While RE is usually employed to refocus errors in the driving field, for $\theta=2\pi k$ the sequence also refocuses dephasing noise, with performance depending on both $k$ and the Rabi frequency. We consider the evolution of a sensor qubit under a sequence of $2\pi k$-RE and in the presence of an external AC magnetic field of frequency $\omega$ whose magnitude $b$ is to be sensed: $$\mathcal{H}(t) = \Omega \mathbb{SW}(t)S_x + \gamma b\cos(\omega t + \phi)S_z,$$ where $\mathbb{SW}(t)$ is the square wave of period $T = {4\pi k}/{\Omega}$. In the toggling frame of the driving field, the Hamiltonian becomes $$\widetilde{\mathcal{H}}(t)\!=\!\frac{\gamma b\cos(\omega t+\phi)}{2}[ \cos(\Omega t)S_z-\mathbb{SW}(t)\sin(\Omega t)S_y ].$$ We consider only the cases where $\phi=0$ and $\omega T=2m\pi$, with $m$ an odd integer, since as we show below this yields good sensitivities. Under this assumption $\tilde{\mathcal{H}}(t)$ is periodic and for small fields $b$ the evolution operator can be well approximated from a first-order average Hamiltonian over the period $T$, $\overline{\mathcal{H}} \approx \frac{1}{T}\int_{0}^{T}\tilde{\mathcal{H}}(t)dt=\gamma\overline b\,S_y$.
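The first-order average can also be evaluated numerically; the sketch below integrates the displayed toggling-frame Hamiltonian over one period. The sign convention for $\mathbb{SW}(t)$ (here $+1$ on the first half-period) and overall factors are assumptions, so only the qualitative features, a vanishing $S_z$ average and a nonzero residual $S_y$ coupling, should be read off.

```python
import numpy as np
from scipy.integrate import quad

def avg_coeffs(k, m, Omega=1.0, gb=1.0):
    """Coefficients of S_z and S_y in the period-averaged toggling-frame Hamiltonian.

    Assumes SW(t) = +1 on the first half-period, -1 on the second; phi = 0."""
    T = 4 * np.pi * k / Omega
    omega = 2 * np.pi * m / T                     # enforces omega * T = 2 m pi
    sw = lambda t: 1.0 if (t % T) < T / 2 else -1.0
    cz = quad(lambda t: np.cos(omega * t) * np.cos(Omega * t), 0, T, points=[T / 2])[0]
    cy = quad(lambda t: -np.cos(omega * t) * sw(t) * np.sin(Omega * t), 0, T, points=[T / 2])[0]
    return gb / 2 * cz / T, gb / 2 * cy / T

for k in (1, 2, 3):
    cz, cy = avg_coeffs(k, m=2 * k - 1)
    print(f"k={k}:  S_z coeff = {cz:+.2e}   S_y coeff = {cy:+.4f}")
# The S_z part averages to ~0 while a finite S_y coupling gamma*b_bar survives,
# which is the signal term; the exact prefactor depends on the SW(t) convention.
```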
If $m = 1$, we define $\omega_{low} = \frac{\Omega}{2 k}$, which, for a fixed $\Omega$, is easily adjustable by changing the echo angle $2\pi k$. Setting instead $m = (2k-1)$, we define $\omega_{opt} = \frac{\Omega (2k-1)}{2k}$, which yields $\overline b=4bk/[\pi(4k-1)]$ and attains the best sensitivity of the method. The sensitivity, obtained as $\eta(t) = \displaystyle\lim_{b \rightarrow 0}\textstyle\frac{\Delta\mathcal{S}}{|\frac{\partial \mathcal{S}}{\partial b}|}\sqrt{t}$, where $\mathcal{S}$ is the signal and $\Delta\mathcal{S}$ its shot-noise limited uncertainty, depends on $\overline b$, that is, on the averaging of the AC field over the sequence period due to the DD modulation. We compare the performance of both $2\pi k$-RE schemes to PDD (optimum $\omega = {2\pi}/{t}$, $\phi = {\pi}/{2}$) and a constant modulation with $\omega=\Omega$ (see Fig. \[fig:Sequence\]). We obtain for the schemes considered: $$\renewcommand{\arraystretch}{2}
\begin{array}{lclc}
\eta^{opt}_{R_k}=\eta\frac{4k-1}{2k} &\quad (\theequation.a) &\qquad \eta_P=\eta&\quad (\theequation.b)\\
\eta^{low}_{R_k}=\eta\frac{4k^2-1}{2k} &\quad (\theequation.c) &\qquad \eta_C=\frac4\pi\eta&\quad (\theequation.d),
\end{array}\nonumber \label{eq:sensitivity}
\addtocounter{equation}{1}$$ where $\eta=\frac{\pi }{2\gamma C \sqrt{t}}$, with $C$ a parameter capturing inefficiencies in the sensor readout [@Taylor08]. Here $R_k$ labels a $2\pi k$-RE scheme, $P$ the PDD scheme and $C$ the constant modulation (see Figure \[fig:Sequence\]). A fourth operating scheme can be obtained by a “spin-locking” sequence
YUMS 97-018, DO-TH 97-15, SNUTP 97-089\
hep-ph/9706451 (modified 27 June 1997)\
**FLAVOR DEMOCRACY AND QUARK MASS MATRICES [^1]**
C. S. Kim
*Department of Physics, Yonsei University, Seoul 120-749, Korea*
E-mail: kim@cskim.yonsei.ac.kr
and
G. Cvetič
*Department of Physics, University of Dortmund, Dortmund, Germany*
E-mail: cvetic@doom.physik.uni-dortmund.de
Flavor Democracy at Low Energy
==============================
In the standard electroweak theory, the hierarchical pattern of the quark masses and their mixing remains an outstanding issue. While a gauge interaction is characterized by its universal coupling constant, the Yukawa interactions have as many coupling constants as there are fields coupled to the Higgs boson. There is no apparent underlying principle which governs the hierarchy of the various Yukawa couplings, and as a result, the Standard Model of strong and electroweak interactions can predict neither the quark (or lepton) masses nor their mixing. This situation can be improved by assuming a universal Yukawa interaction – the resulting spectrum consists then of one massive and two massless quarks in each (up and down) sector in the three generation Standard Model. Flavor–democratic (FD) quark mass matrices, and a perturbed form of such FD matrices, were introduced already in 1978 by Harari, Haut and Weyers[@0)] in a left-right symmetric framework. Flavor democracy has recently been suggested by Koide, Fritzsch and Plankl[@1)], as well as Nambu[@[3]] and many other authors[@[3]] as an analogy with the BCS theory of superconductivity. In this Section we will discuss how this flavor symmetry can be broken by a slight perturbation at low energies, in order to reproduce the quark masses and the CKM matrix[@3)]. As a result, predictions for the top quark mass and for the CP violation parameter $J_{CP}$ are obtained. This Section is based on a work by Cuypers and Kim[@11)].
Considering only quark fields, the gauge invariant Yukawa Lagrangian is $${\cal L}_{\rm Y} =
- \sum_{i,j} (\bar Q'_{iL}~\Gamma^D_{ij}~d'_{jR}~\phi~+~
\bar Q'_{iL}~\Gamma^U_{ij}~u'_{jR}~\tilde \phi~+~\mbox{h.c.}) \ .
\label{eq1}$$ Here, the primed quark fields are in a flavor \[$SU(2)$\] basis of the $SU(2) \times U(1)$ electroweak gauge group – the left-handed quarks form doublets under the $SU(2)$ transformation, $\bar Q'_L=(\bar u'_L,~\bar d'_L)$, and the right-handed quarks are singlets. The indices $i$ and $j$ run over the number of fermion generations. The Yukawa coupling matrices $\Gamma^{U,D}$ are arbitrary and not necessarily diagonal. After spontaneous symmetry breaking, the Higgs field $\phi$ acquires a nonvanishing vacuum expectation value (VEV) $v$ which yields quark mass terms in the original Lagrangian $${\cal L}_{\rm mass} = - \sum_{i,j} (\bar d'_{iL}~M^D_{ij}~
d'_{jR}~+~\bar u'_{iL}~M^U_{ij}~u'_{jR}~+~\mbox{h.c.})
\ ,
\label{eq2}$$ and the quark mass matrices are defined as $$M^{U,D}_{ij} \equiv {v \over \sqrt{2}}~\Gamma^{U,D}_{ij}
\ .
\label{eq3}$$ Mass matrices $M^{U,D}$ are diagonalized by biunitary transformations involving unitary matrices $U^{U,D}_L$ and $U^{U,D}_R$, and the flavor eigenstates are transformed to physical mass eigenstates by the same unitary transformations, $$U^{U,D}_L~M^{U,D}~(U^{U,D}_R)^{\dagger} = M^{U,D}_{\rm diag}~~{\rm and}~~
U^U_{L,R}~u'_{L,R} = u_{L,R},~~U^D_{L,R}~d'_{L,R} = d_{L,R}~~.
\label{eq4}$$ Using the recent CDF data[@4)] of the physical top mass $m_t^{\rm phys.} \approx 175$ GeV, the diagonalized mass matrices $M^{U,D}_{\rm diag}$ at a mass scale of 1 GeV are $$M_{\rm diag}^U \approx m_t
\left[ \begin{array}{ccc}
2.5\times10^{-5} & & \\
& 0.006 & \\
& & 1
\end{array} \right]
\quad {\rm and} \quad
M_{\rm diag}^D \approx m_b
\left[ \begin{array}{ccc}
1.7\times10^{-3} & & \\
& 0.03 & \\
& & 1
\end{array} \right].
\label{eq5}$$ The first two eigenvalues in both matrices are almost zero (almost degenerate) when compared to the eigenvalue of the third generation. In order to account for this large mass gap, one can use mass matrices which have in a flavor basis the flavor–democratic (FD) form $$M^U_0 = \frac{m_t}{3}
\left[ \begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{array} \right]
~~{\rm and}~~
M^D_0 = \frac{m_b}{3}
\left[ \begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{array} \right]
~~.
\label{eq6}$$ Diagonalization leads to a pattern similar to the experimental spectrum (5) $$M_{\rm diag}^U = m_t
\left[ \begin{array}{ccc}
0 & & \\
& 0 & \\
& & 1
\end{array} \right]
\qquad {\rm and} \qquad
M_{\rm diag}^D = m_b
\left[ \begin{array}{ccc}
0 & & \\
& 0 & \\
& & 1
\end{array} \right]
\ .
\label{eq7}$$ This symmetric ansatz substantially reduces the arbitrariness in the choice of the Yukawa Lagrangian. Each (up or down) quark sector is determined in this pure FD approximation by a single universal Yukawa coupling.
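Numerically, the biunitary diagonalization of eq. (4) is precisely a singular value decomposition, and applying it to the FD matrices (6) reproduces the spectrum (7); this is a minimal sketch, and the bottom-quark mass used below is an illustrative value.

```python
import numpy as np

m_t, m_b = 175.0, 4.3        # GeV; the m_b value is an illustrative assumption

MU0 = m_t / 3 * np.ones((3, 3))   # flavor-democratic mass matrices of eq. (6)
MD0 = m_b / 3 * np.ones((3, 3))

# The biunitary diagonalization U_L M U_R^dagger = M_diag of eq. (4) is an SVD:
# numpy returns M = U @ diag(s) @ Vh, so one may take U_L = U^dagger and U_R = Vh.
for name, M, m in (("up", MU0, m_t), ("down", MD0, m_b)):
    U, s, Vh = np.linalg.svd(M)
    print(name, np.round(s / m, 12))   # -> [1. 0. 0.]: one massive, two massless quarks
```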
To induce nonzero masses for the lighter quarks and to reproduce the experimental CKM matrix, small perturbations have to be added to the universal Yukawa interactions. One possibility is to analyze effects of the following two kinds of independent perturbation matrices $$P_1 =
\left[ \begin{array}{ccc}
\alpha & 0 & 0 \\
0 & \beta & 0 \\
0 & 0 & 0
\end{array} \right]
\qquad {\rm and} \qquad
P_2 =
\left[ \begin{array}{ccc}
0 & a & 0 \\
a & 0 & b \\
0 & b & 0
\end{array} \right]
\ ,
\label{eq8}$$ $\alpha,~\beta,~a$ and $b$ being real parameters to be determined from the quark masses. For simplicity, these perturbations can be applied separately. Quark mass matrices (in a flavor basis) are then sums of the dominant universal FD matrices (6) plus one kind of the perturbation matrices (8). One then