---
abstract: 'We describe the accelerated propagation wave arising from a non-local reaction-diffusion equation. This equation originates from an ecological problem, where accelerated biological invasions have been documented. The analysis is based on the comparison of this model with a related local equation, and on the analysis of the dynamics of the solutions of this second model thanks to probabilistic methods.'
author:
- 'N. Berestycki, C. Mouhot, G. Raoul'
title: 'Existence of self-accelerating fronts for a non-local reaction-diffusion equation'
---
Introduction and results {#sec:introduction}
========================
Biological invasions happen when a species recently introduced in a location succeeds in establishing itself and spreading in this new environment. These introductions are usually either a consequence of human transportation systems [@Carlton], or a consequence of climate change [@Kovats]. Biological invasions are occurring at an unprecedented rate [@Hulme], and have an important impact on e.g. biodiversity [@Sakai] and human well-being [@Pejchar; @Juliano]. Predicting the dynamics of those invasions is an issue that requires (among other approaches) the development of new mathematical methods and results [@Clark; @Kolar].
In this study, we are interested in a particular phenomenon that may happen during biological invasions [@Thomas2; @Mack; @Edmonds]: the dispersion of the individuals increases during the invasion. As a result, the speed of the invasive front increases, and often keeps accelerating as long as the invasion progresses [@Mack]. The best-documented case is the biological invasion of cane toads in Australia [@Phillips; @Urban]. The amphibians were introduced in Australia in 1935 as a (failed) attempt to control beetle populations in cane plantations. Since then, cane toads have been invading large coastal areas at an accelerating speed: the invasion started at a speed of 10 kilometres a year, and has continuously accelerated to the impressive speed of 55 kilometres a year today [@Urban]. The mechanism for this acceleration is documented [@Lindstrom]: the individuals close to the invasion front have an anomalously high dispersion rate, and drive the invasion front.
A model introduced in 1937 by Fisher in [@Fisher] (and simultaneously in [@KPP]) has proven very useful to describe biological invasions [@Shigesada; @Hastings]. This model describes the dynamics of the density of a population. In a homogeneous environment, a population which is initially present only on a bounded set will propagate at an asymptotically constant speed [@Bramson], with a certain profile, called a travelling wave [@KPP]. The study of travelling waves and related propagation phenomena has prompted a large mathematical literature; we refer to [@Xin] for a review of this active field of research. Recently, more surprising dynamics have been uncovered: in [@Roques], it has been shown that a slowly decaying initial condition may lead to accelerating invasion fronts. Similar dynamics can be observed for compactly supported initial populations if the diffusion operator of the Fisher-KPP equation is replaced by a nonlocal dispersal operator with fat tails [@Kot; @Garnier], or by a fractional diffusion operator [@Coulon]. Finally, in [@Bouin], it was proven that similar dynamics can be observed when the diffusion operator is replaced by a kinetic operator modelling a run-and-tumble dynamics.
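The constant-speed propagation described above can be checked numerically. The sketch below, in which all grid sizes, tolerances and the step initial condition are our own illustrative choices (not taken from the paper), integrates the Fisher-KPP equation $u_t = D u_{xx} + r u(1-u)$ with $D=r=1$ by an explicit scheme and measures the front speed, which should come out close to the KPP value $2\sqrt{rD}=2$ (slightly below, consistent with Bramson's logarithmic correction):

```python
import numpy as np

def front_position(u, x, level=0.5):
    """Rightmost grid point where the density still exceeds `level`."""
    idx = np.where(u > level)[0]
    return x[idx[-1]] if len(idx) else x[0]

def kpp_front_speed(L=100.0, dx=0.5, dt=0.05, t1=15.0, t2=30.0):
    # Explicit Euler / centred finite differences for u_t = u_xx + u(1 - u).
    x = np.arange(0.0, L + dx, dx)
    u = (x < 10.0).astype(float)            # compactly supported step initial condition
    n1, n2 = int(t1 / dt), int(t2 / dt)
    pos1 = None
    for n in range(1, n2 + 1):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        lap[0] = lap[-1] = 0.0              # crude fixed ends (front stays far away)
        u = u + dt * (lap + u * (1 - u))
        if n == n1:
            pos1 = front_position(u, x)
    # average speed of the level set u = 1/2 between t1 and t2
    return (front_position(u, x) - pos1) / (t2 - t1)

print(kpp_front_speed())  # close to the asymptotic KPP speed 2
```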
The phenomenon that we want to describe here is different from the ones described above: in our case, the acceleration is due to the continual selection of individuals with enhanced dispersion abilities. To model such phenomena, involving both the spatial dynamics of the population and evolutionary phenomena (see [@Hairston; @Lambrinos]), the population should be structured by a phenotypic trait as well as a spatial variable. Starting from an individual-based model of such a population, a large population limit can be performed [@Fournier] to obtain a non-local parabolic equation. Related models have been studied in e.g. [@Prevost; @Alfaro]. The case where the phenotypic trait structuring the population is the dispersion rate of the population has been introduced in [@Benichou].
The model {#subsec:model}
---------
We will consider a population described by its density $v=v(t,x,\theta)$, where $t\geq 0$ is the time variable, $x\in\mathbb R$ a spatial location, and $\theta\in [1,\infty)$ a phenotypic trait. The dynamics of the population is given by the following model:
Model (NLoc): $$\begin{aligned}
\left\{
\begin{array}{l} {\displaystyle}\partial_t v = \frac \theta 2 \Delta_x v + \frac 1 2\Delta_\theta v + v \left( 1-
\langle v \rangle \right) \\[3mm] {\displaystyle}v = v(t,x,\theta), \ t \ge 0, \ x \in {{\mathbb R}}, \ \theta \ge 1, \\[3mm] {\displaystyle}\langle v \rangle(t,x,\theta) := \int_{\max(\theta- A,1)} ^{\theta+A} v(t,x,\omega) {{\, \mathrm d}}\omega , \\[4mm] {\displaystyle}v(0,x,\theta) = v_0(x,\theta) \ge 0,\\[3mm] {\displaystyle}\partial_\theta v(t,x,1)=0,\ t \ge 0, \ x \in {{\mathbb R}}.
\end{array}
\right.\end{aligned}$$ In this model, we assume that individuals diffuse through space at a rate given by the phenotypic trait $\theta$. This phenotypic trait $\theta\in[1,\infty)$ is itself subject to mutations, which appear in the model as a diffusion term in the variable $\theta$, at a constant rate $1$ independent of $x$ and $\theta$. We assume that the growth rate of the population in the absence of intra-specific competition is $1$, and is in particular independent of the spatial location $x$ and the phenotypic trait $\theta$. We assume that the individuals are in competition with the individuals present at the same location, provided their phenotypic traits are not too different, which is quantified by $A>0$. Note that (NLoc) would correspond to the model introduced in [@Benichou] if $A=\infty$; we will however always consider here that $A>0$ is finite. We also assume that the individuals reproduce asexually: during sexual reproduction, recombinations of the DNA strands happen, which leads to very different mathematical models [@MR].
From a modelling point of view, assuming that the phenotypic trait $\theta$ can take arbitrarily large values may appear surprising. It seems however to be a reasonable assumption in this context: an artificial selection experiment [@Weber] has shown that it is possible to increase the dispersion rate of flies a hundredfold in just a hundred generations, with little impact on the reproduction rate of the individuals. The field data obtained in [@Urban] suggest that the set of possible phenotypic traits does not have a limiting effect on the evolution of dispersal in cane toad populations. The data collected in [@Lindstrom] provide some indications on how such a rapid evolution of the dispersion rate is possible: tracking data of the cane toads show that the animals alternate resting phases and ballistic motion, and the individuals at the front of the invasion simply have longer ballistic phases and a higher directional persistence. These simple modifications of individual motion have a limited energetic cost, while greatly increasing the individuals' dispersion rate.
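The structure of (NLoc) can be made concrete with a naive explicit discretization. Every numerical ingredient below (the truncated domains, grid, time step, the value of $A$, and the initial population) is an illustrative assumption of this sketch, not an analysis from the paper; the nonlocal term $\langle v\rangle$ is computed as a windowed Riemann sum in $\theta$ and the Neumann condition at $\theta=1$ is imposed with a standard one-sided second difference:

```python
import numpy as np

# Explicit scheme for  v_t = (theta/2) v_xx + (1/2) v_{theta theta} + v (1 - <v>),
# <v> = integral of v over [max(theta - A, 1), theta + A].

def step(v, th, dx, dth, dt, A):
    vxx = (np.roll(v, -1, 0) - 2 * v + np.roll(v, 1, 0)) / dx**2
    vxx[0] = vxx[-1] = 0.0                             # crude far-field cutoff in x
    vtt = (np.roll(v, -1, 1) - 2 * v + np.roll(v, 1, 1)) / dth**2
    vtt[:, 0] = 2 * (v[:, 1] - v[:, 0]) / dth**2       # Neumann condition at theta = 1
    vtt[:, -1] = 0.0                                   # truncation of the theta-domain
    comp = np.empty_like(v)                            # nonlocal competition <v>(t, x, theta)
    for j, t in enumerate(th):
        mask = (th >= max(t - A, th[0])) & (th <= t + A)
        comp[:, j] = v[:, mask].sum(axis=1) * dth
    return v + dt * (0.5 * th[None, :] * vxx + 0.5 * vtt + v * (1 - comp))

x = np.arange(-20.0, 20.0, 0.5)
th = np.arange(1.0, 5.25, 0.25)
# small population localized in space and trait
v = np.where((np.abs(x)[:, None] < 2) & (np.abs(th[None, :] - 2) < 0.5), 0.1, 0.0)
m0 = v.sum()
for _ in range(200):                                   # integrate to t = 2
    v = step(v, th, dx=0.5, dth=0.25, dt=0.01, A=1.0)
print(v.sum() > m0)   # the small initial population grows, as the growth rate ~1 suggests
```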
The main results {#sec:main-results}
----------------
We make the following assumptions on the initial condition:
1. (Compact support in $\theta$.) We have that $v_0(x, \theta) = 0$ unless $\theta_{\min} \le \theta \le \theta_{\max}$ for some $\theta_{\max} >\theta_{\min} \ge 1$;
2. (Thin tail.) We have, for some $C,c>0$, $v_0(x, \theta) \le C \exp( - cx )$ uniformly over $x$ and $\theta$, and $\inf_{\mathbb R_-\times [\theta_{\min}', \theta_{\max}']}v_0>0$, for some $\theta_{\max}' >\theta_{\min}' \ge 1$;
3. (Regularity.) We assume that $\left((x,\theta)\mapsto \bar v_0(x,\theta):=v_0(x,|\theta|+1)\right)\in C^{3}(\mathbb R^2)$, that is $$\sum_{0\leq k+l\leq 3}\|\partial_x^k\partial_\theta^l \bar v_0\|_{L^\infty(\mathbb R^2)}<\infty.$$
We can now state the main result of this study, which describes the acceleration of the invasion front:
\[T:toads-nonlocal\] Let $v_0\in C^{2+\delta}(\mathbb R\times [1,\infty))$ with compact support in $\theta$, thin tail in $x$ and regular, as described in Subsection \[sec:main-results\]. Let $v(t,x, \theta)$ denote the corresponding solution of (NLoc). For $x \in {{\mathbb R}}$, let $S(t,x) = \sup_{\theta} v(t,x, \theta)$ and let $$\gamma_0 = \frac{2}{3}2^{1/4}.$$ We have for all $\gamma
---
abstract: 'We present new observations from Z-Spec, a broadband 185 - 305 GHz spectrometer, of sub-millimeter bright lensed sources recently detected by the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). Four out of five sources observed were detected in CO, and their redshifts measured using a new redshift finding algorithm that uses combinations of the signal-to-noise of all the lines falling in the Z-Spec bandpass to determine redshifts with high confidence, even in cases where the signal-to-noise in individual lines is low. Lower limits for the dust masses ($\sim$a few 10$^{8}$ M$_{\odot}$) and spatial extents ($\sim$1 kpc equivalent radius) are derived from the continuum spectral energy distributions, corresponding to dust temperatures between 54 and 69 K. The dust and gas properties, as determined by the CO line luminosities, are characteristic of dusty starburst galaxies, with star formation rates of 10$^{2-3}$ M$_{\odot}$ yr$^{-1}$. In the LTE approximation, we derive relatively low CO excitation temperatures ($\lesssim 100$ K) and optical depths ($\tau\lesssim1$). Using a maximum likelihood technique, we perform a non-LTE excitation analysis of the detected CO lines in each galaxy to further constrain the bulk molecular gas properties. We find that the mid-$J$ CO lines measured by Z-Spec localize the best solutions to either a high-temperature / low-density region, or a low-temperature / high-density region near the LTE solution, with the optical depth varying accordingly.'
author:
- |
R. E. Lupu,$^{1\ast}$ K. S. Scott,$^{1}$ J. E. Aguirre,$^{1}$ I. Aretxaga,$^{2}$ R. Auld,$^{3}$ E. Barton,$^{4}$ A. Beelen,$^{5}$ F. Bertoldi,$^{6}$ J. J. Bock,$^{7,8}$ D. Bonfield,$^{9}$ C. M. Bradford,$^{7,8}$ S. Buttiglione,$^{10}$ A. Cava,$^{11,12}$ D. L. Clements,$^{13}$ J. Cooke,$^{4,8}$ A. Cooray,$^{4}$ H. Dannerbauer,$^{14}$ A. Dariush,$^{3}$ G. De Zotti,$^{10,15}$ L. Dunne,$^{16}$ S. Dye,$^{3}$ S. Eales,$^{3}$ D. Frayer,$^{17}$ J. Fritz,$^{18}$ J. Glenn,$^{19}$ D. H. Hughes,$^{2}$ E. Ibar,$^{20}$ R. J. Ivison,$^{20,21}$ M. J. Jarvis,$^{9}$ J. Kamenetzky,$^{19}$ S. Kim,$^{4}$ G. Lagache,$^{22,23}$ L. Leeuw,$^{24,25}$ S. Maddox,$^{16}$ P. R. Maloney,$^{19}$ H. Matsuhara,$^{26}$ E. J. Murphy,$^{27}$ B. J. Naylor,$^{7}$ M. Negrello,$^{28}$ H. Nguyen,$^{7}$ A. Omont,$^{29}$ E. Pascale,$^{3}$ M. Pohlen,$^{3}$ E. Rigby,$^{16}$ G. Rodighiero,$^{30}$ S. Serjeant,$^{28}$ D. Smith,$^{16}$ P. Temi,$^{31}$ M. Thompson,$^{9}$ I. Valtchanov,$^{32}$ A. Verma,$^{33}$ J. D. Vieira,$^{8}$ J. Zmuidzinas$^{7,8}$\
title: 'MEASUREMENTS OF CO REDSHIFTS WITH Z-SPEC FOR LENSED SUBMILLIMETER GALAXIES DISCOVERED IN THE H-ATLAS SURVEY'
---
INTRODUCTION
============
Galaxies detected by their thermal dust emission at submillimeter (submm) and millimeter (mm) wavelengths ($\lambda \approx 250-2000\,\mu$m) comprise an important population of massive systems in the early Universe that are thought to be undergoing an early phase of intense star formation in their evolution [@blain02]. Dust grains within star-forming regions in these galaxies are heated by incident optical and ultraviolet (UV) radiation from young stars and thermally re-radiate this energy at far-infrared (far-IR) to mm wavelengths, with the peak of dust emission occurring at $\sim60-200\,\mu$m in the rest-frame [@soifer91]. It is estimated that about half of all star-formation in the Universe is heavily obscured by dust and therefore difficult to identify in even the deepest surveys at optical/ultraviolet wavelengths [@Puget1996]. Observations at submm/mm wavelengths sample the Rayleigh-Jeans tail of the thermal dust spectrum, which rises steeply with frequency $\sim\nu^{3.5}$ [@dunne00]. For observations at $\lambda > 500\,\mu$m, the climb up this steep spectrum with increasing redshift roughly cancels the effect of cosmological dimming with increasing distance [e.g., @blain02], making galaxies with a fixed luminosity have roughly the same observed flux density at submm/mm wavelengths for redshifts between $1 < z < 10$. This allows a distance-independent study of dust-obscured star-formation and galaxy evolution spanning the epoch of peak star formation activity in the Universe [$z\sim2-3$, e.g., @chapman05]. Although first predicted by @Low68, the population of high-redshift and heavily dust-obscured galaxies (submillimeter galaxies, SMGs) was first revealed a decade ago [@smail97], and several wide-area surveys at 850$\mu$m – 1.2mm have been carried out since then [e.g., @weiss09b; @austermann10; @Coppin2006; @Bertoldi2007; @scott08], mapping a total of $\sim4$deg$^2$ of sky. 
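The near-cancellation between the steep Rayleigh-Jeans slope and cosmological dimming (the "negative K-correction") can be illustrated with a short numerical sketch. The dust temperature, emissivity index $\beta$, and cosmological parameters below are illustrative assumptions, not fits from the survey papers:

```python
import math

H0, OM, OL = 67.8, 0.307, 0.693            # flat LCDM, Planck-like values (assumed)
C_KMS = 299792.458

def lum_dist_mpc(z, n=2000):
    """Luminosity distance via a simple Riemann sum of 1/E(z)."""
    dz = z / n
    integ = sum(dz / math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n))
    return (1 + z) * C_KMS / H0 * integ

def planck(nu, T):
    h, k = 6.626e-34, 1.381e-23
    return nu ** 3 / (math.exp(h * nu / (k * T)) - 1.0)

def observed_flux(z, lam_obs=850e-6, T=35.0, beta=1.5):
    """Relative observed flux density of a fixed-luminosity modified blackbody."""
    nu_rest = (1 + z) * 3e8 / lam_obs       # observed band samples higher rest frequency
    return nu_rest ** beta * planck(nu_rest, T) * (1 + z) / lum_dist_mpc(z) ** 2

ratio = observed_flux(8.0) / observed_flux(1.0)
print(ratio)  # of order unity: the 850 um flux barely changes from z = 1 to z = 8
```

The climb up the $\sim\nu^{3.5}$ spectrum with redshift roughly offsets the growth of $D_L^2$, which is the distance-independence the text describes.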
More recently, much larger area surveys have been undertaken with the South Pole Telescope [SPT, @vieira09] at $\lambda=1.4-2$mm, the Balloon-borne Large Aperture Submillimeter Telescope [BLAST, @pascale08] at $\lambda=250-500\,\mu$m, and the [*Herschel Space Observatory*]{} [@Pilbratt10] at $\lambda=55-670\,\mu$m. Mapping a total area of $\sim200$deg$^2$ to date [@pascale08; @vieira09; @Eales10], these surveys have uncovered a population of rare, and unusually bright, distant galaxies. Their inferred IR luminosities and high redshifts are consistent with a significant fraction of these extremely bright submm/mm galaxies being gravitationally lensed [@negrello07], but proof requires extensive multi-wavelength follow-up campaigns. Their observed flux densities can be magnified by factors $> 10$ due to lensing by intervening foreground galaxies or clusters, as observed in similarly bright systems [e.g., @Swinbank2010; @Solomon2005]. By targeting lensed objects, we can study the typical properties of the star forming galaxies in the early Universe that would otherwise be inaccessible in a blank survey due to sensitivity limitations and source confusion. The ongoing Herschel-Astrophysical Terahertz Large Area Survey [H-ATLAS, @Eales10] in the Science Demonstration Phase (SDP) has already covered 14.4 deg$^{2}$ out of the $\sim$550 deg$^{2}$ planned, resulting in $\sim$6600 sources [@Clements10; @Rigby10] with fluxes measured at 250, 350, and 500 $\mu$m using the Spectral and Photometric Imaging Receiver [SPIRE, @Griffin10; @Pascale10], and fluxes at 100 and 160 $\mu$m obtained with the Photodetector Array Camera and Spectrometer [PACS, @Poglitsch10; @Ibar2010]. Given the large areal coverage, H-ATLAS can detect the brightest (i.e. rarest) distant submm galaxies and is the first example where the efficient selection of lensed galaxies at submm wavelengths has been demonstrated [@Negrello10]. 
To understand the nature of these galaxies, in particular whether they represent a previously undiscovered population of intrinsically bright sources [e.g., @devriendt10] or are relatively normal starburst galaxies lensed by foreground structures [e.g., @negrello07], requires both complementary data at other wavelengths and measurements of their redshifts. However, measuring spectroscopic redshifts for these sources is challenging: their positional accuracy from submm/mm imaging is often poor due to diffraction limitations at these long wavelengths, and they tend to be highly extincted by dust, making spectroscopic measurements from optical ground-
---
abstract: 'In this paper we introduce a framework for computing safe yet accurate upper bounds on the WCET for hardware platforms with caches and pipelines. The methodology we propose consists of 3 steps: 1) given a program to analyse, compute an equivalent (WCET-wise) abstract program; 2) build a timed game by composing this abstract program with a network of timed automata modeling the architecture; and 3) compute the WCET as the optimal time to reach a winning state in this game. We demonstrate the applicability of our framework on standard benchmarks for an ARM9 processor with instruction and data caches, and compute the WCET with UPPAAL-TiGA. We also show that this framework can easily be extended to take into account dynamic changes in the speed of the processor during program execution.'
author:
- 'Franck Cassez[^1]'
bibliography:
- 'wcet.bib'
title: |
Timed Games for Computing\
Worst-Case Execution-Times
---
Introduction
============
Embedded real-time systems are composed of a set of tasks (software) that run on a given architecture (hardware). These systems are subject to strict timing constraints and these constraints must be enforced by a scheduler. Designing an effective scheduler is possible only if some bounds are known on the execution times of each task. For simple, non-preemptive scheduling algorithms, the knowledge of the *worst-case execution-time* (WCET) is sufficient to design a scheduler. For more complex scheduling algorithms with preemption or shared resources, the WCET for each task might not give rise to the WCET for the entire system though. This is why most critical embedded systems rely on a rather simple scheduling algorithm. Performance-wise, determining tight bounds for the WCET is crucial, as using rough over-estimates might either result in a set of tasks being wrongly declared non-schedulable, or waste a lot of computation time in idling cycles and loss of energy/power.
#### **The WCET Problem.**
The execution time, $\et(p,d,H)$, of a program $p$, with input data $d$ on the hardware $H$, is measured as the number of cycles of the fastest component of the hardware, the processor. Data take their values in a finite domain $\calD$. The program is given in binary code or, equivalently, in the assembly language of the target processor[^2]. The *worst-case execution-time* of program $p$ on hardware $H$ is defined by: $$\wcet(p,H)=\sup_{d \in \calD} \et(p,d,H) \mathpunct.$$ The WCET problem asks the following: Given $p$ and $H$, compute $\wcet(p,H)$.
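Since $\calD$ is finite, the definition can in principle be evaluated by exhaustive enumeration. The sketch below does exactly that for a toy program and a made-up cost model (1 cycle per comparison, 3 per swap, figures invented for illustration, not an ARM9 timing model); it also hints at why the approach is intractable as soon as the domain grows:

```python
from itertools import product

def et(data):
    """Cycle count of one bubble pass over `data` under the toy cost model."""
    d, cycles = list(data), 0
    for i in range(len(d) - 1):
        cycles += 1                      # comparison: 1 cycle (assumed)
        if d[i] > d[i + 1]:
            d[i], d[i + 1] = d[i + 1], d[i]
            cycles += 3                  # swap penalty: 3 cycles (assumed)
    return cycles

def wcet(domain, n):
    """wcet(p, H) = sup over all inputs d in the finite domain."""
    return max(et(d) for d in product(domain, repeat=n))

print(wcet(range(4), 4))   # worst case: 3 comparisons + 3 swaps = 12 cycles
```

The enumeration visits $|\calD|^n$ inputs, which is exactly the blow-up discussed next.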
In general, the WCET problem is undecidable because otherwise we could solve the halting problem[^3]. However, for programs that always terminate and have a bounded number of paths, it is obviously (theoretically) computable. Indeed the possible runs of the program can be represented by a finite tree. Notice that this does not mean that the problem is tractable though.
If the input data are known or the program execution time is independent of the input data, the tree contains a single path and it is usually feasible to compute the WCET. Likewise, if we can determine some input data that produce the WCET (this might be as difficult as computing the WCET itself), we can compute the WCET on a single-path program.
It is not often the case that the input data are known or that we can determine an input that produces the WCET. Rather, the (values of the) input data are unknown, and the number of paths to be explored might be extremely large: for instance, for a Bubble Sort program with $100$ data to be sorted, the tree representing all the runs of the (assembly) program on all the possible input data has more than $2^{50}$ nodes. Although symbolic methods (using BDDs) can be applied to analyse some programs with a huge number of states, they will fail to compute the exact WCET on Bubble Sort by exploring all the possible paths.
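The path explosion is easy to witness on small instances. The sketch below (our own illustration, not from the paper) counts the distinct execution paths of a full bubble sort, identifying a path with its sequence of branch outcomes; every ordering of $n$ distinct inputs turns out to produce a different outcome sequence, so the count grows like $n!$:

```python
from itertools import permutations

def path(data):
    """Sequence of branch outcomes (swap / no swap) of a full bubble sort."""
    d, outcome = list(data), []
    for i in range(len(d)):
        for j in range(len(d) - 1 - i):
            if d[j] > d[j + 1]:
                d[j], d[j + 1] = d[j + 1], d[j]
                outcome.append(1)
            else:
                outcome.append(0)
    return tuple(outcome)

def n_paths(n):
    """Number of distinct execution paths over all input orderings."""
    return len({path(p) for p in permutations(range(n))})

print([n_paths(n) for n in range(2, 7)])  # grows like n!: [2, 6, 24, 120, 720]
```

Replaying the outcomes backwards from the sorted output reconstructs the input uniquely, which is why no two orderings share a path; at $n=100$ this is far beyond exhaustive exploration.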
Another difficulty of the WCET problem stems from the more and more complex architectures that embedded real-time systems run on. They usually feature a multi-stage *pipeline* and a fast memory component like a *cache*, and both influence the WCET in a complicated manner. It is then a challenging problem to determine a precise WCET even for relatively small programs running on complex architectures.
#### **Methods and Tools for the WCET Problem.**
The reader is referred to [@wcet-survey-2008] for an exhaustive presentation of the WCET computation techniques and tools. There are two main classes of methods for computing WCET.
- Testing-based methods. These methods are based on experiments running the program on some data, using a simulator of the hardware or the real platform. The execution time of an experiment is measured and, on a large set of experiments, a maximal and minimal bound can be obtained. The maximal bound computed this way is *unsafe* as not all the possible paths have been explored. These methods might not be suitable for safety critical embedded systems but they are versatile and rather easy to implement.
RapiTime [@rapitime] (based on pWCET [@pWCET]) and Mtime [@mtime] are measurement tools that implement this technique.
- Verification-based methods. These methods often rely on the computation of an *abstract* graph, the control flow graph (CFG), and an abstract model of the hardware. Together with a static analysis tool they can be combined to compute WCET. The CFG should produce a super set of the set of all feasible paths. Thus the largest execution time on the abstract program is an upper bound of the WCET. Such methods produce *safe* WCET, but are difficult to implement. Moreover, the abstract program can be extremely large and beyond the scope of any analysis. In this case, a solution is to take an even more abstract program which results in drifting further away from the exact WCET.
Although difficult to implement, there are quite a lot of tools implementing this scheme: Bound-T [@bound-T], OTAWA [@otawa], TuBound [@tubound], Chronos [@chronos], SWEET [@sweet-2003] and aiT [@aiT; @wcet-ai-aswsd-ferdinand-04] are static analysis-based tools for computing WCET.
The verification-based tools mentioned above rely on the construction of a control flow graph, and the determination of loop bounds. This can be achieved using user annotations (in the source code), or can sometimes be inferred automatically. The CFG is also annotated with some timing information about cache misses/hits and pipeline stalls, and path analysis is carried out on this model by Integer Linear Programming (ILP). The algorithms implemented in the tools use both the program and the hardware specification to compute the CFG fed to the ILP solver. The architecture of the tools themselves is thus monolithic: it is not easy to adapt an algorithm for a new processor. This is witnessed by the *WCET’08 Challenge Report* [@wcet-chal-report-08] that highlights the difficulties encountered by the participants to adapt their tools for the new hardware in a reasonable amount of time.
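On a loop-free annotated CFG, the ILP path analysis degenerates into a longest-path computation: maximize the summed block costs over all entry-to-exit paths. The sketch below illustrates that reduced case; the graph and the per-block cycle counts (meant to already include cache/pipeline penalties) are invented for illustration:

```python
def wcet_bound(cfg, cost, entry, exit_):
    """Longest path (in cycles) through a DAG cfg: node -> list of successors."""
    memo = {}
    def longest(n):
        if n == exit_:
            return cost[n]
        if n not in memo:
            memo[n] = cost[n] + max(longest(s) for s in cfg[n])
        return memo[n]
    return longest(entry)

# Tiny if-then-else CFG with assumed block costs in cycles.
cfg = {"entry": ["then", "else"], "then": ["join"], "else": ["join"], "join": []}
cost = {"entry": 2, "then": 10, "else": 4, "join": 3}
print(wcet_bound(cfg, cost, "entry", "join"))  # 2 + 10 + 3 = 15 cycles
```

Real tools solve the general ILP (with loop-bound constraints) rather than this DAG special case, but the objective being maximized is the same.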
#### **WCET and Model-Checking.**
Surprisingly enough, only a few tools use model-checking techniques to compute WCET. Considering that ($i$) modern architectures are composed of *concurrent* components (the stages of the pipeline, caches) and ($ii$) these components *synchronize* and synchronization depends on *timing constraints* (time to execute in one stage of the pipeline, time to fetch a data from the cache), formal models like *timed automata* [@AD94] and state-of-the-art *real-time model-checkers* like UPPAAL[@uppaal-sttt-97; @uppaal-40-qest-behrmann-06] appear well-suited to address the WCET problem.
It has previously been claimed [@Wilhelm-04] that *model-checking* was not adequate to compute WCET, but this statement has since been revised. In [@wcet-cav-metzner-04], A. Metzner showed that model-checkers could well be used to compute safe WCET on the CFG for programs running on pipelined processors with an instruction cache.
In [@hubert-wcet-09], B. Huber and M. Schoeberl consider Java programs and compare ILP-based techniques with model-checking techniques using the model-checker UPPAAL. Model-checking techniques seem slower but easily amenable to changes (in the hardware model). The recommendation is to use ILP tools for large programs and model-checking tools for code fragments.
More recently, the TASM toolset [@tasm-cav-07] (M. Ouimet & K. Lundqvist) has been used to compute WCET with UPPAAL: the TASM machine is a high-level machine featuring neither pipelining nor caches, and computing the WCET amounts to finding the longest path (timewise) in a timed automaton that specifies a task.
Another use of timed automata (TA) and the model-checker UPPAAL for computing WCET on pipelined processors with
---
abstract: 'We present ALMA observations of two moderate luminosity quasars at redshift 6. These quasars from the Canada-France High-z Quasar Survey (CFHQS) have black hole masses of $\sim 10^{8} M_\odot$. Both quasars are detected in the [\[C[ii]{}\]]{} line and dust continuum. Combining these data with our previous study of two similar CFHQS quasars we investigate the population properties. We show that $z>6$ quasars have a significantly lower far-infrared luminosity than bolometric-luminosity-matched samples at lower redshift, inferring a lower star formation rate, possibly correlated with the lower black hole masses at $z=6$. The ratios of [\[C[ii]{}\]]{} to far-infrared luminosities in the CFHQS quasars are comparable with those of starbursts of similar star formation rate in the local universe. We determine values of velocity dispersion and dynamical mass for the quasar host galaxies based on the [\[C[ii]{}\]]{} data. We find that there is no significant offset from the relations defined by nearby galaxies with similar black hole masses. There is however a marked increase in the scatter at $z=6$, beyond the large observational uncertainties.'
author:
- 'Chris J. Willott'
- Jacqueline Bergeron and Alain Omont
bibliography:
- 'willott.bib'
title: 'Star formation rate and dynamical mass of $10^{8}$ solar mass black hole host galaxies at redshift 6'
---
Introduction
============
Improved astronomical observational facilities have enabled the discovery and study of many galaxies at an early phase of the Universe’s history. It is now possible to witness the majority of the stellar and black hole mass growth over cosmic time and identify how physical conditions at early times differ from now. One of the major relations to be determined as a function of time is the tight correlation between black hole mass and galaxy properties observed for nearby galaxies (see @Kormendy:2013 for a review). Observations of this relation at high-redshift are critical to understanding its cause, because most of the growth occurred at early times.
Attempts to measure black hole and galaxy masses at high-redshift face a number of problems. Black hole mass measurements cannot be made directly by resolved kinematics of gas or stars within the black hole’s sphere of influence, nor by reverberation mapping. Instead black hole masses, $M_{\rm BH}$, of quasars can be measured at any redshift using the single-epoch virial mass estimator that involves measuring a low-ionization broad emission line, such as [Mg[ii]{}]{} or [H$\beta$]{}, and calibrating the location of the emitting gas with low-$z$ reverberation-mapped quasars [@Wandel:1999a]. For AGN with obscured broad lines, $M_{\rm BH}$ can only be estimated from the luminosity, making an assumption about the accretion rate relative to the Eddington limit.
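The single-epoch virial estimator has the form $M_{\rm BH}\propto ({\rm FWHM})^2\,(\lambda L_\lambda)^{1/2}$. The sketch below uses a zero point and exponents typical of commonly used MgII 3000Å calibrations; the exact coefficients differ between published calibrations at the $\sim$0.1-0.2 dex level, so the numbers here are an assumption for illustration, not the calibration adopted in this paper:

```python
import math

def mbh_virial(fwhm_kms, lamLlam_erg_s, zero_point=6.86):
    """Black hole mass (solar masses) from a broad MgII line, illustrative calibration."""
    return (10 ** zero_point
            * (fwhm_kms / 1e3) ** 2            # line width in units of 1000 km/s
            * math.sqrt(lamLlam_erg_s / 1e44)) # continuum luminosity proxy for radius

# A CFHQS-like moderate-luminosity quasar: FWHM ~ 3000 km/s, lambda*L ~ 1e45 erg/s
print("%.1e" % mbh_virial(3000.0, 1e45))  # ~ 2e8 Msun, i.e. M_BH ~ 10^8 as in the sample
```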
Measuring galaxy properties, such as luminosity or velocity dispersion, $\sigma$, of distant quasars is hampered by surface brightness dimming, the bright glare of the quasar and AGN (active galactic nuclei) emission line-contamination of spectral features. Up to $z\approx 1$ there has been considerable success in measuring AGN host galaxy luminosities, morphologies and in some cases velocity dispersions [@Cisternas:2011; @Park:2014]. At higher redshifts ($1<z<4$) the galaxy light is more difficult to separate from the quasar, which, combined with greater mass-to-light corrections, lead to larger uncertainties [@Merloni:2010; @Targett:2012]. The results of these studies are mixed with some evidence in favour of higher $M_{\rm BH}$ at a given galaxy mass.
At yet higher redshifts it has so far proved impossible to measure the galaxy light of quasars [@Mechtley:2012], and instead the main method of determining galaxy mass is kinematics of cool gas in star-forming regions [@Carilli:2013]. Facilities such as the IRAM Plateau de Bure Interferometer, the Jansky Very Large Array and the Atacama Large Millimeter Array (ALMA) have sufficient sensitivity and resolution to resolve the gas in distant quasar hosts and provide dynamical masses [@Walter:2004; @Walter:2009; @Wang:2010; @Wang:2013]. In particular, ALMA has the sensitivity to probe $z=6$ quasar hosts with star formation rates, SFR, in the tens of solar masses per year, rather than only the extreme starbursts previously observable [@Willott:2013]. The studies above focussed on $z\approx 6$ Sloan Digital Sky Survey (SDSS) and UKIRT Infrared Deep Sky Survey (UKIDSS) quasars with high UV and far-IR luminosities and found that their black hole masses are on average 10 times greater than expected from the corresponding $\sigma$ of local galaxies, roughly consistent with a continuation of the evolution seen in lower redshift studies.
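The dynamical masses that such resolved line observations enable follow the virial scaling $M_{\rm dyn}\sim v_{\rm circ}^2 R/G$. A minimal sketch, with example numbers that are illustrative assumptions rather than measurements from this paper (inclination corrections to the observed line width are also omitted):

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
KPC = 3.086e19         # m

def m_dyn(v_circ_kms, r_kpc):
    """Dynamical mass (solar masses) enclosed within radius r, M = v^2 R / G."""
    v = v_circ_kms * 1e3
    return v ** 2 * (r_kpc * KPC) / G / MSUN

# e.g. a circular velocity of 100 km/s within R = 1 kpc:
print("%.1e" % m_dyn(100.0, 1.0))  # ~ 2.3e9 Msun
```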
Although observationally there appears to be an increase in $M_{\rm BH}$ with redshift at a given galaxy mass or $\sigma$, it has long been understood that there are selection biases that affect how closely the observations trace the underlying distribution. In particular, the steepness of the galaxy and dark matter mass functions combined with large scatter in their correlations with black hole mass mean that a high black-hole-mass-selected sample of quasars will have a systematic offset in $\sigma$ towards lower values. This effect, first identified by @Willott:2005b and @Fine:2006, was studied in detail in @Lauer:2007 and numerous studies thereafter. The magnitude of the effect depends upon the scatter in the correlation, which has not been conclusively measured at high-redshift, but appears to increase with redshift [@Schulze:2014]. @Willott:2005b and @Lauer:2007 showed that the bias is particularly strong for $M_{\rm BH}>10^9 M_\odot$ quasars such as those in the SDSS at $z \approx 6$ and therefore that the factor of 10 increase in $M_{\rm BH}$ at a given $\sigma$ first seen in the quasar SDSSJ1148+5251 [@Walter:2004] could be accounted for by the bias (see also @Schulze:2014). In comparison, there would be little bias for a sample of high-$z$ quasars with black hole masses of $M_{\rm BH}\sim10^8 M_\odot$ [@Lauer:2007].
An alternative to measuring the evolution of the assembled galaxy and black hole masses is to determine the rate at which mass growth is occurring. For quasars the bolometric luminosity is a measure of the black hole mass growth rate. For galaxies, the star formation rate is proportional to the stellar mass growth. The star formation rate can be determined by the rest-frame far-infrared dust continuum luminosity. Additionally, the interstellar [\[C[ii]{}\]]{} far-infrared emission line is well-correlated with star-formation [@De-Looze:2014; @Sargsyan:2014] so can also be used as a star formation proxy.
In @Willott:2013 (hereafter Wi13) we presented Cycle 0 ALMA observations in the [\[C[ii]{}\]]{} line and 1.2mm continuum for two $z=6.4$ quasars from the Canada-France High-z Quasar Survey (CFHQS, @Willott:2010a). These quasars have $M_{\rm BH}\sim10^8
M_\odot$, a factor of 10–30 lower than most SDSS quasars known at these redshifts. One quasar was detected in line and continuum and the other remained undetected in these sensitive observations placing an upper limit on its star formation rate of SFR$<40\,M_\odot\,{\rm
yr}^{-1}$.
In this paper we present ALMA observations of two further CFHQS quasars with similar redshift and black hole mass with the aim of providing a sample large enough to address the issue of how host galaxy properties such as SFR, $\sigma$ and dynamical mass depend upon black hole accretion rate and mass at a time just 1 billion years after the Big Bang. In particular, these quasars are not subject to the bias in the $M_{\rm BH} - \sigma$ relation discussed previously because of their moderate black hole masses. Cosmological parameters of $H_0=67.8~ {\rm km~s^{-1}~Mpc^{-1}}$, $\Omega_{\mathrm M}=0.307$ and $\Omega_\Lambda=0.693$ [@Planck-Collaboration:2014] are assumed throughout.
Observations
============
CFHQSJ005502+014618 (hereafter J0055+0146) and CFHQSJ222901+145709 (hereafter J2229+1457) were observed with ALMA on the 28, 29 and 30 November 2013 for Cycle 1 project 2012.1.00676.S. Between 22 and 26 12m diameter antennae were used. The typical long baselines were $\sim 400$m providing similar spatial resolution to our Cycle 0 observations. Observations of the science targets were interleaved with nearby phase calibrators, J0108+0135 and J2232+1143. The amplitude calibrator was Neptune and the bandpass calibrators J2258-2758 and J2148+0657. Total on-source integration times were 4610s for J0055+0146 and 5490s for J2229+1457.
---
abstract: 'We give an explicit component Lagrangian construction of massive higher spin on-shell $N=1$ supermultiplets in four-dimensional Anti-de Sitter space $AdS_4$. We use a frame-like gauge invariant description of massive higher spin bosonic and fermionic fields. For the two types of the supermultiplets (with integer and half-integer superspins) each one containing two massive bosonic and two massive fermionic fields we derive the supertransformations leaving the sum of four their free Lagrangians invariant such that the algebra of these supertransformations is closed on-shell.'
author:
- |
I.L. Buchbinder${}^{ab}$[^1], M.V. Khabarov${}^{cd}$[^2], T.V. Snegirev${}^{ae}$[^3], Yu.M. Zinoviev${}^{cd}$[^4]\
[${}^a$Department of Theoretical Physics, Tomsk State Pedagogical University,]{}\
[Tomsk, 634061, Russia]{}\
[${}^b$National Research Tomsk State University, Tomsk 634050, Russia]{}\
[${}^c$Institute for High Energy Physics of National Research Center “Kurchatov Institute”,]{}\
[Protvino, Moscow Region, 142281, Russia]{}\
\
[Dolgoprudny, Moscow Region, 141701, Russia]{}\
[${}^e$National Research Tomsk Polytechnic University, Tomsk 634050, Russia]{}
title: |
Lagrangian formulation of the massive\
higher spin $N=1$ supermultiplets in $AdS_4$ space
---
Introduction
============
The higher spin theory (see e.g. the reviews [@Vas04; @Be04; @BBS10]) has attracted significant interest for a long time and for many reasons. On the one hand, the theory of massless higher spin fields is a maximal extension of the Yang-Mills gauge theories and gravity including all spin fields. On the other hand, it is closely related to superstring theory, which involves an infinite tower of higher spin massive fields. In principle, the higher spin field theory makes it possible to study some aspects of string theory in the framework of field theory. It is also worth pointing out that the construction of Lagrangian formulations for the higher spin field models is extremely interesting in itself, since it allows one to reveal new, unexpected properties of relativistic field theory in general.
Beginning with work [@FV] it became clear that the nonlinear massless higher spin theory can only be realized in $AdS$ space with non-zero curvature. This raises the interest in studying the various aspects of field theory in $AdS$ space in the context of $AdS/CFT$-correspondence. Taking into account that the low-energy limit of superstring theory should lead to supersymmetric field theory we face the problem of constructing the supersymmetric massive higher spin models in the $AdS$ space. It is expected that the supersymmetry can be an essential ingredient of the consistent theory of all the fundamental interactions including quantum gravity. It is possible that such a theory should also involve the massless and/or massive higher spin fields. This paper is devoted to developing the $N=1$ supersymmetric Lagrangian formulation of free massive higher spin models in $AdS$ space in the framework of on-shell component formalism.
In supersymmetric theories the massless or massive fields are combined into the corresponding supermultiplets. In the case of free field models containing fields of different spins, it is natural to expect that the Lagrangian should be the sum of the Lagrangians for each concrete spin field. To provide an explicit Lagrangian realization of the free supermultiplet one has to find supertransformations leaving the free Lagrangians invariant and show that the algebra of these supertransformations is closed at least on-shell. In the case of $N=1$ supersymmetry the massless higher superspin-$s$ supermultiplets consist of two massless fields with spins $(s,s+1/2)$. The task of constructing supertransformations for such supermultiplets in four dimensional flat space was completely solved in the metric-like formulation [@Cur79] and soon after in the frame-like one [@Vas80]. In both cases the supertransformations have a simple enough structure and are determined uniquely by the invariance of the sum of the Lagrangians for two free massless fields with spins $s$ and $s+1/2$. Note that such a requirement allows one to find only on-shell supersymmetry, where the supertransformations close on the equations of motion. In order to find off-shell supertransformations, it is necessary to introduce the corresponding auxiliary fields.
A natural procedure to construct off-shell $N=1$ supersymmetric Lagrangian models is realized in terms of $N=1$ superspace and superfields (see e.g. [@BK98]), where all the auxiliary fields providing closure of the superalgebra are automatically obtained. In the framework of the superfield formulation the $N=1$ supersymmetric massless higher spin models were constructed in the pioneering papers [@KSP93; @KS93]. Later, on the basis of these results, $N=1$ supersymmetric massless higher spin models were generalized to $AdS_4$ space [@KS94], [@GKS][^5]. In both cases the constructed superfield models, after eliminating the auxiliary fields, reduce to the sum of spin-$s$ and spin-$(s+1/2)$ (Fang)-Fronsdal Lagrangians [@Frons78; @FF78] in flat or $AdS$ spaces. The generalization to ${\cal N}=2$ massless higher spin supermultiplets was given in [@GKS], [@GKS96a].
There are far fewer results in the case of supersymmetric massive higher spin models, even in the on-shell formalism, the reason being that when moving from the massless component formulation to the massive one, very complicated higher derivative corrections must be introduced into the supertransformations. Moreover, the higher the spin of the fields entering a supermultiplet, the higher the number of derivatives one has to consider. The problem of the supersymmetric description of the massive higher spin supermultiplet was only explicitly resolved in 2007 for the case of the $N=1$ on-shell 4D Poincare superalgebra [@Zin07a] (see also [@Zin02; @Zin07; @OT16]) using the gauge invariant formulation for the massive higher spin fields [@KZ97; @Zin01; @Met06][^6]. In such a formalism the description of the massive field is obtained in terms of an appropriately chosen set of massless ones. It is assumed that the Lagrangian for massive higher spin supermultiplets is constructed as a sum of the corresponding Lagrangians for massless fields deformed by massive terms. However, it turned out [@Zin07a] that to realize such a program one has to use massless supermultiplets containing four fields $(k-1/2,k,k',k+1/2)$ as the building blocks, where the two bosonic fields with equal spins have opposite parities, and this prevents us from separating them into the usual massless pairs. In [@Zin07a] it was shown that to obtain the massive deformation it is enough to add non-derivative corrections to the supertransformations for the fermions only. Complicated higher derivative corrections to the supertransformations reappear when one tries to fix all local gauge symmetries, breaking the gauge invariance. Note, however, that in such a construction the mass-like terms for the fermions in the Lagrangian take a complicated non-diagonal form, making calculations rather cumbersome.
Surprisingly however, in 4D the above results remain the main results in the massive supersymmetric higher spin theory until now[^7]. The aim of this paper is to extend and generalize the results of [@Zin07a] to the case of four dimensional $N=1$ $AdS_4$ superalgebra.
We use the gauge invariant description of the massive higher spin bosonic and fermionic fields but in the frame-like version [@Zin08b; @PV10]. Recall that one of the attractive features of such a formalism is that it works nicely both in flat Minkowski space as well as in $(A)dS$ spaces. Our strategy differs from that of [@Zin07a]. For the Lagrangian we take just the sum of four free Lagrangians for the two massive bosonic and two massive fermionic fields entering the supermultiplet. Then for each pair of bosonic and fermionic fields (we call it superblock in what follows) we find the supertransformations leaving the sum of their two Lagrangians invariant. Next we combine all four possible superblocks and adjust their parameters so that the algebra of the supertransformations is closed on-shell.
The paper is organized as follows. In section \[Section1\] we give all necessary descriptions of the frame-like formulation of massless bosonic and fermionic higher spin fields and also present the massless higher spin supermultiplets in $AdS_4$ in such a formalism. Massless models given in this section will serve as the building blocks for our construction of the massive higher spin models. In section \[Section2\] we give frame-like gauge invariant formulations for free massive fields of arbitrary integer and half-integer spin. In section \[Section3\] we consider massive superblocks containing one massive bosonic and one massive fermionic field and find the corresponding supertransformations. In section \[Section4\] we combine the constructed massive superblocks into one massive supermultiplet.
---
abstract: 'In this note we give a precise formulation of “resistance to arbitrary side information” and show that several relaxations of differential privacy imply it. The formulation follows the ideas originally due to Dwork and McSherry, stated implicitly in [@Dw06]. This is, to our knowledge, the first place such a formulation appears explicitly. The proof that relaxed definitions (and hence the schemes of [@DKMMN06; @NRS07; @MKAGV08]) satisfy the Bayesian formulation is new.'
author:
- |
Shiva Prasad Kasiviswanathan and Adam Smith\
Department of Computer Science and Engineering\
Pennsylvania State University\
e-mail: [$\{$kasivisw,asmith$\}$@cse.psu.edu]{}\
bibliography:
- '../bibfiles/master.bib'
title: |
A Note on Differential Privacy:\
Defining Resistance to Arbitrary Side Information
---
Introduction
============
Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. Schemes that retain privacy guarantees in the presence of independent releases are said to [*compose securely*]{}. The terminology, borrowed from cryptography (which borrowed, in turn, from software engineering), stems from the fact that schemes which compose securely can be designed in a stand-alone fashion without explicitly taking other releases into account. Thus, understanding independent releases is essential for enabling modular design. In fact, one would like schemes that compose securely not only with independent instances of themselves, but with [*arbitrary external knowledge*]{}.
Certain randomization-based notions of privacy (such as differential privacy [@DMNS06]) are believed to compose securely even in the presence of arbitrary side information. In this note we give a precise formulation of this statement. First, we provide a Bayesian formulation of differential privacy which makes its resistance to arbitrary side information explicit. Second, we prove that the relaxed definitions of [@DKMMN06; @MKAGV08] still imply the Bayesian formulation. The proof is non-trivial, and relies on the “continuity” of Bayes’ rule with respect to certain distance measures on probability distributions. Our result means that the recent techniques mentioned above [@DKMMN06; @CM06; @NRS07; @MKAGV08] can be used modularly with the same sort of assurances as in the case of strictly differentially-private algorithms.
Differential Privacy
--------------------
Databases are assumed to be vectors in $\mathcal{D}^n$ for some domain $\mathcal{D}$. The Hamming distance ${{d}}({{\mathrm{x}}},{{\mathrm{y}}})$ on $\mathcal{D}^n$ is the number of positions in which the vectors ${{\mathrm{x}}},{{\mathrm{y}}}$ differ. We let $\Pr[\cdot]$ and ${{\mathbb{E}}}[\cdot]$ denote probability and expectation, respectively. Given a randomized algorithm $\mathcal{A}$, we let $\mathcal{A}({{\mathrm{x}}})$ be the random variable (or, probability distribution on outputs) corresponding to input ${{\mathrm{x}}}$. If $P$ and $Q$ are probability measures on a discrete space $D$, the [*statistical difference*]{} (a.k.a. [*total variation distance*]{}) between $P$ and $Q$ is defined as: $${\mathbf{SD}{\left( {{P,Q}} \right)}}= \max_{S \subset D}|P[S]-Q[S]|.$$
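For discrete distributions the maximum over subsets $S$ is attained at $S=\{x: P[x]>Q[x]\}$, giving the familiar closed form $\mathbf{SD}(P,Q)=\tfrac12\sum_x |P[x]-Q[x]|$; a minimal sketch:

```python
def statistical_difference(p, q):
    """Total variation distance between two discrete distributions given as
    dicts mapping outcomes to probabilities. Equals max_S |P[S] - Q[S]|,
    attained at S = {x : P[x] > Q[x]}."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.2, "b": 0.3, "c": 0.5}
print(statistical_difference(p, q))  # 0.3, attained at S = {"a"}
```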
\[def:ind\] A randomized algorithm ${\mathcal{A}}$ is said to be ${\epsilon}$-differentially private if for all databases ${{\mathrm{x}}},{{\mathrm{y}}}\in \mathcal{D}^n$ at Hamming distance at most 1, and for all subsets $S$ of outputs $$\begin{aligned}
\Pr[{\mathcal{A}}({{\mathrm{x}}})\in S] \leq e^{{\epsilon}} \Pr[{\mathcal{A}}({{\mathrm{y}}})\in S].\end{aligned}$$
This definition states that changing a single individual’s data in the database leads to a small change in the [*distribution*]{} on outputs. Unlike more standard measures of distance such as total variation (also called statistical difference) or Kullback-Leibler divergence, the metric here is multiplicative and so even very unlikely events must have approximately the same probability under the distributions ${\mathcal{A}}({{\mathrm{x}}})$ and ${\mathcal{A}}({{\mathrm{y}}})$. This condition was relaxed somewhat in other papers [@DiNi03; @DwNi04; @BDMN05; @DKMMN06; @CM06; @NRS07; @MKAGV08]. The schemes in all those papers, however, satisfy the following relaxation [@DKMMN06]:
\[def:indd\] A randomized algorithm ${\mathcal{A}}$ is $({\epsilon},\delta)$-differentially private if for all databases ${{\mathrm{x}}},{{\mathrm{y}}}\in {{\mathcal{D}}}^n$ that differ in one entry, and for all subsets $S$ of outputs, $\Pr[{\mathcal{A}}({{\mathrm{x}}})\in S] \leq e^{{\epsilon}} \Pr[{\mathcal{A}}({{\mathrm{y}}})\in S]+\delta\,.$
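As a concrete, standard instance (not one drawn from the cited papers): adding Laplace noise of scale $1/{\epsilon}$ to a counting query, whose answer changes by at most 1 between neighbouring databases, satisfies ${\epsilon}$-differential privacy with $\delta=0$. The pointwise density-ratio bound can be checked numerically:

```python
import math

def laplace_density(t, mean, scale):
    """Density of a Laplace(mean, scale) random variable at t."""
    return math.exp(-abs(t - mean) / scale) / (2.0 * scale)

eps = 0.5
scale = 1.0 / eps        # Lap(1/eps) noise; the count has sensitivity 1
fx, fy = 41.0, 42.0      # query answers on two neighbouring databases

# Scan a grid of outputs and record the worst-case density ratio.
worst = max(
    laplace_density(t, fx, scale) / laplace_density(t, fy, scale)
    for t in (x / 10.0 for x in range(300, 551))
)
print(worst <= math.exp(eps) + 1e-12)   # True: the ratio never exceeds e^eps
```

The ratio equals $e^{(|t-f({\mathrm{y}})|-|t-f({\mathrm{x}})|)/\mathrm{scale}}$, which the triangle inequality caps at $e^{\epsilon}$ for sensitivity-1 queries.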
The relaxations used in [@DwNi04; @BDMN05; @MKAGV08] were in fact stronger (i.e., less relaxed) than [Definition \[def:ind\]]{}. One consequence of the results below is that all the definitions are equivalent up to polynomial changes in the parameters, and so given the space constraints we work only with the simplest notion.[^1]
Semantics of Differential Privacy {#sec:bayes}
=================================
There is a crisp, semantically-flavored interpretation of differential privacy, due to Dwork and McSherry, and explained in [@Dw06]: [*Regardless of external knowledge, an adversary with access to the sanitized database draws the same conclusions whether or not my data is included in the original data.*]{} (the use of the term “semantic” for such definitions dates back to semantic security of encryption [@GM84]). In this section, we develop a formalization of this interpretation and show that the definition of differential privacy used in the line of work this paper follows ([@DiNi03; @DwNi04; @BDMN05; @DMNS06]) is essential in order to satisfy the intuition.
We require a mathematical formulation of “arbitrary external knowledge”, and of “drawing conclusions”. The first is captured via a [*prior*]{} probability distribution $b$ on ${{\mathcal{D}}}^n$ ($b$ is a mnemonic for “beliefs”). Conclusions are modeled by the corresponding posterior distribution: given a transcript $t$, the adversary updates his belief about the database ${{\mathrm{x}}}$ using Bayes’ rule to obtain a posterior $\bar{b}$:
$$\begin{aligned}
\label{eqn:bel}
\bar{b}[{{\mathrm{x}}}| t] = \frac{\Pr[{\mathcal{A}}({{\mathrm{x}}})=t] b[{{\mathrm{x}}}]}{\sum_{{\mathrm{y}}}\Pr[{\mathcal{A}}({{\mathrm{y}}})=t]b[{{\mathrm{y}}}]}\ . \end{aligned}$$
Note that in an interactive scheme, the definition of ${\mathcal{A}}$ depends on the adversary’s choices; for legibility we omit the dependence on the adversary in the notation. Also, for simplicity, we discuss only discrete probability distributions. Our results extend directly to the interactive, continuous case.
For a database ${{\mathrm{x}}}$, define ${{\mathrm{x}}}_{-i}$ to be the same vector where position $i$ has been replaced by some fixed, default value in $D$. Any valid value in $D$ will do for the default value. We can then imagine $n+1$ related games, numbered 0 through $n$. In Game 0, the adversary interacts with ${\mathcal{A}}({{\mathrm{x}}})$. This is the interaction that actually takes place between the adversary and the randomized algorithm ${\mathcal{A}}$. In Game $i$ (for $1\leq i \leq n$), the adversary interacts with ${\mathcal{A}}({{\mathrm{x}}}_{-i})$. Game $i$ describes the hypothetical scenario where person $i$’s data is not included.
For a particular belief distribution $b$ and transcript $t$, we can then define $n+1$ [*a posteriori*]{} distributions $\bar{b}_0,\dots,\bar{b}_n$, where the $\bar{b}_0$ is the same as $\bar{b}$ (defined in \[eqn:bel\]) and, for larger $i$, the $i$-th belief distribution is defined with respect to Game $i$: $$\bar{b}_i[{{\mathrm{x}}}| t] = \frac{\Pr[{\mathcal{A}}({{\mathrm{x}}}_{-i})=t] b[{{\mathrm{x}}}]}{\sum_{{\mathrm{y}}}\Pr[{\mathcal{A}}({{\mathrm{y}}}_{-i})=t]b[{{\mathrm{y}}}]}.$$
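These $n+1$ posteriors are easy to compute explicitly for a toy mechanism. The sketch below uses a two-entry binary database and per-entry randomized response (an illustrative ${\epsilon}$-differentially private mechanism; the function names are ours, not from the papers cited): it evaluates $\bar{b}_0$ and $\bar{b}_1$ for a uniform prior and one transcript, and checks that they differ only by the bounded factors differential privacy allows.

```python
import itertools
import math

EPS = 1.0
P_KEEP = math.exp(EPS) / (1.0 + math.exp(EPS))  # randomized response: report each
                                                # bit truthfully w.p. P_KEEP

def mech_prob(x, t):
    """Pr[A(x) = t] for per-entry randomized response."""
    p = 1.0
    for xi, ti in zip(x, t):
        p *= P_KEEP if xi == ti else 1.0 - P_KEEP
    return p

def posterior(prior, t, masked=None):
    """Posterior over databases given transcript t; in Game i (masked=i),
    entry i of the database is replaced by the default value 0."""
    def inp(x):
        if masked is None:
            return x
        return x[:masked] + (0,) + x[masked + 1:]
    weights = {x: mech_prob(inp(x), t) * prior[x] for x in prior}
    z = sum(weights.values())
    return {x: w / z for x, w in weights.items()}

databases = list(itertools.product((0, 1), repeat=2))
prior = {x: 0.25 for x in databases}   # uniform beliefs
t = (1, 0)                             # an observed transcript

b0 = posterior(prior, t)               # Game 0: real data used
b1 = posterior(prior, t, masked=0)     # Game 1: first entry replaced by default
print(round(b0[(1, 0)], 3), round(b1[(1, 0)], 3))  # 0.534 0.366
```

Each of the two Bayes-rule factors (likelihood and normalizer) changes by at most $e^{\epsilon}$ when an entry is masked, so the posteriors agree up to a factor of $e^{2{\epsilon}}$ pointwise.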
Given a particular transcript $t$, the privacy has been breached if the adversary would draw different conclusions about the world and, in particular, about a person $i$ depending on whether or not $i$’s data was used. It turns out that the exact measure of “different”
---
abstract: 'Recent high resolution spectroscopic analysis of nearby FGK stars suggests that a high C/O ratio of greater than 0.8, or even 1.0, is relatively common. Two published catalogs find C/O$>0.8$ in 25-30% of systems, and C/O$>1.0$ in $\sim$ 6-10%. It has been suggested that in protoplanetary disks with C/O$>0.8$ the condensation pathways to refractory solids will differ from what occurred in our solar system, where C/O$=0.55$. The carbon-rich disks are calculated to make carbon-dominated rocky planets, rather than oxygen-dominated ones. Here we suggest that the derived stellar C/O ratios are overestimated. One constraint on the frequency of high C/O is the relative paucity of carbon dwarf stars ($10^{-3}-10^{-5}$) found in large samples of low mass stars. We suggest reasons for this overestimation, including a high C/O ratio for the solar atmosphere model used for differential abundance analysis, the treatment of a Ni blend that affects the O abundance, and limitations of one-dimensional LTE stellar atmosphere models. Furthermore, from the estimated errors on the measured stellar C/O ratios, we find that the significance of the high C/O tail is weakened, with a true measured fraction of C/O$>0.8$ in 10-15% of stars, and C/O$>1.0$ in 1-5%, although these are still likely overestimates. We suggest that infrared T-dwarf spectra could show how common high C/O is in the stellar neighborhood, as the chemistry and spectra of such objects would differ compared to those with solar-like abundances. While possible at C/O$>0.8$, we expect that carbon-dominated rocky planets are rarer than others have suggested.'
author:
- 'Jonathan J. Fortney'
title: 'On the Carbon-to-Oxygen Ratio Measurement in Nearby Sunlike Stars: Implications for Planet Formation and the Determination of Stellar Abundances'
---
Introduction
============
The Composition of Stars and Planets
------------------------------------
The determination of the abundances of atoms in the atmospheres of stars is an essential element of modern astronomy. Recently, tremendous work has occurred on understanding the relationship between planets and the abundances of planet-hosting and non-planet-hosting stars. Since the pioneering work of [Gonzalez97]{}, many investigators have worked to understand connections between stellar abundances and the observed frequency [Santos04,Fischer05,Johnson10]{} and composition [Guillot06,Burrows07,Miller11]{} of planets.
Our solar system is one realization of the complex planet formation process. The raw materials that made up the Sun and solar nebula, through a process of condensation, grain growth, and accumulation, gave rise to four rocky planets in our inner solar system that are predominantly composed of Mg-Si-O-bearing rocks and Fe-Ni metals. In other solar systems, with parent star disks with other abundances, a different selection of refractory materials, or in different relative abundances, surely occur. For instance, if a nebula’s carbon-to-oxygen (C/O) ratio is $\gtrsim$0.8, condensation pathways can change dramatically, leading to carbon-dominated rocky planets, as recently discussed in detail by [Bond10]{}.
There had been intermittent interest in carbon-dominated planets over the past decade, from [Gaidos00]{}, [Lodders04]{}, and [Kuchner05]{}, to name three examples. In particular, [Gaidos00]{} discussed different formation scenarios for giant planet cores and rocky planets in disks with varied C/O ratios, as well as how the chemical evolution of the galaxy generally can lead to enhanced C/O through time. [Lodders04]{} suggested that the planetesimals that make up Jupiter’s heavy element enrichment were carbon-rich, and that Jupiter initially formed at the “tar line” rather than the “ice line.” This is one possible explanation for the low water abundance measured by the *Galileo Entry Probe* [Wong04]{}. [Kuchner05]{}, similar to [Gaidos00]{}, were interested in giant planets and terrestrial planets that could form in environments where the local (or entire disk’s) C/O$>1$, leading to carbon-dominated (rather than oxygen-dominated) silicates.
More recently [Bond10]{} coupled protoplanetary disk abundances derived from stellar spectra to a model of disk chemistry, which yields the condensation sequence of solids. Their work further coupled the formation of solids to an N-body model of planet formation [OBrien06]{}. For particular planetary systems, with measured C/O and Mg/Si ratios of the host star, they calculated the equilibrium disk chemistry and solid composition for the initial planetesimal distribution. [Bond10]{} furthermore kept track of the contribution of particular planetesimals as they add their mass to growing protoplanets, and in the end find the relative contributions of C, O, Mg, Si, etc. to the masses of formed planets.
Within the context of giant planets, [Madhu11]{} suggested that day-side photometry of the transiting planet WASP-12b indicates an atmosphere with C/O$>1.0$. More recently [Madhu11b]{} and [Oberg11]{} have investigated the accumulation of gas and icy planetesimals in disks with a range of C/O ratios to understand possible pathways to forming “carbon-rich” gas giants.
C/O Ratio in Stars
------------------
Composition-dependent planet formation models depend on the stellar abundances of C and O for the initial conditions of disk chemistry. The stellar C/O ratios in [Bond10]{} were taken from determinations of C from [Ecuvillon04]{} and of O from [Ecuvillon06]{}. Motivated by [Bond10]{}, larger tabulations of C and O abundances were recently made by [Delgado10]{}, for 370 FGK stars from the HARPS planet-search sample, and [Petigura11]{}, for 457 F and G stars from the California Planet Survey sample. These two studies are relatively similar, as they cover large samples that include planet-hosting stars and those not found to host planets.
The two studies do have some differences in the lines of C and O chosen. For the carbon abundance, the [Delgado10]{} work used CI lines at 5380.3 and 5052.2 Å, with only 5380.3 Å used for stars with [$T_{\rm eff}$]{}$< 5100$ K. For oxygen, the forbidden lines of \[OI\] at 6300 and 6363 Å were used. The derivation of the abundances was done with a combination of the code MOOG [Sneden73]{} (as updated in 2002) for the generation of synthetic spectra, and the Kurucz ATLAS9 atmosphere grid with overshooting [Kurucz93]{}; the equivalent widths were measured using the ARES program [Sousa07]{}. The [Petigura11]{} study used a CI line for carbon at 6587 Å, and the \[OI\] line for oxygen at 6300 Å. The derivation of the abundances was performed with the Spectroscopy Made Easy (SME) code [Valenti96]{} with Kurucz stellar atmospheres.
Tabulations from [Bond10]{} (who quote Ecuvillon et al. values), [Delgado10]{}, and [Petigura11]{} are shown in . Both of the large studies found a somewhat similar shape. They found a maximum at C/O ratios modestly higher than that of the Sun (0.55, from [Asplund09]{}), with a noticeably enhanced peak in the distribution found by [Delgado10]{}, shown most clearly in *b*. Of particular interest to all of these authors, and our *Letter*, is the tail off to higher C/O ratios $>0.8$ (dotted line) and even further to $>1.0$ (short dashed line). [Delgado10]{} find C/O$>0.8$ for 24% of their stars, and C/O$>1.0$ for 6%. For the [Petigura11]{} sample, they find C/O$>0.8$ for 29% of their stars, and C/O$>1.0$ for 10%. The numbers quoted are for the mixed sample of planet-hosting and non-planet-hosting stars. Taken at face value, with the condensation chemistry in [Bond10]{}, this potentially implies carbon planets (formed when C/O$>0.8$) in $\sim$25% of planetary systems.
However, one must take care when estimating the fraction of stars with high C/O ratios, given observational error bars. Of interest is the positive tail at high C/O, and a tail such as this is expected since the error is approximately constant in the logarithmic abundance ratio, \[C/O\]. This leads to an error distribution that is log-normal in the C/O ratio, and is seen in, for instance, Table 6 in [Petigura11]{}. Their average error at C/O$=1$ is 0.23, which is 1.61$\sigma$ above the solar C/O ratio they use, of 0.63. From this 1.61$\sigma$, 5.4% of the stellar sample is thus expected to be found with C/O$>1$, just due to observational errors. This would yield a “true” fraction of stars with
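The 5.4% figure is a one-sided Gaussian tail probability; with the numbers quoted above (an error of 0.23 at C/O $=1$ and a solar reference value of 0.63):

```python
import math

solar_co = 0.63   # solar C/O ratio adopted in the [Petigura11] sample
err = 0.23        # average measurement error at C/O = 1
z = (1.0 - solar_co) / err                 # number of sigma above solar
tail = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z > z) for a standard normal
print(f"{z:.2f} sigma -> {100 * tail:.1f}% expected above C/O = 1 from errors alone")
```

This reproduces the quoted 1.61$\sigma$ and the $\approx$5.4% of the sample expected above C/O $>1$ from observational scatter alone.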
---
abstract: 'A search is performed for collimated muon pairs displaced from the primary vertex, produced in the decay of long-lived neutral particles in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 7 TeV, with the ATLAS detector at the LHC. In a [1.9 fb$^{-1}$]{} event sample collected during 2011, the observed data are consistent with the Standard Model background expectations. Limits on the product of the production cross section and the branching ratio of a Higgs boson decaying to hidden-sector neutral long-lived particles are derived as a function of the particles’ mean lifetime.'
author:
- The ATLAS Collaboration
title: '[Search for displaced muonic lepton jets from light Higgs boson decay in proton-proton collisions at [$\sqrt{s}$]{} = 7 with the ATLAS detector]{}'
---
A search is presented for long-lived neutral particles decaying to final states containing collimated muon pairs in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 7 TeV. The event sample, collected during 2011 at the LHC with the ATLAS detector, corresponds to an integrated luminosity of [1.9 fb$^{-1}$]{}. The model considered in this analysis consists of a Higgs boson decaying to a new hidden sector of particles which finally produce two sets of collimated muon pairs, but the search described is equally valid for other, distinct models such as heavier Higgs boson doublets, singlet scalars or a $Z^\prime$ that decay to a hidden sector and eventually produce collimated muon pairs.\
Recently, evidence for the production of a boson with a mass of about 126 GeV has been published by ATLAS [@higgsatl] and CMS [@higgscms]. The observation is compatible with the expected production and decay of the Standard Model (SM) Higgs boson [@HIGGS1; @HIGGS2; @HIGGS3] at this mass. Testing the SM Higgs hypothesis is currently of utmost importance. To this end two effects may be considered: (i) additional resonances which arise in an extended Higgs sector found in many extensions of the SM, or (ii) rare Higgs boson decays which may deviate from those predicted by the SM. In this Letter we search for a scalar that decays to a light hidden sector, focusing on the 100 GeV to 140 GeV mass range. In doing so, we cover both of the above aspects, deriving constraints on additional Higgs-like bosons, as well as placing bounds on the branching ratio of the discovered 126 GeV resonance into a hidden sector of the kind described below.\
The phenomenology of light hidden sectors has been studied extensively over the past few years [@b1; @b2; @b3; @b4; @b5]. Possible characteristic topological signatures of such extensions of the SM are “lepton jets". A lepton jet is a cluster of highly collimated particles: electrons, muons and possibly pions [@b2; @b6; @b7; @b8]. These arise if light unstable particles with masses in the to range (for example dark photons, [$\gamma_{d}$]{}) reside in the hidden sector and decay predominantly to SM particles. At the LHC, hidden-sector particles may be produced with large boosts, causing the visible decay products to form jet-like structures. Hidden-sector particles such as [$\gamma_{d}$]{}may be long-lived, resulting in decay lengths comparable to, or larger than, the detector dimensions. The production of lepton jets can occur through various channels. For instance, in supersymmetric models, the lightest visible superpartner may decay into the hidden sector. Alternatively, a scalar particle that couples to the visible sector may also couple to the hidden sector through Yukawa couplings or the scalar potential. This analysis is focused on the case where the Higgs boson decays to the hidden sector [@b9; @b10]. The SM Higgs boson has a narrow width into SM final states if $m_{H} < 2 m_W$. Consequently, any new (non-SM) coupling to additional states, which reside in a hidden sector, may contribute significantly to the Higgs boson decay branching ratios. Even with new couplings, the total Higgs boson width is typically small, well below the order of one GeV. If a SM-like Higgs boson is confirmed, it will remain important to constrain possible rare decays, e.g. into lepton jets.\
Neutral particles with large decay lengths and collimated final states represent, from an experimental point of view, a challenge both for the trigger and for the reconstruction capabilities of the detector. Collimated particles in the final state can be hard to disentangle due to the finite granularity of the detectors; moreover, in the absence of inner tracking detector information and a primary vertex constraint, it is difficult to reconstruct charged-particle tracks from decay vertices far from the interaction point (IP). The ATLAS detector [@ATLASTDR] is equipped with a muon spectrometer (MS) with high-granularity tracking detectors that allow charged-particle tracks to be reconstructed in a standalone configuration using only the muon detector information (MS-only). This is a crucial feature for detecting muons not originating from the primary interaction vertex.\
The search presented in this Letter focuses on neutral particles decaying to the simplest type of muon jets (MJs), containing only two muons; prompt MJ searches have been performed both at the Tevatron [@tevatron1; @tevatron2] and at the LHC [@CMS]. Other searches for displaced decays of a light Higgs boson to heavy fermion pairs have also been performed at the LHC [@Hiddenv].\
The benchmark model used for this analysis is a simplified scenario where the Higgs boson decays to a pair of neutral hidden fermions ($f_{d2}$) each of which decays to one long-lived [$\gamma_{d}$]{}and one stable neutral hidden fermion ($f_{d1}$) that escapes the detector unnoticed, resulting in two lepton jets from the [$\gamma_{d}$]{}decays in the final state (see Fig. \[fig:model\]). The mass of the [$\gamma_{d}$]{}(0.4 ) is chosen to provide a sizeable branching ratio to muons [@b9].
![Schematic picture of the Higgs boson decay chain, H$\rightarrow$2($f_{d2}\rightarrow f_{d1}$[$\gamma_{d}$]{}). The Higgs boson decays to two hidden fermions ($f_{d2}$). Each hidden fermion decays to a [$\gamma_{d}$]{}and to a stable hidden fermion ($f_{d1}$), resulting in two muon jets from the [$\gamma_{d}$]{}decays in the final state.[]{data-label="fig:model"}](fig_01a.pdf){width="55mm"}
ATLAS is a multi-purpose detector [@ATLASTDR] at the LHC, consisting of an inner tracking system (ID) embedded in a superconducting solenoid, which provides a 2 T magnetic field parallel to the beam direction, electromagnetic and hadronic calorimeters and a muon spectrometer using three air-core toroidal magnet systems[^1]. The trigger system has three levels [@L1TRIG] called Level-1 (L1), Level-2 (L2) and Event Filter (EF). L1 is a hardware-based system using information from the calorimeter and muon spectrometer, and defines one or more Regions of Interest (ROIs), geometrical regions of the detector, identified by ($\eta$, $\phi$) coordinates, containing interesting physics objects. L2 and the EF (globally called the High Level Trigger, HLT) are software-based systems and can access information from all sub-detectors. The ID, consisting of silicon pixel and micro-strip detectors and a straw-tube tracker, provides precision tracking of charged particles for . The electromagnetic and hadronic calorimeter system covers and, at , has a total depth of 9.7 interaction lengths (22 radiation lengths in the electromagnetic part). The MS provides trigger information () and momentum measurements () for charged particles entering the spectrometer. It consists of one barrel and two endcap parts, each with 16 sectors in $\phi$, equipped with precision tracking chambers and fast detectors for triggering. Monitored drift tubes are used for precision tracking in the region and cathode strip chambers are used for 2.0 $\leq$ . The MS detectors are arranged in three stations of increasing distance from the IP: inner, middle and outer. The air core toroidal magnetic field allows an accurate charged particle reconstruction independent of the ID information. The three planes of trigger chambers (resistive plate chambers in the barrel and the thin gap chambers in the endcaps) are located in middle and outer (only in the barrel) stations. 
The L1 muon trigger requires hits in the middle stations to create a low transverse momentum () muon ROI or hits in both the middle and outer stations for a high ROI. The muon ROIs have a spatial extent of () in the barrel and of in the endcap. L1 ROI information seeds, at the HLT level, the reconstruction of muon momenta using the precision chamber information. In this way, sharp trigger thresholds up to 40 can be obtained.
The set of parameters used to generate the signal Monte Carlo samples is listed in Table \[tab:param\]. The Higgs boson is generated through the gluon-gluon fusion production mechanism which is the dominant process for a low mass Higgs boson. The gluon-gluon fusion Higgs boson production cross section in [*pp*]{} collisions at [$\sqrt{s}$]{}= 7 , estimated at the next-to-next-to-leading order (NNLO) [@HiggsCrossS], is $\sigma_{\textrm{\small SM}} = $ 24.0 pb
---
abstract: 'In the search engine of Google, the PageRank algorithm plays a crucial role in ranking the search results. The algorithm quantifies the importance of each web page based on the link structure of the web. We first provide an overview of the original problem setup. Then, we propose several distributed randomized schemes for the computation of the PageRank, where the pages can locally update their values by communicating to those connected by links. The main objective of the paper is to show that these schemes asymptotically converge in the mean-square sense to the true PageRank values. A detailed discussion on the close relations to the multi-agent consensus problems is also given.'
author:
- |
Hideaki Ishii\
Department of Computational Intelligence and Systems Science\
Tokyo Institute of Technology\
4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8502, Japan\
E-mail: ishii@dis.titech.ac.jp\
Roberto Tempo\
CNR-IEIIT, Politecnico di Torino\
Corso Duca degli Abruzzi 24, 10129 Torino, Italy\
E-mail: roberto.tempo@polito.it
title: |
Distributed Randomized Algorithms for\
the PageRank Computation[^1]
---
Introduction {#sec:intro}
============
In the last decade, search engines have become indispensable tools for searching the web. For such engines, it is essential that the search results not only consist of web pages related to the query terms, but also rank the pages properly, so that users can quickly access the desired information. The PageRank algorithm at Google is one of the successful algorithms that quantify and rank the importance of each web page. It was initially proposed in [@BriPag:98], and an overview can be found in, e.g., [@LanMey:06; @BryLei:06].
One of the main features of the PageRank algorithm is that it is based solely on the link structure inherent in the web. The underlying key idea is that links from important pages make a page more important. More concretely, each page is considered to be voting for the pages to which it links. The ranking of a page then reflects both the total number of votes it receives and the importance of the voters. This problem is mathematically formulated as finding the eigenvector corresponding to the largest eigenvalue of a certain stochastic matrix associated with the web structure.
For the PageRank computation, a critical aspect is the size of the web. The web is said to be composed of over 8 billion pages, and its size is still growing. Currently, the computation is performed centrally at Google, where the data on the whole web structure is collected by crawlers automatically browsing the web. In practice, the class of algorithms that can be applied is limited. In fact, the basic power method is employed, but it is reported that this computation takes about a week [@LanMey:06]. This clearly necessitates more efficient computational methods.
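The basic power method mentioned above is easy to sketch. The following toy implementation is only an illustration (the damping value $m = 0.15$ and the three-page web are assumptions here; the paper's precise problem setup is given in Section \[sec:pagerank\]): it iterates $x \mapsto (1-m)Ax + (m/n)Sx$ with a column-stochastic link matrix $A$ until the value vector stabilizes.

```python
def pagerank_power(links, m=0.15, tol=1e-10, max_iter=1000):
    """Power method for PageRank on a toy web graph.

    links[j] lists the pages that page j links to (every page is
    assumed to have at least one outgoing link).  Column j of the
    induced link matrix A spreads page j's value evenly over its
    outgoing links, so A is column-stochastic, matching the paper's
    convention.
    """
    n = len(links)
    x = [1.0 / n] * n                      # initial probability vector
    for _ in range(max_iter):
        # y = (1 - m) A x + (m / n) S x; since sum(x) = 1, the second
        # term simply adds m / n to every entry.
        y = [m / n] * n
        for j, out in enumerate(links):
            share = (1.0 - m) * x[j] / len(out)
            for i in out:
                y[i] += share
        if sum(abs(a - b) for a, b in zip(y, x)) < tol:
            return y
        x = y
    return x

# Toy web of three pages: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}.
ranks = pagerank_power([[1, 2], [2], [0]])
print(ranks)  # page 2, which receives the most votes, ranks highest
```

Because the iteration matrix is stochastic, $x$ stays a probability vector throughout, and the convergence is geometric with rate at most $1 - m$; this contraction is what makes the power method viable even at web scale.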
In this regard, several approaches have recently been proposed. In [@KamHavGol:04], an adaptive computation method is developed, which classifies web pages into groups based on the speed of convergence to the PageRank values and allocates computational resources accordingly. Another line of research is based on distributed approaches, where the computation is performed on multiple servers communicating to each other. For example, Monte Carlo methods are used in [@AvrLitNem:07], while the work in [@ZhuYeLi:05] utilizes the block structure of the web to apply techniques from the Markov chain literature. In [@deJBra:07; @KolGalSzy:06], methods based on the so-called asynchronous iterations [@BerTsi:89] in numerical analysis are discussed.
In this paper, we follow the distributed approach and, in particular, develop a randomized algorithm for the PageRank computation; for recent advances on probabilistic approaches in systems and control, see [@TemCalDab_book]. This algorithm is fully distributed and has three main features: First, in principle, each page can compute its own PageRank value locally by communicating with the pages that are connected by direct links. That is, each page exchanges its value with the pages that it links to and those that link to it. Second, the pages decide to initiate this communication at random times which are independent from page to page. This means that, in its implementation, there is neither a fixed order among the pages nor a centralized agent in the web that determines which pages update their values. Third, the computation required of each page is very mild.
The main result of the paper shows that the algorithm converges to the true PageRank values in the mean-square sense. This is achieved by computing the time average at each page. From a technical viewpoint, an important characteristic of the approach is that the stochasticity of the matrix in the original problem is preserved and exploited. We first propose a basic distributed update scheme for the pages and then extend this into two directions to enhance its performance and flexibility for implementation. It is further noted that in [@IshTem_acc:09; @IshTemBaiDab:09], this approach has been generalized to incorporate failures in the communication as well as aggregation of the web structure. In [@IshTem_sice:09], a related result on finding the variations in the PageRank values when the web data may contain errors is given.
We emphasize that the approach proposed here is particularly motivated by the recent research on distributed consensus, agreement, and flocking problems in the systems and control community; see, e.g., [@JadLinMor:03; @BerTsi:07; @Wu:06; @HatMes:05; @BoyGhoPra:06; @QuWanHul:08; @TahJad:08; @TemIsh:07; @CarFagFoc:05; @KasBasSri:07; @YuHenFid:07; @MarBroFra:04; @RenBea:05; @Moreau:05; @LinFraMag:05]. For additional details, we refer to [@AntBai:07; @csm:07; @BerTsi:89]. Among such problems, our approach to PageRank computation is especially related to the consensus, where multiple agents exchange their values with neighboring agents so that they obtain consensus, i.e., all agents reach the same value. The objective is clearly different from that of the PageRank problem, which is to find a specific eigenvector of a stochastic matrix via the power method. However, the recursion appearing in the consensus algorithm is exactly in the same form as the one for our distributed PageRank computation except that the class of stochastic matrices is slightly different. These issues will be discussed further.
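To make the connection with consensus concrete, the recursion $x(k+1) = Ax(k)$ with a stochastic matrix $A$ can be illustrated by a toy synchronous example (the ring graph, weights, and initial values below are invented for illustration; the paper's PageRank scheme instead uses column-stochastic matrices and randomized update times):

```python
def consensus_step(values, neighbors, weight=0.5):
    """One synchronous consensus update x(k+1) = A x(k).  Each agent
    keeps a fraction of its own value and moves toward the average of
    its neighbors' values, so every row of the implicit matrix A is
    nonnegative and sums to one (row-stochastic)."""
    new = []
    for i, v in enumerate(values):
        avg = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
        new.append((1.0 - weight) * v + weight * avg)
    return new

# Four agents on a ring: all values converge to the common average 2.5.
x = [1.0, 2.0, 3.0, 4.0]
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
for _ in range(100):
    x = consensus_step(x, nbrs)
print(x)  # approximately [2.5, 2.5, 2.5, 2.5]
```

Here the matrix happens to be doubly stochastic, so the agreement value is the average of the initial values; the PageRank recursion has the same algebraic form but preserves the sum of the entries instead.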
The organization of this paper is as follows: In Section \[sec:pagerank\], we present an overview of the PageRank problem. The distributed approach is introduced in Section \[sec:dist1\], where we propose a basic scheme and prove its convergence. Its relation with multi-agent consensus problems is discussed in Section \[sec:consensus\]. We then develop two extensions of the basic distributed algorithm: One in Section \[sec:simul\] is to improve the rate of convergence by allowing multiple pages to simultaneously update, and the other in Section \[sec:approx\] is to reduce the communication load among the pages. The proposed algorithm is compared with an approach known as asynchronous iteration from the field of numerical analysis in Section \[sec:asynch\]. Numerical examples are given in Section \[sec:example\] to show the effectiveness of the proposed schemes. We conclude the paper in Section \[sec:concl\]. Part of the material of this paper has appeared in a preliminary form in [@IshTem_cdc:08].

[*Notation*]{}: For vectors and matrices, inequalities are used to denote entry-wise inequalities: For $X,Y\in\R^{n\times m}$, $X\leq Y$ implies $x_{ij}\leq y_{ij}$ for $i=1,\ldots,n$ and $j=1,\ldots,m$; in particular, we say that the matrix $X$ is nonnegative if $X\geq 0$ and positive if $X> 0$. A probability vector is a nonnegative vector $v\in\R^n$ such that $\sum_{i=1}^n v_i = 1$. Unless otherwise specified, by a stochastic matrix, we refer to a column-stochastic matrix, i.e., a nonnegative matrix $X\in\R^{n\times n}$ with the property that $\sum_{i=1}^n x_{ij}=1$ for $j=1,\ldots,n$. Let $\one\in\R^n$ be the vector with all entries being $1$, i.e., $\one:=[1\;\cdots\;1]^T$. Similarly, $S\in\R^{n\times n}$ is the matrix with all entries being $1$. For $x\in\R^n$, we denote by $\abs{x}$ the vector containing the absolute values of the corresponding entries of $x$. The norm $\norm{\cdot}$ for vectors is the Euclidean norm.
The spectral radius of the matrix $X\in\R^{n\times n}$ is denoted by $\rho(X)$. We use $I$ for the identity matrix.
The PageRank problem {#sec:pagerank}
====================
---
abstract: 'An exact diagonalization study reveals that a matter-wave bright soliton and the Goldstone mode are simultaneously created in a quasi-one-dimensional attractive Bose-Einstein condensate by superpositions of quasi-degenerate low-lying many-body states. Upon formation of the soliton the maximum eigenvalue of the single-particle density matrix increases dramatically, indicating that a fragmented condensate converts into a single condensate as a consequence of the breaking of translation symmetry.'
author:
- Rina Kanamoto
- Hiroki Saito
- Masahito Ueda
title: |
Symmetry Breaking and Enhanced Condensate Fraction\
in a Matter-Wave Bright Soliton
---
Fragmentation of a Bose-Einstein condensate (BEC), which occurs as a consequence of a certain exact symmetry of the system, has recently been discussed in a number of articles [@NS; @fr; @UL]. In contrast to the conventional BEC, characterized by a unique macroscopic eigenvalue in the single-particle density matrix [@PO], the fragmented BEC is characterized by more than one macroscopic eigenvalue [@NS]. If the system has an exact symmetry and if the many-body theory predicts fragmentation of the ground state, the Gross-Pitaevskii (GP) mean-field theory does not predict a fragmented condensate but approximates it with a single condensate whose symmetry is spontaneously broken. For example, a quasi-one-dimensional (1D) BEC with attractive interaction forms bright solitons [@BS-EX], which are well described by the GP theory [@BS-MF]. Efforts to elucidate how such symmetry-broken states emerge from exact many-body states have been made in diverse systems [@symmetry].
In this Letter, we show that the formation of a broken-symmetry soliton and an enhancement of the condensate fraction are caused by superpositions of the low-lying states of the symmetry-preserving many-body Hamiltonian. We find that the many-body spectrum exhibits a number of quasi-degenerate states in the regime where the exact ground state is a fragmented condensate. Superposition of these quasi-degenerate levels simultaneously generates the broken-symmetry bright soliton and the Goldstone mode, accompanied by a significant increase in the condensate fraction. By introducing a small symmetry-breaking perturbation or by considering the action of a quantum measurement, we explicitly show that the fragmented condensate is very fragile against the soliton formation. Also elucidated in the language of the many-body theory is the mechanism underlying a partial breaking of the quantized circulation in the presence of a rotating drive.
We consider a system of $N$ attractive bosons with mass $m$ on a 1D ring with circumference $2\pi R$. Length and energy are measured in units of $R$ and $\hbar^2/(2mR^2)$, respectively. The Hamiltonian for our system is given by $$\begin{aligned}
\label{hamiltonian}
\hat{H}=\!\int_0^{2\pi}\!\!d\theta
\left[-\hat{\psi}^{\dagger}(\theta)\frac{\partial^2}{\partial\theta^2}\hat{\psi}(\theta)-\frac{\pi g}{2}
\hat{\psi}^{\dagger 2}(\theta){\hat{\psi}}^2(\theta)\right],\end{aligned}$$ where $\hat{\psi}(\theta)$ is the field operator, which annihilates an atom at position $\theta$, and $g$ $(>0)$ denotes the strength of attractive interaction. According to the GP mean-field approximation for the Hamiltonian (\[hamiltonian\]), the ground state is either a uniform condensate or a broken-symmetry bright soliton, depending on whether the parameter $gN$ is below or above the critical value, $gN= 1$. In contrast, all eigenstates of the original Hamiltonian are translation invariant, and many-body theory predicts that the ground state is either a [*single*]{} ($gN\lesssim 1$) or [*fragmented*]{} ($gN\gtrsim 1$) condensate [@QPT].
![ (a) Excitation spectrum obtained by exact diagonalization of Hamiltonian (1) for $N=200$. The inset shows the corresponding result obtained with truncation $l_{\rm c}=2$ near the critical point. (b) Bogoliubov spectrum corresponding to (a), where branch $A'$ represents the Goldstone mode, $B'$ the breathing mode of a bright soliton, and $C'$ the second harmonic of $B'$. (c) Energy gap $\Delta E$ between the ground and the first excited states in the many-body spectrum versus the total number of atoms $N$ with $gN=1.4$ held fixed. Triangles and circles denote results obtained with $l_{\rm c}=1$ and $l_{\rm c}=2$, respectively. []{data-label="fig1"}](fig1.eps)
Figure \[fig1\] (a) shows the low-lying spectrum obtained by exact diagonalization of the Hamiltonian (\[hamiltonian\]). The dramatic change in the landscape of the energy spectrum around $gN \simeq 1$ is a consequence of the quantum phase transition between a single condensate and a fragmented one. Figure \[fig1\] (b) presents the Bogoliubov spectrum obtained from the Bogoliubov-de Gennes equations. By comparing Figs. 1(a) and (b), we find that the Bogoliubov spectrum has a one-to-one correspondence with the many-body spectrum for $gN \lesssim 1$. For $gN\gtrsim 1$, however, the many-body spectrum becomes much more intricate than the Bogoliubov one. In the Bogoliubov spectrum for $gN \gtrsim 1$, there appears a Goldstone mode $A'$ (the translation mode of the soliton) associated with the symmetry breaking of the ground state, the breathing mode $B'$, and the second harmonic of the breathing mode $C'$. In Fig. \[fig1\] (a) for $gN\gtrsim 1$, in contrast, a number of quasi-degenerate levels appear with the density of states peaking around the Bogoliubov levels; we denote the corresponding groups as $A, B$, and $C$, respectively. The basis states for the diagonalization are restricted to the angular-momentum states $l=0,\pm 1$ ($l_{\rm c}=1$) unless otherwise stated, and the field operator is given by ${\hat \psi}(\theta)=({\hat c}_0+e^{i\theta}{\hat c}_1+e^{-i\theta}{\hat c}_{-1})/\sqrt{2\pi}$, where ${\hat c}_l$ is the annihilation operator of a boson with angular momentum $\hbar l$. The validity of this cutoff has been confirmed by the inclusion of higher angular-momentum states as shown in the inset of Fig. \[fig1\] (a) with $l_{\rm c}=2$ [@NP], where the energy landscape and the degree of degeneracy are unchanged from those of $l_{\rm c}=1$.
We denote the eigenstates of the Hamiltonian as $|{\cal L}\rangle_{\sigma}$, where ${\cal L}$ is the total angular momentum. The index $\sigma=A,B,\cdots$ labels the eigenstate in the ascending order of energy ($E_{|{\cal L}\rangle_A} < E_{|{\cal L}\rangle_B} < \cdots$) for each ${\cal L}$. The states $|{\cal L}\rangle_{\sigma}$ and $|-{\cal L}\rangle_{\sigma}$ are degenerate in the absence of a rotating drive. At $gN=0$, the eigenstates are the Fock states $|{\cal L}\rangle_{\sigma}=|n_1,n_{-1}\rangle$, where $n_{\pm 1}$ denote the numbers of atoms with angular momenta $l=\pm 1$ and ${\cal L}=n_1-n_{-1}$. The energies of the states $|n_1,n_{-1}\rangle$ are given by $n_1+n_{-1}\equiv j$, and these states are thus $(j+1)$-fold degenerate. The index $\sigma=A,B, \cdots$ corresponds to the number of $l=\pm 1$ pairs being given by $(j-|{\cal L}|)/2=0,1,\cdots$. For $0< gN \lesssim 1$, the energy branches are characterized by $j$, and the $(j+1)$-fold degeneracy is almost maintained. As $gN$ approaches 1, each branch begins to ramify, and the energy landscape for $gN \gtrsim 1$ is characterized by the index $\sigma$ and ${\cal L}$ as $E_{|0\rangle_{\sigma}}\lesssim E_{|\pm 1\rangle_{\sigma}}
\lesssim E_{|\pm 2\rangle_{\sigma}}\lesssim\cdots$. There is no Goldstone mode because the ground state possesses the translation symmetry, and the lowest excited states $|\pm 1\rangle_A$ have a finite energy gap $\Delta E\equiv E_{|\pm 1\rangle_A}-E_{|0\rangle_A}$, since the system is finite. However, the density of states above the ground state becomes higher for larger $N$, and the gap $\Delta E$ collapses as $1/N$ \[Fig. \[fig1\] (c)\]. The ground state is therefore unstable against excitations of the quasi-degenerate low-lying states.
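The $(j+1)$-fold degeneracy at $gN=0$ can be checked by direct enumeration of the Fock states; the sketch below is only an illustration of this counting under the $l_{\rm c}=1$ truncation, not the exact-diagonalization code used for the figures.

```python
def noninteracting_levels(N, j_max):
    """Enumerate the Fock states |n_1, n_-1> of the l_c = 1 truncation
    at gN = 0: energy j = n_1 + n_-1 (in units of hbar^2/(2 m R^2)),
    total angular momentum L = n_1 - n_-1, with the remaining N - j
    atoms occupying the l = 0 mode."""
    levels = {}
    for n1 in range(N + 1):
        for nm1 in range(N + 1 - n1):
            j = n1 + nm1
            if j <= j_max:
                levels.setdefault(j, []).append(n1 - nm1)
    return levels

levels = noninteracting_levels(N=200, j_max=5)
for j in sorted(levels):
    # level j is (j+1)-fold degenerate, with L = -j, -j+2, ..., j
    print(j, sorted(levels[j]))
```

Each energy level $j$ indeed contains $j+1$ states, one for each allowed angular momentum ${\cal L}=n_1-n_{-1}$, consistent with the counting in the text.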
We construct the many-body counterparts of the bright soliton $|\Psi_{\theta}\rangle$ and the Goldstone mode $|\Phi_{\theta}\rangle$, such that $\langle\Psi_{\theta}|\Phi_{\theta}\rangle=0$ by superpositions of the ground and quasi-degenerate
---
abstract: 'We study the harvesting of entanglement and mutual information by Unruh-DeWitt particle detectors from thermal and squeezed coherent field states. We prove (for arbitrary spatial dimensions, switching profiles and detector smearings) that while the entanglement harvesting ability of detectors decreases monotonically with the field temperature $T$, harvested mutual information grows linearly with $T$. We also show that entanglement harvesting from a general squeezed coherent state is independent of the coherent amplitude, but depends strongly on the squeezing amplitude. Moreover, we find that highly squeezed states i) allow for detectors to harvest much more entanglement than from the vacuum, and ii) ensure that the entanglement harvested does not decay with their spatial separation. Finally we analyze the spatial inhomogeneity of squeezed states and its influence on harvesting, and investigate how much entanglement one can actually extract from squeezed states when the squeezing is bandlimited.'
author:
- Petar Simidzija
- 'Eduardo Martín-Martínez'
bibliography:
- 'references.bib'
title: Harvesting correlations from thermal and squeezed coherent states
---
Introduction {#sec:intro}
============
The entanglement structure of a quantum field has been an important area of research over the last few decades. Besides being an interesting focus of study in its own right, the presence of entanglement between local degrees of freedom in general field states (and in particular the vacuum [@Summers1985; @Summers1987]) has been used as a means to better understand important fundamental questions, from the black hole information loss problem [@Hawking1975; @Hawking1976; @Susskind1993; @Almheiri2013; @Braunstein2013; @Hawking2016], to the dynamics of quantum phase transitions in statistical mechanics [@Vidal2002; @Calabrese2004]. Moreover, operational approaches which harness this entanglement to perform useful tasks have also been studied, leading to, for example, the development of protocols for *quantum energy teleportation* [@Hotta2008; @Hotta2009; @Frey2014].
Another widely studied protocol making use of the entanglement present in a quantum field is concerned with the extraction of field entanglement onto a pair of initially uncorrelated first-quantized systems (detectors). These so called *entanglement harvesting* protocols were initially studied in the 90s by Valentini [@Valentini1991], then later by Reznik *et al.* [@Reznik2003; @Reznik2005], and have in the last decade or so experienced a great deal of attention from many different perspectives [@Steeg2009; @Brown2013a; @Martinez2013a; @Brown2014; @Drago2014; @Pozas2015; @Salton2015; @Pozas2016; @Martinez2016a; @Nambu; @Sachs2017; @Guillaume2017; @Sachs2018; @Trevison2018].
Many of these recent lines of research into entanglement harvesting are related to the fact that the amount of harvestable entanglement is generally sensitive to the many variable parameters of the setup. For instance, the sensitivity of entanglement harvesting to the position and motion of the detectors has resulted in harvesting-based proposals in metrology, from rangefinding [@Salton2015] to precise vibration detection [@Brown2014], while, on the more fundamental side, it has also been shown that entanglement harvesting is sensitive to the geometry [@Steeg2009] and topology [@Martinez2016a] of the background spacetime. Furthermore, while most of these entanglement harvesting studies have focused on conventional linear Unruh-DeWitt (UDW) particle detectors [@DeWitt1979] coupled to real scalar fields [@Pozas2015], there have also been several interesting results coming from other variations of the setup. Some examples include: hydrogenoid atomic detectors coupled to the full electromagnetic field [@Pozas2016], non-linear couplings of UDW detectors to neutral [@Sachs2017] and charged [@Sachs2018] scalar fields, tripartite entanglement in flat spacetime [@Drago2014], and multiple detector harvesting in curved spacetimes [@Nambu]. Entanglement harvesting using infinite-dimensional harmonic oscillator detectors has been looked at in several works as well. An example which is very relevant to this paper is an article by Brown where the issue of harvesting from thermal states is considered [@Brown2013a].
While some of the above mentioned parameters affecting entanglement harvesting are difficult to control in a lab setting (such as the geometry and topology of spacetime), other parameters, such as the energy gap of the detectors or the state of the field, are more easily tunable. A major motivation for studying the sensitivity of entanglement harvesting to these types of parameters is that it may lead to experimental realizations of entanglement harvesting protocols. This would not only be an important achievement from a fundamental perspective, but it could also potentially be a method of obtaining entanglement that could then be used for quantum information purposes [@Martinez2013a].
With this ultimate motivation in mind, it has been shown that a non-zero detector energy gap is crucial in protecting an entanglement harvesting UDW pair against fluctuation-induced, entanglement-harming, local noise [@Pozas2017; @Simidzija2018]. Furthermore, for harmonic oscillator detectors, this noise has been found to increase with field temperature, leading to detrimental effects on the amount of entanglement harvested by oscillator pairs [@Brown2013a]. Meanwhile, and perhaps surprisingly, for UDW detectors interacting with coherent states of the field, the presence of leading-order local noise does not end up affecting the amount of entanglement that can be harvested from the field [@Simidzija2017b; @Simidzija2017c].
In this paper, we fill in significant gaps in the study of entanglement harvesting sensitivity to thermal and general squeezed coherent field states. While, to our knowledge, this is the first study of squeezed state entanglement harvesting, we would also like to point out that our study of thermal state harvesting differs in several crucial regards from the previous work in [@Brown2013a]. In [@Brown2013a] it was shown that for a pair of pointlike oscillator detectors interacting with a massless field in a one-dimensional cavity, the amount of entanglement extracted decays rapidly with the temperature. In contrast, i) we consider spatially smeared qubit detectors interacting with a field of any mass in a spacetime of any dimensionality, rather than pointlike oscillator detectors interacting with a massless field in (1+1)-dimensions, ii) we look at the continuum free space case rather than being in a cavity, and hence we are not forced to introduce any UV cutoffs to handle numerical sums, and iii) we directly compute the evolved detectors’ density matrix from the field’s one- and two-point functions, rather than using the significantly different formalism of Gaussian quantum mechanics (see, e.g. [@Adesso2007]).
Despite these significant differences between our approach and that in [@Brown2013a], we will find that, for thermal states, our results are in qualitative agreement with their general conclusions, i.e. that temperature is detrimental to entanglement harvesting. However, since we obtain analytical expressions for entanglement measures, rather than being restricted to numerical calculations, we are able to provide an explicit proof that the amount of entanglement that (qubit) detectors can harvest from the field rapidly decays with its temperature. In particular, we will show that the optimal thermal state for harvesting entanglement from the field is the vacuum. On the other hand, we will see that this is not the case for the harvesting of mutual information, which is a measure of the total (quantum and classical) correlations of the detector pair. In fact we will see that for high field temperatures $T$ (while still in the perturbative regime) the mutual information harvested by the detectors *increases* proportionally with $T$.
We will then consider the case of squeezed coherent states [@Loudon1987], where, to the authors’ knowledge, no previous literature exists. We will first prove that the statement “entanglement harvesting is independent of the field’s coherent amplitude” is true not only for non-squeezed coherent states, as was shown in [@Simidzija2017b], but also for arbitrarily squeezed coherent states. On the other hand, we will show that, unlike the coherent amplitude, the choice of the field’s squeezing amplitude $\zeta(\bm k)$ does in fact affect the ability of UDW detectors to become entangled, and moreover the Fourier transform of $\zeta(\bm k)$ directly gives the locations in space near which entanglement harvesting is optimal. Perhaps surprisingly, we will also find that for highly and uniformly squeezed field states, the amount of entanglement that the detectors can harvest is independent of their spatial separation, and is often much higher than the amount obtainable from the vacuum. We will also analyze whether this advantage carries over to more experimentally attainable field configurations where states are squeezed across a narrow frequency range of field modes.
This paper is structured as follows: We begin in Sec. \[sec:setup\] by reviewing the setup of entanglement harvesting by UDW detectors from arbitrary states of a scalar field. In Sec. \[sec:thermal\] we particularize to the case of thermal field states, and study the harvesting of entanglement and mutual information in this setting. Then, in Sec. \[sec:squeezed\] we look at entanglement harvesting from squeezed field states, both those with uniform and bandlimited squeezing amplitudes. Finally, Sec. \[sec:conclusions\] is left for the conclusions. Units of $\hbar=c=k_\textsc{b}=1$ are used throughout.
Correlation harvesting setup {#sec:setup}
============================
Before studying the harvesting of correlations from thermal and squeezed coherent field states, let us review the general correlation harvesting setup that can
---
author:
- |
Shuchi Chawla\
UW-Madison\
[shuchi@cs.wisc.edu]{}
- |
Evangelia Gergatsouli\
UW-Madison\
[gergatsouli@wisc.edu]{}
- |
Yifeng Teng\
UW-Madison\
[yifengt@cs.wisc.edu]{}
- |
Christos Tzamos\
UW-Madison\
[tzamos@wisc.edu]{}
- |
Ruimin Zhang\
UW-Madison\
[rzhang274@wisc.edu]{}
bibliography:
- 'allrefs.bib'
title: Learning Optimal Search Algorithms from Data
---
Extension to other feasibility constraints {#sec:extensions}
==========================================
In this section we extend the problem to cases where there is a feasibility constraint $\mathcal{F}$ that limits which or how many boxes we can choose. We consider the cases where we are required to select $k$ distinct boxes, or $k$ independent boxes from a matroid. In both cases we design SPA strategies that can be converted to PA. These two variants are described in more detail in subsections \[subsec:generalK\] and \[subsec:matroids\] that follow.
---
abstract: 'An Archimedean copula is characterised by its generator. This is a real function whose inverse behaves as a survival function. We propose a semiparametric generator based on a quadratic spline. This is achieved by modelling the first derivative of a hazard rate function, in a survival analysis context, as a piecewise constant function. Convexity of our semiparametric generator is obtained by imposing some simple constraints. The induced semiparametric Archimedean copula produces Kendall’s tau association measure that covers the whole range $(-1,1)$. Inference on the model is done under a Bayesian approach and for some prior specifications we are able to perform an independence test. Properties of the model are illustrated with a simulation study as well as with a real dataset.'
author:
- |
[Ricardo Hoyos & Luis Nieto-Barajas]{}\
[*Department of Statistics, ITAM, Mexico*]{}\
\
title: '**A Bayesian semiparametric Archimedean copula**'
---
[*Keywords*]{}: Archimedean copula, Bayes nonparametrics, piecewise constant, survival analysis, quadratic spline.
[*AMS Classification*]{}: 60E05 $\cdot$ 62G05 $\cdot$ 62N86.
Introduction {#sec:intro}
============
Let $\varphi(\cdot)$ be a continuous, strictly decreasing function from $[0,1]$ to $[0,\infty)$ such that $\varphi(1)=0$. Let $\varphi^{-1}(\cdot)$ be the inverse or the pseudo-inverse of $\varphi$, where the latter is defined as zero for $t>\varphi(0)$. If $\varphi(0)=\infty$ the generator is called strict. An Archimedean copula $C(u,v)$ with generator $\varphi$ is a function from $[0,1]^2$ to $[0,1]$ defined as $$\label{eq:arqc}
C(u,v)=\varphi^{-1}\left(\varphi(u)+\varphi(v)\right).$$ A further requirement for $C$ to be well defined is that $\varphi$ must be convex [e.g. @nelsen:06].
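The construction above can be sketched directly in code. We use the Clayton generator as a concrete example — a standard one-parameter textbook family [e.g. @nelsen:06], not this paper's semiparametric proposal:

```python
def archimedean_copula(phi, phi_inv, u, v):
    """C(u, v) = phi^{-1}(phi(u) + phi(v)) for an Archimedean generator phi."""
    return phi_inv(phi(u) + phi(v))

# Clayton generator with parameter theta > 0 (illustrative choice)
theta = 1.0
phi = lambda t: (t**(-theta) - 1) / theta
phi_inv = lambda s: (1 + theta * s)**(-1 / theta)

# matches the Clayton closed form (u^-theta + v^-theta - 1)^(-1/theta);
# for theta = 1, C(0.5, 0.5) = 1/3
print(archimedean_copula(phi, phi_inv, 0.5, 0.5))
```

The generator is strict here ($\varphi(0)=\infty$), so no pseudo-inverse truncation is needed.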
There are many properties that characterize Archimedean copulas, for instance, they are symmetric, associative and their diagonal section $C(u,u)$ is always less than $u$ for all $u\in(0,1)$. Generators $\varphi(\cdot)$ are usually parametric families defined by a single parameter. Most of them are summarised in @nelsen:06 [Table 4.1].
Association measures induced by Archimedean copulas are a function of the generator. For instance, Kendall’s tau becomes $$\label{eq:ktau}
\kappa_\tau=1+4\int_0^1\frac{\varphi(t)}{\varphi^{\prime}(t^+)}\d t,$$ where $\varphi^{\prime}(t^+)$ denotes the right derivative of $\varphi$ at $t$.
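The integral formula can be checked numerically against a generator whose Kendall's tau is known in closed form. We use the Clayton generator, for which $\kappa_\tau=\theta/(\theta+2)$ — a standard result from the copula literature, imported here as an assumption rather than derived:

```python
def kendall_tau(phi, dphi, n=200000):
    """kappa_tau = 1 + 4 * int_0^1 phi(t)/phi'(t) dt, via the midpoint rule."""
    h = 1.0 / n
    s = sum(phi((i + 0.5) * h) / dphi((i + 0.5) * h) for i in range(n))
    return 1 + 4 * h * s

theta = 2.0
phi = lambda t: (t**(-theta) - 1) / theta   # Clayton generator
dphi = lambda t: -t**(-theta - 1)           # its (right) derivative

print(kendall_tau(phi, dphi))  # close to theta/(theta+2) = 0.5
```

The integrand $\varphi(t)/\varphi'(t)$ is bounded on $(0,1)$ for this family, so a plain midpoint rule suffices.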
In this work we propose a Bayesian semiparametric generator defined through a quadratic spline. Within a survival analysis context, we model the first derivative of a hazard rate function with a piecewise constant function. The hazard rate and the cumulative hazard functions then become piecewise linear and piecewise quadratic continuous functions, respectively. The induced survival function is used as an inverse generator for an Archimedean copula. Convexity constraints are properly addressed and inference on the model is done under a Bayesian approach.
Other studies on semiparametric generators for Archimedean copulas can be found in , where the model is based on an empirical Kendall’s process. A new approach and extensions of this latter methodology can be found in . In , the model arises from the one-to-one correspondence between an Archimedean generator and a distribution function of a nonnegative random variable; in particular, a mixture of Pólya trees is used as a prior for the corresponding distribution function under a Bayesian nonparametric approach. In a work more related to ours, the relationship between quantile functions and Archimedean generators is used to define a semiparametric generator by supplementing a parametric generator with $n+1$ dependence parameters. Differing from their work, our model is not based on any parametric generator, and the Kendall’s tau can take values on the whole interval $(-1,1)$.
The contents of the rest of the paper are as follows. In Section \[sec:model\] we present our proposal and characterise its properties. In Section \[sec:post\] we provide details of how to make posterior inference under a Bayesian approach. In Section \[sec:illust\] we illustrate the performance of our model with a simulation study as well as with a real data set. We conclude with some remarks in Section \[sec:concl\].
Model {#sec:model}
=====
To define our proposal we realise that $\varphi^{-1}$ is a decreasing function from $[0,\infty)$ to $[0,1]$, so it behaves as a survival function in a failure time data analysis context. The idea is to propose a semi/nonparametric form for the inverse generator $\varphi^{-1}$ by using survival analysis ideas. For that we recall some basic definitions.
Let $h(t)$ be a nonnegative function with domain in $[0,\infty)$ such that $H(t)=\int_0^t h(s)\d s\to\infty$ as $t\to\infty$. Then $S(t)=\exp\{-H(t)\}$ is a decreasing function from $[0,\infty)$ to $[0,1]$, so it behaves like an inverse generator $\varphi^{-1}(t)$. In a survival analysis context, functions $h(\cdot)$, $H(\cdot)$ and $S(\cdot)$ are the hazard rate, cumulative hazard and survival functions, respectively.
In particular, if $h(t)=\theta$, i.e. constant for all $t$, then $S(t)=e^{-\theta t}$. If we take $\varphi^{-1}(t)=e^{-\theta t}$, then $\varphi(t)=-(\log t)/\theta$. Plugging this generator into the copula construction, the resulting copula $C(u,v)=uv$ is the independence copula and, interestingly, it does not depend on $\theta$.
Using these ideas we construct a semiparametric generator in the following way. We first consider a partition of size $K$ of the positive real line, with interval limits given by $0=\tau_0<\tau_1<\cdots<\tau_K=\infty$. Then, we define the first derivative of the hazard rate, as a piecewise constant function of the form $$\label{eq:hp}
h'(t)=\sum_{k=1}^K \theta_k I(\tau_{k-1}<t\leq\tau_k),$$ where $\theta_K\equiv 0$. We recover the hazard rate function as $h(t)=\int_0^t h'(s)\d s+\theta_0$, where $h(0)=\theta_0>0$ is an initial condition. Using , the hazard rate becomes a piecewise linear function of the form $$\label{eq:h}
h(t)=\sum_{k=1}^K \left(A_k+\theta_k t\right) I(\tau_{k-1}<t\leq\tau_k),$$ where $A_1=\theta_0$ and $A_k=\theta_0+\sum_{j=1}^{k-1}(\theta_j-\theta_{j+1})\tau_j$, for $k=2,\ldots,K$.
Integrating now the hazard function , the cumulative hazard is a piecewise quadratic function given by $$\label{eq:H}
H(t)=\sum_{k=1}^K \left(B_k+A_k t+\frac{\theta_k}{2} t^2\right) I(\tau_{k-1}<t\leq\tau_k),$$ where $B_1=0$ and $B_k=\sum_{j=2}^k(\theta_j-\theta_{j-1})\tau_{j-1}^2/2$, for $k=2,\ldots,K$.
We therefore define a semiparametric inverse generator as the induced survival function, which can be written as $$\label{eq:phiinv}
\varphi^{-1}(t)=\exp\{-H(t)\},$$ where $H(t)$ is given in .
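The piecewise construction above can be assembled numerically. The sketch below builds $A_k$ and $B_k$ by the continuity recursions (equivalent to the closed forms in the text) and returns $\varphi^{-1}(t)=\exp\{-H(t)\}$; the knot locations and slopes are illustrative values we chose, constrained only so that $h(t)=A_k+\theta_k t$ stays positive:

```python
import math

def inverse_generator(taus, thetas, theta0):
    """phi^{-1}(t) = exp(-H(t)) with H piecewise quadratic.
    taus = [tau_1, ..., tau_{K-1}] (finite knots), thetas = [theta_1, ..., theta_K]
    with theta_K = 0, and h(0) = theta0 > 0."""
    K = len(thetas)
    A = [theta0]   # A_1 = theta_0
    B = [0.0]      # B_1 = 0
    for k in range(1, K):
        # continuity of h and H at tau_k gives these recursions
        A.append(A[-1] + (thetas[k - 1] - thetas[k]) * taus[k - 1])
        B.append(B[-1] + (thetas[k] - thetas[k - 1]) * taus[k - 1]**2 / 2)

    def H(t):
        k = 0
        while k < K - 1 and t > taus[k]:
            k += 1
        return B[k] + A[k] * t + thetas[k] * t**2 / 2

    return lambda t: math.exp(-H(t))

# illustrative K = 3 configuration; h(t) remains positive on [0, infinity)
phi_inv = inverse_generator(taus=[1.0, 2.0], thetas=[0.5, -0.2, 0.0], theta0=1.0)
```

By construction $\varphi^{-1}(0)=1$, the function is continuous across the knots, and it decreases monotonically since $h>0$.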
After doing some algebra, we can invert this function to obtain an expression for the generator $$\begin{aligned}
\nonumber
\varphi(t)=\sum_{k=1}^{K}&\left(\left[\operatorname{sgn}(\theta_k)\left\{\frac{2}{\theta_k}\left(\frac{A_k^2}{2\theta_k}-B_k-\log(t)\right)\right\}^{1/2}-\frac{A_k}{\theta_k}\right]I(\theta_k\neq 0)\right.\\
\nonumber
&\left.\hspace{5mm}-\frac{B_k+\log(t)}{A_k}I(\theta_k=0)\right)
I\left(\varphi^{-1}(\tau_k)\leq t< \varphi^{-1}(\tau_{k-1})\right). \\
\label{eq:phi}\end{aligned}$$
The value $K$ controls the flexibility of the generator, and thus of the copula. If $K=1$, the induced
---
abstract: 'The variability inherent in solar wind composition has implications for the variability of the physical conditions in its coronal source regions, providing constraints on models of coronal heating and solar wind generation. We present a generalized prescription for constructing a wavelet power significance measure (confidence level) for the purpose of characterizing the effects of missing data in high cadence solar wind ionic composition measurements. We describe the data gaps present in the 12-minute ACE/SWICS observations of ${\rm O}^{7+}/{\rm O}^{6+}$ during 2008. The decomposition of the in-situ observations into a ‘good measurement’ and a ‘no measurement’ signal allows us to evaluate the performance of a filler signal, i.e., various prescriptions for filling the data gaps. We construct Monte Carlo simulations of synthetic ${\rm O}^{7+}/{\rm O}^{6+}$ composition data and impose the actual data gaps that exist in the observations in order to investigate two different filler signals: one, a linear interpolation between neighboring good data points, and two, the constant mean value of the measured data. Applied to these synthetic data plus filler signal combinations, we quantify the ability of the power spectra significance level procedure to reproduce the ensemble-averaged time-integrated wavelet power per scale of an ideal case, i.e. the synthetic data without imposed data gaps. Finally, we present the wavelet power spectra for the ${\rm O}^{7+}/{\rm O}^{6+}$ data using the confidence levels derived from both the Mean Value and Linear Interpolation data gap filling signals and discuss the results.'
author:
- 'J. K. Edmondson, B. J. Lynch, S. T. Lepri, and T. H. Zurbuchen'
title: 'Analysis of High Cadence In-Situ Solar Wind Ionic Composition Data Using Wavelet Power Spectra Confidence Levels'
---
Introduction
============
Decades of in-situ plasma observations have revealed a rich picture of the solar wind [@Zurbuchen07 and references therein], whose overall structure and magnetic topology follows the solar magnetic activity cycle. Heliospheric solar wind observations reflect the structure of their coronal source regions: a relatively cool, fast solar wind with relatively homogeneous ionic composition and elemental abundances originating from coronal holes [@Geiss95; @McComas02], and a relatively hot, slow solar wind that exhibits considerably more variability in ionic composition and elemental abundances, originating directly from within the vicinity of coronal streamers [@Gosling97; @Zurbuchen02]. In-situ observations of ionic charge state composition, especially of carbon (${\rm C}^{6+}/{\rm C}^{4+}$) and oxygen (${\rm O}^{7+}/{\rm O}^{6+}$), offer insight into coronal dynamics at temperatures of order one million degrees [e.g., @vonSteiger1997; @Zhao09; @Landi12; @Gilbert12]. Identifiable temporal scales within the compositional variability may provide insights into the nature of the source regions of the solar wind.
Wavelet transforms are used to identify transient structure coherency as well as global periodicities in time series data [see e.g., @Daubechies1992; @TorrenceCompo1998; @Liu2007]. Wavelet analyses have an advantage over traditional spectral methods by being able to isolate both large timescale and small timescale periodic behavior that occur over only a subset of the time series. Thus, we are able to analyze the frequency decomposition as a function of time. This is extremely useful if we expect the time series to originate from either a time varying source region or, equivalently, to be consecutively sampling many different source regions with varying physical properties, such as in the solar wind.
Recently, @Katsavrias12 used wavelets to examine four solar cycles worth of solar wind plasma, interplanetary magnetic field, and geomagnetic indices to verify intermittent periodicities on timescales shorter than the solar cycle. Common solar timescales ranging from a decade down to hours have been characterized, and timescales of the order of a Carrington Rotation period (approx. 27 days) and shorter (e.g., 14, 9, and 7 days) have been consistently identified in various heliospheric and geomagnetic data [e.g., @Bolzan05; @Fenimore78; @Gonzalez87; @Gonzalez93; @Mursula98; @Nayar01; @Nayar02; @Svalgaard75]. @Temmer07 linked the 9 day timescale to coronal hole variability in the declining phase of solar cycle 23 and @Neugebauer97 used wavelet analyses of [*Ulysses*]{} solar wind speed data to investigate polar microstreams occurring on timescales of 16 hours.
Wavelet power spectra are a powerful tool to identify and characterize structures with specific transient timescales and global periodicities, but all commonly used algorithms require fully populated data-sets. This is at odds with solar wind composition data – as with almost all in-situ data-sets – because data gaps occur for a number of reasons. The experiment may undergo maintenance and data may not be available, or the signal-to-noise of the instrument at a given time may have prevented a valid and accurate measurement. Care must therefore be taken to account for spurious results caused by such data gaps. To identify characteristic timescales smaller than the largest gap duration, one must either break up the full data set into disjoint segments of continuous measurements, or quantify the spurious information introduced into the data set by filling in the no-measurement times. It is with the latter solution that the methodology described in this paper is concerned.
Our purpose here is to describe a generalized procedure for the construction of wavelet power significance levels that quantify the relative influence of a filler signal of generally arbitrary form interleaved within a measured data signal. The decomposition of the time series allows for a similar decomposition of the total wavelet power spectrum, thereby quantifying the power spectra associated with the filler signal and nonlinear interference, for comparison against the measured data signal power. Using the decomposition of the signal power spectra, we identify a statistical confidence level against the null hypothesis that a given feature in the total wavelet power spectrum is due to the filler signal and/or interference effects; in other words, we construct a significance measure for the total wavelet power spectrum that identifies power spectral features resulting from the measured signal.
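The measured/filler decomposition is easy to sketch. Below, NaN marks the no-measurement times, and the two filler signals compared later in the paper (Linear Interpolation and constant Mean Value) fill the gaps; this is our own illustration, not the ACE/SWICS processing pipeline:

```python
import numpy as np

def fill_gaps(t, y, method="linear"):
    """Split y into a 'good measurement' part (finite values) and a
    'no measurement' part (NaN), then fill the gaps with a filler signal."""
    good = np.isfinite(y)
    filled = y.copy()
    if method == "linear":      # linear interpolation between neighbouring good points
        filled[~good] = np.interp(t[~good], t[good], y[good])
    elif method == "mean":      # constant mean value of the measured data
        filled[~good] = y[good].mean()
    return filled, good

t = np.arange(5.0)
y = np.array([1.0, np.nan, 3.0, np.nan, 5.0])
lin, good = fill_gaps(t, y, "linear")   # -> [1, 2, 3, 4, 5]
mean, _   = fill_gaps(t, y, "mean")     # -> [1, 3, 3, 3, 5]
```

The boolean `good` mask is what allows the total wavelet power to be decomposed into measured-signal, filler-signal, and interference contributions.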
The structure of the paper is as follows. In Section \[S:WaveletCharacteristics\] we briefly review the wavelet transform, power spectrum, and methods for identifying global periodicities (akin to Fourier modes) as well as transient coherency characteristics. In Section \[S:Data\] we discuss the solar wind ionic composition data obtained by ACE/SWICS during the quiet solar conditions of 2008, and the origin and characteristics of no-measurement data gaps in the context of wavelet analysis. In Section \[S:DataReductionScheme\] we derive the wavelet power statistical confidence level to characterize the effects, and quantify the influence of no-measurement gaps in the data. In Section \[S:MonteCarlo\] we evaluate the performance of two filler signal forms (Linear Interpolation and constant Mean Value) using ensemble-averaged Monte Carlo simulations of a statistical ${\rm O}^{7+}/{\rm O}^{6+}$ ratio model random (1$^{st}$-order Markov) process. In Section \[S:WaveletO7O6\] we examine the wavelet power spectra of actual 12 minute ${\rm O}^{7+}/{\rm O}^{6+}$ data from 2008 with the Linear Interpolation filler signal for the high cadence data gaps, and present our conclusions in Section \[S:Conclusions\].
Rectified Wavelet Power Spectrum Analysis {#S:WaveletCharacteristics}
=========================================
The wavelet transform of a time series $T(t)$ is given by $$\label{E:WaveletTransform}
W_{\rm T}( t , s ) = \int {\rm T} ( t' ) \ \psi^* ( t' , t , s ) \ dt'.$$ In our calculations, the wavelet bases are generated from the Morlet family, though we note all following analysis is valid for any wavelet basis family. The Morlet family is a time-shifted, time-scaled, complex exponential modulated by a Gaussian envelope, $$\psi \left( t' , t , s \right) = \frac{\pi^{-1/4}}{\vert s \vert^{1/2}} {\rm exp}\left[ i \omega_{0} \left( \frac{t' - t}{s} \right) \right] {\rm exp}\left[ - \frac{1}{2} \left( \frac{t' - t}{s} \right)^2 \right]$$ where $ \left( t' , t \right) \in I_{T} \times I_{T} \subset \mathbb{R} \times \mathbb{R}$ is the time and time-translation center, respectively, and $s \in I_{S} \subset \mathbb{R}$ is the timescale over which the Gaussian envelope is substantially different from zero. The $\omega_{0} \in \mathbb{R}$ is a non-dimensional frequency parameter defining the number of oscillations of the complex exponential within the Gaussian envelope; we set $\omega_{0} = 6$, yielding approximately three oscillations within the envelope.
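A brute-force discretisation of the transform makes the scale-frequency correspondence concrete: for a pure oscillation of period $P$, the power concentrates near scale $s\sim P$. This is our sketch (with the standard $\pi^{-1/4}$ normalisation and a plain Riemann-sum integral), not the authors' implementation:

```python
import numpy as np

OMEGA0 = 6.0  # non-dimensional frequency parameter (about three oscillations per envelope)

def morlet(tp, t, s):
    """Morlet wavelet psi(t', t, s), standard pi^{-1/4} normalisation assumed."""
    eta = (tp - t) / s
    return np.pi**-0.25 / np.sqrt(abs(s)) * np.exp(1j * OMEGA0 * eta - 0.5 * eta**2)

def wavelet_power(signal, tgrid, scales):
    """|W_T(t, s)|^2 with W_T(t, s) = sum_{t'} T(t') psi*(t', t, s) dt'."""
    dt = tgrid[1] - tgrid[0]
    W = np.array([[np.sum(signal * np.conj(morlet(tgrid, t, s))) * dt
                   for t in tgrid] for s in scales])
    return np.abs(W)**2

tgrid = np.arange(0.0, 512.0)
P = 16.0
scales = np.arange(4.0, 65.0)
power = wavelet_power(np.sin(2 * np.pi * tgrid / P), tgrid, scales)
# time-averaged power peaks near scale s ~ P
```

Production analyses would instead evaluate the transform per scale via the FFT, but the direct sum is enough to see the behaviour.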
The wavelet power spectrum is given by, $\vert W_{\rm T}( t , s ) \vert^2$, for $\psi , T \in L^2 \left( \mathbb{R} \right)$. @TorrenceCompo
---
abstract: 'We report long-slit spectroscopic observations of the quasar SDSS J082303.22+052907.6 ($z_{\rm CIV}$$\sim$3.1875), whose Broad Line Region (BLR) is partly eclipsed by a strong damped Lyman-$\alpha$ (DLA; log$N$(HI)=21.7) cloud. This allows us to study the Narrow Line Region (NLR) of the quasar and the Lyman-$\alpha$ emission from the host galaxy. Using [cloudy]{} models that explain the presence of strong NV and PV absorption together with the detection of SiII$^*$ and OI$^{**}$ absorption in the DLA, we show that the density and the distance of the cloud to the quasar are in the ranges 180 $<$ $n_{\rm H}$ $<$ 710 cm$^{-3}$ and 580 $>$ $r_0$ $>$230 pc, respectively. Sizes of the neutral($\sim$2-9pc) and highly ionized phases ($\sim$3-80pc) are consistent with the partial coverage of the CIV broad line region by the CIV absorption from the DLA (covering factor of $\sim$0.85). We show that the residuals are consistent with emission from the NLR with CIV/Lyman-$\alpha$ ratios varying from 0 to 0.29 through the profile. Remarkably, we detect extended Lyman-$\alpha$ emission up to 25kpc to the North and West directions and 15kpc to the South and East. We interpret the emission as the superposition of strong emission in the plane of the galaxy up to 10kpc with emission in a wind of projected velocity $\sim$500km s$^{-1}$ which is seen up to 25kpc. The low metallicity of the DLA (0.27 solar) argues for at least part of this gas being in-falling towards the AGN and possibly being located where accretion from cold streams ends up.'
date: 'Accepted ....... Received ....... '
title: ' A coronagraphic absorbing cloud reveals the narrow-line region and extended Lyman-$\alpha$ emission of QSO J0823$+$0529 [^1]'
---
\[firstpage\]
galaxies: evolution – intergalactic medium – quasars: absorption lines – quasars: individual: SDSS J082303.22$+$052907.6
Introduction
============
Luminous high-redshift quasars consist of supermassive black holes residing at the center of massive galaxies and growing through accretion of gas in an accretion disk. Bright quasars play an important role in shaping their host galaxies through the emission of ionizing flux, but also through launching powerful, high-velocity outflows of gas. These outflows inject energy and material into the disk of the galaxy and may influence its physical state out to larger distances. It remains unclear, however, which mechanisms carry energy from the very center of the active galactic nucleus (AGN) to the outskirts of the galaxy.
Observational evidence for outflows and winds in AGNs is seen as prominent radio-jets in radio-loud sources, broad absorption lines observed in broad absorption line (BAL) quasars, or through the photoionized warm absorber which is frequently observed in the soft X-rays (e.g. Crenshaw et al. 2003). Gravitational micro-lensing studies have shown that the primary X-ray emission region in AGN is of the order of a few tens of gravitational radii in size (e.g. Dai et al. 2010) and X-ray spectroscopy shows that highly ionized outflows launched from this region are seen in high-$z$ quasars (Chartas et al. 2009) and in at least 40% of them with velocities up to 0.1 c (Gofford et al. 2013).
Outflows are observed also on large scales. Mullaney et al. (2013) used the SDSS spectroscopic data base to study the one-dimensional kinematic properties of \[OIII\]$\lambda$5007 by performing multicomponent fitting to the optical emission-line profiles of about 24000, $z<0.4$ optically selected AGNs. They showed that approximately 17 percent of the AGNs have emission-line profiles that indicate their ionized gas kinematics are dominated by outflows and a considerably larger fraction are likely to host ionized outflows at lower levels. Harrison et al. (2014) find high-velocity ionized gas (velocity widths of about 600-1500 km s$^{-1}$) with observed spatial extents of (6-16) kpc in a representative sample of $z<0.2$, luminous (i.e. $L$\[O [iii]{}\]$>$10$^{41.7}$ erg s$^{-1}$) type 2 AGNs. Therefore galaxy-wide energetic outflows are not confined to the most extreme star-forming galaxies or radio-luminous AGNs.
If outflows are observed both on small and large scales, how the small-scale outflows are transported to larger distances remains unclear (Faucher-Giguère et al. 2012, Ishibashi & Fabian 2015, King & Pounds 2015). This is however a crucial question, as these outflows are massive and energetic enough to significantly influence star formation in the host galaxy and to provide significant metal enrichment to the interstellar and intergalactic media (e.g. Dubois et al. 2013). At high redshift, where quasars are more luminous, the consequences of such outflows are of prime importance for galaxy formation. One way to study the interplay between the quasar and its surroundings is to search for Lyman-$\alpha$ emission around quasars (e.g. Hu & Cowie 1987, Hu et al. 1996, Petitjean et al. 1996, Bunker et al. 2003, Christensen et al. 2006). These observations reveal gas infalling onto the galaxy (Weidinger et al. 2005), positive feed-back from the AGN (Rauch et al. 2013), or a correlation between the luminosity of the extended emission and the ionizing flux from the quasar (North et al. 2012).
Very recently, we searched quasar spectra from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2014) for the rare occurrences where a strong damped Lyman-$\alpha$ absorber (DLA) blocks the Broad Line Region (BLR) emission from the quasar and acts as a natural coronagraph to reveal narrow Lyman-$\alpha$ emission from the host galaxy (Finley et al. 2013; see also Hennawi et al. 2009). This constitutes a new way to have direct access to the quasar host galaxy and possibly, when the size of the DLA is small enough, to the very center of the AGN. Out of a total of more than 70,000 spectra of $z>2$ quasars (Pâris et al. 2012), we gathered a sample of 57 such quasars and followed up six of them with the slit spectrograph Magellan-MagE to search for the very special cases where the DLA coronagraph reveals the very center of the host galaxy and extended Lyman-$\alpha$ emission. In the course of this follow-up program, we found one object, SDSS J0823+0529, where the DLA does not cover the Lyman-$\alpha$ broad line region entirely and reveals the emission of the Lyman-$\alpha$ and C [iv]{} narrow line regions. We show here that this is a unique opportunity to study the link between the properties of the central regions of the AGN and those of the gas in the halo of the quasar.
The paper is organized as follows. In Section 2 we describe the observations and data reduction. We derive properties of the gas associated with the DLA (metallicity, ionization state, density, distance to the quasar, typical size) in Section 3. We discuss the properties of the quasar narrow line region and of the extended Lyman-$\alpha$ emission in Sections 4 and 5, respectively. We finally present our conclusions in Section 6. In this work, we use a standard $\Lambda$CDM cosmology with $\Omega_{\Lambda}$ = 0.73, $\Omega_{m}$ = 0.27, and H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$ (Komatsu et al. 2011). Therefore 1 arcsec corresponds to about 7.1 kpc at the redshift of the quasar ($z$ = 3.1875, see below). In the following we will use solar metallicities from Asplund et al. (2009).
Observations and data reduction
===============================
The quasar SDSS J0823$+$0529 was observed with the Magellan Echellette spectrograph (MagE; Marshall et al. 2008) mounted on the 6.5 m Clay telescope located at Las Campanas Observatory. MagE is a medium-resolution long-slit spectrograph that covers the full wavelength range of the visible spectrum (3200 $\textup{\AA}$ $-$ 1 $\mu$m). Its 10 arcsec long slit and 0.30$^{"}$ per pixel sampling in the spatial direction are ideal for observing high-redshift extended astrophysical objects. The spectrograph was designed to have high throughput in the blue, with the peak throughput reaching 22 $\%$ at 5600 $\textup{\AA}$ including the telescope
---
abstract: 'We review different computation methods for the renormalised energy momentum tensor of a quantised scalar field in an Einstein Static Universe. For the extensively studied conformally coupled case we check their equivalence; for different couplings we discuss violation of different energy conditions. In particular, there is a family of masses and couplings which violate the weak and strong energy conditions but do not lead to spacelike propagation. Amongst these cases is that of a minimally coupled massless scalar field with no potential. We also point out a particular coupling for which a massless scalar field has vanishing renormalised energy momentum tensor. We discuss the backreaction problem and in particular the possibility that this Casimir energy could both source a short inflationary epoch and avoid the big bang singularity through a bounce.'
author:
- |
[Carlos A. R. Herdeiro and Marco Sampaio]{}\
\
[*Departamento de Física e Centro de Física do Porto*]{}\
[*Faculdade de Ciências da Universidade do Porto*]{}\
[*Rua do Campo Alegre, 687, 4169-007 Porto, Portugal*]{}
date: October 2005
title: |
[**[Casimir energy and a\
cosmological bounce]{}**]{}
---
Introduction
============
The Casimir effect [@Casimir] is a manifestation of the vacuum fluctuations of a quantum field. It was first considered in systems with boundaries, and it is known that the Casimir force is highly sensitive to the size, geometry and topology of such boundaries. In particular it may change from attractive to repulsive depending on such shape [@review]. But the Casimir force is also present in systems with no boundaries and a compact topology, since the latter imposes periodicity conditions which resemble boundary conditions.
If our universe is either open or flat, with non-trivial topology, or closed, every quantum field living on it should generate a Casimir type force, which led many authors to study the Casimir effect in FRW models (see [@review] and references therein for a review). In the case of a spherical universe, most computations of the Casimir energy, or more generically, of the renormalised stress energy tensor, have focused on conformally coupled scalar fields. For instance, a massless conformally coupled scalar field, the electromagnetic field and the massless Dirac field on an Einstein static universe, have been considered in [@Ford1; @Ford2]; their Casimir energies have been shown to be of the form $\alpha/R^4$, with $\alpha$, respectively, $1/(480\pi^2)$, $11/(240\pi^2)$ and $17/(1920\pi^2)$. Note that all of these are positive. Since these are all conformally coupled fields, they have equation of state $p=\rho/3$, and obey the strong energy condition. This means the Casimir force is attractive. But this is not always so in the cosmological context. Zel’dovich and Starobinskii [@zeldovich] have indeed verified long ago that the Casimir energy of a scalar field could drive inflation in a flat universe with toroidal topology.
The purpose of this letter is to exhibit a family of quantum scalar fields which originate a repulsive Casimir force in a closed universe, since they violate the strong energy condition. Interestingly, this family includes the simplest case one could consider: a minimally coupled massless scalar field with no potential. Our computation will be performed in the Einstein Static Universe (ESU), which can be faced as an approximation to a dynamical FRW model in sufficiently small time intervals, and avoids having to deal with the complexities of quantum field theory in a time dependent spacetime, like particle creation.[^1]
Quantum scalar field with arbitrary coupling in the ESU
=======================================================
We consider the ESU, which is a well known solution of the Einstein equations sourced by a perfect fluid with positive energy density ($\rho>0$) and zero pressure ($p=0$) together with a positive cosmological constant ($\Lambda>0$): $G_{\mu \nu}+\Lambda g_{\mu \nu}
=T_{\mu \nu}$, with $T_{\mu \nu}=\rho u_{\mu}u_{\nu}$. The metric is $$ds^2_{ESU}=-dt^2+R^2d\Omega_{S^3}=-dt^2+\frac{R^2}{4}\left((\sigma_R^1)^2+(\sigma_R^2)^2+(\sigma_R^3)^2\right) \ .$$ We have written the metric on the unit 3-sphere, $d\Omega_{S^3}$, in terms of right forms on $SU(2)$. In order to achieve a static solution, the cosmological constant and the energy density are related by $\Lambda=\rho/2=1/R^2$. Another viewpoint is that the ESU is supported by a perfect fluid with $p=-\rho/3=-1/R^2$.
It is well known that this universe is unstable against small radial perturbations, as first argued by de Sitter. The reason is that the energy density of the perfect fluid increases/decreases with decreasing/increasing radius, whereas that of the cosmological constant stays constant. Since the former gives an attractive contribution and the latter a repulsive one, any displacement from the original equilibrium position will grow, rendering the original position unstable. But even without such classical perturbations this universe is unstable due to quantum mechanics. These are the instabilities we will focus on, in the spirit discussed at the end of the introduction.
Let us consider a free (i.e. with no potential) scalar field $\Phi$, with mass $\mu$ and with a coupling (not necessarily conformal) to the Ricci scalar of the background $\mathcal{R}=6/R^2$, governed by $$\left(\Box -\xi \mathcal{R}\right)\Phi=\mu^2\Phi \ .$$ Conformal coupling is obtained in four spacetime dimensions by taking the coefficient $\xi=1/6$ (and the theory is then conformal if $\mu=0$), whereas minimal coupling corresponds to $\xi=0$. The compactness of the spatial sections of the background guarantees a discrete mode spectrum which can be easily obtained using elementary group theory. We take the D’Alembertian in the form $$\Box=-\frac{\partial^2}{\partial t^2}+\frac{4}{R^2}\left(({\bf k}^R_1)^2+({\bf k}^R_2)^2+({\bf k}^R_3)^2\right)=-\frac{\partial^2}{\partial t^2}+\frac{4}{R^2}{\bf k}^2 \ ,$$ where ${\bf k}^R_i$ are the right vector fields dual to $\sigma^i_R$ and ${\bf k}^2$ is one of the two Casimirs of $SO(4)$. Notice that the eigenfunctions of the Klein-Gordon operator $(\Box -\xi \mathcal{R}-\mu^2)$ may be taken in the form $$\Phi_n= e^{-i\omega_j t}\,\mathcal{D}_j^{m_L,m_R} \ , \label{eigenfunctions}$$ where the index $n$ represents all quantum numbers $j,m_L,m_R$ and $\mathcal{D}_j^{m_L,m_R}$ represents a Wigner D-function [@Edmonds]. Such a function may be thought of as a spherical harmonic on the 3-sphere or as a matrix element of the rotation operator $\langle j, m_L|\hat{R}(\alpha,\beta,\gamma)| j, m_R\rangle$, where $|j,m\rangle$ is the basis of a representation of $SU(2)$ and $(\alpha,\beta,\gamma)$ are Euler angles. It follows straightforwardly that the dispersion relation becomes $$\omega_j^2=\frac{4j(j+1)}{R^2}+\frac{6\xi}{R^2}+\mu^2 \ , \qquad j=0,\tfrac{1}{2},1,\tfrac{3}{2},\ldots \label{scalaresu}$$ with the degeneracy of each frequency being $d_j=(2j+1)^2$, in agreement with the spectrum found in [@Ford1]. Note that there are no unstable modes for $\xi\in \mathbb{R}_0^+$, which includes minimal and conformal coupling. This is the range of couplings we will analyse in the following.
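Reading the dispersion relation as $\omega_j^2 = 4j(j+1)/R^2 + 6\xi/R^2 + \mu^2$ (our reading of the garble-prone formula above), the spectrum is easy to tabulate; a useful consistency check is that for the conformal massless case ($\xi=1/6$, $\mu=0$) it collapses to the evenly spaced $\omega_j=(2j+1)/R$:

```python
import math

def esu_modes(R, mu, xi, two_j_max):
    """Return (j, omega_j, degeneracy) for j = 0, 1/2, 1, ..., two_j_max/2,
    assuming omega_j^2 = 4 j (j+1) / R^2 + 6 xi / R^2 + mu^2 and d_j = (2j+1)^2."""
    out = []
    for twoj in range(two_j_max + 1):
        j = twoj / 2.0
        omega = math.sqrt(4 * j * (j + 1) / R**2 + 6 * xi / R**2 + mu**2)
        out.append((j, omega, int((2 * j + 1)**2)))
    return out

# conformal massless case: omega_j = (2j+1)/R, degeneracies 1, 4, 9, ...
for j, omega, d in esu_modes(R=1.0, mu=0.0, xi=1/6, two_j_max=4):
    print(j, omega, d)
```

For minimal coupling ($\xi=0$) and $\mu=0$ the $j=0$ mode has $\omega_0=0$, consistent with the statement that no unstable (imaginary-frequency) modes exist for $\xi\geq 0$.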
Canonical quantisation of the scalar field can be performed unambiguously. One finds the mode expansion $$\hat{\Phi}=\sum_n \hat{a}_n^{\dagger}\Psi_n+\hat{a}_n\Psi^*_n \ , \ \ \ \ \ \Psi_n=\sqrt{\frac{2j+1}{2\omega_j V}}\Phi_n \ ,$$ with $V=2\pi^2R^3$ being the volume of the constant $t$ hypersurfaces and with the operators $\hat{a}_n^{\dagger},\hat{a}_n$ obeying the usual commutation relation $[\hat{a}_n,\hat{a}_{n'}^{\dagger}]=\delta_{nn'}$.
The classical energy momentum tensor of the scalar field is more conveniently written in the natural tetrad basis ${\bf e}^a=\{dt,R\sigma^1_R/2,R\sigma^2_R/2,R\sigma^3_R/2\}$, $$T_{ab}={\bf k}_a\Phi\,{\bf k}_b\Phi-\frac{1}{2}g_{ab}\left({\bf k}_c\Phi\,{\bf k}^c\Phi+\mu^2\Phi^2\right)+\xi\left(G_{ab}-\nabla_a{\bf k}_b+g_{ab}\Box\right)\Phi^2 \ , \label{emtensor}$$ where we have denoted ${\bf k}_a=\{\partial/\partial t, {\bf k}_i^R\}$. The conformal case ($\xi=1/6,\mu=0
---
abstract: 'We study a natural notion of decoherence on quantum random walks over the hypercube. We prove that this model possesses a decoherence threshold beneath which the essential properties of the hypercubic quantum walk, such as linear mixing times, are preserved. Beyond the threshold, we prove that the walks behave like their classical counterparts.'
author:
- Gorjan Alagic$^1$
- Alexander Russell$^2$
title: Decoherence in quantum walks on the hypercube
---
Introduction
============
The notion of a *quantum random walk* has emerged as an important element in the development of efficient quantum algorithms. In particular, it makes a dramatic appearance in the most efficient known algorithm for element distinctness [@A03]. The technique has also provided simple separations between quantum and classical query complexity [@CCD03], improvements in mixing times over classical walks [@NV00; @MR01], and some interesting search algorithms [@CG04; @AA05].
The basic model has two natural variants, the *continuous* model of Childs, et al. [@CFG01], on which we will focus, and the *discrete* model introduced by Aharonov, et al. [@AAKV01]. We refer the reader to Szegedy’s [@S04] article for a more detailed discussion. In the continuous model, a quantum walk on a graph $G$ is determined by the time-evolution of the Schrödinger equation using $kL$ as the Hamiltonian, where $L$ is the Laplacian of the graph and $k$ is a positive scalar to which we refer as the “jumping rate” or “energy”. In addition to being a physically attractive model, it has been successfully applied to some algorithmic problems as indicated above.
Such walks have been studied over a variety of graphs with special attention given to Cayley graphs, whose algebraic structure has provided immediate methods for determining the spectral resolution of the linear operators that determine the system’s dynamics. Once it had been discovered that quantum random walks can offer improvement over their classical counterparts with respect to such basic phenomena as mixing and hitting times, it was natural to ask how robust these walks are in the face of decoherence, as this would presumably be an issue of primary importance for any attempt at implementation [@LP03; @SBTK03; @DRKB02].
In this article, we study the effects of a natural notion of decoherence on the hypercubic quantum walk. Our notion of decoherence corresponds, roughly, to independent measurement “accidentally” taking place in each coordinate of the walk at a certain rate $p$. We discover that for values of $p$ beneath a threshold depending on the energy of the system, the walk retains the basic features of the non-decohering walk; these features disappear beyond this threshold, where the behavior of the classical walk is recovered.
Moore and Russell [@MR01] analyzed both the discrete and the continuous quantum walk on a hypercube. Kendon and Tregenna [@KT03] performed a numerical analysis of the effect of decoherence in the discrete case. In this article, we extend the analysis of the continuous case to the model of decoherence described above. In particular, we show that up to a certain rate of decoherence, both linear instantaneous mixing times and linear instantaneous hitting times still occur. Beyond the threshold, however, the walk behaves like the classical walk on the hypercube, exhibiting $\Theta(n \log n)$ mixing times. As the rate of decoherence grows, mixing is retarded by the quantum Zeno effect.
Results
-------
Consider the continuous quantum walk on the $n$-dimensional hypercube with energy $k$ and decoherence rate $p$, starting from the initial wave function $\Psi_0 = \vert 0 \rangle ^{\otimes n}$, corresponding to the corner with Hamming weight zero. We prove the following theorems about this walk.
When $p < 4k$, the walk has instantaneous mixing times at $$t_{mix} = \frac {n (2\pi c - \arccos(p^2/8k^2-1))}{\sqrt{16k^2 - p^2}}$$ for all $c \in \mathbb{Z}$, $c > 0$. At these times, the total variation distance between the walk distribution and the uniform distribution is zero.
This result is an extension of the results in [@MR01], and an improvement over the classical random walk mixing time of $\Theta(n \log n)$. Note that the mixing times decay with $p$ and disappear altogether when $p \geq 4k$. Further, for large $p$, we will see that the walk is retarded by the quantum Zeno effect.
When $p < 4k$, the walk has approximate instantaneous hitting times to the opposite corner $(1, \dots , 1)$ at times $$t_{hit} = \frac{2 \pi n (2c + 1)}{\sqrt{16k^2 - p^2}}$$ for all $c \in \mathbb{Z}$, $c \geq 0$. However, the probability of measuring an exact hit decays exponentially in $c$; the probability is $$P_{hit} = \left[\frac{1}{2} + \frac{1}{2}e^{-\frac{p \pi (2c + 1)}
{\sqrt{16k^2 - p^2}}}\right]^n\enspace.$$ In particular, when no decoherence is present, the walk hits at $t_{hit} = \frac{n \pi(2c+1)}{2k},$ and it does so exactly, i.e. $P_{hit} = 1$. For $p \geq 4k$, no such hitting occurs.
This result is a significant improvement over the exponential hitting times of the classical random walk, with the caveat that decoherence has a detrimental effect on the accuracy of repeated hitting times.
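The closed-form expressions in the two theorems above are straightforward to evaluate. The sketch below is our own illustration of their decoherence-free limits (names and parameter choices are ours):

```python
import numpy as np

# Mixing and hitting quantities from the theorems above (valid for p < 4k).
def t_mix(n, k, p, c=1):
    return n * (2*np.pi*c - np.arccos(p**2/(8*k**2) - 1)) / np.sqrt(16*k**2 - p**2)

def t_hit(n, k, p, c=0):
    return 2*np.pi*n*(2*c + 1) / np.sqrt(16*k**2 - p**2)

def p_hit(n, k, p, c=0):
    return (0.5 + 0.5*np.exp(-p*np.pi*(2*c + 1)/np.sqrt(16*k**2 - p**2)))**n

n, k = 10, 1.0
# p -> 0 recovers the decoherence-free walk: t_hit = n*pi*(2c+1)/(2k), P_hit = 1
assert np.isclose(t_hit(n, k, 0.0), n*np.pi/(2*k))
assert np.isclose(p_hit(n, k, 0.0), 1.0)
# any decoherence reduces the exact-hit probability
assert p_hit(n, k, 1.0) < 1.0
# first mixing time (c=1) at p=0: arccos(-1) = pi gives t_mix = n*pi/(4k)
assert np.isclose(t_mix(n, k, 0.0), n*np.pi/(4*k))
```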
Finally, we show that under high levels of decoherence, the measurement distribution of the walk actually converges to the uniform distribution in time $\Theta(n \log n)$, just as in the classical case.
For a fixed $p \geq 4k$, the walk mixes in time $\Theta(n \log n)$.
In the remainder of the introduction, we describe the continuous quantum walk model, and recall the graph product analysis of Moore and Russell [@MR01]. In the second section, we describe our model of decoherence, derive a superoperator that governs the behavior of the decohering walk, and prove that it is decomposable into an $n$-fold tensor product of a small system. We then fully analyze the small system in the third section, and use those results to draw conclusions about the general walk in 3 distinct regimes: $p < 4k$, $p = 4k$, and $p > 4k$. These regimes are roughly analogous to underdamping, critical damping, and overdamping (respectively) of a simple harmonic oscillator with damping rate $p$ and angular frequency $2k$.
The continuous quantum walk on the hypercube
--------------------------------------------
A continuous quantum walk on a graph $G$ begins at a distinguished vertex $v_0$ of $G$, the initial wave function of the walk being $\Psi_0$, where $\langle \Psi_0 \vert v \rangle = 1$ if $v = v_0$ and $0$ otherwise. The walk then evolves according to the Schrödinger equation. In our case, the graph is the $n$-dimensional hypercube. Concretely, we identify the vertices with $n$-bit strings, with edges connecting those pairs of vertices that differ in exactly one bit. Since the hypercube is a regular graph, we can let the Hamiltonian $H$ be the adjacency matrix instead of the Laplacian [@GW03]; the dynamics are then given by the unitary operator $U_t = e^{iHt}$ and the state of the walk at time $t$ is $\Psi_t = U_t \Psi_0$.
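This evolution is easy to simulate directly. The sketch below is our own check (not from the source); it scales the adjacency matrix by $k/n$, matching the per-qubit normalization used below, and verifies exact mixing and hitting in the decoherence-free walk:

```python
import numpy as np

def hypercube_adjacency(n):
    # vertices are n-bit strings; edges join strings differing in one bit
    N = 2**n
    A = np.zeros((N, N))
    for v in range(N):
        for j in range(n):
            A[v, v ^ (1 << j)] = 1.0
    return A

def walk_probs(n, k, t):
    # H = (k/n) * A, so the total energy is k; U_t = exp(i H t) acting on |0...0>
    H = (k/n) * hypercube_adjacency(n)
    w, P = np.linalg.eigh(H)
    U = P @ np.diag(np.exp(1j*w*t)) @ P.conj().T
    psi = U[:, 0]                      # first column = U_t |0...0>
    return np.abs(psi)**2

n, k = 4, 1.0
# instantaneous (exact) mixing to uniform at t = n*pi/(4k)
assert np.allclose(walk_probs(n, k, n*np.pi/(4*k)), 1/2**n)
# exact hit on the opposite corner (1,...,1) at t = n*pi/(2k)
assert np.isclose(walk_probs(n, k, n*np.pi/(2*k))[-1], 1.0)
```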
The following analysis makes use of the hypercube’s product graph structure; this structure will be useful again later when we consider the effects of decoherence. The analysis below diverges from that of Moore and Russell [@MR01] only in that we allow each qubit to have energy $k/n$ instead of $1/n$. The energy of the entire system is then $k$. Let $$\sigma_x = \left(\begin{matrix} 0 & k/n \\ k/n & 0 \end{matrix}
\right),$$ and let $$H = \sum_{j=1}^n \identity \otimes \cdots \otimes \sigma_x \otimes
\cdots \otimes \identity\enspace,$$ where the $j$th term in the sum has $\sigma_x$ as the $j$th factor in the tensor product. Then we have $$\begin{aligned}
U_t &= e^{iHt} = \prod_{j=1}^n \identity \otimes \cdots \otimes e^{it\sigma_x} \otimes \cdots
\otimes \identity = \left[e^{it\sigma_x} \right]^{\otimes n} \\
& = \left[\begin{matrix}\cos(kt/n) & i~\sin(kt/n) \\ i~\sin(kt/n) & \cos(kt/n) \end{matrix}\right]^{\otimes n}\enspace.\end{aligned}$$ Applying $U_t$ to the initial state $\Psi_0 = \vert 0 \rangle ^{\otimes n}$, we have $$U_
---
abstract: |
We study nine S0–Sb galaxies with (photometric) bulges consisting of two distinct components. The outer component is a flattened, kinematically cool, disklike structure: a “disky pseudobulge”. Embedded inside is a rounder, kinematically hot spheroid: a “classical bulge”. This indicates that pseudobulges and classical bulges are not mutually exclusive: some galaxies have both.
The disky pseudobulges almost always have an exponential disk (scale lengths = 125–870 pc, mean $\sim 440$ pc) with disk-related subcomponents: nuclear rings, bars, and/or spiral arms. They constitute 11–59% of the galaxy stellar mass (mean $PB/T = 0.33$), with stellar masses $\sim 7 \times 10^{9}$–$9 \times 10^{10}$ [$\mathrm{M}_{\sun}$]{}. Classical-bulge components have Sérsic indices of 0.9–2.2, effective radii of 25–430 pc and stellar masses of $5 \times 10^{8}$–$3 \times
10^{10}$ [$\mathrm{M}_{\sun}$]{} (usually $< 10$% of the galaxy’s stellar mass; mean $B/T = 0.06$). The classical bulges show rotation, but are kinematically hotter than the disky pseudobulges. Dynamical modeling of three systems indicates that velocity dispersions are isotropic in the classical bulges and equatorially biased in the disky pseudobulges.
In the mass–radius and mass–stellar mass density planes, classical-bulge components follow sequences defined by ellipticals and (larger) classical bulges. Disky pseudobulges *also* fall on this sequence; they are more compact than similar-mass large-scale disks. Although some classical bulges are quite compact, they are distinct from nuclear star clusters in both size and mass, and coexist with nuclear clusters in at least two galaxies.
Since almost all the galaxies in this study are barred, they probably *also* host boxy/peanut-shaped bulges (vertically thickened inner parts of bars). NGC 3368 shows evidence for such a zone outside its disky pseudobulge, making it a galaxy with all three types of “bulge”.
author:
- |
Peter Erwin$^{1,2,7}$, Roberto P. Saglia$^{1,2}$, Maximilian Fabricius$^{1,2}$, Jens Thomas$^{1,2}$, Nina Nowak$^{3}$, Stephanie Rusli$^{1,2}$, Ralf Bender$^{1,2}$, Juan Carlos Vega Beltr[á]{}n$^{4}$, and John E. Beckman$^{4,5,6}$\
$^{1}$Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, 85748 Garching, Germany\
$^{2}$Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München, Germany\
$^{3}$Stockholm University, Department of Astronomy, Oskar Klein Centre, SE-10691 Stockholm, Sweden\
$^{4}$Instituto de Astrofísica de Canarias, C/ Via Láctea s/n, 38200 La Laguna, Tenerife, Spain\
$^{5}$Departamento de Astrofísica, Universidad de La Laguna, Avda. Astrofísico Fco. Sánchez s/n, 38200, La Laguna, Tenerife, Spain\
$^{6}$Consejo Superior de Investigaciones Científicas, Spain\
$^{7}$Guest investigator of the UK Astronomy Data Centre
nocite:
- '[@erwin14-smbh]'
- '[@comeron10]'
- '[@delorenzo-caceres12; @delorenzo-caceres13]'
- '[@kormendy04]'
title: 'Composite Bulges: The Coexistence of Classical Bulges and Disky Pseudobulges in S0 and Spiral Galaxies'
---
\[firstpage\]
galaxies: bulges – galaxies: structure – galaxies: elliptical and lenticular, cD – galaxies: spiral – galaxies: kinematics and dynamics – galaxies: evolution.
Introduction
============
In the standard picture of galaxy structure, disk galaxies have two main stellar components. The defining component is the disk: a highly flattened structure dominated by rotation, often (but not always) with a radial density profile which is exponential; disks often have significant substructure, particularly bars and spiral arms. The secondary component, present in early and intermediate Hubble types, is the bulge. Traditionally, the bulge has been seen as something very like a small elliptical galaxy embedded within the disk: more spheroidal than the disk, with stellar motions dominated by velocity dispersion rather than rotation, and having a strongly concentrated structure (e.g., a surface brightness profile similar or identical to the stereotypical $R^{1/4}$ profile of an elliptical). In addition, the stellar populations of bulges were said to resemble those of ellipticals in being older (and possibly more metal-rich and alpha-enhanced) than the majority of stars in the disk. (See, e.g., @wyse97 and @renzini99 for reviews.) Taken all together, this seemed to argue for a bulge formation mechanism similar to that proposed for ellipticals, either via monolithic collapse or by rapid, violent mergers of initial subcomponents at high redshift.
The past decade or two has seen the growing realization that this picture is probably *not* true for many bulges, at least when bulges are defined as the excess stellar light in the central regions of the galaxy when compared to the dominant exponential profile of the disk.[^1] Instead, bulges are now seen as falling into two rather different classes: classical bulges (the traditional model) and *pseudobulges* [e.g., @kormendy93; @kormendy04], which are conceived of as something much more like disks than spheroids; i.e., they are flattened and dominated by rotation, with profiles which are close to exponential. Added to this complexity is the existence of so-called “boxy” and “peanut-shaped” bulges, which are now well understood as the vertically thickened inner parts of bars [see @athanassoula05 for a discussion of the distinctions]; confusingly, these structures are also sometimes called pseudobulges.
Although some authors are careful to point out the possibility that classical bulges could coexist with pseudobulges [see, e.g., @athanassoula05; @fisher-drory10], it is common to suggest that galaxies have one or the other, but not both. For example, in observational surveys such as those of @fisher-drory08 or @gadotti09, photometrically identified bulges are classified as either classical or pseudobulge. Similarly, in studies of how central supermassive black holes (SMBHs) relate to their host galaxies, disk galaxies are divided into those with classical bulges and those with pseudobulges [e.g., @hu08; @greene10; @kormendy11].
In this paper, we present evidence for the *coexistence* in nine galaxies of both a classical bulge – that is, a round, kinematically hot stellar structure which is significantly larger than a nuclear star cluster – and a disky pseudobulge – that is, a flattened stellar system, distinct from the main disk, whose kinematics are at least partly dominated by rotation and which (usually) hosts nuclear bars, nuclear rings, or other disky morphology.
Two of the galaxies discussed here – NGC 3368 and NGC 3489 – were previously discussed, in an abbreviated fashion, in @nowak10, using the term “composite pseudobulges”. Some analysis of the morphological substructure in NGC 3945 and NGC 4371 has been previously presented in @erwin99 and @erwin03-id.
The outline of this paper is as follows. After some initial discussion of data sources and reduction (Section \[sec:obs\]), we lay out our terminology in Section \[sec:terms\]. We introduce our methodology for identifying classical bulges by considering in Section \[sec:simple-classical\] two examples of *simple* classical-bulge-plus-disk systems (galaxies with *only* a classical bulge in addition to their disk). With this as a reference, we then consider two galaxies (NGC 3945 and NGC 4371) in some detail in Section \[sec:n3945n4371\], demonstrating first that much of their photometrically defined bulges are *not* classical bulges as previously defined, but something else: (disky) pseudobulges. We then go on to show that inside each pseudobulge is an additional structure which *does* resemble a classical bulge. The evidence for composite bulges in seven other galaxies follows a similar pattern, but is postponed to the Appendix (Section \[sec:others-full\]) so as not to interrupt the flow of the paper. Section \[sec:dynamics\] uses the results of Schwarzschild modeling for three composite-bulge galaxies to investigate the 3D stellar dynamics of the classical-bulge and disky-pseudobulge components. Section \[sec:discussion
---
abstract: |
We consider the dynamical Gross-Pitaevskii (GP) hierarchy on $\R^d$, $d\geq1$, for cubic, quintic, focusing and defocusing interactions. For both the focusing and defocusing case, and any $d\geq1$, we prove local existence and uniqueness of solutions in certain Sobolev type spaces $\cH_\xi^\alpha$ of sequences of marginal density matrices. The regularity is accounted for by $$\alpha \, \left\{
\begin{array}{rcl}
> &\frac12& {\rm if} \; d=1 \\
>&\frac d2-\frac{1}{2(p-1)} & {\rm if} \; d\geq2 \; {\rm and} \; (d,p)\neq(3,2)\\
\geq & 1 & {\rm if} \; (d,p)=(3,2) \,,
\end{array}
\right.$$ where $p=2$ for the cubic, and $p=4$ for the quintic GP hierarchy; the parameter $\xi>0$ is arbitrary and determines the energy scale of the problem. This result includes the proof of an a priori spacetime bound conjectured by Klainerman and Machedon for the cubic GP hierarchy in $d=3$. In the defocusing case, we prove the existence and uniqueness of solutions globally in time for the cubic GP hierarchy for $1\leq d\leq3$, and of the quintic GP hierarchy for $1\leq d\leq 2$, in an appropriate space of Sobolev type, and under the assumption of an a priori energy bound. For the focusing GP hierarchies, we prove lower bounds on the blowup rate. Also pseudoconformal invariance is established in the cases corresponding to $L^2$ criticality, both in the focusing and defocusing context. All of these results hold without the assumption of factorized initial conditions.
address:
- 'T. Chen, Department of Mathematics, University of Texas at Austin.'
- 'N. Pavlović, Department of Mathematics, University of Texas at Austin.'
author:
- Thomas Chen
- Nataša Pavlović
title: 'On the Cauchy problem for focusing and defocusing Gross-Pitaevskii hierarchies'
---
Introduction
============
The derivation of the nonlinear Schrödinger equation as the dynamical mean field limit of the many-body quantum dynamics of interacting Bose gases is a research area that has recently been experiencing remarkable progress, see [@esy1; @esy2; @ey; @kiscst; @klma; @rosc] and the references therein, and also [@adgote; @eesy; @frgrsc; @frknpi; @frknsc; @he; @sp]. A main motivation to investigate this problem is to understand the dynamical behavior of Bose-Einstein condensates. For recent developments in the mathematical analysis of Bose gases and their condensation, we refer to the fundamental work of Lieb, Seiringer, Yngvason, et al.; see [@ailisesoyn; @lise; @lisesoyn; @liseyn] and the references therein.
The procedure developed in the landmark works of Erdös, Schlein, and Yau, [@esy1; @esy2; @ey], to obtain the dynamical mean field limit of an interacting Bose gas, comprises the following main ingredients. One determines the BBGKY hierarchy of marginal density matrices for particle number $N$, and derives the Gross-Pitaevskii (GP) hierarchy in the limit $N\rightarrow\infty$, for a scaling where the particle interaction potential tends to a delta distribution; see also [@kiscst; @sc]. For factorized initial data, the solutions of the GP hierarchy are governed by a cubic NLS for systems with 2-body interactions, [@esy1; @esy2; @ey; @kiscst], and quintic NLS for systems with 3-body interactions, [@chpa]. The proof of the uniqueness of solutions of the GP hierarchy is the most difficult part of this analysis, and is obtained in [@esy1; @esy2; @ey] by use of highly sophisticated Feynman graph expansion methods inspired by quantum field theory.
Recently, an alternative method to prove the uniqueness of solutions in the $d=3$ case has been developed by Klainerman and Machedon in [@klma], using spacetime bounds on the density matrices in the GP hierarchy; this result makes the assumption of a particular a priori spacetime bound on the density matrices which has so far remained conjectural. In the work [@kiscst] of Kirkpatrick, Schlein, and Staffilani, the corresponding problem in $d=2$ is solved, and the assumption made in [@klma] is replaced by a spatial a priori bound which is proven in [@kiscst]. Alternative methods to obtain dynamical mean field limits of interacting Bose gases using operator-theoretic methods are developed by Fröhlich et al in [@frgrsc; @frknpi; @frknsc].
All of the above mentioned works discuss Bose gases with *repulsive* interactions; it is currently not known how to obtain a GP hierarchy from the $N\rightarrow\infty$ limit of a BBGKY hierarchy with attractive interactions. In the work at hand, we have nothing to add to this issue. Instead, we start here directly from the level of the GP hierarchy, and are thus free to also consider *attractive* interactions within this context. Accordingly, we will refer to the corresponding GP hierarchies as *cubic*, *quintic*, *focusing*, or *defocusing GP hierarchies*, depending on the type of the NLS governing the solutions obtained from factorized initial conditions.
In the present work, we investigate the Cauchy problem for the cubic and quintic GP hierarchy with focusing and defocusing interactions. Our results do not assume any factorization of the initial data. As a crucial ingredient of our arguments, we introduce Banach spaces $\cH_\xi^\alpha=\{ \, \Gamma\in\Gspace \, | \, \| \, \Gamma \, \|_{\cH_\xi^\alpha} <\infty \, \}$ where $$\label{bigG}
\Gspace \, = \, \Big\{ \, \Gamma \, = \, \big( \, \gamma^{(k)}(x_1,\dots,x_k;x_1',\dots,x_k') \, \big)_{k\in{\mathbb N}} \ \Big| \ {\rm Tr} \, \gamma^{(k)} \, < \, \infty \, \Big\}$$ is the space of sequences of $k$-particle density matrices, and $$\| \, \Gamma \, \|_{\cH_\xi^\alpha} \, := \, \sum_{k\in{\mathbb N}} \xi^k \, \big\| \, \gamma^{(k)} \, \big\|_{H^\alpha(\R^{dk}\times\R^{dk})} \ .$$ The parameter $\xi>0$ is determined by the initial condition, and it sets the energy scale of a given Cauchy problem. If $\Gamma\in\cH_\xi^\alpha$, then $\xi^{-1}$ is the typical $H^\alpha$-energy per particle.
The parameter $\alpha$ determines the regularity of the solution, and our results hold for $\alpha\in\alphaset(d,p)$ where $$\label{eq-alphaset-def-0}
\alphaset(d,p) \, := \, \left\{
\begin{array}{cl}
(\frac12,\infty) & {\rm if} \; d=1 \\
(\frac d2-\frac{1}{2(p-1)}, \, \infty) & {\rm if} \; d\geq2 \; {\rm and} \; (d,p)\neq(3,2)\\
{[}1,\infty) & {\rm if} \; (d,p)=(3,2) \,,
\end{array}
\right.$$ in dimensions $d\geq1$, and where $p=2$ for the cubic, and $p=4$ for the quintic GP hierarchy. The parameter $\xi>0$ determines the energy scale of the problem.
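The case distinction in the regularity thresholds is simple enough to encode directly; the sketch below is our own sanity check (names are ours), confirming e.g. that the cubic hierarchy in $d=3$ is the special closed-endpoint case:

```python
from fractions import Fraction

def alpha_set(d, p):
    """Lower endpoint of the admissible regularity range for the
    (d, p) GP hierarchy, and whether that endpoint is included."""
    if (d, p) == (3, 2):
        return Fraction(1), True          # alpha >= 1 (cubic, d = 3)
    if d == 1:
        return Fraction(1, 2), False      # alpha > 1/2
    return Fraction(d, 2) - Fraction(1, 2*(p - 1)), False

# cubic (p=2) in d=3: the special case, closed endpoint at 1
assert alpha_set(3, 2) == (Fraction(1), True)
# quintic (p=4) in d=2: alpha > 1 - 1/6 = 5/6
assert alpha_set(2, 4) == (Fraction(5, 6), False)
# cubic (p=2) in d=2: alpha > 1 - 1/2 = 1/2
assert alpha_set(2, 2) == (Fraction(1, 2), False)
```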
The main results proven in this paper are:
1. We prove local existence and uniqueness of solutions for the cubic and quintic GP hierarchy with focusing or defocusing interactions, in $\cH_\xi^\alpha$, for $\alpha\in \alphaset(d,p)$, which satisfy a spacetime bound $\|\opB\Gamma\|_{L^1_{t\in I}\cH^{\alpha}_{\xi}}<\infty$ for some $\xi>0$ (the operator $\opB$ is defined in Section \[sec-defandresults-1\] below). This spacetime bound has been conjectured by Klainerman and Machedon in [@klma]. It is of Strichartz-type, and is proven in Section \[sec-locwp-1\] using a Picard-type fixed point argument on the space $L^1_{t\in [0,T]}\cH_\xi^\alpha$; see inequality (\[eqn-BGamma-spacetime-bd-0\]) and Remark \[rem-Strichartz-1\] below.
Accordingly, we conclude that a solution of the GP hierarchy in $\cH_\xi^\alpha$ is unique if and only if this spacetime bound holds.\
2. We prove the global existence and uniqueness of solutions in $\cH_\xi^1$ satisfying the above noted spacetime bound, for the defocusing cubic GP hierarchy for $1\leq d\leq3$, and the defocusing quintic GP hierarchy for $1\leq d\leq 2$, provided that an a priori bound $\|\Gamma(t)\|_{\cH_\xi^1}<c$ holds for $\xi>0$ sufficiently small.\
3. We introduce generalized pseudoconformal transformations, and prove the invariance of the cubic GP hierarchy in $d=2$, and of the quintic GP hierarchy in $d=1$, under their application. Because the NLS obtained from factorized initial data in these cases are $L^2$-critical, we will,
---
abstract: 'Recently Marcus, Spielman and Srivastava gave a spectacular proof of a theorem which implies a positive solution to the Kadison–Singer problem. We extend (and slightly sharpen) this theorem to the realm of hyperbolic polynomials. A benefit of the extension is that the proof becomes coherent in its general form, and fits naturally in the theory of hyperbolic polynomials. We also study the sharpness of the bound in the theorem, and in the final section we describe how the hyperbolic Marcus–Spielman–Srivastava theorem may be interpreted in terms of strong Rayleigh measures. We use this to derive sufficient conditions for a weak half-plane property matroid to have $k$ disjoint bases.'
address: 'Department of Mathematics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden'
author:
- Petter Brändén
title: 'Hyperbolic polynomials and the Marcus–Spielman–Srivastava theorem'
---
This work is based on notes from a graduate course focused on hyperbolic polynomials and the recent papers [@MSS1; @MSS2] of Marcus, Spielman and Srivastava, given by the author at the Royal Institute of Technology (Stockholm) in the fall of 2013.
Introduction
============
Recently Marcus, Spielman and Srivastava [@MSS2] gave a spectacular proof of Theorem \[MSSmain\] below, which implies a positive solution to the infamous Kadison–Singer problem [@KS]. One purpose of this work is to extend Theorem \[MSSmain\] to the realm of hyperbolic polynomials. Although our proof essentially follows the setup in [@MSS2], a benefit of the extension (Theorem \[t1\]) is that the proof becomes coherent in its general form, and fits naturally in the theory of hyperbolic polynomials. We study the sharpness of the bound in Theorem \[t1\]. We prove that a conjecture in [@MSS2] on the sharpness of the bound (Conjecture \[maxmax\] in this paper) is equivalent to the seemingly weaker Conjecture \[maxmax2\]. Using known results about the asymptotic behavior of the largest zero of Jacobi polynomials, we prove in Section \[sbound\] that the bound is close to being optimal in the hyperbolic setting, see Proposition \[lowprop\].
In the final section we describe how Theorem \[t1\] may be interpreted in terms of strong Rayleigh measures. We use this to derive sufficient conditions for a weak half-plane property matroid to have $k$ disjoint bases. These conditions are very different from Edmonds’ characterization in terms of the rank function of the matroid [@Edm].
The following theorem is a stronger version of Weaver’s $KS_k$ conjecture [@We] which is known to imply a positive solution to the Kadison–Singer problem [@KS]. See [@Cas] for a review of the many consequences of Theorem \[MSSmain\].
\[MSSmain\] Let $k \geq 2$ be an integer. Suppose ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in {\mathbb{C}}^d$ satisfy $\sum_{i=1}^m {\mathbf{v}}_i{\mathbf{v}}_i^* = I$, where $I$ is the identity matrix. If $\|{\mathbf{v}}_i \|^2 \leq \epsilon$ for all $1\leq i \leq m$, then there is a partition of $S_1\cup S_2 \cup \cdots \cup S_k=[m]:=\{1,2,\ldots,m\}$ such that $$\label{sqbound1}
\left\| \sum_{i \in S_j} {\mathbf{v}}_i{\mathbf{v}}_i^* \right\| \leq \frac {(1+\sqrt{k\epsilon})^2} k,$$ for each $j \in [k]$, where $\|\cdot \|$ denotes the operator matrix norm.
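For a small isotropic family of vectors, the guaranteed partition can be found by exhaustive search. The sketch below is our own brute-force illustration (the vector construction and all names are ours, not from the source): rows of a matrix with orthonormal columns give $\sum_i {\bf v}_i{\bf v}_i^* = I$, and checking all $k^m$ partitions confirms the bound.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
d, m, kparts = 2, 6, 2

# rows of an m x d matrix with orthonormal columns satisfy
# sum_i v_i v_i^T = I_d (an isotropic family)
V, _ = np.linalg.qr(rng.standard_normal((m, d)))
assert np.allclose(V.T @ V, np.eye(d))

eps = max(np.sum(V**2, axis=1))                    # max ||v_i||^2
bound = (1 + np.sqrt(kparts*eps))**2 / kparts

# exhaustive search over all k^m partitions of [m]
best = np.inf
for labels in product(range(kparts), repeat=m):
    worst = max(
        np.linalg.norm(
            sum((np.outer(v, v) for v, l in zip(V, labels) if l == j),
                np.zeros((d, d))), 2)              # spectral norm of each part
        for j in range(kparts))
    best = min(best, worst)

# the theorem guarantees some partition achieves the bound
assert best <= bound + 1e-9
```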
Hyperbolic polynomials are multivariate generalizations of real–rooted polynomials, which have their origin in PDE theory where they were studied by Petrovsky, Gårding, Bott, Atiyah and Hörmander, see [@ABG; @Ga; @Horm]. During recent years hyperbolic polynomials have been studied in diverse areas such as control theory, optimization, real algebraic geometry, probability theory, computer science and combinatorics, see [@Pem; @Ren; @Vin; @Wag] and the references therein.
A homogeneous polynomial $h({\mathbf{x}}) \in {\mathbb{R}}[x_1, \ldots, x_n]$ is *hyperbolic* with respect to a vector ${\mathbf{e}}\in {\mathbb{R}}^n$ if $h({\mathbf{e}}) \neq 0$, and if for all ${\mathbf{x}}\in {\mathbb{R}}^n$ the univariate polynomial $t \mapsto h(t{\mathbf{e}}-{\mathbf{x}})$ has only real zeros. Here are some examples of hyperbolic polynomials:
1. Let $h({\mathbf{x}})= x_1\cdots x_n$. Then $h({\mathbf{x}})$ is hyperbolic with respect to any vector ${\mathbf{e}}\in {\mathbb{R}}_{++}^n=(0,\infty)^n$: $$h(t{\mathbf{e}}-{\mathbf{x}}) = \prod_{j=1}^n (te_j-x_j).$$
2. Let $X=(x_{ij})_{i,j=1}^n$ be a matrix of $n(n+1)/2$ variables where we impose $x_{ij}=x_{ji}$. Then $\det(X)$ is hyperbolic with respect to $I=\diag(1, \ldots, 1)$. Indeed $t \mapsto \det(tI-X)$ is the characteristic polynomial of the symmetric matrix $X$, so it has only real zeros.
More generally we may consider complex hermitian $Z=(x_{jk}+iy_{jk})_{j,k=1}^n$ (where $i = \sqrt{-1}$) of $n^2$ real variables where we impose $x_{jk}=x_{kj}$ and $y_{jk}=-y_{kj}$, for all $1\leq j,k \leq n$. Then $\det(Z)$ is a real polynomial which is hyperbolic with respect to $I$.
3. Let $h({\mathbf{x}})=x_1^2-x_2^2-\cdots-x_n^2$. Then $h$ is hyperbolic with respect to $(1,0,\ldots,0)^T$.
Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$. We may write $$\label{dalambdas}
h(t{\mathbf{e}}-{\mathbf{x}}) = h({\mathbf{e}})\prod_{j=1}^d (t - \lambda_j({\mathbf{x}})),$$ where ${\lambda_{\rm max}}({\mathbf{x}})=\lambda_1({\mathbf{x}}) \geq \cdots \geq \lambda_d({\mathbf{x}})={\lambda_{\rm min}}({\mathbf{x}})$ are called the *eigenvalues* of ${\mathbf{x}}$ (with respect to ${\mathbf{e}}$), and $d$ is the degree of $h$. In particular $$\label{prolambda}
h({\mathbf{x}}) = h({\mathbf{e}})\lambda_1({\mathbf{x}}) \cdots \lambda_d({\mathbf{x}}).$$
By homogeneity $$\label{dilambdas}
\lambda_j(s{\mathbf{x}}+t{\mathbf{e}})=
\begin{cases}
s\lambda_j({\mathbf{x}})+t &\mbox{ if } s\geq 0\,, \mbox{ and } \\
s\lambda_{d-j+1}({\mathbf{x}})+t &\mbox{ if } s \leq 0
\end{cases},$$ for all $s,t \in {\mathbb{R}}$ and ${\mathbf{x}}\in {\mathbb{R}}^n$.
The (open) *hyperbolicity cone* is the set $$\Lambda_{\tiny{++}}= \Lambda_{\tiny{++}}({\mathbf{e}})= \{ {\mathbf{x}}\in {\mathbb{R}}^n : {\lambda_{\rm min}}({\mathbf{x}}) >0\}.$$ We denote its closure by $\Lambda_{\tiny{+}}= \Lambda_{\tiny{+}}({\mathbf{e}})=\{ {\mathbf{x}}\in {\mathbb{R}}^n : {\lambda_{\rm min}}({\mathbf{x}}) \geq 0\}$. Since $h(t{\mathbf{e}}-{\mathbf{e}})=h({\mathbf{e}})(t-1)^d$ we see that ${\mathbf{e}}\in \Lambda_{\tiny{++}}$. The hyperbolicity cones for the examples above are:
1. $\Lambda_{\tiny{++}}({\mathbf{e}})= {\mathbb{R}}_{++}^n$.
2. $\Lambda_{\tiny{++}}(I)$ is the cone of positive definite matrices.
3. $\Lambda_{\tiny{++}}(1,0,\ldots,0)$ is the *Lorentz cone* $$\left\{{\mathbf{x}}\in {\mathbb{R}}^n : x_1 > \sqrt{x_2^2+\cdots+x_n^2}\right\}.$$
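These examples can be checked numerically. The sketch below is our own illustration of examples (2) and (3): for the Lorentz case $h({\mathbf x})=x_1^2-x_2^2-x_3^2$ with ${\mathbf e}=(1,0,0)$, the roots of $t\mapsto h(t{\mathbf e}-{\mathbf x})$ are $x_1\pm\sqrt{x_2^2+x_3^2}$, and for $h=\det$ the hyperbolic eigenvalues with respect to $I$ are the ordinary eigenvalues.

```python
import numpy as np

# Lorentz example: h(te - x) = (t - x1)^2 - (x2^2 + x3^2), always real-rooted.
def lorentz_eigs(x):
    r = np.sqrt(x[1]**2 + x[2]**2)
    return np.array([x[0] + r, x[0] - r])

x = np.array([3.0, 1.0, 2.0])
lam = lorentz_eigs(x)
# the claimed eigenvalues are indeed roots of t -> h(te - x)
assert np.allclose([(t - x[0])**2 - x[1]**2 - x[2]**2 for t in lam], 0.0)
# x lies in the Lorentz cone iff lambda_min(x) > 0
assert (lam.min() > 0) == (x[0] > np.sqrt(x[1]**2 + x[2]**2))

# Determinant example: t -> det(tI - X) is the characteristic polynomial of
# the symmetric matrix X, so its roots (the eigenvalues of X) are real.
X = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = np.linalg.eigvalsh(X)
# this X is positive definite, hence in the hyperbolicity cone of det
assert np.all(lam > 0)
```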
The following theorem collects a few fundamental facts about hyperbolic polynomials and their hyperbolicity cones. For proofs see [@Ga; @Ren].
\[hy
---
abstract: 'We perform full 3D numerical simulations of compact objects, such as black holes or neutron stars, boosted through an ambient force-free plasma that possesses a uniform magnetization. We study jet formation and energy extraction from the resulting stationary late-time solutions. The implementation of appropriate boundary conditions has allowed us to explore a wide range of boost velocities, finding that the jet power scales as $\gamma v^2$ ($\gamma$ being the Lorentz factor). We also explore other parameters of the problem, like the orientation of the motion with respect to the asymptotic magnetic field or the inclusion of black hole spin. Additionally, by comparing a black hole with a perfectly conducting sphere in flat spacetime, we manage to disentangle curvature effects from those produced by the perfectly conducting surface. It is shown that when the stellar compactness is increased these two effects act in combination, further enhancing the luminosity produced by the neutron star.'
author:
- Ramiro Cayuso
- Federico Carrasco
- Barbara Sbarato
- Oscar Reula
bibliography:
- 'FFE.bib'
title: Astrophysical jets from boosted compact objects
---
Introduction
============
Enormous amounts of energy, in the form of Poynting winds or highly collimated relativistic jets, are often observed in various astrophysical scenarios. Such energetic phenomena are believed to be powered by compact objects like black holes (BH) and neutron stars (NS), through the interactions with strong and large-scale magnetic fields in their surrounding magnetospheres. In the seminal works of Goldreich & Julian [@goldreich] and Blandford & Znajek [@Blandford] (describing pulsars and active galactic nuclei, respectively), it was first demonstrated that the vicinity of these spinning compact objects would be filled with a tenuous plasma. In such rarefied environments, the electromagnetic force dominates over particle inertia and leads to a great simplification in the problem, allowing one to capture the basic mechanism that taps rotational energy by means of the electromagnetic field. While pulsars admit a classical interpretation as Faraday disks [@faraday1832], in the black hole scenario the energy is instead extracted in the form of a generalized Penrose process (see e.g. [@lasota2014]) known as the Blandford-Znajek mechanism. This low-inertia limit of relativistic magnetohydrodynamics, referred to as force-free electrodynamics (FFE), has been –since then– widely used to study global properties of neutron star and black hole magnetospheres; see for instance Refs. [@contopoulos1999; @Komissarov2004b; @mckinney2006relativistic; @timokhin2006force; @spitkovsky2006; @Palenzuela2010Mag; @Gralla2014].
In the force-free approximation, when there is a perturbation of an otherwise constant magnetic field, the dynamics makes these perturbations travel predominantly along the magnetic field lines, thus carrying energy with them along this direction. In this work, we simulate a couple of astrophysically relevant situations where this happens, which consist of a black hole or a neutron star moving through a plasma-filled region of constant magnetic field. Galactic mergers could provide a likely scenario for the black hole case [@begelman1980; @milosavljevic2005], since the resulting circumbinary disk of the merged galaxy will anchor magnetic field lines, some of which traverse the central region where the binary –and eventually the final supermassive black hole– moves. Another example could be a BH-NS binary, in which the black hole would move through the magnetic field of a neutron star [@mcwilliams2011; @paschalidis2013]. In such cases, we expect the black hole to lose some kinetic energy, part of which goes into enlarging its mass and part into electromagnetic energy propagated away by the jets. There have been a number of previous numerical studies of this scenario [@palenzuela2010dual; @Palenzuela2010Mag; @Luis2011], which we use as a starting point for the present work. All of them analyze the problem from the point of view of the stationary magnetic field; namely, in their numerical grid the black hole moves and creates the jets which carry the energy. The advantage is that they can readily compute an energy flux, albeit an approximate one, since their time direction is not a Killing direction for the background geometry. It is precisely this absence of a timelike Killing vector field, and the corresponding lack of a conserved positive-definite energy, that permits energy transport via jets in this approximation where the background is fixed. The disadvantage is that they cannot model high-speed black holes, since these move out of the grid too quickly.
In our case we choose instead to describe the problem in the static geometry of the black hole. The black hole sees a boosted magnetic field and the corresponding electric field; the interaction of its geometry with that electromagnetic configuration generates a stationary solution which takes energy away through jets. In our case we do have a background Killing vector field, and hence conservation of the energy it defines, but this is not the energy seen by an observer for whom the (uniform) magnetic field is at rest. Thus, we also have to define approximate energy fluxes corresponding to these –in our description– moving observers. The energies so defined are transported away, as expected.
The other situation we model is that of a neutron star, also moving through a region of uniformly magnetized plasma. This could happen if a neutron star orbits near an active supermassive black hole, where both strong magnetic fields and force-free plasma are expected around the central region. It could also be relevant in the context of electromagnetic precursor signals from neutron-star mergers [@palenzuela2013electromagnetic; @palenzuela2013linking; @ponce2014], the likely progenitors of gamma-ray bursts. We consider here an idealized setting where the neutron star is represented by a perfectly conducting spherical surface and there is no field generated in the stellar interior. This might be regarded as the limiting case in which the exterior magnetic field is much stronger than the one associated with the star, so that the latter can be neglected. We defer the inclusion of the star’s own magnetic field and rotation to a more detailed analysis in a future work. A similar behavior to the boosted (nonspinning) black hole case is found, although the details of the operating mechanisms are not the same. Here, kinetic energy from the motion is transformed into Alfvén waves, sourced by the boundary conditions at the conductor. One nice aspect of this problem is that it allows one to take the flat-spacetime limit, in which the boosted time direction is also a Killing direction. This means there is no ambiguity in defining the energy fluxes used for the description; thus, it might help in gaining some insight into the previous –more delicate– black hole scenario.
In Section II, we describe the setup for both cases: the numerical scheme, geometry and evolution equations; the initial data; the boundary conditions; and, finally, the energy-flux definitions. With all of this information one should be able to reproduce our results unambiguously. Except for the boundary conditions and energy fluxes, the setting is very similar to the one in [@palenzuela2010dual; @Luis2011]. In Section III we present the results of our simulations, where different aspects of the problem were explored. Conclusions and perspectives are drawn in Section IV. Throughout, we adopt geometrized units in which $c=G=1$ and Lorentz-Heaviside units for the electromagnetic field.
Setup
=====
We are interested in modeling the magnetosphere of a compact object (BH or NS) that travels across a uniform magnetic field, by solving the equations of force-free electrodynamics. The code used here was first described in [@FFE2] for black holes and later extended in [@NS] by developing appropriate boundary conditions for the perfectly conducting surface of a neutron star. Since we adopt the reference frame of the central object, its motion relative to the uniform magnetic field will be accomplished through suitable boundary conditions at the external surface of the domain. We shall look for stationary solutions, obtained by evolving the fields until they do not change appreciably. The resulting state is determined only by the boundary conditions, the background geometry, and, to some extent, by the handling of the electric-field growth on the current sheets that the dynamics generates.
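As a generic illustration of this "evolve until stationary" strategy (a toy Python sketch of our own, entirely unrelated to the actual FFE code and geometry), one can relax a simple boundary-value problem until the fields stop changing appreciably:

```python
# Toy sketch: drive a 1D Laplace problem to its stationary state by Jacobi
# relaxation, stopping once successive iterates differ by less than `tol`.
# The fixed endpoint values play the role of the external boundary conditions.
def relax_to_steady_state(n=5, left=0.0, right=1.0, tol=1e-10, max_iter=100000):
    u = [0.0] * n
    u[0], u[-1] = left, right
    for _ in range(max_iter):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])  # discrete Laplace update
        change = max(abs(a - b) for a, b in zip(new, u))
        u = new
        if change < tol:
            break  # stationary: the solution no longer changes appreciably
    return u
```

For these boundary values the stationary state is simply the linear profile interpolating between `left` and `right`; the stopping criterion (monitoring the change between successive states) is analogous to the one described above for the evolved fields.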
Although a detailed description of our numerical implementation can be found in previous works [@FFE2; @NS], we start this section by briefly summarizing its basic features, along with information about the metric and the set of evolution equations employed. Then, we describe the initial data and boundary conditions for the two scenarios we want to study. Finally, we discuss the energy-flux definitions used to analyze the results.
Numerical Implementation
------------------------
We evolve a particular version of force-free electrodynamics derived in [@FFE], which has some improved properties in terms of well-posedness and involves the full force-free current density [^1]. More concretely, we shall consider the evolution system given by Eqs. (8)-(9)-(10) in [@NS]. The numerical scheme to solve these equations is based on the *multi-block approach* [@Leco_1; @Carpenter1994; @Carpenter1999; @Carpenter2001], in which the numerical domain is built from several non-overlapping grids where only grid-points at their boundaries are shared. The equations are discretized on each individual subdomain by using difference operators constructed to satisfy summation by parts (SBP). In particular, we employ difference operators which are sixth-order accurate in the interior and third-order at the boundaries. Numerical dissipation is incorporated through the use of adapted Kreiss-Oliger operators. These compatible difference and dissipation operators were both taken from Ref. [@Tiglio2007]. *Penalty terms* [@Carpenter1994; @C
---
abstract: 'A general method is developed for deriving Quantum First and Second Fundamental Theorems of Coinvariant Theory from classical analogs in Invariant Theory, in the case that the quantization parameter $q$ is transcendental over a base field. Several examples are given illustrating the utility of the method; these recover earlier results of various researchers including Domokos, Fioresi, Hacon, Rigal, Strickland, and the present authors.'
author:
- 'K R Goodearl and T H Lenagan[^1]'
title: Quantized coinvariants at transcendental $q$
---
2000 Mathematics Subject Classification: 16W35, 16W30, 20G42, 17B37, 81R50.
Keywords: coinvariants, First Fundamental Theorem, Second Fundamental Theorem, quantum group, quantized coordinate ring.
Introduction and background {#introduction-and-background .unnumbered}
===========================
In the classic terminology of Hermann Weyl [@Weyl], a full solution to any invariant theory problem should incorporate a *First Fundamental Theorem*, giving a set of generators (finite, where possible) for the ring of invariants, and a *Second Fundamental Theorem*, giving generators for the ideal of relations among the generators of the ring of invariants. Many of the classical settings of Invariant Theory have quantized analogs, and one seeks corresponding analogs of the classical First and Second Fundamental Theorems. However, the setting must be dualized before potential quantized analogs can be framed, since there are no quantum analogs of the original objects, only quantum analogs of their coordinate rings. Hence, one rephrases the classical results in terms of rings of invariants (see below), and then seeks quantized versions of these. A first stumbling block is that in general these coactions are not algebra homomorphisms, and so at the outset it is not even obvious that the coinvariants form a subalgebra. However, this can often be established; cf [@GLRrims], Proposition 1.1; [@DLjpaa], Proposition 1.3.
Typically, a classical invariant theoretical setting is quantized uniformly with respect to a parameter $q$, that is, there is a family of rings of coinvariants to be determined, parametrized by nonzero scalars in a base field, such that the case $q=1$ is the classical one. (Many authors, however, restrict attention to special values of $q$, such as those transcendental over the rational number field, and do not address the general case.) As in the classical setting, one usually has identified natural candidates to generate the ring of coinvariants. Effectively, then, one has a parametrized inclusion of algebras (candidate subalgebras inside the algebras of coinvariants), which is an equality at $q=1$, and one seeks equality for other values of $q$. In the best of all worlds, the equality at $q=1$ could be “lifted” by some general process to equality at arbitrary $q$. Lifting to transcendental $q$ has been done successfully in some cases, by ad hoc methods – see, for example, [@FiHa], [@Str]. Also, an early version of [@GLRrims] for the transcendental case was obtained in this way. Quantum Second Fundamental Theorems can be approached in a similar manner. We develop here a general method for lifting equalities of inclusions from $q=1$ to transcendental $q$, which applies to many analyses of quantized coinvariants.
In order to apply classical results as indicated above, we must be able to transform invariants to coinvariants. For morphic actions of algebraic groups, the setting of interest to us, invariants and coinvariants are related as follows. Suppose that $\gamma: G\times V\rightarrow V$ is a morphic action of an algebraic group $G$ on a variety $V$. This action induces an action of $G$ on $\O(V)$, where $(x.f)(v)= f(x^{-1}.v)$ for $x\in G$, $f\in \O(V)$, and $v\in
V$. The invariants for this action are, of course, those functions in $\O(V)$ which are constant on $G$-orbits. The comorphism of $\gamma$ is an algebra homomorphism $\gamma^*: \O(V) \rightarrow \O(G)\otimes \O(V)$, with respect to which $\O(V)$ becomes a left $\O(G)$-comodule. Now a function $f\in \O(V)$ is a coinvariant in this comodule when $\gamma^*(f)=1\otimes f$. Since $1\otimes f$ corresponds to the function $(x,v) \mapsto f(v)$ on $G\times V$, we see that $\gamma^*(f)=1\otimes f$ if and only if $f(x.v)=
f(v)$ for all $x\in G$ and $v\in V$, that is, if and only if $f$ is an invariant function. To summarize: $$\O(V)^{\co\O(G)}= \O(V)^G.$$
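As a toy illustration of this equality (our own example, separate from the settings analyzed below), let the multiplicative group $G=k^{\times}$ act on $V=k$ by scaling, so that $\O(G)=k[t,t^{-1}]$, $\O(V)=k[x]$, and $\gamma^*(x)=t\otimes x$. Then $$\gamma^*\Bigl(\sum_n a_nx^n\Bigr)= \sum_n a_nt^n\otimes x^n = 1\otimes\sum_n a_nx^n \quad\Longleftrightarrow\quad a_n=0 \text{ for all } n\neq 0,$$ so the coinvariants are exactly the constants $k$, which are precisely the functions constant on the orbits $\{0\}$ and $k\setminus\{0\}$, that is, the invariants.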
Quantized coordinate rings have been constructed for all complex semisimple algebraic groups $G$. These quantized coordinate rings are Hopf algebras, which we denote $\Oq(G)$, since we are concentrating on the single parameter versions. In those cases where a morphic action of $G$ on a variety $V$ has been quantized, we have a quantized coordinate ring $\Oq(V)$ which supports an $\Oq(G)$-coaction. The coaction is often not an algebra homomorphism, but nonetheless – as mentioned above – the set of $\Oq(G)$-coinvariants in $\Oq(V)$ is typically a subalgebra. The goal of Quantum First and Second Fundamental Theorems for this setting is to give generators and relations for the algebra $\Oq(V)^{\co\Oq(G)}$ of $\Oq(G)$-coinvariants in $\Oq(V)$. We discuss several standard settings in later sections of the paper, and outline how our general method applies. We recover known Quantum First and Second Fundamental Theorems at transcendental $q$ in these settings, with some simplifications to the original proofs, and in some cases extending the range of the theorems.
Throughout, $k$ will denote a field, which may be of arbitrary characteristic and need not be algebraically closed.
Reduction modulo $q-1$
======================
Throughout this section, we work with a field extension $\kc \subset k$ and a scalar $q\in k\setminus \kc$ which is transcendental over $\kc$. Thus, the $\kc$-subalgebra $R=
\kc[q,q^{-1}] \subset k$ is a Laurent polynomial ring. Let us denote reduction modulo $q-1$ by overbars, that is, given any $R$-module homomorphism $\phi:A\rightarrow B$, we write $\phibar:
\Abar\rightarrow \Bbar$ for the induced map $A/(q-1)A \rightarrow
B/(q-1)B$.
Let $A \stackrel{\phi}\goesto B \stackrel{\psi}\goesto C$ be a complex of $R$-modules, such that $C$ is torsionfree. Suppose that there are $R$-module decompositions $$A= \bigoplus_{j\in J} A_j, \qquad\qquad B= \bigoplus_{j\in J} B_j,
\qquad\qquad C=
\bigoplus_{j\in J} C_j$$ such that $B_j$ is finitely generated, $\phi(A_j)\subseteq B_j$, and $\psi(B_j)\subseteq C_j$ for all $j\in J$.
If the reduced complex $\Abar \stackrel{\phibar}\goesto
\Bbar \stackrel{\psibar}\goesto
\Cbar$ is exact, then so is $$\xymatrixcolsep{3pc}
\xymatrix{
k\otimes_R A \ar[r]^{\id\otimes\phi} &k\otimes_R B
\ar[r]^{\,\id\otimes\psi\,} &k\otimes_R C. }$$
The hypotheses and the conclusions all reduce to the direct sum components of the given decompositions, so it is enough to work in one component. Hence, there is no loss of generality in assuming that $B$ is finitely generated.
Let $S$ denote the localization of $R$ at the maximal ideal $(q-1)R$. Set $$\Atil= S\otimes_R A, \qquad\qquad \Btil= S\otimes_R B, \qquad\qquad
\Ctil= S\otimes_R C,$$ and let $\Atil \stackrel{\phitil}\goesto \Btil
\stackrel{\psitil}\goesto \Ctil$ denote the induced complex of $S$-modules. Since $\Atil/(q-1)\Atil$ is naturally isomorphic to $A/(q-1)A= \Abar$, and similarly for $B$ and $C$, there is a commutative diagram
$$\xymatrixcolsep{4pc}
\xymatrix{
\fiddle{\Atil} \ar[r]^{\phitil} \ar[d]_{\alpha} &\fiddle{\Btil}
\ar[r]^{\psitil} \ar[d
---
abstract: 'We study the mechanism of the ‘pearling’ instability seen recently in experiments on lipid tubules under a locally applied laser intensity. We argue that the correct boundary conditions are fixed chemical potentials, or surface tensions $\Sigma$, at the laser spot and the reservoir in contact with the tubule. We support this with a microscopic picture which includes the intensity profile of the laser beam, and show how this leads to a steady-state flow of lipid along the surface and gradients in the local lipid concentration and surface tension (or chemical potential). This leads to a natural explanation for front propagation and yields several predictions that depend on the tubule length. While most of the qualitative conclusions of previous studies remain the same, the ‘ramped’ control parameter (surface tension) implies several new qualitative results. We also explore some of the consequences of front propagation into a noisy (due to pre-existing thermal fluctuations) unstable medium.'
author:
- |
Peter D. Olmsted[^1] and F. C. MacKintosh\
[Department of Physics, University of Michigan, Ann Arbor MI 48109-2210]{}
title: 'Instability and front propagation in laser-tweezed lipid bilayer tubules'
---
PACS: 47.20.Dr, 47.20.-k, 47.20.Hw, 82.65.Dp. Short Title: [**Dynamic instability in bilayer tubules.**]{}
Introduction
============
A recent series of exciting experiments [@barziv94] demonstrated a dynamic instability induced on tubules of single lipid bilayers by application of laser ‘tweezers’, whereby the cylindrical tubule of radius $R_0$ modulates with a wavenumber given by $q^{\ast}R_0\simeq 0.8$. This instability has been attributed to an excess surface tension due to the gain in electrostatic energy when surfactant molecules, of higher dielectric constant than water, displace water in the electric field of the laser.
The starting point for understanding this phenomenon is the Rayleigh instability [@rayleigh1; @rayleigh2; @tomotika] of a thin cylindrical thread of liquid with positive surface tension, whereby the thread can reduce its surface area at fixed volume by modulating and evolving towards a string of beads. Rayleigh calculated the preferred wavelength of a cylinder of fluid in air in the inviscid [@rayleigh1] and non-inertial (viscous) [@rayleigh2] limits, finding in the former case a characteristic non-zero wavenumber and in the latter case a preferred wavenumber of zero (or infinite wavelength). Later, Tomotika [@tomotika] calculated the instability for a viscous fluid surrounded by another viscous fluid, again in the non-inertial regime, finding that the change in boundary conditions restores a finite characteristic wavelength. See Olami and Granek [@granek95] for a discussion of this point. The present problem, however, requires a much different detailed dynamical analysis which relates the flow of lipid molecules in the interface to the bulk flow in the surrounding fluid. An important physical ingredient is a new conserved quantity, the lipid on the surface.
At present there are (at least) two theoretical treatments of the experiments of Ref.[@barziv94]. Bar-Ziv and Moses [@barziv94] and Nelson and co-workers [@nelsontube95a] have proposed the picture that the surface tension rapidly equilibrates everywhere to an induced value $\Sigma_0$, and the instability proceeds from this state. In contrast, Granek and Olami [@granek95] have postulated that the correct treatment of the problem is to impose a constant rate at which lipid molecules are drawn into the trap from the tubule. This loss of lipid is accommodated by stretching out small wavelength surface fluctuations and the result is again a uniform surface tension $\Sigma_0$. Goldstein, [*et al.*]{} (GNPS) [@nelsontube95b] demonstrated quantitatively how the equilibration of the tension in the tube stays ‘ahead’ of a shape change, so that a treatment with a constant (in time) surface tension is reasonable; and argued that the primary loss of area is in the shape instability itself, rather than through the removal of small-scale wrinkles.
We propose a slightly different picture of the steady state before the onset of the instability, which follows from consideration of the experimental configuration. The tubules, as formed, are several hundred microns long and are attached at either end to ‘massive lipid globules’ [@barziv94] of order $10\mu\hbox{m}$ in diameter. Hence, the tubules must be in contact with a reservoir which fixes the lipid chemical potential (or, equivalently, the surface tension). If we assume the system is equilibrated, it follows that the chemical potential for exchange between the tubule, reservoir, and solvent/lipid bath vanishes [@schulman61], and we may assume a reference chemical potential of zero or, equivalently, zero surface tension. This coincides with the experimental observation of visible thermal fluctuations on the tubules [@barziv94].
Now imagine applying a laser to the tubule. In the electric field of the laser, the chemical potential of a lipid molecule is lowered by an amount $\delta\varepsilon{\cal E\/} D a$, where $D$ is the molecular length, $\delta\varepsilon$ is the dielectric constant relative to water, $a$ the area of the lipid, and ${\cal E\/}$ the energy density deposited in the trap. Nelson [*et al.*]{} [@nelsontube95b] calculated that this yields an energy gain per area of bilayer of $\Sigma_0\sim 2 \cdot
10^{-3} \,\hbox{erg cm}^{-2}$, for a laser power of $50\,\hbox{mW}$.
Hence there is a large reduction in the local chemical potential as the lipid suddenly finds it advantageous to move into the laser spot. The surface tension in the adjacent portion of the tubule increases as lipids start to move out of the surface. Since the other end of the tubule is in contact with a reservoir at zero chemical potential, the final state (prohibiting, for the moment, surface undulations) must be a non-equilibrium steady state in which:
1. Lipid is transported at constant velocity from the reservoir at zero chemical potential to the laser trap at a negative chemical potential.
2. The chemical potential drops linearly along the tubule, with a gradient that balances the frictional drag of the bulk fluid in steady state.
3. The local lipid concentration also varies linearly, since the two-dimensional lipid fluid membrane is compressible.
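Schematically (in our own notation, introduced only for illustration: $x$ is the arclength from the laser spot, $L$ the tubule length, and $\zeta$ an effective drag coefficient due to the bulk fluid), these three statements amount to $$\mu(x)\simeq\mu_{\rm trap}\Bigl(1-\frac{x}{L}\Bigr), \qquad \Bigl|\frac{d\mu}{dx}\Bigr|\sim\zeta v \quad\Longrightarrow\quad v\sim\frac{|\mu_{\rm trap}|}{\zeta L},$$ so that the steady-state flow velocity decreases with increasing tubule length.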
This differs significantly from the treatments of Nelson [*et al.*]{} and Granek and Olami in that [*lipid must flow out of the anchoring globules*]{} and the chemical potential (or surface tension) never attains a non-zero constant over the duration of the experiment. In fact, prohibiting the shape instability, the boundary conditions specified by both Olami and Granek [*and*]{} Nelson [*et al.*]{} yield a tense final state as (a small amount of) area is drawn out of surface fluctuations, while the treatment of the anchoring globules as reservoirs yields the steady-state described below.[^2]
Several consequences follow from this observation. First, a chemical potential gradient suggests a mechanism for front propagation [@nelsontube95b; @sarloos88]. The front starts at the laser spot where the surface tension is largest, and ‘propagates’ outward toward the anchoring globule simply because the amplitude of the instability grows at different rates along the tube. Our results predict a speed of front propagation which is inversely proportional to the length of the tube, and is largest near the laser spot, decreasing to zero somewhere near the anchoring reservoirs; and a characteristic wavenumber which also decreases (much slower, see Fig. \[fig:dispersion2\] below) away from the laser spot.
The outline of this paper is as follows. In Section 2 we derive the linear concentration gradient in the absence of surface undulations. We predict a ‘ramped’, or spatially-varying control parameter, the effective surface tension, which is in fact the two-dimensional pressure whose gradient drives the flow of lipid against the viscous drag of the bulk fluid. In Section 3 we present a detailed microscopic picture of the uptake of surfactant by the trap, and argue that a competition between bending and compression energies modifies the effective surface tension of the trap. This leads to a prediction of a critical laser power for the onset of an instability. While this section may safely be omitted in reading this paper, it illuminates the nature of the instability by treating a realistic scenario for how the trap buckles to initiate flow.
In Section 4 we discuss the implications of a slowly varying surface tension on the detailed calculation of Goldstein, [*et al.*]{} [@nelsontube95a; @nelsontube95b]. We also discuss front propagation within the picture of a surface tension gradient, which relates the problem to a large body of work on front propagation with ‘ramped’ parameters [@kramer82]. The issue of front propagation in this system is delicate [@nelsontube95b], and our results suggest at least two possibilities, which we briefly raise in this work and pose for further investigation. Depending on whether noise ([*i.e.*]{} existing thermal fluctuations in the tubule) is present, we expect front propagation which is either (a) characteristic of that predicted by the so-called Marginal Stability Criteria (MSC) [@nelsontube95b
---
abstract: 'The maximum genus $\gamma_M(G)$ of a graph $G$ is the largest genus of an orientable surface into which $G$ has a cellular embedding. Combinatorially, it coincides with the maximum number of disjoint pairs of adjacent edges of $G$ whose removal results in a connected spanning subgraph of $G$. In this paper we prove that removing pairs of adjacent edges from $G$ arbitrarily while retaining connectedness leads to at least $\gamma_M(G)/2$ pairs of edges removed. This allows us to describe a greedy algorithm for the maximum genus of a graph; our algorithm returns an integer $k$ such that $\gamma_M(G)/2\le k \le \gamma_M(G)$, providing a simple method to efficiently approximate maximum genus. As a consequence of our approach we obtain a $2$-approximate counterpart of Xuong’s combinatorial characterisation of maximum genus.'
author:
- 'Michal Kotrbčík[^1]'
- 'Martin Škoviera[^2]'
bibliography:
- 'bibl.bib'
title: 'Simple greedy $2$-approximation algorithm for the maximum genus of a graph'
---
[**Keywords:**]{} maximum genus, embedding, graph, greedy algorithm.
[**AMS subject classification.**]{} Primary: 05C10. Secondary: 05C85, 05C40.
Introduction
============
The *maximum genus* $\gamma_M(G)$ of a graph $G$ is the maximum integer $g$ such that $G$ has a cellular embedding in the orientable surface of genus $g$. A result of Duke [@duke] implies that a graph $G$ has a cellular embedding in the orientable surface of genus $g$ if and only if $\gamma(G) \le g \le \gamma_M(G)$ where $\gamma(G)$ denotes the (minimum) genus of $G$. The problem of determining the set of genera of orientable surfaces upon which $G$ can be embedded thus reduces to calculation of $\gamma(G)$ and $\gamma_M(G)$.
Computing the minimum genus of a graph is a notoriously difficult problem, which is known to be NP-complete even for cubic graphs (see [@Th; @T2]). Nevertheless, the minimum genus can be calculated in linear time for graphs with bounded genus or bounded treewidth [@KMR:2008]. Moreover, for graphs with fixed treewidth and bounded maximum degree, [@gross:2014] provides a polynomial-time algorithm obtaining the complete genus distribution $\{g_i\}$ of the graph $G$, where $g_i$ denotes the number of cellular embeddings of $G$ into the orientable surface of genus $i$. For graphs of bounded maximum degree, [@CS:2013] has recently proposed a polynomial-time algorithm constructing an embedding with genus at most $O(\gamma(G)^{c_1}\log^{c_2} n)$, where $c_1$ and $c_2$ are constants. On the other hand, for every $\epsilon>0$ and every function $f(n) = O(n^{1-\epsilon})$, there is no polynomial-time algorithm that constructs an embedding of an arbitrary graph $G$ with $n$ vertices into a surface of genus at most $\gamma(G) + f(n)$, unless P $=$ NP (see [@CKK; @CKK2]).
For maximum genus, the situation is quite different, as maximum genus admits a good (min-max) characterisation by Xuong’s and Nebeský’s theorems, see [@KOK; @X] and [@KG; @N], respectively. From among these results the best known is Xuong’s theorem stating that $\gamma_M(G) = (\beta(G) - \min_T
\odd(G-E(T)))/2$, where $\beta(G)$ is the cycle rank of $G$, $\odd(G-E(T))$ is the number of components of $G-E(T)$ with an odd number of edges, and the minimum is taken over all spanning trees $T$ of $G$. Building on these results, Furst et al. [@FGM] and Glukhov [@G] independently devised polynomial-time algorithms for determining the maximum genus of an arbitrary graph. The algorithm of [@FGM] uses Xuong’s characterisation of maximum genus and exploits a reduction to the linear matroid parity on an auxiliary graph; its running time is bounded by $O(mn\Delta\log^6m)$, where $n,m$, and $\Delta$ are the number of vertices, edges, and the maximum degree of the graph, respectively. A matroidal structure is also in the backgroung of the algorithm derived in [@G], albeit in a different way. Starting with any spanning tree $T$ of $G$, the algorithm greedily finds a sequence of graphs $F_i$ such that $T = F_0 \subseteq \cdots \subseteq F_n
\subseteq G$, $|E(F_{i+1}) - E(F_i)| = 2$, and $\gamma_M(F_i) =
i$ for all $i$, and $\gamma_M(F_n) = \gamma_M(G)$. The running time of this algorithm is bounded by $O(m^6)$.
Although two polynomial-time algorithms for the maximum genus problem are known, both are relatively complicated. It is therefore desirable to have a simpler way to determine the maximum genus, at least approximately. A greedy approximation algorithm for the maximum genus of a graph was proposed by Chen [@chen]. The algorithm has two main phases. First, it modifies a given graph $G$ into a $3$-regular graph $H$ by vertex splitting, chooses an arbitrary spanning tree $T$ of $H$, and finds a set $P$ of disjoint pairs of adjacent edges in $H-E(T)$ with the maximum possible size. Second, it constructs a single-face embedding of $T\cup P$ and then inserts the remaining edges into the embedding while trying to raise the genus as much as possible. A high-genus embedding of $G$ in the same surface is then constructed by contracting the edges created by vertex splitting. The algorithm constructs an embedding of $G$ with genus at least $\gamma_M(G)/4$ and its running time $O(m\log n)$ is dominated by the second phase, that is, by operations on an embedded spanning subgraph of $H$.
In this paper we show that there is a much simpler way to approximate maximum genus. Our algorithm repeatedly removes arbitrary pairs of adjacent edges from $G$ while keeping the graph connected. We prove that this simple idea leads to at least $\gamma_M(G)/2$ pairs removed, providing an algorithm that returns an integer $k$ such that $\gamma_M(G)/2 \le k \le
\gamma_M(G)$. This process can be implemented with running time $O(m^2\log^2n/(n\log\log n))$. The algorithms developed in [@chen] can then be used to efficiently construct an embedding with genus $k$. Our result provides the first easily implementable method to approximate the maximum genus, and it improves on the previous, more complicated algorithm of Chen [@chen], which guarantees an embedding with genus only $\gamma_M(G)/4$. Structurally, our approach yields a natural $2$-approximate counterpart of Xuong’s theorem.
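For illustration, the greedy procedure just described can be sketched as follows (a naive Python implementation of our own, for exposition only: it uses a quadratic pair search and a BFS connectivity test, so it does not attain the running time quoted above, but it returns an integer $k$ with $\gamma_M(G)/2 \le k \le \gamma_M(G)$ for a connected input graph):

```python
from collections import defaultdict, deque

def is_connected(n, edges):
    """BFS test that vertices 0..n-1 are connected by the given edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

def greedy_genus_lower_bound(n, edges):
    """Repeatedly remove an arbitrary pair of adjacent edges while keeping
    the graph connected; return the number k of pairs removed."""
    edges, k = list(edges), 0
    removed = True
    while removed:
        removed = False
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                if set(edges[i]) & set(edges[j]):  # edges sharing a vertex
                    rest = edges[:i] + edges[i + 1:j] + edges[j + 1:]
                    if is_connected(n, rest):
                        edges, k, removed = rest, k + 1, True
                        break
            if removed:
                break
    return k
```

For example, for the complete graph $K_4$ (which has $\gamma_M(K_4)=1$) the procedure removes exactly one pair, while for any tree it removes none.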
Background
==========
In this section we present definitions and results that provide the background for our algorithm.
Our terminology is standard and consistent with [@MT]. By a graph we mean a finite undirected graph with loops and parallel edges permitted. Throughout, all embeddings into surfaces are cellular, forcing our graphs to be connected, and the surfaces are orientable. For more details and the necessary background we refer the reader to [@GT] or [@MT]; a recent survey of maximum genus can be found in [@topics Chapter 2].
One of the earliest results on embeddings of graphs is the following observation, which is sometimes called Ringeisen’s edge-addition lemma. Although it is implicit in [@NSW], Ringeisen [@ringeisen:1972] was perhaps the first to draw explicit attention to it.
\[lemma:edge-addition-technique\] Let $\Pi$ be an embedding of a connected graph $G$ and let $e$ be an edge not contained in $G$ but incident with vertices in $G$.
- If both ends of $e$ are inserted into the same face of $\Pi$, then this face splits into two faces of the extended embedding of $G+e$ and the genus does not change.
- If the ends of $e$ are inserted into two distinct faces of $\Pi$, then in the extended embedding of $G+e$ these faces are merged into one and the genus raises by one.
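Both cases are immediate from Euler’s formula for a cellular embedding, $$V-E+F=2-2g:$$ in case (i) the added edge gives $\Delta E=+1$ and $\Delta F=+1$, leaving $V-E+F$, and hence the genus $g$, unchanged; in case (ii) it gives $\Delta E=+1$ and $\Delta F=-1$, so $V-E+F$ drops by $2$ and $g$ increases by one.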
The next lemma, independently obtained in [@KOK], [@jungerman:1978], and [@X], constitutes the cornerstone of proofs of Xuong’s theorem. It follows easily from Lemma \[lemma:edge-addition-technique\].
\[lemma:adding-pairs-single-face\] Let $G$ be a connected graph and $\{e,f\}$ a pair of adjacent edges not contained in $G$, but incident with vertices in $G$. If $G$ has an embedding with a single face, then so does $G\cup\{e,f\}$.
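To illustrate how Lemma \[lemma:adding-pairs-single-face\] produces lower bounds in practice, the following sketch (a hypothetical helper, not the algorithm of this paper or of [@chen]) builds a BFS spanning tree, which has a one-face genus-0 embedding, and then greedily pairs co-tree edges sharing an endpoint; by the lemma, each disjoint adjacent pair can be added while keeping a single face, raising the genus by one, so the number of pairs found is a lower bound on $\gamma_M(G)$.

```python
from collections import deque

def greedy_genus_lower_bound(n, edges):
    """Greedy lower bound on the maximum genus gamma_M(G).

    Builds a BFS spanning tree (any tree has a one-face, genus-0
    embedding), then greedily pairs co-tree edges that share an
    endpoint; each disjoint adjacent pair can be added while keeping
    a single face, raising the genus by one.
    """
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    # BFS spanning tree rooted at vertex 0 (graph assumed connected)
    seen = [False] * n
    seen[0] = True
    tree = set()
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v, idx in adj[u]:
            if not seen[v]:
                seen[v] = True
                tree.add(idx)
                queue.append(v)
    cotree = [i for i in range(len(edges)) if i not in tree]
    # greedily match co-tree edges that share a vertex
    used = set()
    pairs = 0
    for i in cotree:
        if i in used:
            continue
        for j in cotree:
            if j != i and j not in used and set(edges[i]) & set(edges[j]):
                used.update({i, j})
                pairs += 1
                break
    return pairs

# K4 has Betti number 6 - 4 + 1 = 3 and maximum genus 1
print(greedy_genus_lower_bound(4, [(0, 1), (0, 2), (0, 3),
                                   (1, 2), (1, 3), (2, 3)]))  # 1
```

The greedy pairing is of course weaker than the matching-based procedure analyzed in the paper; it only illustrates the mechanism behind Xuong-style lower bounds.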
Recall that by Xuong’s theorem $\gamma_
---
abstract: 'We investigate interfacial properties between two highly incompatible polymers of different stiffness. The extensive Monte Carlo simulations of the binary polymer melt yield detailed interfacial profiles and the interfacial tension via an analysis of capillary fluctuations. We extract an effective Flory-Huggins parameter from the simulations, which is used in self-consistent field calculations. These take due account of the chain architecture via a partial enumeration of the single chain partition function, using chain conformations obtained by Monte Carlo simulations of the pure phases. The agreement between the simulations and self-consistent field calculations is almost quantitative; however, we find deviations from the predictions of the Gaussian chain model for high incompatibilities or large stiffness. The interfacial width at very high incompatibilities is smaller than the prediction of the Gaussian chain model, and decreases upon increasing the statistical segment length of the semi-flexible component.'
author:
- |
M. Müller${}^{1,2}$ and A. Werner${}^{1}$\
[${}^1$ Institut f[ü]{}r Physik, Johannes Gutenberg Universit[ä]{}t]{}\
[D-55099 Mainz, Germany]{}\
[${}^2$Department of Physics, Box 351560, University of Washington,]{}\
[Seattle, Washington 98195-1560]{}
title: |
Interfaces between highly incompatible polymers of different stiffness:\
Monte Carlo simulations and self-consistent field calculations
---
Introduction
==============
Melt blending of polymers has proven useful in designing new composite materials with improved application properties. In many practical situations the constituents of the blend are characterized by some degree of structural asymmetry. For example, a flexible component might contribute to a higher resistance to fracture, while blending it with a stiffer polymer can increase the tensile strength of the material. Since the entropy of mixing in polymeric systems decreases with increasing degree of polymerization, a small unfavorable mismatch in enthalpic interactions, entropic packing effects or the combination of both, generally leads to materials which are not homogeneous on mesoscopic scales, but rather fine dispersions of one component in another. Therefore properties of interfaces between unmixed phases are crucial in controlling the application properties of composites[@GENERAL] and have found abiding experimental interest[@FRIEND1; @FRIEND2; @FRIEND3; @FRIEND4].
Recently, the bulk phase behavior and surface properties[@WU] of polyolefins[@BATES; @OLEFINS] with varying microstructure have attracted considerable experimental and theoretical interest. These mixtures are often modeled[@BATES; @LIU; @SCHWEIZER] as blends of polymers with different bending rigidities, the less branched polymer corresponding to the more flexible component. For pure hard core interactions, field theoretical calculations by Fredrickson, Liu and Bates[@LIU], polymer reference interaction site model (P-RISM) computations by Singh and Schweizer[@SCHWEIZER], lattice cluster theories by Freed and Dudowicz[@FREED] and Monte Carlo simulations[@M1] find a small positive contribution to the Flory-Huggins parameter $\chi$. Monte Carlo simulations which include a repulsion between unlike species reveal an additional increase of the effective Flory-Huggins parameter with chain stiffness, because a back folding of chains becomes less probable with increasing stiffness and the number of intermolecular contacts increases[@M1]. Qualitatively similar effects were found analytically in P-RISM[@SCHWEIZER] and lattice cluster[@FREED] theories.
In spite of their ubiquitous occurrence, interfacial properties in asymmetric blends have attracted comparatively little interest. When entropic packing contributions to the Flory-Huggins parameter $\chi$ are small and composition fluctuations are negligible, the self-consistent field theory is expected to yield an adequate quantitative description. Helfand and Sapse[@HS] extended the self-consistent field theory to Gaussian chains with different statistical segment lengths. In the limit of infinitely long Gaussian chains and strong segregation, they obtained analytical expressions for the interfacial width $w$ and the interfacial tension $\sigma$. Both increase upon increasing the statistical segment length of one component, leaving $\chi$ and the architecture of the other component unaltered.
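For orientation, a minimal sketch of the symmetric strong-segregation limit (the classical Helfand–Tagami expressions, to which the Helfand–Sapse results reduce when both segment lengths are equal). Note that prefactor conventions for $w$ vary between references; the form below assumes the $\tanh$-profile convention.

```python
import math

def helfand_tagami(chi, b):
    """Strong-segregation interfacial width and tension for symmetric
    Gaussian chains with statistical segment length b and Flory-Huggins
    parameter chi (tanh-profile convention for w). The tension is
    returned in units of rho_0 * k_B * T."""
    w = 2.0 * b / math.sqrt(6.0 * chi)      # interfacial width
    sigma = b * math.sqrt(chi / 6.0)        # interfacial tension
    return w, sigma

w, s = helfand_tagami(0.24, 1.0)
print(w, s)   # w ~ 1.667, sigma ~ 0.2
```

Both quantities grow with $b$, which is the trend of the Gaussian chain model referred to in the text; the deviations discussed later concern precisely the regime where this picture breaks down.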
However, there are other models that incorporate structural disparities on the monomer level. Freed and coworkers model monomers as clusters of various shapes on a lattice[@FREED2] and have explored corrections to the energy of mixing and entropic contributions to the Flory-Huggins parameter.
Stiffness disparities have also been investigated using the worm-like chain model[@WORM], which captures the crossover between rod-like behavior on small length scales and Gaussian statistics on length scales much larger than the persistence length. Morse and Fredrickson[@MORSE] extended the self-consistent field calculation to a symmetric blend of worm-like chains. For vanishing bending rigidity $\kappa$ they reproduced the Gaussian chain result. In the limit of high bending rigidities and strong segregation ($\kappa\chi \gg 1$), however, they found that the width $w$ of the monomer density profile can be considerably smaller than for a Gaussian chain with the same long distance behavior. At large $\kappa \chi$, increasing the statistical segment length even leads to a decrease of the interfacial width in qualitative contrast to the Gaussian chain result. They also observed that the width of the bond orientation profile is of the order of the persistence length, which is much larger than $w$ in that limit. Thus the interfacial width $w$ and the persistence length constitute two independent length scales of the interfacial profiles. A reduction of the interfacial width in the case of small bending rigidities was obtained numerically by Schmid and Müller[@SCHMID1]. They noted that the local structure might become important if its length scale is comparable to the interfacial width; a situation which occurs at rather large incompatibilities.
In the present study we extend our Monte Carlo studies[@M1] of structurally asymmetric blends to the investigation of interfacial properties between well segregated phases of flexible and semi-flexible polymers. We consider rather small bending rigidities of the semi-flexible component, so that the long distance behavior of both species is Gaussian. However, we choose the incompatibility $\chi$ high enough that the interfacial width and the persistence length are comparable for the higher bending rigidities. The Monte Carlo simulations highlight the architectural influences and give a detailed picture of interfaces between structurally asymmetric polymers. They yield density and orientation profiles for bonds and chains as a whole. Extracting an effective Flory-Huggins parameter $\chi$ from the simulation data, we compare our Monte Carlo results to self-consistent field calculations which take due account of the chain architecture via a partial enumeration procedure[@SZLEIFER; @M2; @M2A], and to Gaussian chain results. This allows us to assess the impact of the level of coarse graining on the interfacial properties.
Our paper is organized as follows: In the next section we describe our polymer model, especially the dependence of single chain properties on the stiffness. We comment on some computational aspects of the Monte Carlo simulations and describe the measurement of the interfacial tension. We also introduce the salient features of our self-consistent field calculations for arbitrary molecular architecture. In the following, we present our simulational results and compare them to the self-consistent field calculations. We close with a brief discussion of our findings and an outlook on future work.
Model and technical details
=============================
Bond fluctuation model and single chain properties
--------------------------------------------------
In the framework of our coarse grained lattice model, a small number of chemical repeat units, say 3-5, is mapped onto a lattice monomer, such that the relevant features - chain connectivity and excluded volume interaction between monomeric units - are retained. We use the three dimensional bond fluctuation model (BFM)[@BFM], which has found widespread application in computer simulations, because it combines the computational efficiency of lattice models with a rather faithful approximation of continuous space properties. Each effective monomer blocks a cube of 8 neighboring sites from further occupancy on a simple cubic lattice. Due to the extended monomer size, the model captures some nontrivial packing effects. We consider a blend of $n_A$ flexible A-polymers of length $N_A$ and $n_B$ semi-flexible B-polymers comprising $N_B$ monomers in a volume $V$. At a total monomer density $\Phi_0 = (N_An_A+N_Bn_B)/V= 0.5/8$, the model reproduces many properties of a dense polymeric melt. We use chain lengths $N=N_A=N_B=32$ and $64$, which correspond to a degree of polymerization of the order of 120 and 240 in more chemically realistic polymer models. Monomers are connected via one of 108 bond vectors with lengths $2,\sqrt{5},\sqrt{6},3$ or $\sqrt{10}$, where here, and henceforth, all lengths are measured in units of the lattice spacing. The large number of bond vectors permits 87 different bond angles.
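The allowed bond-vector set can be generated explicitly. In the standard BFM it consists of all permutations and sign combinations of the six basic vectors $(2,0,0)$, $(2,1,0)$, $(2,1,1)$, $(2,2,1)$, $(3,0,0)$ and $(3,1,0)$; the following sketch enumerates them and recovers the 108 vectors and the squared lengths $4,5,6,9,10$ quoted above.

```python
from itertools import permutations, product

def bfm_bond_vectors():
    """Enumerate the allowed bond vectors of the 3D bond fluctuation
    model: all permutations and sign combinations of the six basic
    vector classes. The set() removes duplicates arising from zero
    components and repeated entries."""
    classes = [(2, 0, 0), (2, 1, 0), (2, 1, 1),
               (2, 2, 1), (3, 0, 0), (3, 1, 0)]
    vecs = set()
    for base in classes:
        for perm in permutations(base):
            for signs in product((1, -1), repeat=3):
                vecs.add(tuple(s * c for s, c in zip(signs, perm)))
    return vecs

bonds = bfm_bond_vectors()
lengths = sorted({sum(c * c for c in v) for v in bonds})
print(len(bonds), lengths)   # 108 [4, 5, 6, 9, 10]
```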
The persistence length of the semi-flexible B-polymers is tuned by imposing an intramolecular potential, which favors straight bond angles. We use a particularly simple choice[@M1]: $E(\theta) = f k_BT \cos(\theta)$, where $\theta$ denotes the complementary angle between two successive bonds. Previous Monte Carlo simulations[@M1] of the bulk thermodynamics for $N=32$ and $f=1.0$ revealed a purely entropic Flory-Huggins parameter $\Delta \chi = 0.0018(2)$ for the athermal blend. This small value is in good quantitative agreement with theories[@LIU; @SCHWE
---
abstract: 'We introduce new motivic invariants of arbitrary varieties over a perfect field. These cohomological invariants take values in the category of one-motives (considered up to isogeny in positive characteristic). The algebraic definition of these invariants presented here proves a conjecture of Deligne. Other applications include some cases of conjectures of Serre, Katz, and Jannsen on the independence of $\ell$ of parts of the étale cohomology of arbitrary varieties over number fields and finite fields.'
address:
- 'Department of Mathematics, University of Maryland, College Park MD 20742, USA'
- |
Max-Planck-Institut für Mathematik\
Vivatsgasse 7\
D-53111 Bonn, Germany
author:
- Niranjan Ramachandran
title: 'One-motives and a conjecture of Deligne'
---
[^1]
$\frak{Wenn~die~K\ddot{o}nige~bau'n,}$
$\frak{haben~die~K\ddot{a}rrner~zu~tun.}$
$\frak{F.~Schiller}$
[**Introduction.**]{} P. Deligne [@h 10.4.1] has attached one-motives to complex algebraic varieties using the theory of mixed Hodge structures. He has conjectured that these one-motives admit a *purely algebraic* definition. The aim of this article is to prove his conjecture (Theorem \[peddha\]).
Recall the well known result of Riemann [@h2 4.4.3], presented here in modern guise: the “Hodge realization" $T_{{{\mathbb{Z}}}}$ — this is $A \mapsto H_1(A, {{\mathbb{Z}}})$ — defines an equivalence from the category of complex abelian varieties to the category of torsion-free polarizable Hodge structures of type $\{(0,-1),
(-1,0)\}$. In particular, any such Hodge structure arises as the $H_1$ of an essentially unique complex abelian variety.
Deligne [@h §10.1] has introduced the algebraic notion of a one-motive over a field $k$, generalizing that of an abelian variety — §\[rev\] contains the precise definitions; he has also generalized Riemann’s result by showing that the “Hodge realization” $T_{{{\mathbb{Z}}}}$ defines an equivalence from the category of one-motives over ${{\mathbb{C}}}$ to the category of torsion-free mixed Hodge structures $H$ of type $$(*) \qquad {} \qquad {} \qquad {} \qquad \{(-1, -1), (-1, 0), (0, -1), (0,0)\}
\qquad {}$$ with $Gr^W_{-1}H$ polarizable. Thus, any such mixed Hodge structure $H$ arises from an essentially unique one-motive $I(H)$ over ${{\mathbb{C}}}$. The functor $I$ is a quasi-inverse to $T_{{{\mathbb{Z}}}}$.
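For the reader’s convenience, we recall here, in slightly simplified form, the algebraic data behind Deligne’s notion; the precise definitions are given in §\[rev\].

```latex
% A one-motive over k (Deligne): M = [X --u--> G], where
%  - X is a lattice: a finitely generated free abelian group,
%    with a continuous action of Gal(kbar/k);
%  - G is a semi-abelian variety over k, i.e. an extension
%    of an abelian variety A by a torus T;
%  - u : X -> G(kbar) is a Galois-equivariant homomorphism.
\[
  M \;=\; [\,X \xrightarrow{\;u\;} G\,],
  \qquad 0 \longrightarrow T \longrightarrow G \longrightarrow A
  \longrightarrow 0 .
\]
% The weight filtration on M:
\[
  W_{0}M = M, \qquad W_{-1}M = [0 \to G], \qquad W_{-2}M = [0 \to T].
\]
```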
For any complex variety $V$ and any integer $n \ge 0$, consider the largest mixed Hodge substructure $t^n(V)$ of type $(*)$ of $H^n(V,
{{\mathbb{Z}}}(1))/{\rm torsion}$; there exists a well-defined one-motive $I^n(V)$ over ${{\mathbb{C}}}$ whose Hodge realization is $t^n(V)$; so $I^n(V):= I(t^n(V))$. Deligne [@h 10.4.1] has conjectured that $I^n(V)$ admits a purely algebraic definition. His proof (ibid. 10.3. — Interprétation algébrique du $H^1$ mixte: cas des courbes) of his conjecture for arbitrary curves suggests a precise formulation of the conjecture. Namely, we have the following (this formulation is due to the referee):
\[dc1\] [(Deligne)]{} For an arbitrary variety $V$ over an arbitrary field $k$ and integer $n$, define a one-motive $L^n(V/k)$ and homomorphisms[^2] $$\begin{aligned}
T_{\ell}(L^n(V/k)) & \to & H^n(V \times \bar{k}, {{\mathbb{Z}}}_{\ell}(1))/{\rm
torsion}, \\ T_{DR}(L^n(V/k)) & \to & H^n_{DR}(V/k)\end{aligned}$$ from the $\ell$-adic and de Rham realizations of $L^n(V/k)$. The definitions of $L^n(V/k)$ and the homomorphisms should be algebraic, canonical, and functorial in $V$ and $k$.
Furthermore, $L^n(V/{{{\mathbb{C}}}})$ should be canonically isomorphic to $I^n(V)$.
(Clearly, $V$ can be replaced by a simplicial scheme.)
The prototype is A. Weil’s construction [@aweil] of the Jacobian; his construction proves the conjecture for smooth projective curves and $n=1$. The conjecture is true for smooth projective varieties (\[dcon\]): it amounts to an algebraic construction of the Picard variety and the Néron-Severi group.
The case $n=1$ of (\[dc1\]) is known (up to $p$-isogeny in characteristic $p >0$) for arbitrary varieties over perfect fields [@bs3; @h; @ra; @jp2]; the case $n=2$ is known for complex proper surfaces [@ca; @ca2]. No general results were known for higher cohomology (i.e., for $n >2$).
A natural approach to Conjecture \[dc1\] is to use proper hypercoverings [@h 6.2] by smooth simplicial schemes; namely, to mimic Deligne’s approach [@h] to the construction of the mixed Hodge structure on $H^*(V,{{\mathbb{Z}}})$ of a complex algebraic variety $V$. This approach, which we follow here, gives a two-step strategy to prove (\[dc1\]):
[**Step 1.**]{} Construct one-motives $L^n$ ($n \ge 0$) for smooth simplicial schemes arising from simplicial pairs (\[simpxy\]) and show that they have the properties given in (\[dc1\]).
[**Step 2.**]{} Prove cohomological descent for these one-motives; more precisely, show that the one-motives $L^n$, given by Step 1, of a proper hypercovering of a variety $V$ are “independent” of the proper hypercovering; and, thus, $L^n$ depend only on $V$.
Sections \[constr\], \[Tmt\], \[oakland\] are devoted to the first step, but only for fields of characteristic zero; the case of positive characteristic is relegated to Section \[pos+\]. Our construction of the requisite one-motives $L^n$, inspired by [@ca], relies on the theory of the Picard scheme [@blr Chapter 8]; the techniques are those of [@ra] but here applied to truncated simplicial schemes. The realizations of $L^n$ are treated in Sections \[Tmt\] (Hodge, de Rham), \[oakland\] (étale); here a crucial use is made of the validity of the Hodge conjecture for divisors (\[h11\]).
Section \[varieties\] is devoted to the second step. It turns out that, because an important spectral sequence [@h 8.1.19.1] degenerates only with rational coefficients, the method of proper hypercoverings only provides a theory of isogeny one-motives $L^*(-)\otimes{{\mathbb{Q}}}$. More precisely, given two proper hypercoverings $U_{{\bullet}}$ and $'U_{{\bullet}}$ of $V$, we can only show that the associated one-motives $L^n$ and $'L^n$ are isogenous; the isogeny one-motive $L^n\otimes{{\mathbb{Q}}}$ depends only on $V$. Thus, a new ingredient is necessary to complete the second step, i.e., to endow these isogeny one-motives with integral structures. This is done, as in [@milram], via the integral structure on étale cohomology. Thus, we provide a complete proof (\[peddha\]) of Conjecture \[dc1\] for an arbitrary field of characteristic zero.
We now turn to the case of Conjecture \[dc1\] for a field $k$ of characteristic $p >0$; let us begin by indicating why the conjecture must be weakened slightly.
First, in [@jamo Appendix], A. Grothendieck notes that, for a curve $C$ over $k$, the construction of Deligne [@h 10.3] provides a one-motive $H^1_m(C) = L^1(C/k)$ defined over the perfection $k^{perf}$ of $k$; thus, [@h 10.3] proves the case $n=1$ of (\[dc1\]) only for curves over a perfect field. Second, he (loc. cit) expresses doubts about the existence
---
abstract: 'The interband $\pi$ and $\pi+\sigma$ plasmons in pristine graphene and the Dirac plasmon in doped graphene are of limited use for applications, since they are broad or weak and couple only weakly to an external longitudinal or electromagnetic probe. Therefore, [*ab initio*]{} Density Functional Theory is used to demonstrate that chemical doping of graphene by alkali or alkaline earth atoms dramatically changes the poor graphene excitation spectrum in the ultra-violet frequency range ($4 - 10$ eV). Four prominent modes are detected. Two of them are intra-layer plasmons with the square-root dispersion characteristic of two-dimensional modes. The remaining two are inter-layer plasmons, very strong in the long-wavelength limit but damped for larger wave-vectors. Optical absorption calculations show that both inter-layer plasmons are optically active, which makes these materials suitable for sensing small organic molecules. This is particularly intriguing because optically active two-dimensional plasmons have not been detected in other materials.'
author:
- 'V. Despoja$^{1,2}$'
- 'L. Marušić$^{3}$'
title: UV active plasmons in alkali and alkaline earth intercalated graphene
---
Extensive research of electronic excitations in graphene showed the existence of several two-dimensional (2D) plasmon modes: the intraband (Dirac) plasmon existing only in the doped graphene [@DasSarma; @PlPhT; @grafen; @Measurenanoribb; @Tip; @IR2], and the interband plasmons, which exist in pristine and doped graphene and originate from the interband electron-hole transition between the $\pi$ and $\pi^*$ bands and between the $\pi$ and $\sigma^*$ bands [@grafen; @politano1; @politano2; @Dino; @Eberlein]. These investigations also showed that the interband $\pi$ and $\pi+\sigma$ plasmons are broad and weak resonances, so their interaction with the external longitudinal or electromagnetic probes is weak as well, which makes them inadequate for most practical applications. The ’tunable’ Dirac plasmon in the doped graphene is also weak (for experimentally feasible doping), and in addition to that, it does not couple to an incident electromagnetic field directly. In the systems proposed so far, light could be coupled to the Dirac plasmon only indirectly, e.g. by using metallic tips, gratings or prisms, or by arranging graphene into nanoribbons[@Measurenanoribb; @Tip; @PRLGNR], which is all hard to fabricate. Also, such indirect coupling additionally reduces the intensity of the plasmon, thus reducing the efficiency of its application.
The alkali or alkaline earth intercalated graphene is much easier to fabricate and offers a broader variety of plasmons, both intraband and (especially) interband. Such systems have recently been extensively studied, both theoretically and experimentally [@exp1; @exp2; @exp3; @exp1SuperC; @exp2SuperC; @theory; @lazic], but the attention has not been on the electronic excitations. Intercalating any alkali or alkaline earth metal to a single graphene layer causes the natural doping of the graphene and results in the formation of two quasi two-dimensional (q2D) plasmas. This supports the existence of two 2D intraband plasmons, acoustic and Dirac, with frequencies up to 4 eV [@2Dplasmons], as well as several interband and even inter-layer modes occurring at higher frequencies. Some of these modes are optically active and some of them can be manipulated by doping, which opens possibilities for their application in various fields, such as plasmonics, photonics, transformation optics, optoelectronics, light emitters, detectors and photovoltaic devices [@IR3; @APP1; @chinos; @appl1; @appl3; @appl4; @appl5; @appl6; @appl7; @appl8]. Moreover, ’tunable’ 2D plasmons could be very useful in the area of chemical or biological sensing [@APP2; @photopto; @appl2; @appl9], which is one of our main suggestions for the potential application of the results of this research.
We performed calculations for several alkali and alkaline earth metals, with different coverages, and found that the effects which are the focus of this letter are valid for all of them. In all these cases, in addition to the graphene $\pi$ and $\sigma$ bands, there are also the $\pi$ and $\sigma$ bands of the intercalated metal. This opens possibilities for various electron-hole (e-h) transitions which may be the origins of the interband plasmons. We limit our investigation to the frequencies between 4 and 10 eV (the UV region), where the dominant interband plasmons occur, and identify four significant modes within this range. Two of them are not very well defined in the long-wavelength limit but they exist at larger wave-vectors as well, and show the square-root dispersion characteristic for the surface and 2D modes. These modes are the intra-layer modes, one located in the graphene layer and the other located in the metallic layer. The other two are very prominent in the long-wavelength limit, but at higher wave-vectors their intensities rapidly decrease, which makes them potentially interesting for optical applications [@IR3; @APP1; @APP2; @stauber; @PRLGNR]. Their dispersions are different from those typical for the 2D modes, indicating that they are different from the usual 2D plasmons. Detailed inspection (including retardation, i.e. finite speed of light, and tensorial response) shows that they are dipolar inter-layer modes (the electric field they produce oscillates perpendicular to the crystal plane), i.e. optically active q2D plasmons, contrary to the widely studied q2D plasmons, which produce an electric field parallel to the crystal plane and are not optically active. The extensively studied graphene $\pi$ and $\pi+\sigma$ modes are optically active, but in the long wavelength limit ($Q \rightarrow 0$) they are not plasmons but electron-hole excitations [@Dino].
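As a point of reference for the dispersions discussed above, a minimal sketch of the textbook long-wavelength 2D plasmon law $\omega(Q)\propto\sqrt{Q}$ (Stern-type, Gaussian-like units); the carrier density and mass below are illustrative placeholders, not parameters of the actual calculation.

```python
import math

def omega_2d(Q, n2d, m=1.0, e2=1.0):
    """Long-wavelength 2D plasmon dispersion in Gaussian-like units:
    omega(Q) = sqrt(2*pi*n2d*e2*Q/m) -- the square-root law used in
    the text to identify the intra-layer modes."""
    return math.sqrt(2.0 * math.pi * n2d * e2 * Q / m)

# hallmark of the sqrt(Q) law: quadrupling Q doubles the frequency
print(omega_2d(0.04, 0.05) / omega_2d(0.01, 0.05))   # ~ 2.0
```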
The theoretical formulation of the electronic response in various q2D systems has already been presented[@grafen; @Duncan2; @wake; @Rukelj], so here we only point out some details of the calculation important for the understanding of the result we want to present. We define the Electron Energy Loss Spectroscopy (EELS) local spectral function as the imaginary part of the excitation propagator $$S_{z_0}({\bf Q},\omega)=-Im D_{z_0}({\bf Q},\omega),
\label{spectrum}$$ where $$D_{z_0}({\bf Q},\omega) = W^{ind}_{\textbf{G}_{\parallel}=0}(\textbf{Q},\omega,z_0,z_0).
\label{propagator}$$
$S_{z_0}({\bf Q},\omega)$ is also proportional to the probability density for the parallel momentum transfer ${\bf Q}=(Q_x,Q_y)$ and the energy loss $\omega$ of the reflected electron in the Reflection Electron Energy Loss Spectroscopy (REELS)[@REELS]. The induced dynamically screened Coulomb interaction is $W^{ind}=v^{2\textrm{D}}\otimes\chi\otimes v^{2\textrm{D}}$, where $v^{2\textrm{D}} = \frac{2\pi}{Q}e^{-Q\left|z-z'\right|}$ is the 2D Fourier transform of the bare Coulomb interaction and $\otimes=\int^{L/2}_{-L/2}dz$[@Leo]. The response function is obtained as the solution of the matrix Dyson equation $\hat{\chi}=\hat{\chi}^0 +\hat{\chi}^0\hat{v}^{2\textrm{D}}\hat{\chi}$ in the reciprocal space plane-wave basis ${\bf G}=({\bf G}_{\parallel},G_z)$. The non-interacting electron response matrix is $\hat{\chi}^{0}=\frac{2}{\Omega}\sum_{i,j}(f_i-f_j)/(\omega+i\eta+E_i-E_j)\rho_{{\bf{G}},ij}\rho^*_{{\bf G}',ij}$, where $f_i$ is the Fermi-Dirac distribution, $\rho_{{\bf{G}},ij}$ are charge vertices [@grafen], $\Omega$ is the normalization volume, and $i=(n,\bf{K})$ and $j=(m,{\bf K+Q})$ are Kohn-Sham-Bloch states. The Coulomb interaction with the surrounding supercells in the superlattice arrangement is excluded, as described in detail in Ref.[@Rukelj].
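The matrix Dyson equation above has the closed-form solution $\hat{\chi}=(1-\hat{\chi}^0\hat{v}^{2\textrm{D}})^{-1}\hat{\chi}^0$. The toy sketch below (a $2\times 2$ basis with made-up matrix entries, far smaller than the actual plane-wave basis) verifies this fixed point numerically.

```python
def matmul2(a, b):
    """2x2 matrix product for matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def solve_dyson(chi0, v):
    """Solve the matrix Dyson equation chi = chi0 + chi0 v chi in
    closed form: chi = (I - chi0 v)^{-1} chi0 (toy 2x2 basis)."""
    m = matmul2(chi0, v)
    a = [[1.0 - m[0][0], -m[0][1]],
         [-m[1][0], 1.0 - m[1][1]]]            # I - chi0 v
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    inv = [[a[1][1] / det, -a[0][1] / det],
           [-a[1][0] / det, a[0][0] / det]]    # (I - chi0 v)^{-1}
    return matmul2(inv, chi0)

chi0 = [[-0.20, 0.05], [0.05, -0.30]]   # toy non-interacting response
v = [[1.00, 0.40], [0.40, 0.80]]        # toy Coulomb matrix
chi = solve_dyson(chi0, v)
rhs = matmul2(chi0, matmul2(v, chi))
residual = max(abs(chi[i][j] - chi0[i][j] - rhs[i][j])
               for i in range(2) for j in range(2))
print(residual < 1e-12)   # True: chi = chi0 + chi0 v chi is satisfied
```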
![(color online) The intensity of the electronic excitations in (a) CsC$_8$, (b) CaC$_6$, (c) LiC$_6$ and (d) LiC$_2$. The white and green dotted lines in (d) show the boundaries of the e-h excitation gaps for the graphene $\pi$ bands around the Dirac point and the Li $\sigma$ bands around the $\Gamma$ point, respectively.[]{data-label="Fig1"}](EELS.pdf){width="\textwidth"}
To calculate the Kohn-Sham (KS) wave functions $\phi_{n{\bf K}}$ and energy levels $E_{n{\bf K}}$, i.e. the band structure, of the LiC$_2$, LiC$_6$,
---
abstract: 'We present an extension of the tunneling theory for scanning tunneling microscopy (STM) to include different types of vibrational-electronic couplings responsible for inelastic contributions to the tunnel current in the strong-coupling limit. It allows for a better understanding of more complex scanning tunneling spectra of molecules on a metallic substrate by separating elastic and inelastic contributions. The starting point is the exact solution of the spectral functions for the electronically active local orbitals in the absence of the STM tip. This includes electron-phonon coupling in the coupled system comprising the molecule and the substrate to arbitrary order, including the anti-adiabatic strong coupling regime, as well as the Kondo effect on a free electron spin of the molecule. The tunneling current is derived in second order of the tunneling matrix element, which is expanded in powers of the relevant vibrational displacements. We use the results of an ab-initio calculation for the single-particle electronic properties as an adapted material-specific input for a numerical renormalization group approach for accurately determining the electronic properties of an NTCDA molecule on Ag(111) as a challenging sample system for our theory. Our analysis shows that the mismatch between the ab-initio many-body calculation of the tunnel current in the absence of any electron-phonon coupling and the experimental scanning tunneling spectra can be resolved by including two mechanisms: (i) a strong unconventional Holstein term on the local substrate orbital leads to a reduction of the Kondo temperature and (ii) a different electron-vibrational coupling to the tunneling matrix element is responsible for inelastic steps in the $dI/dV$ curve at finite frequencies.'
author:
- Fabian Eickhoff
- Elena Kolodzeiski
- Taner Esat
- Norman Fournier
- Christian Wagner
- Thorsten Deilmann
- Ruslan Temirov
- Michael Rohlfing
- 'F. Stefan Tautz'
- 'Frithjof B. Anders'
title: 'Inelastic electron tunneling spectroscopy for probing strongly correlated many-body systems by scanning tunneling microscopy'
---
Introduction
============
The investigation of phonons and molecular vibrations by inelastic electron tunneling spectroscopy dates back more than 50 years [@JaklevicLambe1966; @MolecularVibrationTunnel1968]. For example, point contact spectroscopy [@PCSreview1989] has been successfully used to measure the electron-phonon coupling function that enters the Migdal-Eliashberg theory [@McMillanTc1968; @AllenMitrovic] of superconductivity. Recently, the increasing relevance of quantum nanoscience [@Khajetoorians2011; @Baumann2015; @Donati2016; @Natterer2017; @Esat2018; @Cocker2016; @Doppagne2018; @Kimura2019; @Wagner2019] has revitalized the interest in vibrational inelastic electron tunneling spectroscopy (IETS) of molecules adsorbed on solid surfaces [@Stipe1998; @Guo2016; @Wegner2013; @Burema2013] or contacted in transport junctions [@Kim2011a; @Vitali2010; @Meierott2017; @Bruot2012; @Sukegawa2014]. While the fundamental mechanisms of the electron-phonon and electron-vibron interactions are well-understood (for simplicity, we will refer to both as electron-phonon interaction from now on), a quantitative theory with predictive power beyond a simplified picture comprising independent electronic degrees of freedom and bosonic excitations is lacking. Even modern reviews [@REED2008] on this subject present the inelastic tunnel process only on the original level of understanding [@JaklevicLambe1966; @MolecularVibrationTunnel1968], i.e. the emission or absorption of a single phonon when a single electron is tunneling, as depicted in Fig. 1 of Ref. [@MolecularVibrationTunnel1968] or Fig. 1(a) of Ref. [@REED2008].
This commonly accepted picture is very adequate in the weak coupling limit [@MolecularVibrationTunnel1968] of the adiabatic regime [@EntelGrewe1979; @galperinNitzanRatner2006; @EidelsteinSchiller2013; @JovchevAnders2013], in which the electron-phonon coupling is small on the energy scale of the hybridization between the relevant molecular orbital(s) and the surface (or electrode in a transport experiment), and provides a basic understanding of the relevant physical processes. However, it becomes problematic in systems dominated by polaron formation, or for systems in the crossover region between the adiabatic and the anti-adiabatic regimes [@EntelGrewe1979; @galperinNitzanRatner2006; @EidelsteinSchiller2013].
This calls for a more general treatment of the inelastic tunneling process. In this paper we provide such a theory, focussing in particular on the case of scanning tunneling spectroscopy (STS). We generalize the original picture [@JaklevicLambe1966; @MolecularVibrationTunnel1968] to strongly correlated electron systems but maintain the notion that inelastic contributions to the tunneling current require absorption or emission of a phonon while the electron is crossing the tunnel barrier. We treat the STM tip and the system of interest as initially decoupled and fully characterized by their exact Green’s functions. After specifying the tunneling Hamiltonian $\hat H_T$, the tunnel current operator is derived from charge conservation. Then the coupling between the system and the STM tip, $\hat H_T$, is switched on, and the evolving steady-state current is evaluated in second order of the tunneling matrix elements. All material-dependent spectral properties are encoded in the equilibrium spectral functions of the system. Combining an accurate determination of the molecular spectral function using Wilson’s numerical renormalization group (NRG) approach [@Wilson75; @BullaCostiPruschke2008] with a density functional approach [@RevModPhys.74.601] provides a theoretical approach to strongly coupled systems with predictive power.
STS is an established technique and its theoretical background is well-understood [@TersoffHamann1983; @TersoffHamann1985]. Setting aside more challenging situations, commonly a featureless density of states in the STM tip is assumed, and the STM is operated in the tunneling regime such that the measured $dI/dV$ curve may be interpreted as being proportional to the local energy-dependent density of states (LDOS) of the sample at the given bias voltage. Using spin-polarized tips [@SplittingRKKY2012] allows for the detection of the spin-dependent LDOS. Since electrons usually can tunnel from the STM tip to different orbitals in the target system, the quantum mechanical interference of different paths [@SchillerHershfield2000a] may lead to Fano line shapes [@FanoResonance1961] in the tunneling spectra.
The interpretation of electron tunneling becomes more complicated if the spectrum is dominated by the Kondo effect. The Kondo effect, originally discovered as a resistance anomaly in metals containing magnetic impurities [@Kondo62; @kondo_effect], has been studied experimentally in quantum dots [@Kondo_QD0; @Kondo_QD], atoms and molecules on surfaces [@kondo_atom; @LiSchneider1998; @Manoharan2000; @AgamSchiller2001; @Kondo_molecule; @kondo_molecule2], and molecular junctions [@kondo_SM]. A comprehensive understanding has been developed [@Wilson75; @kondo_anderson]: briefly, the antiferromagnetic exchange coupling between the unpaired spin and the itinerant electron states in the substrate (or leads), which diverges logarithmically at low temperatures, produces a singlet ground state with a low-energy single-particle excitation spectrum that is characterized by a resonance at zero energy. In such systems, with their intrinsically highly non-linear LDOS in the vicinity of the chemical potential, it becomes very challenging to distinguish between elastic tunneling processes governed by the energy-dependent transfer matrix and additional inelastic contributions generated by the presence of an additional electron-phonon coupling. For example, in such systems so-called Kondo replicas at vibrational frequencies have been observed [@Kondo_vib_SM; @kondo_vib_bjunc; @kondo_vib_bjunc2; @kondo_vib_bjunc3; @vib_kondo_stm; @vib_kondo_stm2; @vib_kondo_stm3], whose precise nature is, however, not yet understood. The interplay between Kondo physics and electron-vibron coupling has also been studied theoretically [@PaakeFlensberg2005; @vib_kondo_theo3; @vib_kondo_theo].
Since only the total tunneling current is accessible in experiments, its decomposition into individual processes requires guidance by a theory. In this paper, we present an approach providing this guidance. Specifically, we extend the comprehensive theory of the STM tunneling current that was originally formulated by Schiller and Hershfield [@SchillerHershfield2000a] for a magnetic adatom, generalizing Fano’s analysis [@FanoResonance1961], to inelastic contributions in the tunneling Hamiltonian; the current operator is again derived from the local continuity equation. Notably, our theory accounts for two different types of electron-phonon interactions: (i) the intrinsic electron-phonon coupling in the system in the absence of the STM tip and (ii) vibrationally induced fluctuations of the distance between tip and molecule or substrate. The former is included in the system’s Green’s functions and contributes only to the elastic current. The latter enters the tunneling Hamiltonian $\hat H_T$ and is, therefore, the origin of the inelastic current contributions.
Having developed said theory, we demonstrate
---
abstract: |
Let $n,k\in\mathbb{N}$ and let $p_{n}$ denote the $n$th prime number. We define $p_{n}^{(k)}$ recursively as $p_{n}^{(1)}:=p_{n}$ and $p_{n}^{(k)}=p_{p_{n}^{(k-1)}}$, that is, $p_{n}^{(k)}$ is the $p_{n}^{(k-1)}$th prime.
In this note we give answers to some questions and prove a conjecture posed by Miska and Tóth in their recent paper concerning subsequences of the sequence of prime numbers. In particular, we establish explicit upper and lower bounds for $p_{n}^{(k)}$. We also study the behaviour of the counting functions of the sequences $(p_{n}^{(k)})_{k=1}^{\infty}$ and $(p_{k}^{(k)})_{k=1}^{\infty}$.
author:
- 'B[ł]{}ażej Żmija'
title: A note on primes with prime indices
---
[^1]
Introduction
============
Let $(p_{n})_{n=1}^{\infty}$ be the sequence of consecutive prime numbers. In a recent paper [@MT] Miska and Tóth introduced the following subsequences of the sequence of prime numbers: $p_{n}^{(1)}:=p_{n}$ and for $k\geq 2$ $$\begin{aligned}
p_{n}^{(k)}:=p_{p_{n}^{(k-1)}}.\end{aligned}$$ In other words, $p_{n}^{(k)}$ is the $p_{n}^{(k-1)}$th prime. They also defined $$\begin{aligned}
{{\rm Diag}\mathbb{P}}:= & \{\ p_{k}^{(k)}\ |\ k\in\mathbb{N}\ \}, \\
\mathbb{P}_{n}^{T}:= & \{\ p_{n}^{(k)}\ |\ k\in\mathbb{N}\ \}\end{aligned}$$ for each positive integer $n$.
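The recursion defining $p_{n}^{(k)}$ is easy to evaluate numerically. A minimal sketch in Python (assuming the SymPy library, whose `prime(m)` returns the $m$th prime) computes the first terms of the tower $\mathbb{P}_{1}^{T}$ and of the diagonal ${\rm Diag}\mathbb{P}$:

```python
from sympy import prime

def iterated_prime(n, k):
    """Compute p_n^(k): apply the map m -> p_m a total of k times, starting from n."""
    v = n
    for _ in range(k):
        v = prime(v)  # sympy.prime(m) returns the m-th prime number
    return v

# The tower P_1^T: p_1^(k) for k = 1..5
print([iterated_prime(1, k) for k in range(1, 6)])  # [2, 3, 5, 11, 31]

# The diagonal Diag P: p_k^(k) for k = 1..4
print([iterated_prime(k, k) for k in range(1, 5)])  # [2, 5, 31, 277]
```

The rapid growth of the diagonal terms already hints at why these sequences are so sparse.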
The main motivation in [@MT] was the known result that the set of prime numbers is ($R$)-dense, that is, the set $\{\ \frac{p}{q}\ |\ p,q\in\mathbb{P}\ \}$ is dense in $\mathbb{R}_{+}$ (with respect to the natural topology on $\mathbb{R}_{+}$). It was proved in [@MT] that for each $k\in\mathbb{N}$ the sequence $\mathbb{P}_{k}:=(p_{n}^{(k)})_{n=1}^{\infty}$ is ($R$)-dense. This result might be surprising, because the sequences $\mathbb{P}_{k}$ are very sparse. In fact, for each $k$, the set $\mathbb{P}_{k+1}$ is a subset of $\mathbb{P}_{k}$ of asymptotic density zero. On the other hand, it was shown that the sequences $(p_{n}^{(k)})_{k=1}^{\infty}$ for each fixed $n\in\mathbb{N}$, and $(p_{k}^{(k)})_{k=1}^{\infty}$, are not ($R$)-dense.
Results of another type that were proved in [@MT] concern the asymptotic behaviour of $p_{n}^{(k)}$ as $n\rightarrow\infty$, or as $k\rightarrow\infty$. In particular, as $n\rightarrow\infty$, we have for each $k\in\mathbb{N}$ $$\begin{aligned}
p_{n}^{(k)}\sim n(\log n)^{k},\ \ \ \ p_{n+1}^{(k)}\sim p_{n}^{(k)}, \ \ \ \ \log p_{n}^{(k)}\sim \log n\end{aligned}$$ by [@MT Theorem 1]. Some results from [@MT] concerning $p_{n}^{(k)}$ as $k\rightarrow\infty$ are mentioned later.
For a set $A\subseteq \mathbb{N}$ let $A(x)$ be its counting function, that is, $$\begin{aligned}
A(x):=\# \left(A\cap [1,x]\right).\end{aligned}$$ Miska and Tóth posed four questions concerning the numbers $p_{n}^{(k)}$:
A. Is it true that $p_{k+1}^{(k)}\sim p_{k}^{(k)}$ as $k\rightarrow\infty$?
B. Are there real constants $c>0$ and $\beta$ such that $$\begin{aligned}
\exp\mathbb{P}_{n}^{T}(x)\sim cx(\log x)^{\beta}
\end{aligned}$$ for each $n\in\mathbb{N}$?
C. Are there real constants $c>0$ and $\beta$ such that $$\begin{aligned}
\exp{{\rm Diag}\mathbb{P}}(x)\sim cx(\log x)^{\beta}?
\end{aligned}$$
D. Is it true that $$\begin{aligned}
{{\rm Diag}\mathbb{P}}(x)\sim\mathbb{P}_{n}^{T}(x)
\end{aligned}$$ for each $n\in\mathbb{N}$?
The aim of this paper is to give answers to questions B, C, and D.
The main ingredients of our proofs are the following inequalities: $$\begin{aligned}
\label{ineqPrimes}
n\log n<p_{n}<2n\log n.\end{aligned}$$ The first inequality holds for all $n\geq 2$, and the second one for all $n\geq 3$. For the proofs, see [@RS]. In Section \[Results\] we use (\[ineqPrimes\]) in order to show explicit bounds for $p_{n}^{(k)}$. In particular, for all $n>e^{4200}$ we have: $$\begin{aligned}
\log p_{n}^{(k)}= & k(\log k+\log\log k+O_{n}(1)), \\
\log p_{k}^{(k)}= & k(\log k+\log\log k+O(\log\log\log k)),\end{aligned}$$ as $k\rightarrow\infty$, where the implied constant in the first line may depend on $n$, see Theorem \[MAIN\] below. In consequence, we improve the (in)equalities $$\begin{aligned}
\lim_{k\rightarrow\infty}\frac{\log p_{n}^{(k)}}{k\log k}= & 1, \\
1\leq \liminf_{k\rightarrow\infty}\frac{\log p_{k}^{(k)}}{k\log k}\leq & \limsup_{k\rightarrow\infty}\frac{\log p_{k}^{(k)}}{k\log k}\leq 2.\end{aligned}$$ that appeared in [@MT]. Then we show in Section \[Coro\] that the answers to questions B and C are negative (Corollary \[CoroBC\]), while the answer to question D is affirmative (Theorem \[AsympEqThm\]). In fact, we find the following relation: $$\begin{aligned}
\mathbb{P}_{n}^{T}(x)\sim{{\rm Diag}\mathbb{P}}(x)\sim\frac{\log x}{\log\log x}\end{aligned}$$ for all positive integers $n$.
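The inequalities (\[ineqPrimes\]) that drive the proofs can be sanity-checked numerically over a finite range. A sketch, again assuming SymPy (of course, such a check verifies nothing beyond the range tested; the general statement is proved in [@RS]):

```python
from math import log
from sympy import prime

# Check n*log(n) < p_n (valid for n >= 2) and p_n < 2*n*log(n) (valid for n >= 3)
# over a finite range of indices.
for n in range(3, 1001):
    p_n = prime(n)
    assert n * log(n) < p_n < 2 * n * log(n)
print("bounds hold for 3 <= n <= 1000")
```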
In their paper, Miska and Tóth also posed a conjecture that we state here as a proposition, since it is in fact a consequence of a result that had already appeared in [@MT].
Let $n\in\mathbb{N}$ be fixed. Then $$\begin{aligned}
\frac{p_{n}^{(k)}}{p_{k}^{(k)}}\longrightarrow 0\end{aligned}$$ as $k\longrightarrow\infty$.
Let $k> p_{n}$. Then $$\begin{aligned}
0\leq \frac{p_{n}^{(k)}}{p_{k}^{(k)}}<\frac{p_{n}^{(k)}}{p_{p_{n}}^{(k)}}=\frac{p_{n}^{(k)}}{p_{n}^{(k+1)}}.\end{aligned}$$ The expression on the right goes to zero as $k$ goes to infinity, as was proved in [@MT Corollary 3].
It is worth noting that primes with prime indices have already appeared in the literature, for example in [@BB] and [@BKO]. However, to the best of our knowledge, our paper is only the second (after [@MT]) in which the number of iterations of indices, that is, the number $k$ in $p_{n}^{(k)}$, is not fixed.
Throughout the paper we use the following notation: $\log x$ denotes the natural logarithm of $x$, and for functions $f$ and $g$ we write $f\sim g$ if $\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}=1$, $f=O(g)$ if there exists a positive constant $c$ such that $f(x)<cg(x)$ for all sufficiently large $x$, and $f=o(g)$ if $\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}=0$.
Upper and lower bounds for $p_{n}^{(k)}$ {#Results}
========================================
In this section, we find explicit upper and lower bounds for $p_{n}^{(k)}$. We start with the upper bound.
\[lemUP\] Let $n\geq 9$. Then for each $k\in\mathbb
---
abstract: 'The effective theories for many quantum phase transitions can be mapped onto those of classical transitions. Here we show that the naive mapping fails for the sub-ohmic spin-boson model which describes a two-level system coupled to a bosonic bath with power-law spectral density, $J(\omega)\propto\omega^s$. Using an $\epsilon$ expansion we prove that this model has a quantum transition controlled by an [*interacting*]{} fixed point at small $s$, and support this by numerical calculations. In contrast, the corresponding classical long-range Ising model is known to display mean-field transition behavior for $0<s<1/2$, controlled by a [*non-interacting*]{} fixed point. The failure of the quantum–classical mapping is argued to arise from the long-ranged interaction in imaginary time in the quantum model.'
author:
- Matthias Vojta
- 'Ning-Hua Tong'
- Ralf Bulla
date: 'Jan 20, 2005'
title: |
Quantum phase transitions in the sub-ohmic spin-boson model:\
Failure of the quantum–classical mapping
---
Low-energy theories for certain classes of quantum phase transitions in clean systems with $d$ spatial dimensions are known to be equivalent to the ones of classical phase transitions in $(d+z)$ dimensions, where $z$ is the dynamical exponent of the quantum transition [@book]. This mapping is usually established in a path integral formulation of the effective action for the order parameter, where imaginary time in the quantum problem takes the role of $z$ additional space dimensions in the classical counterpart. The tuning parameter for the phase transition, being the ratio of certain coupling constants in the quantum problem (where $T$ is fixed to zero), becomes temperature for the classical transition. For the quantum Ising model, where the transverse field can drive the system into a disordered phase at $T=0$, the quantum–classical equivalence in the scaling limit can be explicitly shown using transfer matrix techniques [@book]. While this formal proof is only applicable for [*short-range*]{} interactions in time direction, it is believed that it also holds for long-range interactions, which can arise upon integrating out gapless degrees of freedom coupled to the order parameter. (Counter-examples are phase transitions in itinerant magnets, where the elimination of low-energy fermions produces non-analyticities in the resulting order parameter field theory [@bkv].) A paradigmatic example is the spin-boson model [@Leggett; @Weiss], where an Ising spin (i.e. a generic two-level system) is coupled to a bath of harmonic oscillators: eliminating the bath variables leads to a retarded self-interaction for the local spin degree of freedom, which decays as $1/\tau^2$ in the well-studied case of ohmic damping. Interestingly, the same model is obtained as the low-energy limit of the anisotropic Kondo model which describes a spin-1/2 magnetic impurity coupled to a gas of conduction electrons [@yuval; @emery].
The purpose of this paper is to point out that the naive quantum–classical mapping can fail for long-ranged interactions in imaginary time even for the simplest case of $(0+1)$ dimensions and Ising symmetry. We shall explicitly prove this failure for the sub-ohmic spin-boson model, by showing that the phase transitions in the quantum problem and in the corresponding classical long-range Ising model fall in different universality classes.
The spin-boson model is described by the Hamiltonian $${\cal H}_{\rm SB}=-\frac{\Delta}{2}\sigma_{x}+\frac{\epsilon}{2}\sigma_{z}+
\sum_{i} \omega_{i}
a_{i}^{\dagger} a_{i}
+\frac{\sigma_{z}}{2} \sum_{i}
\lambda_{i}( a_{i} + a_{i}^{\dagger} )
\label{eq:sbm}$$ in standard notation. The coupling between spin $\sigma$ and the bosonic bath with oscillators $\{a_i\}$ is completely specified by the bath spectral function $$J\left( \omega \right)=\pi \sum_{i}
\lambda_{i}^{2} \delta\left( \omega -\omega_{i} \right) \,,$$ conveniently parametrized as $$J(\omega) = 2\pi\, \alpha\, \omega_c^{1-s} \, \omega^s\,,~ 0<\omega<\omega_c\,,\ \ \ s>-1
\label{power}$$ where the dimensionless parameter $\alpha$ characterizes the dissipation strength, and $\omega_c$ is a cutoff energy. The value $s=1$ represents the case of ohmic dissipation, where a Kosterlitz-Thouless transition separates a delocalized phase at small $\alpha$ from a localized phase at large $\alpha$. These two phases asymptotically correspond to eigenstates of $\sigma_x$ and $\sigma_z$, respectively.
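The parametrization (\[power\]) is straightforward to code up. A minimal sketch (the default parameter values below are illustrative only, not taken from the paper):

```python
import math

def bath_spectral_density(omega, alpha=0.1, omega_c=1.0, s=0.5):
    """J(w) = 2*pi*alpha*omega_c**(1-s)*w**s for 0 < w < omega_c, and 0 otherwise.

    s = 1 is the ohmic case; 0 < s < 1 is sub-ohmic.
    """
    if not 0.0 < omega < omega_c:
        return 0.0
    return 2.0 * math.pi * alpha * omega_c ** (1.0 - s) * omega ** s

# Ohmic case s = 1: J is linear in omega
assert abs(bath_spectral_density(0.5, s=1.0) - 2.0 * math.pi * 0.1 * 0.5) < 1e-12
```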
In the following, we are interested in sub-ohmic damping, $0<s<1$ [@spohn; @KM]. The standard approach is to integrate out the bath, leading to an effective interaction $$\begin{aligned}
{\cal S}_{\rm int} = \int d\tau d\tau' \sigma_z(\tau) g(\tau-\tau') \sigma_z(\tau')\end{aligned}$$ with $g(\tau) \propto 1/\tau^{1+s}$ at long times. Numerical renormalization group (NRG) calculations in Refs. , performed directly for the sub-ohmic spin-boson model, have established that a second-order quantum transition occurs for all $0<s<1$. Here we use an analytical renormalization group (RG) expansion, controlled by the small parameter $s$, to establish that the spin-boson transition at small $s$ is governed by an interacting fixed point with strong hyperscaling properties. This analytical result is supported by NRG calculations. In contrast, the transition in the classical Ising model is known to display mean-field behavior for $0<s<1/2$ [@fisher; @luijten].
[*Scaling and critical exponents.*]{} A scaling ansatz for the impurity part of the free energy takes the form $$F_{\rm imp} = T f(|\alpha-\alpha_c| T^{-1/\nu}, \epsilon T^{-y_\epsilon} )
\label{fscal}$$ where $|\alpha-\alpha_c|$ measures the distance to criticality. The bias $\epsilon$ takes the role of a local field (with scaling exponent $y_\epsilon$); and $\nu$ is the correlation length exponent which describes the vanishing of the energy scale $T^\ast$, above which critical behavior is observed [@book]: $T^\ast \propto |\alpha-\alpha_c|^{\nu}$. The ansatz (\[fscal\]) assumes the fixed point to be interacting; for a Gaussian fixed point the scaling function will also depend upon dangerously irrelevant variables.
With the local magnetization $M_{\rm loc} = \langle\sigma_z\rangle = -\partial F_{\rm imp}/\partial\epsilon$ and the susceptibility $\chi_{\rm loc} = -\partial^2 F_{\rm imp}/(\partial\epsilon)^2$ we can define critical exponents (see also Ref. ): $$\begin{aligned}
M_{\text{loc}}(\alpha > \alpha_c,T=0,\epsilon=0)
&\propto& (\alpha-\alpha_c)^{\beta}, \nonumber\\
\chi_{\text{loc}}(\alpha < \alpha_c,T=0) &\propto& (\alpha_c-\alpha)^{-\gamma},
\nonumber\\[-1.75ex]
\label{exponents} \\[-1.75ex]
M_{\text{loc}}(\alpha=\alpha_c,T=0) &\propto& | \epsilon |^{1/\delta}, \nonumber\\
\chi_{\text{loc}}(\alpha=\alpha_c,T) &\propto&
T^{-x}, \nonumber \\
\chi_{\text{loc}}''(\alpha=\alpha_c,T=0,\omega) &\propto&
|\omega|^{-y} {\rm sgn}(\omega). \nonumber\end{aligned}$$ The last equation describes the dynamical scaling of $\chi_{\rm loc}$. In the absence of a dangerously irrelevant variable there are only two independent exponents, e.g., $\nu$ and $y_\epsilon$. The scaling form (\[fscal\]) yields hyperscaling relations: $$\begin{aligned}
\beta = \gamma \frac{1-x}{2x},~~
2\beta + \gamma = \nu, ~~
\gamma = \nu x,~~
\delta = \frac{1+x}{1-x} \,.\end{aligned}$$ Hyperscaling also implies $x=y$, which is equivalent to so-called $\omega/T$ scaling in the dynamical behavior.
[*Long-range Ising model.*]{} The classical counterpart of the spin-boson model (\[eq:sbm\]) is the one-dimensional Ising model [@Leggett; @Weiss] $${\cal H}_{\rm cl} = - \sum_{\langle ij \rangle} J_{ij} S_i^z S_j^z + {\cal H}_{\rm SR}
\label{hcl}$$ with interaction $J_{ij} = J/|i-j|^{1+s}$. ${\cal H}_{\rm SR}$ contains an additional generic short-range interaction which arises from the transverse field, but is believed to
---
abstract: 'We propose StartNet to address Online Detection of Action Start (ODAS) where action starts and their associated categories are detected in untrimmed, streaming videos. Previous methods aim to localize action starts by learning feature representations that can directly separate the start point from its preceding background. It is challenging due to the subtle appearance difference near the action starts and the lack of training data. Instead, StartNet decomposes ODAS into two stages: action classification (using ClsNet) and start point localization (using LocNet). ClsNet focuses on per-frame labeling and predicts action score distributions online. Based on the predicted action scores of the past and current frames, LocNet conducts class-agnostic start detection by optimizing long-term localization rewards using policy gradient methods. The proposed framework is validated on two large-scale datasets, THUMOS’14 and ActivityNet. The experimental results show that StartNet significantly outperforms the state-of-the-art by $15\%$-$30\%$ p-mAP under the offset tolerance of $1$-$10$ seconds on THUMOS’14, and achieves comparable performance on ActivityNet with $\times 10$ smaller time offset.'
author:
- |
Mingfei Gao$^1$[^1] Mingze Xu$^2$ Larry S. Davis$^1$ Richard Socher$^3$ Caiming Xiong$^3$[^2]\
$^1$University of Maryland $^2$Indiana University $^3$Salesforce Research\
[{mgao,lsd}@umiacs.umd.edu, mx6@indiana.edu, {rsocher,cxiong}@salesforce.com]{}
bibliography:
- 'egbib.bib'
title: 'StartNet: Online Detection of Action Start in Untrimmed Videos'
---
[^1]: Work done when the author was an intern at Salesforce Research.
[^2]: Corresponding author.
---
abstract: |
Of the C$_{3}$H$_{x}$ hydrocarbons, propane (C${_3}$H$_{8}$) and propyne (methylacetylene, CH$_{3}$C$_{2}$H) were first detected in Titan’s atmosphere during the Voyager 1 flyby in 1980. Propene (propylene, C$_{3}$H$_{6}$) was first detected in 2013 with data from the Composite InfraRed Spectrometer (CIRS) instrument on Cassini. We present the first measured abundance profiles of propene on Titan from radiative transfer modeling, and compare our measurements to predictions derived from several photochemical models. Near the equator, propene is observed to have a peak abundance of 10 ppbv at a pressure of 0.2 mbar. Several photochemical models predict the amount at this pressure to be in the range 0.3 - 1 ppbv and also show a local minimum near 0.2 mbar which we do not see in our measurements. We also see that propene follows a different latitudinal trend than the other C$_{3}$ molecules. While propane and propyne concentrate near the winter pole, transported via a global convective cell, propene is most abundant above the equator. We retrieve vertical abundances profiles between 125 km and 375 km for these gases for latitude averages between 60$^{\circ}$S to 20$^{\circ}$S, 20$^{\circ}$S to 20$^{\circ}$N, and 20$^{\circ}$N to 60$^{\circ}$N over two time periods, 2004 through 2009 representing Titan’s atmosphere before the 2009 equinox, and 2012 through 2015 representing time after the equinox.
Additionally, using newly corrected line data, we determined an updated upper limit for allene (propadiene, CH$_{2}$CCH$_{2}$, the isomer of propyne). We claim a 3-$\sigma$ upper limit mixing ratio of 2.5$\times$10$^{-9}$ within 30$^\circ$ of the equator. The measurements we present will further constrain photochemical models by refining reaction rates and the transport of these gases throughout Titan’s atmosphere.
address:
- 'Planetary Systems Laboratory, Solar System Exploration Division, NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD, USA'
- 'Center for Space Science and Technology, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD, USA'
- 'Department of Astronomy, University of Maryland College Park, College Park, MD, USA'
- 'Laboratoire Interuniversitaire des Systémes Atmosphériques, Université Paris-Est, Creteil, France'
- 'Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA'
- 'Atmospheric, Oceanic and Planetary Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK'
author:
- Nicholas A Lombardo
- Conor A Nixon
- Richard K Achterberg
- Antoine Jolly
- Keeyoon Sung
- Patrick G J Irwin
- F Michael Flasar
bibliography:
- 'c3.bib'
title: 'Spatial and Seasonal Variations in C$_{3}$H$_{x}$ Hydrocarbon Abundance in Titan’s Stratosphere from Cassini CIRS Observations'
---
Introduction
============
Titan, the largest moon of Saturn, has a CH$_{4}$ surface mixing ratio of about 5%, measured by the Huygens GCMS [@niemann:2010], decreasing with altitude into the stratosphere, where it remains constant at 1-1.5%, as measured in [@lellouch:2014]. Titan is thought to have many similarities to the Archean Earth, including an atmosphere abundant in N$_{2}$ and significant quantities of CH$_{4}$, as well as global haze layers, which continually shroud Titan and may have occurred intermittently on Earth. While factors like temperature, sources of atmospheric CH$_{4}$, and minor atmospheric constituents vary between the two bodies, Titan remains a good analog for studying the atmosphere of the Archean Earth [@arney:2016; @izon:2017].
The global haze on Titan is produced through photolysis of CH$_{4}$ as Saturn magnetospheric electrons and solar UV photons bombard the upper atmosphere. The products of this process (highly reactive CH$_{3}^{-}$, H$^{+}$, and N$^{+}$ ions, among others) may then react to form C$_{2}$H$_{6}$, C$_{2}$H$_{4}$, and other molecules. As this complex process continues, larger hydrocarbons (C$_{x}$H$_{y}$) and nitriles (C$_{x}$H$_{y}$(CN)$_{z}$) react further to give rise to the ’photochemical zoo’ of molecules present in Titan’s atmosphere [@yung:1984; @wilson:2004; @lavvas:2008; @loison:2015; @dobrijevic:2016; @willacy:2016].
Titan’s 26.7$^{\circ}$ obliquity (the axial tilt relative to the normal of the orbital plane), comparable to the Earth’s 23.5$^{\circ}$ obliquity, causes variations in the insolation of the moon over the course of a Titan year (about 29.5 Earth years). The resulting seasonal variations in the physical state of the atmosphere include molecule abundance [@vinatier:2015; @coustenis:2018], temperature [@achterberg:2011; @teanby:2017], and behavior of the haze layers [@jennings:2012], discussed more in the review by [@horst:17]. Noteworthy is the existence of a global circulation cell, which transports warm gases in the summer hemisphere towards the winter pole, where they subside lower into the stratosphere. This downward advection causes adiabatic warming in the winter stratosphere and entrains short-lived gases produced in the upper stratosphere, increasing their abundance lower in the atmosphere. As northern winter evolved to northern spring, this single circulation cell transformed into two circulation cells, upwelling near the equator and downwelling at both poles, as predicted in [@hourdin:2004] and observed in [@teanby:2012]. For additional explanation of Titan’s atmospheric dynamics and chemistry, the reader is directed to [@titan:2010] and [@titan:2014].
Regarding the C$_{3}$ hydrocarbons, propane (C$_{3}$H$_{8}$) and propyne (C$_{3}$H$_{4}$) were initially detected in Titan’s atmosphere after the 1980 Voyager 1 flyby [@hanel:1981] through spectra acquired by the IRIS instrument. Abundances for propyne were first estimated by [@maguire:1981] by comparing the strength of the 633 cm$^{-1}$ Q-branch of propyne to the 721 cm$^{-1}$ Q-branch of acetylene (also ethyne, C$_{2}$H$_{2}$), and estimated to be on the order of 3$\times$10$^{-8}$. Propane was modeled in the same paper using a synthetic spectrum constructed for its $\nu_{21}$ band, and a disk averaged value of 2$\times$10$^{-5}$ was reported. These values were updated by [@coustenis:1989] to 4.4$^{+1.7}_{-2.1}\times$10$^{-9}$ for propyne and (7$\pm$4)$\times$10$^{-7}$ for propane. Further weak bands of propane were detected by the Composite InfraRed Spectrometer (CIRS) aboard Cassini [@nixon:propane]. Over three decades later, CIRS spectra were used to make the first detection of C$_{3}$H$_{6}$ [@nixon:propene], however an exact abundance could not be retrieved from modeling the spectra due to the lack of a spectral line list, although an abundance estimate was made by comparing the intensities of propene and propane lines, discussed more in Section 4.2.
Recent analyses have shown the abundance of propyne to vary strongly with season and latitude. [@vinatier:2015], using limb viewing observations, showed the vertical gradient of C$_{3}$H$_{4}$ increases dramatically over the mid northern latitudes as northern winter moves into northern spring and the polar vortex responds to the changing amount of sunlight. [@coustenis:2018], using nadir observations to probe abundance in a narrow altitude range in the middle stratosphere, show a similar trend at latitudes closer to the pole, between 60$^{\circ}$ and 90$^{\circ}$ either side of the equator. In the same studies, propane was shown to have a more constant abundance in latitude and time, remaining constant within error bars near 1$\times$10$^{-6}$ throughout the stratosphere, with the exception near the winter pole, where it increases with altitude.
Two C$_{3}$ hydrocarbons have yet to be firmly detected on Titan, allene (CH$_{2}$CCH$_{2}$, isomer of propyne) and cyclopropane (CH$_{2}$CH$_{2}$CH$_{2}$, isomer of propene). There was a tentative detection of allene by [@roe:2011], however an accurate line list was not available at the time of the study, thus the authors were not able to model the potential allene feature and confirm its detection. In this paper, we discuss members of the C$_{3}$H$_{x}$ series known to be present in Titan’s atmosphere- propane (C$_{3}$H$_{8}$), propene (C$_{3}$H$_{6}$), and propyne
---
abstract: 'The magnetic nature of Cs$_{2}$AgF$_{4}$, an isoelectronic and isostructural analogue of La$_2$CuO$_4$, is analyzed using density functional calculations. The ground state is found to be ferromagnetic and nearly half metallic. We find strong hybridization of Ag-$d$ and F-$p$ states. Substantial moments reside on the F atoms, which is unusual for the halides and reflects the chemistry of the Ag(II) ions in this compound. This provides the mechanism for ferromagnetism, which we find to be itinerant in character, a result of a Stoner instability enhanced by Hund’s coupling on the F.'
author:
- 'Deepa Kasinathan,$^1$ A. B. Kyker,$^1$ and D. J. Singh$^2$'
title: 'Origin of ferromagnetism in Cs$_2$AgF$_4$: importance of Ag - F covalency'
---
Cs$_2$AgF$_4$ is a member of a family of Ag(II) fluorides that form in perovskite and layered perovskite structures. The distinguishing feature is the presence of Ag(II), which is a powerful oxidizing agent. [@hoppe1; @hoffmann] This compound was first synthesized in 1974 by Odenthal and co-workers. [@hoppe] It occurs in the tetragonal K$_{2}$NiF$_{4}$ layered perovskite structure. This is the same structure as the parent of the high temperature superconducting cuprates, La$_2$CuO$_4$. Cs$_2$AgF$_4$ shows no tilts or rotations of the octahedra, which are common in oxide layered perovskites. Synthesis of isostructural Na$_2$AgF$_4$ and K$_2$AgF$_4$ was also reported and these compounds also have the K$_{2}$NiF$_{4}$ structure. All three compounds are reported as being blue or purple in appearance and ferromagnetic. While transport measurements have not been reported for these compounds, it is known that the related distorted perovskite compound KAgF$_3$ is metallic at high temperatures, and then has a metal insulator transition coincident with an antiferromagnetic ordering temperature. [@g-kagf3]
In the doped high-T$_c$ cuprates, superconductivity develops from a paramagnetic metallic phase, with Fermi surfaces coming from hybridized Cu $d$ - O $p$ bands. These are formally antibonding bands of $d_{x^2-y^2}$ - $p_\sigma$ character. [@pickett] While the theory of high temperature cuprate superconductivity remains to be established, it is widely held that the phenomenon is associated with the physics of the undoped compounds, which are antiferromagnetic Mott insulators. Specifically, it is thought that there is a relationship between superconductivity and the antiferromagnetic fluctuations associated with the correlated $d$ electrons of cuprates. Cs$_2$AgF$_4$ has interesting similarities to the high-T$_c$ cuprates. As mentioned, it is isostructural, featuring AgF$_2$ sheets in place of CuO$_2$ sheets, it has a transition element with a $d^9$ configuration, and it is magnetic. Moreover, related compounds have been shown both in band structure calculations and X-ray photoelectron spectroscopy experiments to display significant Ag - F covalency, reminiscent of the Cu - O hybridization in the cuprates. [@hoffmann; @jaron; @g2] These similarities and other considerations have led to speculations about possible high temperature superconductivity in Ag(II) fluorides. [@hoffmann; @g3] One puzzling difference between the cuprates and the layered Ag(II) fluorides is that the undoped cuprates are antiferromagnetic, while the argentates are ferromagnetic. One possible explanation would be an orbital ordering that favors ferromagnetism within a superexchange framework, as was recently suggested. However, neutron measurements did not detect the symmetry lowering that would occur in this case. [@mclain]
Here we use density functional calculations to elucidate the electronic structure of Cs$_2$AgF$_4$ and the origin of its magnetic properties. A previous density functional calculation for this material found it to be a covalent metal,[@hoffmann] with a substantial density of states (DOS) at the Fermi level (E$_{F}$) in the absence of magnetism.
We did electronic structure calculations within the local spin density approximation (LSDA) and the generalized gradient approximation (GGA), [@pw; @pbe] using the general potential linearized augmented planewave method, with local orbitals, [@lapw; @lo] as implemented in the WIEN2K program. [@wien] The augmented planewave plus local orbital extension was used for the Ag $d$ and semicore levels. [@apw] The valence states were treated in a scalar relativistic approximation, while the core states were treated relativistically. Well converged basis set sizes and Brillouin zone samplings were employed. Except as noted otherwise, the LAPW sphere radii were 2.0 $a_0$ and 1.85 $a_0$ for the metal and fluorine atoms, respectively. The basis set cut-off was chosen to be $RK_{max}$=7.0, where $R$ is the radius of the F sphere. We tested the convergence by comparison of LSDA results with an independent code, employing the LAPW augmentation with local orbitals and with higher basis set cut-offs as well as different sphere radii.
The structural data were obtained from the report[@hoppe] of Odenthal and co-workers: $a$ = 4.58Å, $c$ = 14.19Å, including the two internal parameters corresponding to the Cs and apical F heights above the AgF$_2$ square planar sheets. Minimization of the forces in the LDA approximation yielded a value of z$_{\rm Cs}$=0.361 and z$_{\rm F}$=0.147, in close agreement with the experimental values of z$_{\rm Cs}$=0.36 and z$_{\rm F}$=0.15.
Within the LSDA we find Cs$_2$AgF$_4$ to be a metal on the borderline of ferromagnetism. Fixed spin moment calculations showed a non-spin-polarized ground state, but with a 1 $\mu_B$ per formula unit fully polarized solution only 35 meV higher in energy. We also did LSDA calculations applying fields only inside the Ag LAPW spheres, which were chosen to be 2.1 $a_0$ in radius for this purpose. With 5 mRy fields of this type in a ferromagnetic pattern, moments of 0.35 $\mu_B$ were induced in the Ag spheres, and moments also appeared in the F spheres, for a total spin magnetization of 0.62 $\mu_B$. Application of the same field in an in-plane $c$(2x2) antiferromagnetic pattern yielded induced Ag moments in the spheres of only 0.17 $\mu_B$, with a small moment also appearing on the apical F, but no moments on the in-plane F, as is required by symmetry. This shows the system to be much closer to ferromagnetism than antiferromagnetism at the LSDA level, and suggests an important role for the in-plane F in the magnetism.
Within the GGA, we obtain a ferromagnetic ground state, with spin magnetization $M=0.9 \mu_B$ and energy 6 meV below the non-spin polarized solution. However, we do not find any metastable antiferromagnetic solution, implying itinerant magnetism, in particular, the absence of stable local moments. The calculated electronic density of states (DOS) for the ferromagnetic ground state is shown in Fig. \[dos\]. The band structure is shown in Fig. \[bands\], and the Fermi surface in Fig. \[fermi\]. The band structure is expected to be two dimensional, due to the bonding topology, which has 180$^\circ$ Ag-F-Ag links in the AgF$_2$ sheets, but no direct Ag-F-Ag connections in the $c$-axis direction. This in fact is the case. [@disp-note] As may be seen, Cs$_2$AgF$_4$ is close to a half metal, with the Fermi energy being near a band edge in the majority channel, but not in the minority channel. The minority spin Fermi surface consists of small hole cylinders running along the zone corner (from the $d_{x^2-y^2}$ band) and electron cylinders around the zone center (from the $d_{z^2}$ band). The majority spin Fermi surface consists of a single large square cylindrical electron surface that almost fills the Brillouin zone, leaving a small region of holes around the zone boundary.
Cs$_{2}$AgF$_{4}$ has two types of F sites forming distorted Ag centered octahedra; one is in the AgF$_2$ sheets (referred to as F1 in this paper), and the other is the apical F along the $c$ - axis (referred to as F2 in this paper). The apical Ag - F2 distance is slightly smaller than the in-plane Ag - F1 distance. A key point is that the F1 atoms bridge the Ag atoms in the sheets, with 180$^\circ$ bonds, while the apical, F2 atoms connect to only one Ag atom and therefore are not bridging.
Examining the DOS and projections in more detail, one may note that the valence bands have substantially
---
abstract: 'A wide range of stochastic processes that model the growth and decline of populations exhibit a curious dichotomy: with certainty either the population goes extinct or its size tends to infinity. There is an elegant and classical theorem that explains why this dichotomy must hold under certain assumptions concerning the process. In this note, I explore how these assumptions might be relaxed further in order to obtain the same, or a similar, conclusion, and obtain both positive and negative results.'
address: 'Biomathematics Research Centre, University of Canterbury, Christchurch, New Zealand'
author:
- Mike Steel
title: 'Reflections on the extinction–explosion dichotomy'
---
Keywords: Extinction, Borel–Cantelli lemma, population size, coupling, Markov chain
Introduction
============
The ‘merciless dichotomy’ (Section 5.2 of [@had]) concerning extinction refers to a very general property of stochastic processes that describes the long-term fate of populations. Roughly speaking, the result states that if there is always a strictly positive chance the population could become extinct in the future (depending, perhaps, on the current population size), then the population is guaranteed to either become extinct or to grow unboundedly large. More precisely, a formal version of this result, due to Jagers (Theorem 2 of [@jag]), applies to any sequence $X_1, X_2, \ldots, X_n \ldots $ of non-negative real-valued random variables that are defined on some probability space and which is absorbing at 0 (i.e. $X_n=0 \Rightarrow X_{n+1}=0$ for all $n$). It states that, if $$\label{strong}
{{\mathbb P}}(\exists r: X_r=0|X_1, X_2, \ldots, X_n) \geq \delta_x>0 \mbox{ whenever $X_n \leq x$}$$ holds for all positive integers $n$, then, with probability 1, either $X_n \rightarrow \infty$ or a value of $n$ exists for which $X_k=0$ for all $k\geq n$ (notice that $\delta_x$ can tend towards 0 at any rate as $x$ grows). This result applies to a wide variety of stochastic processes studied in evolutionary and population biology (e.g. Yule birth-death models, branching processes etc) and the proof in [@jag] involves an elegant and short application of the martingale convergence theorem.
Note that the processes in [@jag] (and here) need not be Markovian. Nevertheless, the lower-bound inequality condition in (\[strong\]) has a Markovian-like feature in that it is required to hold for all values of $X_1, X_2, \ldots, X_{n-1}$ whenever $X_n$ is at most $x$. This raises the question of how much this uniform bounding across the previous history of the process might be relaxed without sacrificing the conclusion of certain extinction or explosion. In this short note, we consider possible extensions of Jagers’ theorem by weakening the assumption in (\[strong\]). Specifically, we will consider a lower bound that conditions just on the event that $0<X_n \leq x$, either alone or alongside another variable that is dependent on (but less complete than) the past history $X_1, \ldots, X_{n-1}$.
First, we consider what happens if the probability in the lower bound (\[strong\]) were to condition just on $0<X_n \leq x$. In this case, we describe a positive result that delivers a slightly weaker conclusion than the original theorem of Jagers. We then show that the full conclusion cannot be obtained by lower bounds that condition solely on $0<X_n\leq x$ by exhibiting a specific counterexample. However, in the final section, we show that the full conclusion of Jagers’ theorem can be obtained by conditioning on $0<X_n\leq x$, together with some partial information concerning the past history of the process.
A simple general lemma and its consequence for bounded populations
==================================================================
We first present an elementary but general limit result, stated within the usual notation of a probability space $(\Omega, \Sigma, {{\mathbb P}})$ consisting of a sigma-algebra $\Sigma$ of ‘events’ (subsets of the sample space $\Omega$) and a probability measure ${{\mathbb P}}$ (for background on probability theory, see [@borel]).
Suppose that $E_1, E_2,\ldots $ are [*increasing*]{} (i.e. $E_i \subseteq E_{i+1}$) and $E = \bigcup_{n=1}^{\infty}E_n$. For example, suppose that $E_n$ is the event that some particular ‘situation’ (e.g. extinction of the population) has arisen on or before a given time step $n$ (e.g. day, year). These events are increasing and their union $E$ is the event that the ‘situation’ eventually arises. We are interested in when ${{\mathbb P}}(E)=1$. A sufficient condition to guarantee this is to impose any non-zero lower bound on the probability that the ‘situation’ arises at time step $n$ given that it has not done so already; in other words, to require that the conditional probability ${{\mathbb P}}(E_n|\overline{E_{n-1}})$ is at least $\delta >0$ for all sufficiently large values of $n$ (throughout this paper an overline denotes the complementary event).
On the other hand, it is equally easy to check that if $p_n={{\mathbb P}}(E_n|\overline{E_{n-1}})$ is allowed to converge to zero sufficiently quickly (so the probability of the ‘situation’ first arising on day $n$ goes to zero sufficiently fast that $\sum_n p_n < \infty$), then it is possible for ${{\mathbb P}}(E)<1$. For example, if accidents occur independently and the probability of a particular accident is reduced each year by $1\%$ of its current value, then there is a positive probability that no accident will ever occur; but if the probability reduces at the rate $1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \cdots,$ then an accident is guaranteed to eventually occur (by the second Borel–Cantelli lemma).
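The two regimes in this example can be checked numerically. The sketch below is our own illustration, not part of the original argument: it assumes an initial accident probability of 0.05 for the geometric case, and starts the non-summable rate at $1/2, 1/3, \ldots$ to avoid the degenerate first term $p_1 = 1$.

```python
import math

def prob_no_event_ever(p, N=100_000):
    """P(no event in the first N steps) for independent events,
    where p(n) is the probability of the event at step n."""
    log_q = sum(math.log1p(-p(n)) for n in range(1, N + 1))
    return math.exp(log_q)

# Summable case: probability cut by 1% of its current value each
# year, so sum p_n < infinity and P(no accident ever) > 0.
geometric = prob_no_event_ever(lambda n: 0.05 * 0.99 ** (n - 1))

# Non-summable case: rate 1/2, 1/3, 1/4, ... (harmonic tail), so
# by the second Borel-Cantelli lemma an accident is certain; here
# the partial products equal 1/(N+1) and tend to 0.
harmonic = prob_no_event_ever(lambda n: 1.0 / (n + 1))

print(geometric > 0.001)  # True: bounded away from 0
print(harmonic < 1e-4)    # True: vanishes as N grows
```

The harmonic product telescopes exactly to $1/(N+1)$, which makes the contrast with the strictly positive geometric limit easy to see.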
Rather than placing some lower bound on the probability that the situation arises at time step $n$, we can, following [@jag], make a weaker assumption that if the situation has not happened yet, there is always a non-vanishing chance that it will occur some time in the future (formally, requiring merely that ${{\mathbb P}}(E|\overline{E_n})$ is uniformly bounded away from $0$). For maximal generality, we also wish to avoid any Markovian or independence assumptions. The following lemma provides a sufficient condition for ${{\mathbb P}}(E)=1$ without any further assumptions, and uses an elementary argument that will be useful later.
\[mike\] Suppose $E_n$ is an increasing sequence with limit $E$ and suppose that for some $\epsilon > 0$, ${{\mathbb P}}(E|\overline{E_n}) \geq \epsilon$ holds for all $n \geq 1$. Then ${{\mathbb P}}(E)=1$.
[*Proof:*]{} Let $p_n = {{\mathbb P}}(E_n)$. Then, by the law of total probability:
${{\mathbb P}}(E) = {{\mathbb P}}(E|\overline{E_n})(1 - p_n) + {{\mathbb P}}(E|E_n)p_n$.
Now, ${{\mathbb P}}(E|E_n) = 1$ and, by assumption, ${{\mathbb P}}(E|\overline{E_n}) \geq \epsilon$. Therefore:
${{\mathbb P}}(E) \geq \epsilon(1 - p_n) + p_n$.
Since the events $E_n$ are increasing, a well known and elementary result in probability theory ensures that ${{\mathbb P}}(E) = \lim_{n \to \infty} p_n$. So, letting $n \to \infty$ in the previous inequality gives:
${{\mathbb P}}(E) \geq \epsilon (1 - {{\mathbb P}}(E)) + {{\mathbb P}}(E)$,
which implies that ${{\mathbb P}}(E) = 1$, as claimed. $\Box$
Example 1
---------
Consider a population of a species where $X_n$ denotes the size of the population at time step $n$. The event $E_n = \{X_n=0\}$ is the event that the population is extinct by time step $n$ and this increasing sequence has the limit $E$ equal to the event of eventual extinction. In this setting, Lemma \[mike\] provides the following special case of Jagers’ theorem.
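As a quick numerical illustration of this setting (our own sketch, with an assumed toy model rather than anything from the text): a lazy $\pm 1$ random walk on $\{0,\dots,M\}$, absorbing at $0$ and capped at $M$, satisfies the bounded-population hypotheses, and in simulation essentially every run goes extinct.

```python
import random

def extinction_frequency(M=10, steps=2000, trials=2000, seed=0):
    """Fraction of runs in which a population capped at M and
    absorbing at 0 hits 0 within `steps` time steps."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        x = M // 2
        for _ in range(steps):
            # lazy +-1 step, absorbing at 0, capped at M
            x = max(0, min(M, x + rng.choice([-1, 0, 1])))
            if x == 0:
                extinct += 1
                break
    return extinct / trials

print(extinction_frequency() > 0.99)  # True: extinction is (almost) certain
```

The expected absorption time here is of order $M^2$, far smaller than the step budget, so the observed extinction frequency is effectively 1.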
\[coro1\] Suppose that $X_1, X_2, \ldots, X_n$ is a sequence of non-negative real-valued random variables that are absorbing at 0 and are constrained to lie between $0$ and $M$. Moreover, suppose that for some $\delta>0$ and all positive integers $n$ we have: ${{\mathbb P}}(\exists r: X_r=0|X_n \neq 0) \geq \delta.$ Then, with probability 1, a value $n$ exists for which $X_
---
address:
- 'Laboratoire de Mathématiques de l’Université Paris-Sud'
- 'AgroParisTech/UMR INRA MIA 518'
author:
- 'A. Bonnet'
- 'E. Gassiat'
- 'C. Lévy-Leduc'
bibliography:
- 'biblio\_anna.bib'
title: Heritability estimation in high dimensional linear mixed models
---
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank Edouard Maurel-Segala and Maxime Février for stimulating discussions on random matrix theory and Thomas Bourgeron and Roberto Toro for having led us to study this very interesting subject and for the discussions that we had together on genetic topics.
---
author:
- |
Kevin <span style="font-variant:small-caps;">Ebinger</span>$^{1}$, Sanjana <span style="font-variant:small-caps;">Sinha</span>$^{2}$, Carla <span style="font-variant:small-caps;">Fröhlich</span>$^{2}$, Albino <span style="font-variant:small-caps;">Perego</span>$^{3}$, Matthias <span style="font-variant:small-caps;">Hempel</span>$^{1}$,\
Marius <span style="font-variant:small-caps;">Eichler</span>$^{2}$, Jordi <span style="font-variant:small-caps;">Casanova</span>$^{2}$, Matthias <span style="font-variant:small-caps;">Liebendörfer</span>$^{1}$, and Friedrich-Karl <span style="font-variant:small-caps;">Thielemann</span>$^{1}$\
\
$^{1}$ Department für Physik, Universität Basel, CH-4056 Basel, Switzerland\
$^{2}$Department of Physics, North Carolina State University, Raleigh, NC, 29695-8202, USA\
$^{3}$Institut für Kernphysik, Technische Universität Darmstadt, D-64289 Darmstadt, Germany
date: 'August 20, 2016'
title: 'Explosion Dynamics of Parametrized Spherically Symmetric Core-Collapse Supernova Simulations'
---
Introduction
============
Core-collapse supernovae (CCSNe) occur at the end of the evolution of massive stars and the ejecta of these violent events contribute to the chemical evolution of the universe. The explosion mechanism of CCSNe is still not fully understood and self-consistent one-dimensional simulations of CCSNe, including general relativity and detailed neutrino transport, do not lead to explosions, with the exception of the lowest-mass CCSN progenitors[@lowmass]. Even though multi-dimensional simulations are more likely to produce explosions and are well suited to investigate the explosion mechanism, they are computationally too expensive to explore a large set of progenitors. The presented parametrized one-dimensional framework (PUSH, introduced in[@push]) is well suited to study explosive nucleosynthesis and remnant properties of a broad range of CCSN progenitors and thereby also to gain a better understanding of the phenomenon itself. Spherically symmetric simulations show a smaller heating efficiency of electron neutrinos behind the shock due to an absence of convective motion. PUSH provides extra energy deposition in the heating region by tapping the energy of $\mu$- and $\tau$- (anti)-neutrinos in otherwise consistent spherically symmetric simulations to mimic multi-dimensional effects (e.g., convection, SASI) that enhance neutrino heating. This enables a consistent evolution of the PNS and treatment of the electron fraction of the ejecta. Furthermore, after the onset of explosion the method also prevents too strong a decrease in $\nu$-heating behind the shock due to a drop in electron (anti)neutrino luminosity that occurs in 1D simulations due to the drastic reduction of the mass accretion rate onto the PNS (see Figure \[lum\]). Figure \[radii\] shows the temporal evolution of the shock, gain and PNS radius with and without PUSH of a CCSN simulation of a 20 M$_{\odot}$ star.
![Temporal evolution of the shock radius, the PNS radius and the gain radius of a 20 M$_{\odot}$ progenitor (WH07 [@prog1]).[]{data-label="radii"}](neutrino_analysis.eps){width="1.05\linewidth"}
![Temporal evolution of the shock radius, the PNS radius and the gain radius of a 20 M$_{\odot}$ progenitor (WH07 [@prog1]).[]{data-label="radii"}](rpns.eps){width="\linewidth"}
Comparison with multi-dimensional simulations
=============================================
Overall PUSH shows a behaviour more consistent with multi-dimensional models than older methods (e.g. pistons and thermal bombs [@piston],[@thiel96]). Figures \[pushentropy\] and \[flashentropy\] show the spherically averaged entropy per baryon as a function of radius obtained from a 2D Flash simulation (see [@kcpan] and references therein) and from a 1D simulation with PUSH for the same progenitor and electron (anti)neutrino transport [@lieb1]. The comparison of the two figures shows on average a similar heating pattern. Such a comparison can be used as a further fit requirement (besides explosion energy and nucleosynthesis yields [@push], see also proceeding of S. Sinha, this volume) for the free parameters of the PUSH method.
![Spherically averaged entropy per baryon as a function of radius obtained from a 2D Flash simulation of a 20 M$_{\odot}$ progenitor.[]{data-label="flashentropy"}](entropyset.eps){width="\linewidth"}
![Spherically averaged entropy per baryon as a function of radius obtained from a 2D Flash simulation of a 20 M$_{\odot}$ progenitor.[]{data-label="flashentropy"}](fig_s20_LS220_radial_averaged_entr_v1.eps){width="\linewidth"}
Progenitor and Equation of State Dependence of Black Hole Formation
===================================================================
To disentangle aspects - other than the explosion mechanism - that have an influence on black hole formation we investigate the effect that different choices of the equation of state and of the progenitor profiles can have in our 1D simulations.
![Black hole formation times for a collection of different progenitor ZAMS masses from two different progenitor sets (WH07[@prog1] in red and WHW02[@prog3] in blue).[]{data-label="fig:overviewcollapse"}](w02rhoc40.eps){width="\linewidth"}
![Black hole formation times for a collection of different progenitor ZAMS masses from two different progenitor sets (WH07[@prog1] in red and WHW02[@prog3] in blue).[]{data-label="fig:overviewcollapse"}](tdot.eps){width="\linewidth"}
Figure \[fig:detailcollapse\] shows the temporal evolution of the central density of a 40 M$_{\odot}$ solar metallicity star for two progenitor models (WH07[@prog1] in red and WHW02[@prog3] in blue) and two equations of state (HS(DD2) solid lines, SFHO dashed lines, [@hempel],[@fischer],[@sfho]). The dependence of the black hole formation time on the equation of state (indicated by the colored areas) and the even stronger dependence on the progenitor model for this progenitor ZAMS mass (difference between red and blue lines) is evident. Baryonic PNS masses at collapse are given next to the corresponding central density curves. In Figure \[fig:overviewcollapse\] the black hole formation times for a set of different progenitor ZAMS masses are given. The differences for black hole formation time between the progenitors can be related to different accretion rates, which are correlated to compactness $\xi_{M}=\frac{M/M_{\odot}}{R(M)/1000km}$.
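For concreteness, the compactness parameter can be evaluated directly from its definition; the mass and radius values below are illustrative choices of ours, not data from the simulations.

```python
def compactness(m_msun, r_km):
    """xi_M = (M / M_sun) / (R(M) / 1000 km), where R(M) is the
    radius enclosing the mass coordinate M."""
    return m_msun / (r_km / 1000.0)

# e.g. a mass coordinate of 2.5 solar masses enclosed within
# 2000 km (made-up values) gives xi = 1.25
print(compactness(2.5, 2000.0))  # 1.25
```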
Conclusions and Outlook
=======================
In comparison to traditional effective methods, such as pistons or thermal bombs, PUSH is better suited to study explosive nucleosynthesis, especially of the innermost ejecta, due to the inclusion of more neutrino physics and the preservation of charged current reactions. We have shown that the entropy profiles obtained with PUSH are similar to the spherical averages of multi-dimensional models and demonstrated the strong effect that the choice of progenitor and equation of state can have on black hole formation and thus on a study of explodability. It is planned to investigate the explodability of different progenitor sets with different equations of state with PUSH in the future.
[9]{} A. Perego, M. Hempel, C. Fröhlich, K. Ebinger, M. Eichler, J. Casanova, M. Liebendörfer, F.-K. Thielemann, Astrophys. J. **806**, 275 (2015) T. Fischer, S. C. Whitehouse, A. Mezzacappa, F.-K. Thielemann, M. Liebendörfer, A&A **517**, A80 (2010) S. E. Woosley & A. Heger, Phys. Rep. **442**, 269 (2007) S. E. Woosley & T. A. Weaver, Astrophys. J. S. **101**, 181 (1995) F.-K. Thielemann, K. Nomoto, M. A. Hashimoto, Astrophys. J. **460**, 408 (1996) K.-C. Pan et al., Astrophys. J. **817**, 72 (2016) M. Liebendörfer, S. C. Whitehouse, T. Fischer, Astrophys. J. **698**, 1174 (2009) S. E. Woosley, A. Heger, T. A. Weaver, Rev. Mod. Phys. **74**, 1015 (2002) M. Hempel & J. Schaffner-Bielich, Nuc. Phys. A **837**, 210 (2010) T. Fischer, M. Hempel, I. Sagert, Y. Suwa, J. Schaffner-Bielich, Eur. Phys. J. A. **50**, 46 (2014) A. W. Steiner, M. Hempel, T. Fischer,
---
abstract: 'We clarify certain important issues relevant for the geometric interpretation of a large class of $N = 2$ superconformal theories. By fully exploiting the phase structure of these theories (discovered in earlier works) we are able to clearly identify their geometric content. One application is to present a simple and natural resolution to the question of what constitutes the mirror of a rigid Calabi-Yau manifold. We also discuss some other models with unusual phase diagrams that highlight some subtle features regarding the geometric content of conformal theories.'
author:
- |
Paul S. Aspinwall and Brian R. Greene\
F.R. Newman Lab. of Nuclear Studies,\
Cornell University,\
Ithaca, NY 14853\
title: |
On the Geometric Interpretation of\
$N$ = 2 Superconformal Theories\
---
Introduction and Summary {#s:intro}
========================
One of the most intriguing problems in string theory is to understand how space-time emerges naturally. Since the vacuum configuration for a critical string is given by a conformal field theory a question which arises in this context is the following. Given a conformal field theory, can one construct some corresponding geometrical interpretation? In this paper we will discuss this question for particularly troublesome conformal field theories. It is worthwhile to emphasize at the outset that in general when a conformal theory does have a geometrical interpretation it may not be unique. A perusal of even simple systems such as conformal theories with central charge $c = 1$ makes this clear. For instance, in this moduli space it is known that a string on the group manifold $SU(2)$ is equivalent to a string on a circle of radius $\sqrt{\alpha^\prime}$. Both target spaces have an equal right to be declared [*the*]{} geometrical interpretation of the conformal field theory. Similarly a circle of radius $R$ is equivalent to a circle of radius $\alpha^\prime/R$. Mirror symmetry, in which strings propagating on distinct Calabi-Yau spaces give identical physical models, is another substantial arena in which geometrical interpretations are not unique. These ambiguities are a reflection of the rich structure of quantum geometry; they arise because of the extended nature of the string.
When there are multiple geometric interpretations of a given model, there is no reason why one should be forced to choose between the possibilities. Rather, one can exploit the geometric ambiguity as some interesting physical questions are more easily answered from one interpretation rather than another.
In this paper we shall focus our investigation into the geometric content of certain $N = 2$ conformal theories using the framework established in [@W:phase; @AGM:I; @AGM:II]. This approach has the virtue of giving us a physical and mathematical understanding of [*global*]{} properties of the moduli space of these theories as well as of the theories themselves. It also gives us the proper arena for understanding the global implications of mirror symmetry. We will apply this approach to study some theories whose geometrical content has been quite puzzling. For some of these theories, previous papers have proposed possible geometrical interpretations [@Drk:Z; @Schg:gen; @Set:sup]. We will see that when phrased in the language of [@W:phase; @AGM:I; @AGM:II], the previous puzzles are seen to disappear and the geometric status of these theories becomes apparent. Following our remarks above, there need not be one unique interpretation of a given model; however, we do feel that the approach provided here is especially enlightening and economical. We will also see that the less natural constructions of [@Drk:Z; @Schg:gen; @Set:sup] can give misleading results for properties of the corresponding physical model.
We now recall some important background material which will naturally lead us to a summary of the problems we address and the solutions we offer.
Our understanding of the geometric content of $N = 2, c = 3d$ superconformal theories has undergone impressive growth and revision over the last few years. The initial picture which emerged from numerous studies is schematically given in figure \[fig:1\]a. We have an abstract $N = 2, c = 3d$ conformal field theory moduli space that is geometrically interpretable in terms of complex structure and Kähler structure deformations of an associated Calabi-Yau manifold of $d$ complex dimensions and a fixed topological type. The space of Kähler forms naturally exists as a bounded domain (the complexification of the “Kähler cone”) which we denote as a cube. The moduli space of complex structures does not have this form and is more usually compactified to form a compact space. Observables in each of the conformal theories in the moduli space are related to geometrical constructs on the corresponding Calabi-Yau space, the latter being taken as the target space of a nonlinear sigma model.
This picture was extended to that given in figure \[fig:1\]b after the discovery of mirror symmetry. Two Calabi-Yau spaces $X$ and $Y$ constitute a mirror pair if they yield isomorphic conformal theories when taken as the target space for a two-dimensional supersymmetric nonlinear sigma model, with the explicit isomorphism being a change in sign of the left moving $U(1)$ charges of all fields. Geometrically this implies that the Hodge numbers $h^{1,1}(X)$ and $h^{d-1,1}(X)$ are related to those of $Y$ by $h^{1,1}(X) = h^{d-1,1}(Y)$ and $h^{d-1,1}(X) = h^{1,1}(Y)$. Since the cohomology groups $H^{1,1}$ and $H^{d-1,1}$ correspond to Kähler and complex structure deformations, respectively, we see that the underlying conformal field theory moduli space has the two geometrical interpretations given in the figure. This immediately led to a problem since, as mentioned above, the geometric form of the moduli spaces of Kähler forms and complex structures appeared to be quite different.
This was resolved by the works of [@W:phase; @AGM:I; @AGM:II] to that shown in figure \[fig:1\]c. Here we see that the appropriate interpretation of the conformal field theory moduli space has required that the Kähler moduli space of $X$ be replaced by its “enlarged Kähler moduli space” (and similarly for $Y$). The latter contains numerous regions in addition to the Kähler cone of the topological manifold $X$. For instance, it typically contains regions corresponding to the Kähler cones of Calabi-Yau spaces related to $X$ by the birational operation of flopping a rational curve, regions corresponding to the moduli space of singular blow-downs of $X$ and its birational partners, and regions interpretable in terms of the parameter space of (gauged or ungauged) Landau-Ginzburg models fibered over various compact spaces. The complex structure moduli space can also be equipped with a phase structure [@AGM:sd] — as must happen to preserve mirror symmetry. We note that from the sigma model point of view the phase regions in the complex structure moduli space have a less pronounced physical interpretation. This is because in analyzing the sigma model we use perturbation theory in Kähler modes (which fix the size of the Calabi-Yau) and hence this approximation method is not mirror symmetric. However, the phase structure in the complex structure moduli space of $X$ [*is*]{} the phase structure in the enlarged Kähler moduli space of $Y$ and it is the latter interpretation where this phase structure is most manifest. For the purposes of this paper we may ignore the phase structure in the complex structure part of the moduli space and for this reason we have put parentheses around this in \[fig:1\]c.
The results of the present paper all stem directly from a careful study of the phase diagrams of figure \[fig:1\]c. We shall review the quantitative construction of these phase spaces in section \[s:ph\]; for now we will content ourselves with the schematic description given and summarize our results with a similar level of informality.
There are numerous ways of constructing $N = 2$ superconformal theories with $c = 3d$. Some constructions, such as the Calabi-Yau sigma models described above, are manifestly geometric in character. Other constructions do not begin with a geometrical target space and hence their geometrical content, if any, can only be assessed after more detailed study. More generally and pragmatically, given an abstract conformal field theory in some presentation, how do we determine if it has a geometrical interpretation? We will not seek to answer this question in generality, but rather will focus attention on those theories for which we can construct the phase diagram illustrated in figure \[fig:1\]c. For theories of this sort, as we shall review, toric geometry supplies us with a geometric description of each theory. We hasten to emphasize, though, that Calabi-Yau sigma models are but one kind of corresponding geometry. We will see
---
abstract: 'Mean-reverting assets are one of the holy grails of financial markets: if such assets existed, they would provide trivially profitable investment strategies for any investor able to trade them, thanks to the knowledge that such assets oscillate predictably around their long term mean. The modus operandi of cointegration-based trading strategies [@tsay2005analysis §8] is first to create a portfolio of assets whose aggregate value mean-reverts, and then to exploit that knowledge by selling short or buying that portfolio when its value deviates from its long-term mean. Such portfolios are typically selected using tools from cointegration theory [@granger; @johansen], whose aim is to detect combinations of assets that are stationary, and therefore mean-reverting. We argue in this work that focusing on stationarity only may not suffice to ensure profitability of cointegration-based strategies. While it might be possible to create synthetically, using a large array of financial assets, a portfolio whose aggregate value is stationary and therefore mean-reverting, trading such a large portfolio incurs significant trading or borrowing costs in practice. Looking for stationary portfolios formed by many assets may also result in portfolios that have a very small volatility and which require significant leverage to be profitable. We study in this work algorithmic approaches that can mitigate these effects by searching for maximally mean-reverting portfolios which are sufficiently sparse and/or volatile.'
author:
- |
Marco Cuturi\
Graduate School of Informatics\
Kyoto University\
`mcuturi@i.kyoto-u.ac.jp`\
\
Alexandre d’Aspremont\
D.I., UMR CNRS 8548\
Ecole Normale Supérieure, `aspremon@ens.fr`
title: |
Mean-Reverting Portfolios:\
Tradeoffs Between Sparsity and Volatility
---
Introduction
============
Mean-reverting assets, namely assets whose price oscillates predictably around a long term mean, provide investors with an ideal investment opportunity. Because of their tendency to pull back to a given price level, a naive contrarian strategy of buying the asset when its price lies below that mean, or selling short the asset when it lies above that mean can be profitable. Unsurprisingly, assets that exhibit significant mean-reversion are very hard to find in efficient markets. Whenever mean-reversion is observed in a single asset, it is almost always impossible to profit from it: the asset may typically have very low volatility, be illiquid, hard to short-sell, or its mean-reversion may occur at a time-scale (months, years) for which the borrow-cost of holding or shorting the asset may well exceed any profit expected from such a contrarian strategy.
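To make the free-lunch intuition concrete, the following simulation (our own illustration, with arbitrarily chosen parameters, not a model from this paper) drives a discrete Ornstein-Uhlenbeck price around a known mean and applies the naive contrarian rule described above:

```python
import random

def simulate_ou(n=5000, mu=100.0, theta=0.05, sigma=0.5, seed=0):
    """Discrete Ornstein-Uhlenbeck path: mean-reverting around mu
    with reversion speed theta and noise scale sigma."""
    rng = random.Random(seed)
    p, path = mu, []
    for _ in range(n):
        p += theta * (mu - p) + sigma * rng.gauss(0.0, 1.0)
        path.append(p)
    return path

def contrarian_pnl(path, mu=100.0):
    """Hold +1 unit below the known mean, -1 unit above it."""
    pnl = 0.0
    for prev, cur in zip(path, path[1:]):
        position = 1.0 if prev < mu else -1.0
        pnl += position * (cur - prev)
    return pnl

path = simulate_ou()
print(contrarian_pnl(path) > 0)  # True: each step has favorable drift
```

In practice, of course, transaction costs, leverage limits and the scarcity of genuinely mean-reverting assets remove this easy profit, which is precisely the point made in this section.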
### Synthetic Mean-Reverting Baskets
Since mean-reverting assets rarely appear in liquid markets, investors have focused instead on creating synthetic assets that can mimic the properties of a single mean-reverting asset, and trading such synthetic assets as if they were a single asset. Such a synthetic asset is typically designed by combining long and short positions in various liquid assets to form a *mean-reverting portfolio*, whose aggregate value exhibits significant mean-reversion.
Constructing such synthetic portfolios is, however, challenging. Whereas simple descriptive statistics and unit-root test procedures can be used to test whether a single asset is mean-reverting, building mean-reverting portfolios requires finding a proper vector of algebraic weights (long and short positions) that describes a portfolio whose aggregate value is mean-reverting. In that sense, mean-reverting portfolios are made by the investor, and cannot simply be chosen among tradable assets. A mean-reverting portfolio is characterized both by the pool of assets the investor has selected (starting with the dimension of the vector), and by the fixed nominal quantities (or weights) of each of these assets in the portfolio, which the investor also needs to set. When only two assets are considered, such baskets are usually known as long-short trading pairs. We consider in this paper baskets composed of more than two assets.
### Mean-Reverting Baskets with Sufficient Volatility and Sparsity
A mean-reverting portfolio must exhibit sufficient mean-reversion to ensure that a contrarian strategy is profitable. To meet this requirement, investors have relied on cointegration theory [@granger; @maddala1998urc; @johansen2005cointegration] to estimate linear combinations of assets which exhibit stationarity (and therefore mean-reversion) using historical data. We argue in this work, as we did in earlier references [@alex; @cuturi2013mean], that mean-reverting strategies cannot, however, only rely on this approach to be profitable. Arbitrage opportunities can only exist if they are large enough to be traded without using too much leverage or incurring too many transaction costs. For mean-reverting baskets, this condition translates naturally into a first requirement that the gap between the basket valuation and its long term mean is large enough on average, namely that the basket price has sufficient variance or volatility. A second desirable property is that mean-reverting portfolios require trading as few assets as possible to minimize costs, namely that the weights vector of that portfolio is sparse. We propose in this work methods that maximize a proxy for mean reversion, and which can take into account at the same time constraints on variance and sparsity.\
\
We propose first in Section \[s:crit\] three proxies for mean reversion. Section \[s:opt\] defines the basket optimization problems corresponding to these quantities. We show in Section \[s:sdp\] that each of these problems translates naturally into a semidefinite relaxation which produces either exact or approximate solutions using sparse PCA techniques. Finally, we present numerical evidence in Section \[s:numres\] that taking into account sparsity and volatility can significantly boost the performance of mean-reverting trading strategies in trading environments where trading costs are not negligible.
Proxies for Mean-Reversion {#s:crit}
==========================
Isolating stable linear combinations of variables of multivariate time series is a fundamental problem in econometrics. A classical formulation of the problem reads as follows: given a vector valued process $x=(x_t)_t$ taking values in $\RR^n$ and indexed by time $t\in\NN$, and making no assumptions on the stationarity of each individual component of $x$, can we estimate one or many directions $y\in\RR^n$ such that the univariate process $(y^Tx_t)$ is stationary? When such a vector $y$ exists, the process $x$ is said to be cointegrated. The goal of cointegration techniques is to detect and estimate such directions $y$. Granted that such techniques can efficiently isolate sparse mean-reverting baskets, their financial application can be either straightforward, using simple event triggers to buy, sell or simply hold the basket [@tsay2005analysis §8.6], or more elaborate, using optimal trading strategies derived under the assumption that the mean-reverting basket value follows an Ornstein-Uhlenbeck process, as discussed in [@jurek; @liu2010optimal; @elie:hal-00573429].
Related Work and Problem Setting
--------------------------------
@granger provided in their seminal work a first approach to compare two non-stationary univariate time series $(x_t,y_t)$, and test for the existence of a term $\alpha$ such that $y_t-\alpha x_t$ becomes stationary. Following this work, several techniques have been proposed to generalize that idea to multivariate time series. As detailed in the survey by @maddala1998urc [§5], cointegration techniques differ in the modeling assumptions they require on the time series themselves. Some are designed to identify only one cointegrated relationship, whereas others are designed to detect many or all of them. Among these references, @johansen proposed a popular approach that builds upon a VAR model, as surveyed in [@johansen2005cointegration; @johansen2009cointegration]. These approaches all discuss issues that are relevant to econometrics, such as de-trending and seasonal adjustments. Some of them focus more specifically on testing procedures designed to check whether such cointegrated relationships exist or not, rather than on the robustness of the estimation of that relationship itself. We follow in this work a simpler approach proposed by @alex, which is to trade off interpretability, testing and modeling assumptions for a simpler optimization framework which can be tailored to include other aspects than stationarity alone. @alex did so by adding regularizers to the predictability criterion proposed by @box1977cam. We follow in this paper the approach we proposed in [@cuturi2013mean] to design mean-reversion proxies that do not rely on any modeling assumption.
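As an illustration of the predictability-based approach mentioned above, the following numpy sketch estimates the least predictable (hence most mean-reverting) direction of a simulated cointegrated pair in the spirit of the Box-Tiao criterion; the simulated process and all variable names are our own assumptions, not code from the references:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
trend = np.cumsum(rng.normal(size=T))            # shared non-stationary trend
x1 = trend + rng.normal(scale=0.5, size=T)       # two cointegrated assets
x2 = 0.8 * trend + rng.normal(scale=0.5, size=T)
X = np.column_stack([x1, x2])
Xc = X - X.mean(axis=0)

A0 = Xc[:-1].T @ Xc[:-1] / (T - 1)               # sample lag-0 covariance
A1 = Xc[:-1].T @ Xc[1:] / (T - 1)                # sample lag-1 autocovariance

# predictability of y^T x_t is (y^T A1 A0^{-1} A1^T y) / (y^T A0 y);
# minimizing it over y is a generalized eigenvalue problem
M = A1 @ np.linalg.solve(A0, A1.T)
lam, V = np.linalg.eig(np.linalg.solve(A0, M))
y = np.real(V[:, np.argmin(np.real(lam))])       # most mean-reverting weights
basket = Xc @ y                                  # aggregate basket value
```

The eigenvector attached to the smallest predictability eigenvalue recovers, up to scaling, the spread that cancels the common trend: the basket value is close to white noise while each individual asset behaves like a random walk.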
Throughout this paper, we write $\symm_n$ for the cone of $n\times n$ positive definite matrices. We consider in the following a multivariate stochastic process $x=(x_t)_{t\in\NN}$ taking values in $\RR^n$. We write $\Acal_k= \Expect[x_t x_{t+k}^T], k\geq 0$ for the lag-$k$ autocovariance matrix of $x_t$ if it is finite. Using a sample path $\bx$ of $(x_t)$, where $\bx=(\bx_1,\ldots,\bx_T)$ and each $\bx_
---
abstract: 'A measurement of the top quark pair production cross section in proton anti-proton collisions at an interaction energy of $\sqrt{s}=1.96~{\rm TeV}$ is presented. This analysis uses 405 pb$^{-1}$ of data collected with the DØ detector at the Fermilab Tevatron Collider. Fully hadronic $t\bar{t}$ decays with final states of six or more jets are separated from the multijet background using secondary vertex tagging and a neural network. The $t\bar{t}$ cross section is measured as $\sigma_{t\bar{t}}=4.5_{-1.9}^{+2.0}({\rm stat}) _{-1.1}^{+1.4}({\rm syst}) \pm 0.3 ({\rm lumi})~{\rm pb }$ for a top quark mass of $m_{t} = 175~{\rm GeV/c^2}$.'
date: 'December 13, 2006'
title: 'Measurement of the $p\bar{p} \to t\bar{t}$ production cross section at $\sqrt{s}=1.96$ TeV in the fully hadronic decay channel '
---
The standard model (SM) predicts that the top quark decays primarily into a $W$ boson and a $b$ quark. The measurement presented here tests the prediction of the SM in the dominant decay mode of the $t\bar{t}$ system: when both $W$ bosons decay to quarks, the so-called fully hadronic decay channel. This topology occurs in 46% of $t\bar{t}$ events. The theoretical signature for fully hadronic $t\bar{t}$ events is six or more jets originating from the hadronization of the six quarks. Of the six jets, two originate from $b$ quark decays. Fully hadronic $t\bar{t}$ events are difficult to identify at hadron colliders because the background rate is many orders of magnitude larger than that of the $t\bar{t}$ signal.
We report a measurement of the production cross section of top quark pairs, $\sigma_{t\bar{t}}$, in the fully hadronic channel using data collected with the DØ detector. The analysis exploits the long lifetime of $b$ hadrons to identify $b$ jets. To increase the sensitivity to $t\bar{t}$ events, we used a neural network to distinguish the signal from the overwhelming background of multijet production through quantum chromodynamics (QCD) processes.
The DØ detector [@d0det] has a central tracking system consisting of a silicon micro strip tracker (SMT) and a central fiber tracker (CFT), both located within a 2 T superconducting solenoidal magnet, with designs optimized for tracking and vertexing at pseudorapidities $|\eta|<3$ and $|\eta|<2.5$, respectively. Rapidity $y$ and pseudorapidity $\eta$ are defined as functions of the polar angle $\theta$ and parameter $\beta$ as $y(\theta,\beta)= \frac{1}{2} \ln [ (1+\beta \cos \theta)/(1-\beta \cos \theta )]$ and $\eta(\theta)=y(\theta,1)$, where $\beta$ is the ratio of the particle’s momentum to its energy. The liquid-argon and uranium calorimeter has a central section (CC) covering pseudorapidities $|\eta|$ up to $\approx 1.1$ and two end calorimeters (EC) that extend coverage to $|\eta| \approx 4.2$, with all three housed in separate cryostats. Each calorimeter cryostat contains a multilayer electromagnetic calorimeter, a finely segmented hadronic calorimeter and a third hadronic calorimeter that is more coarsely segmented, providing both segmentation in depth and in projective towers of size $0.1 \times 0.1$ in $\eta$-$\phi$ space, where $\phi$ is the azimuthal angle in radians. An outer muon system, covering $|\eta|<2$, consists of a layer of tracking detectors and scintillation trigger counters in front of 1.8 T iron toroids, followed by two similar layers after the toroids. The luminosity is measured using plastic scintillator arrays placed in front of the EC cryostats.
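For massless particles ($\beta = 1$) the rapidity definition above reduces to the familiar form $\eta = -\ln \tan(\theta/2)$; a quick numerical check of this identity (a sketch of ours, not DØ software):

```python
import math

def rapidity(theta, beta):
    """y(theta, beta) = (1/2) ln[(1 + beta cos(theta)) / (1 - beta cos(theta))]."""
    c = beta * math.cos(theta)
    return 0.5 * math.log((1 + c) / (1 - c))

def pseudorapidity(theta):
    """eta(theta) = y(theta, 1), i.e. -ln tan(theta/2)."""
    return -math.log(math.tan(theta / 2))

# eta vanishes at 90 degrees and agrees with y(theta, 1) at every polar angle
for theta in (0.3, 0.7, math.pi / 2, 2.5):
    assert abs(rapidity(theta, 1.0) - pseudorapidity(theta)) < 1e-9
```

The identity follows from $\cos\theta = (1-t^2)/(1+t^2)$ with $t = \tan(\theta/2)$, so that $(1+\cos\theta)/(1-\cos\theta) = 1/t^2$.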
The data set was collected between 2002 and 2004, and corresponds to an integrated luminosity $\mathcal{L}=405~\pm~25~{\rm pb}^{-1}$ [@newlumi]. To isolate events with six jets, we used a dedicated multijet trigger. The requirements on the trigger, particularly on jet and trigger tower energy thresholds, were tightened during the collection of the data set to manage the increasing instantaneous luminosities delivered by the Fermilab Tevatron Collider. The change in trigger requirements had little effect on the efficiency for signal, while removing an increasing number of background events [@footnote1]. The trigger was tuned for the fully hadronic $t\bar{t}$ channel and was optimized to remain as efficient as possible while using limited bandwidth. The collection rate after all trigger levels was fixed to a few Hz and was completely dominated by QCD multijet events, as the hadronic $t\bar{t}$ event production rate is expected to be a few events per day. We required three or four trigger towers above an energy threshold of 5 GeV at the first trigger level, three reconstructed jets with transverse energies ($E_T$) above 8 GeV at the second trigger level, combined with a requirement on the sum of the transverse momenta ($p_{T}$) of the jets, and four or five reconstructed jets at transverse energy thresholds between 10 and 30 GeV at the highest trigger level [@d0det].
We simulated $t\bar{t}$ production using [alpgen 1.3]{} to generate the parton-level processes, and [pythia 6.2]{} to model hadronization [@alpgen; @pythia]. We used a top quark invariant mass of $m_{t}=175~{\rm GeV/c^2}$. The decay of hadrons carrying bottom quarks was modeled using [evtgen]{} [@evtgen]. The simulated $t\bar{t}$ events were processed with the full [geant]{}-based DØ detector simulation, after which the Monte Carlo (MC) events were passed through the same reconstruction program as was used for data. The small differences between the MC model and the data were corrected by matching the properties of the reconstructed objects. The residual differences were very small and were corrected using factors derived from detailed comparisons between the MC model and the data for well understood SM processes such as the jets in $Z$ boson and QCD dijet production.
In the offline analysis, jets were defined with an iterative cone algorithm [@jetsdef]. Before the jet algorithm was applied, calorimeter noise was suppressed by removing isolated cells whose measured energy was lower than four standard deviations above cell pedestal. In the case that a cell above this threshold was found to be adjacent to one with an energy less than four standard deviations above pedestal, the latter was retained if its signal exceeded 2.5 standard deviations above pedestal. Cells that were reconstructed with negative energies were always removed.
The elements for cone jet reconstruction consisted of projective towers of calorimeter cells. First, seeds were defined using a preclustering algorithm, using calorimeter towers above an energy threshold of 0.5 GeV. The cone jet reconstruction, an iterative clustering process where the jet axis was required to match the axis of a projective cone, was then run using all preclusters above 1.0 GeV as seeds. As jets from $t\bar{t}$ production are relatively narrow due to their high jet $p_{T}$, the jets were defined using a cone with radius $R_{{\rm cone}}=0.5$, where $\Delta R = \sqrt{(\Delta y)^2+(\Delta \phi)^2}$. The resulting jets (proto-jets) took into account all energy deposits contained in the jet cone. If two proto-jets were within $1<\Delta R / R_{{\rm cone}} <2$, an additional midpoint clustering was applied, where the combination of the two proto-jets was used as a seed for a possible additional proto-jet. At this stage, the proto-jets that shared transverse momentum were examined with a splitting and merging algorithm, after which each calorimeter tower was assigned to at most one proto-jet. The proto-jets were merged if the shared $p_T$ exceeded 50% of the $p_T$ of the proto-jet with the lowest transverse momentum, in which case the towers were added to the most energetic proto-jet and the other candidate was rejected. If the proto-jets shared less than half of their $p_{T}$, the shared towers were assigned to the proto-jet which was closest in $\Delta R$. The collection of stable proto-jets remaining was then referred to as the [*reconstructed*]{} jets in the event. The minimal $p_{T}$ of a reconstructed jet was required to be 8 GeV/$c$ before any energy corrections were applied.
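The splitting and merging step described above can be sketched as follows; this is a simplified illustration with towers represented as `(pT, y, phi)` tuples, and the function and variable names are ours, not DØ reconstruction code:

```python
import math

def delta_r(tower, axis):
    """Distance in (y, phi) space between a tower and a proto-jet axis."""
    dphi = (tower[2] - axis[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(tower[1] - axis[1], dphi)

def pt_sum(towers):
    return sum(t[0] for t in towers)

def split_or_merge(jet_a, jet_b, axis_a, axis_b):
    """Resolve two overlapping proto-jets with the 50% shared-pT rule."""
    shared = [t for t in jet_a if t in jet_b]
    if pt_sum(shared) > 0.5 * min(pt_sum(jet_a), pt_sum(jet_b)):
        # merge: every tower joins the more energetic proto-jet
        hi, lo = (jet_a, jet_b) if pt_sum(jet_a) >= pt_sum(jet_b) else (jet_b, jet_a)
        return [hi + [t for t in lo if t not in hi]]
    # split: each shared tower is assigned to the closer proto-jet axis
    keep_a = [t for t in jet_a
              if t not in shared or delta_r(t, axis_a) <= delta_r(t, axis_b)]
    keep_b = [t for t in jet_b
              if t not in shared or delta_r(t, axis_b) < delta_r(t, axis_a)]
    return [keep_a, keep_b]
```

With a lightly shared tower the two proto-jets survive and the tower goes to the nearer axis; once the shared $p_T$ exceeds half of the softer proto-jet's $p_T$, the two are merged into one jet.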
We removed jets caused by electromagnetic particles and jets resulting from noise in hadronic sections of the calorimeter by requiring that the fraction of the jet energy deposited in the calorimeter ($EMF$) was $0.05 < EMF < 0.95$ and the fraction of energy in the coarse hadronic calorimeter was less than 0.4. Jets formed from clusters of calorimeter cells known to be affected by noise were also rejected. The remaining noise contribution was removed by requiring that the jet also fired the first level trigger.
To correct the calorimeter jet energies back to the level of particle jets, a jet energy
---
abstract: 'We present a complete atlas of the Cygnus Loop supernova remnant in the light of [O III] ($\lambda 5007$), H$\alpha$, and [S II] ($\lambda\lambda 6717, 6731$). We include low-resolution ($25\arcsec$) global maps and smaller fields at $6\arcsec$ resolution from observations using the Prime Focus Corrector on the 0.8-m telescope at McDonald Observatory. Despite its shell-like appearance, the Cygnus Loop is not a current example of a Sedov-Taylor blast wave. Rather, the optical emission traces interactions of the supernova blast wave with clumps of gas. The surrounding interstellar medium forms the walls of a cavity through which the blast wave now propagates, including a nearly complete shell in which non-radiative filaments are detected. We identify non-radiative shocks around half the perimeter of the Cygnus Loop, and they trace a circle of radius $R = 1\fdg 4$ (19 pc) in the spherical cavity walls. The Cygnus Loop blast wave is not breaking out of a dense cloud, but is instead running into confining walls. Modification of the shock velocity and gas temperature due to interaction of the blast wave with the surrounding medium introduces errors in estimates of the age of this supernova remnant. The optical emission of radiative shocks arises only where the blast wave encounters inhomogeneities in the ambient medium; it is not a consequence of gradual evolution to a global radiative phase. Distance measurements that rely on this uniform blast wave evolution are uncertain, but the radiative shocks can be used as distance indicators because of the spherical symmetry of the surrounding medium. The interstellar medium dominates not only the appearance of the Cygnus Loop but also the continued evolution of the blast wave. If this is a typical example of a supernova remnant, then global models of the interstellar medium must account for such significant blast wave deceleration.'
author:
- 'N. A. Levenson and James R. Graham'
- 'Luke D. Keller and Matthew J. Richter'
nocite:
- '[@Oort46]'
- '[@Fes82]'
- '[@Min58]'
- '[@Kir76]'
- '[@Gre91]'
- '[@Shu91]'
- '[@Fes82]'
- '[@McC79]'
- '[@Cha85]'
- '[@McK75]'
- '[@Lev97]'
- '[@Hes83]'
- '[@Hes94]'
- '[@Fes92]'
- '[@Fes92]'
- '[@Hes94]'
- '[@Hes94]'
- '[@Bla91]'
- '[@Mor96e]'
- '[@Cha85]'
- '[@Lev97]'
- '[@McK84]'
- '[@Fal82]'
- '[@Shu91]'
- '[@Lev97]'
- '[@PShu85]'
- '[@Van92]'
- '[@Vin97]'
- '[@Lev97]'
- '[@Min58]'
- '[@Hub37]'
- '[@Hes86]'
- '[@Che80]'
- '[@McK77]'
- '[@Shu87]'
title: Panoramic Views of the Cygnus Loop
---
Introduction\[secintro\]
========================
Supernova remnants largely determine the large-scale structure of the interstellar medium. The energy of supernova remnants heats and ionizes the interstellar medium (ISM), and their blast waves govern mass exchange between its various phases. In doing so, supernova remnants (SNRs) influence subsequent star formation and the recycling of heavy elements in galaxies. Global models of the interstellar medium that include a hot ionized component ([@Cox74]; [@McK77]) are sensitive to the supernova rate, the persistence of their remnants, and the sizes they attain. A simple calculation of the last of these assumes that the blast wave expands adiabatically in a uniform medium once the blast wave has swept up mass comparable to the mass of the ejecta. During this Sedov-Taylor phase, the radius of the SNR as a function of $E_{51}$, the initial energy in units of $10^{51}$ erg, $n_o$, the ambient number density in units of ${\rm cm^{-3}}$, and $t_4$, time in units of $10^4$ yr, is $R=13 (E_{51}/n_o)^{1/5} t_4^{2/5} {\rm \,pc}$ in a medium where the mean mass per particle is $2.0 \times 10^{-24} {\rm \, g}$. This phase will last until radiative losses become important. The beginning of the subsequent radiative phase, marked by the initial loss of pressure behind the blast wave, occurs at $t=1.9\times 10^4 E_{51}^{3/14} n_o^{-4/7} {\rm \,yr}$, when the radius is $R=16.2 E_{51}^{2/7} n_o^{-3/7} {\rm \,pc}$ ([@Shu87]), although the radiating shell is not yet fully formed at that time.
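The scalings above are easy to package as small helpers (units as in the text: $E_{51}$ in units of $10^{51}$ erg, $n_o$ in ${\rm cm^{-3}}$, $t_4$ in units of $10^4$ yr; the function names are ours):

```python
import math

def sedov_radius_pc(E51, n0, t4):
    """Sedov-Taylor radius R = 13 (E51/n0)^(1/5) t4^(2/5) pc."""
    return 13.0 * (E51 / n0) ** (1 / 5) * t4 ** (2 / 5)

def radiative_onset(E51, n0):
    """Time (yr) and radius (pc) at which radiative losses become important."""
    t_yr = 1.9e4 * E51 ** (3 / 14) * n0 ** (-4 / 7)
    R_pc = 16.2 * E51 ** (2 / 7) * n0 ** (-3 / 7)
    return t_yr, R_pc
```

For the fiducial case $E_{51} = n_o = 1$, radiative losses set in after about $1.9\times 10^4$ yr at a radius of about 16 pc, comparable in scale to the 19 pc cavity-wall radius quoted for the Cygnus Loop in the abstract.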
We approach these large-scale questions with analysis of complete images of a particular supernova remnant, the Cygnus Loop, in three optical emission lines. This supernova remnant appears to be a limb-brightened shell at radio ([@Keen73]), infrared ([@Bra86]), optical ([@Fes82]), and X-ray ([@Ku84]; [@Lev97]) energies, which at first glance suggests that it is presently in the transition to the radiative stage. The Cygnus Loop has the advantages of being nearby, bright, and relatively unobscured by dust. This allows us to examine in detail the evolution of various portions of the shock front and to determine physical parameters, such as shock velocity and local ambient density, as they vary throughout the remnant. Despite its appearance, the Cygnus Loop is not a current example of blast wave propagation in a uniform medium at any stage. Instead, its evolution is governed by the inhomogeneous interstellar medium, which we map using the shock as a probe.
Many of the features we discuss have been noted by others. Oort (1946) first suggested that the Cygnus Loop is an expanding supernova shell. Spectroscopy of radiative shocks in selected locations (e.g., [@Mil74], [@Ray80a], and Fesen et al. 1982) combined with theoretical models of these shocks (e.g., [@Cox72], [@Dop77], [@Ray79], and [@Shu79]) has been used to derive the physical conditions of the observed shocks. We utilize the radial velocity measures of Minkowski (1958), Kirshner & Taylor (1976), Greidanus & Strom (1991), and Shull & Hippelein (1991) to discern some of the three-dimensional structure that is ambiguous from the data we present. Many non-radiative or Balmer-dominated shocks in the Cygnus Loop have been identified (e.g., [@Kir76], [@Ray80b], [@Tref81], Fesen et al. 1982, [@Fes92], and [@Han92]). Our observations qualitatively match these, and we rely on these works and others ([@Ray83]; [@Long92]; [@Hes94]) for quantitative measures of parameters such as shock velocity and preshock density. McCray & Snow (1979) and Charles, Kahn, & McKee (1985) have suggested that the Cygnus Loop is the result of a cavity explosion, and we adapt this global model to interpret the surrounding interstellar medium, as well.
This paper is a companion to the soft X-ray survey presently in progress with the [*ROSAT*]{} High Resolution Imager ([@Gra96]; [@Lev97]). With these two surveys, we examine the Cygnus Loop as a whole, not restricting our investigation only to those regions that are exceptionally bright or that appear to be particularly interesting. We hope to understand both the global processes and the specific variations that are responsible for the emission we detect. The X-rays probe hot (temperature $T\sim 10^6$ K) gas that shocks with velocities $v_s \sim 400 \kms$ heat. The optical emission is expected from slower shocks ($v_s \lesssim 200 \kms$) in which the post-shock region cools to temperatures $T\sim 10^4$ K, yet the most prominent regions at optical wavelengths are also bright in X-rays. McKee & Cowie (1975) suggested that the broad correlation of X-ray and optical emission is the result of a blast wave propagating in an inhomogeneous medium. In this scenario, the shock is significantly decelerated in dense clumps of gas, while portions of it proceed unimpeded through the lower-density intercloud medium. We apply the principles of this basic cloud–blast-wave interaction to a range of locations in the Cygnus Loop. In particular, we refine the cavity model introduced in Levenson et al. (1997), using these optical data to constrain the current ISM in the vicinity of the Cygnus Loop and to determine how the stellar progenitor modified it in the past.
We present the observations in §2. We describe them in detail, noting individual regions of interest, and we use these data to measure the physical conditions of the blast wave and the ambient medium in particular locations in §3
---
abstract: 'Employing two state-of-the-art methods, second-order many-body perturbation theory and multiconfiguration Dirac-Fock, highly accurate calculations are performed for the lowest 318 fine-structure levels arising from the $2s^{2} 2p^{4}$, $2s 2p^{5}$, $2p^{6}$, $2s^{2} 2p^{3} 3l$, $2s 2p^{4} 3l$, $2p^{5} 3l$, and $2s^{2} 2p^{3} 4l$ configurations in O-like Mo. Complete and consistent atomic data, including excitation energies, lifetimes, wavelengths, and E1, E2, M1 line strengths, oscillator strengths, and transition rates among these 318 levels are provided. Comparisons are made between the present two data sets, as well as with other available experimental and theoretical values. The present data are accurate enough for identification and deblending of emission lines involving the $n=3,4$ levels, are useful for modeling and diagnosing fusion plasmas, and can be considered a benchmark for other calculations.'
address:
- 'Hebei Key Lab of Optic-electronic Information and Materials, The College of Physics Science and Technology, Hebei University, Baoding 071002, China'
- 'Shanghai EBIT Lab, Key Laboratory of Nuclear Physics and Ion-beam Application, Institute of Modern Physics, Department of Nuclear Science and Technology, Fudan University, Shanghai 200433, China'
- 'School of Science, Hunan University of Technology, Zhuzhou, 412007, China'
- 'Institute of Applied Physics and Computational Mathematics, Beijing 100088, China'
author:
- Kai Wang
- Wei Zheng
- Xiao Hui Zhao
- Zhan Bin Chen
- Chong Yang Chen
- Jun Yan
bibliography:
- 'ref.bib'
title: 'Extended calculations of energy levels, radiative properties, and lifetimes for oxygen-like '
---
atomic data; O-like Mo; many-body perturbation theory; multiconfiguration Dirac-Fock method
Introduction
============
Walls of fusion reactors often contain alloys of molybdenum, and ions of Mo are present in the plasmas due to sputtering from the walls [@Mansfield.1978.V11.p1521; @Reader.2015.V48.p144001]. Therefore, in order to simulate and diagnose plasmas that contain Mo as a constituent, accurate atomic data for different Mo ions are required. In view of this, soft X-ray emission lines from molybdenum plasmas generated by dual laser pulses were measured for different Mo ions [@Lokasani.2016.V109.p194103]. @Feldman.1991.V8.p531 identified 13 lines of the $(1s^2)2s^22p^4$ - $2s2p^5$ transitions in a laser-produced plasma.
On the theoretical side, excitation energies and radiative transition data for the low-lying states of the $2s^22p^4$, $2s2p^5$, and $2p^6$ configurations were provided by different calculations [@Fontes.2015.V101.p143; @Hu.2011.V9.p1228; @Gu.2005.V89.p267; @Zhang.2002.V82.p357; @Vilkas.1999.V60.p2808]. Atomic parameters of the $n > 2$ levels are also needed for applications in plasma physics [@Rice.2000.V33.p5435; @Kink.2001.V63.p46409].
The present study aims to provide a complete and accurate data set of energy levels, radiative transition data, and lifetimes involving high-lying levels in O-like Mo, as an extension of our previous work on O-like ions [@Wang.2017.V229.p37; @Wang.2017.V194.p108]. Excitation energies, wavelengths, line strengths, oscillator strengths, transition rates, and lifetimes are provided for the lowest 318 levels of the $2s^{2} 2p^{4}$, $2s 2p^{5}$, $2p^{6}$, $2s^{2} 2p^{3} 3l$ ($l=s, p, d$), $2s 2p^{4} 3l$ ($l=s, p, d$), $2p^{5} 3l$ ($l=s, p, d$), and $2s^{2} 2p^{3} 4l$ ($l=s, p, d, f$) configurations using the many-body perturbation theory (MBPT) method [@Lindgren.1974.V7.p2441; @Safronova.1996.V53.p4036; @Vilkas.1999.V60.p2808; @Gu.2005.V156.p105; @Gu.2007.V169.p154]. In order to assess the accuracy of our MBPT calculations, the multiconfiguration Dirac-Fock (MCDF) and relativistic configuration interaction (RCI) method [@Grant.2007.V.p; @FroeseFischer.2016.V49.p182004] is used to calculate the corresponding data. The present study significantly increases the amount of accurate data for the $n = 3, 4$ levels of O-like Mo to directly aid and confirm experimental identifications.
Calculations
============
The MBPT method integrated in the FAC code [@Gu.2008.V86.p675] and the MCDF method implemented in the GRASP2K code [@Jonsson.2007.V177.p597; @Jonsson.2013.V184.p2197] are used to perform the calculations. Both of the methods have been successfully used to calculate atomic parameters for L- and M-shells systems with high accuracy [@Wang.2014.V215.p26; @Wang.2015.V218.p16; @Wang.2016.V223.p3; @Wang.2016.V226.p14; @Wang.2017.V119.p189301; @Wang.2017.V194.p108; @Wang.2017.V187.p375; @Wang.2017.V229.p37; @Wang.2018.V235.p27; @Wang.2018.V239.p30; @Wang.2018.V234.p40; @Wang.2018.V208.p134; @Wang.2018.V864.p127; @Wang.2018.V220.p5; @Chen.2017.V113.p258; @Chen.2018.V206.p213; @Chen.2019.V234.p90; @Chen.2019.V225.p76; @Guo.2015.V48.p144020; @Guo.2016.V93.p12513; @Si.2016.V227.p16; @Si.2017.V189.p249; @Si.2018.V239.p3; @Zhao.2018.V119.p314]. We only give an outline of the MBPT and MCDF calculations, since these two methods are described in our earlier work in detail.
MBPT
----
In the MBPT method, the Hilbert space of the system is divided into two subspaces: a model space $M$ and an orthogonal space $N$. By solving the eigenvalue problem of a non–Hermitian effective Hamiltonian in the space $M$, we can obtain the true eigenvalues of the Dirac–Coulomb–Breit Hamiltonian. The configuration interaction effects in the $M$ space are considered exactly, and the interaction between the spaces $M$ and $N$ is accounted for with many-body perturbation theory up to the second order. In the calculations, we include all states of the $2s^{2} 2p^{4}$, $2s 2p^{5}$, $2p^{6}$, $2s^{2} 2p^{3} 3l$ ($l=s, p, d$), $2s 2p^{4} 3l$ ($l=s, p, d$), $2p^{5} 3l$ ($l=s, p, d$), $2s^{2} 2p^{3} 4l$ ($l=s, p, d, f$), and $2s 2p^{4} 4s$ configurations in the model space $M$. The space $N$ contains the configurations generated through single and double (SD) virtual excitations of the states spanning the $M$ space. The maximum $n$ values for the single/double excitations are 200/65, respectively, while the maximum $l$ value is 20. The leading quantum electrodynamic (QED) effects are also considered in our work.
MCDF
----
The MCDF method has been described by @FroeseFischer.2016.V49.p182004. Based on the active space (AS) approach [@Sturesson.2007.V177.p539] for the generation of the configuration state function (CSF) expansions, separate MCDF calculations are done for the even and odd parity states. We start the calculation without any excitations from the reference configurations, which is usually referred to as the Dirac-Fock (DF) calculation. The reference configurations are .
---
abstract: 'We prove the existence of primitive sets (sets of integers in which no element divides another) in which the gap between any two consecutive terms is substantially smaller than the best known upper bound for the gaps in the sequence of prime numbers. The proof uses the probabilistic method. Using the same techniques we improve the bounds obtained by He for gaps in geometric-progression-free sets.'
address: 'Department of Mathematics, Towson University, 8000 York Road, Towson, MD 21252'
author:
- Nathan McNew
bibliography:
- 'bibliography.bib'
title: 'Primitive and Geometric-Progression-Free Sets without large gaps'
---
Introduction
============
Despite the rich history of research on the gaps in the sequence of prime numbers, including many recent breakthroughs, the magnitudes of the largest gaps in this sequence are still poorly understood. Denoting by $p_1, p_2, \ldots$ the sequence of prime numbers, it has been known since 2001, due to Baker, Harman, and Pintz [@bhp], that $$p_n -p_{n-1} \ll p_n^{0.525}.$$ Assuming the Riemann Hypothesis gives a small improvement: Cramér [@cra] shows $$p_n -p_{n-1} \ll \sqrt{p_n}\log p_n.$$ Cramér [@Cra2] conjectures, however, that the bound $p_n -p_{n-1} \ll \log^2 p_n$ gives the true order of magnitude of the largest gaps. As for lower bounds, it follows immediately from the prime number theorem that there must exist gaps where $p_n -p_{n-1} \geq \log p_n$. This can be improved upon slightly. It has recently been shown by Ford, Green, Konyagin, Maynard and Tao [@fgkmt] that, for some positive constant $c$, the inequality $$p_n-p_{n-1} > \frac{c\log p_n \log \log p_n \log_4p_n}{\log_3 p_n}$$ holds infinitely often, improving on the previous result of Rankin [@rankinprimes], which included an additional triple $\log$ factor in the denominator. Here, and throughout the paper, $\log_i x$ will be used to denote the $i$-fold iterated logarithm when $i\geq 3$. Since $\log \log x$ is commonly used, it will be used for readability when $i=2$.
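These bounds are easy to explore numerically; the following sketch (ours) finds the largest prime gap below $10^5$ and compares it with the conjectured $\log^2 p_n$ scale of Cramér:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if s[i]]

primes = primes_up_to(10 ** 5)
max_gap = max(q - p for p, q in zip(primes, primes[1:]))
cramer_scale = math.log(primes[-1]) ** 2  # conjectured order of the largest gap
```

Below $10^5$ the largest gap is 72 (following the prime 31397), comfortably under $\log^2 p_n \approx 130$ at that scale, consistent with Cramér's conjecture.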
Generalizing from the set of primes, one can consider any primitive set of integers. We say a set is primitive if no integer in the set divides another integer in the set. The study of primitive sets also has a rich history. For example, it is known that primitive sets can have a counting function substantially larger than that of the prime numbers. Ahlswede, Khachatrian, and Sárközy [@AKS] showed there exists a primitive sequence $s_1<s_2<\cdots$ with $$n \asymp \frac{s_n}{\log \log s_n (\log_3 s_n)^{1+\epsilon}}$$ for sufficiently large $n$. Martin and Pomerance [@mp] show that this can be improved slightly; in fact there exists such a sequence where $$n \asymp \frac{s_n}{\log \log s_n \log_3 s_n \cdots \log_k s_n (\log_{k+1} s_n)^{1+\epsilon}}$$ for sufficiently large $n$ and any $k\geq 2$. This is, in a sense, best possible, as Erdős [@erdosprimitive] shows that any primitive sequence $s_1, s_2, \ldots$ must satisfy $$\sum_{n=1}^\infty \frac{1}{s_n \log s_n} < \infty.$$ Compared to the sequence of prime numbers, where the average gap grows like $\log x$, we see from these results that primitive sets can have substantially smaller gaps on average, on the order of $\log \log x \log_3 x \cdots \log_k x (\log_{k+1} x)^{1+\epsilon}$ for any $k\geq 2$. Nevertheless, it has not yet been possible to show that the largest gaps in these sequences are any smaller than what is known for the prime numbers.
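The definition of a primitive set is straightforward to check by brute force for small sets; a minimal sketch (ours):

```python
def is_primitive(s):
    """True if no element of s properly divides another element of s."""
    s = sorted(set(s))
    return not any(b % a == 0 for i, a in enumerate(s) for b in s[i + 1:])

assert is_primitive([2, 3, 5, 7, 11])   # the primes are primitive
assert not is_primitive([3, 5, 15])     # 3 divides 15
assert is_primitive(range(51, 101))     # the integers in (N/2, N] are primitive
```

The last example illustrates why primitive sets can be much denser than the primes: any integer in $(N/2, N]$ would need a multiple of size at least $2 \cdot (N/2) > N$ to violate primitivity.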
We show here that there exist primitive sequences in which the gap between consecutive terms is substantially smaller than has been previously shown for the primes or any other primitive sequence. In particular, we get the following upper bound.
\[thm:primitive\] For any $\epsilon>0$ there exists a primitive sequence $q_1< q_2 < \cdots$ of integers in which the gap between any two consecutive terms is bounded above by $$q_n-q_{n-1} \leq \exp \left(\sqrt{2\log q_n \log \log q_n + (2+\epsilon)\log q_n \log_3 q_n}\right). \label{primitive bound}$$
The proof utilizes the probabilistic method, and so it is not constructive. It generalizes, however, to the related problem of geometric-progression-free sets, where the analogous problem has recently attracted attention.
If $r>1$ is rational (sometimes we insist it be integral), then a geometric progression of length $k$ with ratio $r$ is a progression of integers $(g_1,g_2,\ldots, g_k)$ in which $g_i=rg_{i-1}$. We say a set $S$ avoids geometric progressions of length $k$ if it is not possible to find $k$ integers from $S$ in a geometric progression. Note that primitive sets can be described as sets avoiding geometric progressions of length 2 in which we insist that the ratio $r$ be an integer. For the remainder of the paper we will assume that our geometric progressions have length at least 3 and, unless otherwise stated, are allowed to have rational ratio.
Unlike primitive sets, geometric-progression-free sets can have positive density. In particular, the squarefree numbers avoid geometric progressions and have density $\frac{6}{\pi^2}$, though this density is not best possible. (See [@mcnewgpf; @NO; @Rankin] for results on the maximum density of such a set.)
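The fact that the squarefree numbers avoid three-term geometric progressions (even with rational ratio) can be verified directly, since a triple $a<b<c$ of integers lies in geometric progression exactly when $b^2 = ac$. A small brute-force sketch (illustrative only; the density $6/\pi^2 \approx 0.6079$ is visible already at modest ranges):

```python
from math import isqrt


def squarefree_up_to(n):
    """Squarefree integers in [1, n], by crossing out multiples of squares."""
    ok = [True] * (n + 1)
    for d in range(2, isqrt(n) + 1):
        for m in range(d * d, n + 1, d * d):
            ok[m] = False
    return [m for m in range(1, n + 1) if ok[m]]


def has_3term_gp(s):
    """True if some a < b < c in s satisfies b*b == a*c (rational-ratio GP)."""
    s = sorted(set(s))
    members = set(s)
    for i, a in enumerate(s):
        for c in s[i + 1:]:
            b = isqrt(a * c)
            if b * b == a * c and a < b < c and b in members:
                return True
    return False
```

Here `has_3term_gp([2, 6, 18])` is `True`, while no triple of squarefree numbers forms such a progression: $b^2 = ac$ forces $a$, $b$, and $c$ to share the same radical, which for squarefree numbers forces $a=b=c$.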
Because of this it is not clear, a priori, that there cannot exist such sets in which all of the gaps are bounded above by a fixed constant. In ergodic theory, a set in which every gap is bounded by a constant is known as a *syndetic* set. Beiglböck, Bergelson, Hindman and Strauss [@BBHS] first posed the question of whether there exists a syndetic set that is geometric-progression-free. This problem has become well known as a good example of the difficulty inherent in studying problems that mix the additive and multiplicative structure of the integers, and it remains open.
There has been partial progress toward this question for 2-syndetic sets (sets in which the difference between any two consecutive terms is at most two). He [@He] shows by a computer search that any subset of the range \[1,640\] containing at least one of any pair of consecutive numbers must contain a three-term geometric progression. Recently, Patil [@Patil] showed that any sequence of integers $s_1<s_2<\cdots$ with $s_n-s_{n-1} \leq 2$ must contain infinitely many pairs $\{a,ar^2\}$ with $r$ an integer.
In general, one can avoid geometric progressions of length $k{+}1$ by taking the sequence of $k$-free numbers. Denoting by $s_1<s_2<\cdots$ the sequence of $k$-free numbers, the best known bound on the gaps, due to Trifonov [@trifonov], is that $$s_n-s_{n-1} \ll s_n^{\frac{1}{2k+1}} \log s_n,$$ though this, again, is likely far greater than the truth.
He [@He] considers the existence of geometric-progression-free sets with gaps provably smaller than the bounds for $k$-free numbers. He shows the following.
For each $\epsilon>0$ there exists a sequence $b_1<b_2<\cdots$ avoiding 6-term geometric progressions satisfying $$b_n-b_{n-1} \ll_\epsilon \exp\left( \left(\frac{5\log 2}6 +\epsilon \right) \frac{\log b_n}{\log \log b_n}\right).$$ Furthermore, there exists a sequence $c_1<c_2<\cdots$ avoiding 5-term geometric progressions satisfying $$c_n-c_{n-1} \ll_\epsilon c_n^\epsilon$$ and a sequence $d_1<d_2<\cdots$ that avoids 3-term geometric progressions with integral ratio in which $$d_n-d_{n-1} \ll_\epsilon d_n^\epsilon.$$
The technique developed here allows us to treat 3-term geometric progressions with rational ratio and obtain a substantially smaller bound on the size of the gaps. In particular, we prove the following in Section \[sec:gpf\].
\[thm
---
abstract: |
We investigate the nonlinear current-voltage characteristic of mesoscopic conductors and the current generated through rectification of an alternating external bias. To leading order in applied voltages both the nonlinear and the rectified current are quadratic. This current response can be described in terms of second order conductance coefficients and for a generic mesoscopic conductor they fluctuate randomly from sample to sample. Due to Coulomb interactions the symmetry of transport under magnetic field inversion is broken in a two-terminal setup. Therefore, we consider both the symmetric and antisymmetric nonlinear conductances separately. We treat interactions self-consistently taking into account nearby gates.
The nonlinear current is determined by different combinations of second order conductances depending on the way external voltages are varied away from an equilibrium reference point (bias mode). We discuss the role of the bias mode and circuit asymmetry in recent experiments. In a photovoltaic experiment the alternating perturbations are rectified, and the fluctuations of the nonlinear conductance are shown to decrease with frequency. Their asymptotic behavior strongly depends on the bias mode, and in general the antisymmetric conductance is suppressed more strongly than the symmetric conductance.
We next investigate nonlinear transport and rectification in chaotic rings. To this end we develop a model which combines a chaotic quantum dot and a ballistic arm to enclose an Aharonov-Bohm flux. In the linear two-probe conductance the phase of the Aharonov-Bohm oscillation is pinned, while in nonlinear transport phase rigidity is lost. We discuss the shape of the mesoscopic distribution of the phase and determine the phase fluctuations.
author:
- 'M. L. Polianski'
- 'M. Büttiker'
title: Rectification and nonlinear transport in chaotic dots and rings
---
Introduction {#sec:intro}
============
A large part of modern physics is devoted to nonlinear classical and quantum phenomena in various systems. Such effects as the generation of the second harmonic or optical rectification are known from classical physics, while electron pumping through a small sample due to interference of wave functions is a quantum nonlinear effect. Experiments on nonlinear electrical transport often combine classical and quantum contributions. A macroscopic sample without inversion center [@UFN] exhibits a current-voltage characteristic which, with increasing voltage, departs from linearity due to terms proportional to the square of the applied voltage. If an oscillating (AC) voltage is applied, a zero-frequency current (DC) is generated.
If the sample is sufficiently small, quantum effects can appear due to the wave nature of electrons. The uncontrollable distribution of impurities or small variations in the shape of the sample result in quantum contributions to the DC which are random. For a mesoscopic conductor with terminals $\a ,\b, ... $ we can describe the quadratic current response in terms of second order conductances $\G_{\a\b\g}$. They relate voltages $V_{\b,\oo}$ applied at contacts or neighboring gates $\b$ at frequency $\oo$ to the current at zero frequency at contact $\a$, $$\begin{aligned}
\label{eq:IV}
I_\a &=&\sum_{\b\g} \G_{\a\b\g} |V_{\b,\oo}-V_{\g,\oo}|^2.\end{aligned}$$ The second order conductances include in detail the role of the shape and the nearby conductors (gates). They depend on external parameters like the frequency of the perturbation, temperature, magnetic field or the connection of the sample to the environment.
We concentrate here on the quantum properties of nonlinear conductance through coherent chaotic samples. Chaos could result from the presence of impurities (disorder) or random scattering at the boundaries (ballistic billiard). Due to electronic interference the sign of this effect is generically random even for samples of macroscopically similar shape. [@WW; @AK; @KL] When averaged over an ensemble, the second order conductances vanish. As a consequence, for a fully chaotic sample there is no classical contribution to the DC and the nonlinear response is the result of the sample-specific quantum fluctuations.
Interestingly enough, from a fundamental point of view these fluctuations of nonlinear conductance are sensitive to the presence of Coulomb interactions and magnetic field. While interactions strongly affect the fluctuations’ amplitude, their sign is easily changed by a small variation of magnetic flux $\Phi$, similarly to universal conductance fluctuations (UCF) in linear transport. More importantly, without interactions the current (\[eq:IV\]) through a two-terminal sample is a symmetric function of magnetic field, just like linear conductance. However, the idea that Coulomb interactions are responsible for magnetic-field asymmetry in nonlinear current was recently proposed theoretically [@SB; @SZ] and demonstrated experimentally in different mesoscopic systems. [@wei; @Zumbuhl; @marlow; @ensslin; @Bouchiat; @Bouchiat_preprint] (Various aspects of nonlinear quantum [@PB; @Coulomb; @Tsvelik; @PhysicaE] and classical [@AG] charge and spin transport [@Feldman] have been discussed later on.) It is useful to consider the (anti)symmetric second order conductances $\Ga,\Gs$ defined as $$\begin{aligned}
\label{eq:IVfield}
{\genfrac{\{}{\}}{0pt}{}{{\mathcal G}_{s}(\Phi)}{{\mathcal G}_{a}(\Phi)}} &=
&\frac{h}{\nu_s e^3}\frac{\DD^2}{2\DD \tilde
V^2}\left(\frac{I(\Phi)\pm I(-\Phi)}{2}\right)_{\tilde V\to 0},\end{aligned}$$ where $\tilde V$ is a combination of voltages at the gates and contacts varied in the experiment and $\nu_s$ accounts for the spin degeneracy. We emphasize that, depending on the way voltages are varied, experiments probe different linear combinations of second order conductance elements $\G_{\a\b\g}$ of Eq. (\[eq:IV\]). From now on we will simply call $\Gs ,\Ga$ conductances and if no confusion is possible leave out the expression “second order”.
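The decomposition above lends itself to a numerical toy illustration (a sketch only: the prefactor $h/\nu_s e^3$ is set to one, and the current function below is hypothetical). The quadratic response is extracted with a central second difference in the voltage and then split into parts even and odd under flux reversal:

```python
def conductances_sym_asym(current, phi, dv=1e-4):
    """Estimate (G_s, G_a): the second derivative of I with respect to V
    at V -> 0, symmetrized and antisymmetrized under flux reversal.
    `current` is a function I(phi, V); physical prefactors are omitted."""
    def d2(p):
        # central second difference of I(p, V) at V = 0
        return (current(p, dv) - 2.0 * current(p, 0.0) + current(p, -dv)) / dv ** 2
    g_s = 0.5 * (d2(phi) + d2(-phi)) / 2.0   # even in phi
    g_a = 0.5 * (d2(phi) - d2(-phi)) / 2.0   # odd in phi
    return g_s, g_a
```

For a toy current $I(\Phi,V) = (2 + 0.3\,\Phi)V^2$, the field-asymmetric part of the quadratic response is $0.3\,\Phi$, which the finite-difference estimate recovers.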
In the presence of a DC perturbation the mesoscopic averages of antisymmetric [@SB; @SZ] and symmetric [@PB; @PhysicaE] conductances vanish, and it is their sample-to-sample fluctuations that are measured. Experiments are usually performed on strongly interacting samples, and the magnetic-field components $\Gs,\Ga$ allow one to evaluate the strength of interactions. [@Zumbuhl; @Bouchiat] In previous theoretical works on nonlinear transport through chaotic dots several important issues have been discussed using Random Matrix Theory (RMT). [@SB; @PB; @PhysicaE] Sánchez and Büttiker [@SB] found the fluctuations of $\Ga$ in a dot with arbitrary interaction strength at zero temperature and broken time-reversal symmetry due to magnetic field. Polianski and Büttiker considered the statistics of both $\Ga$ and $\Gs$ for arbitrary flux $\Phi$, temperature $T$, and dephasing rate. [@PB] The fluctuations of the relative asymmetry $\A=\Ga/\Gs$ and the role of the contact asymmetry on this quantity were discussed in Ref. The results of the RMT approach were compared with the experimental data of Zumbühl [@Zumbuhl] and Angers [@Bouchiat].
Previously we considered the statistics of $\Ga,\Gs$ for dots where only one DC voltage was varied. However, to avoid parasitic circuit effects some experiments are performed varying several voltages simultaneously. Surprisingly, the importance of the chosen combination of varied voltages (bias mode) has not been addressed before in the literature. It turns out that an experiment where only one of the voltages is varied [@marlow; @Lofgren; @Bouchiat] and one where two voltages are asymmetrically shifted [@Zumbuhl; @ensslin] measure different combinations of nonlinear conductances $\G_{\a\b\g}$. For example, in a weakly interacting dot in the first mode we found that $\Gs\gg \Ga$, [@PhysicaE] but in the second bias mode the fluctuations of the nonlinear current are strongly reduced, so that $\Gs\sim \Ga$.
It is also important to generalize the previous treatment of the nonlinear current to mesoscopic systems biased by an AC voltage at [*finite*]{} frequency. The resulting DC is sometimes called “photovoltaic current”. We expect that in such mesoscopic AC/DC converters the interactions lead to significant magnetic-field asymmetry in the DC signal. The rectification effect of mesoscopic diffusive metallic microjunctions was theoretically considered by Falko and Khmelnitskii [@FK] assuming that electrons do not interact. Therefore, a magnetic-field asymmetry was not predicted, and it was also not observed in subsequent experiments. [@Bykov; @BykovAB; @Bartolo; @Lin; @Liu] The fact that the interactions induce a magnetic-field asymmetry of the photovoltaic current when the size of the sample is strongly reduced was recently demonstrated in Aharonov-Bohm rings by Angers [@Bouchiat_preprint].
However, it turns out that for an AC perturbation another quantum interference phenomenon, also quadratic in voltage, random in sign and magnetic field-asymmetric, contributes to the DC. Due to [*internal*]{} AC- perturbations of the sample, the energy levels are randomly shifted and a phenomenon commonly referred to as “quantum pumping” [@pump; @SAA] appears. Brouwer demonstrated that two voltages applied out of phase generate pumped current linear in frequency, while a single voltage pumps current quadratic in frequency $\oo$. [@pump] Although theory usually considers small (adiabatic) frequencies, a photovoltaic current could be induced by voltages applied at arbitrary frequency. At small $\
---
author:
- 'Lydie Koch-Miramond'
- Péter Ábrahám
- Yaël Fuchs
- |
\
Jean-Marc Bonnet-Bidaud
- Arnaud Claret
bibliography:
- 'biblio.bib'
date: 'Received 5 June 2002; Accepted 28 June 2002'
title: 'A 2.4 - 12 $\mu$m spectrophotometric study with ISO of CygnusX-3 in quiescence [^1]'
---
Introduction
============
Cygnus X-3 has been known as a binary system since its discovery by @gia67, but there is still debate about the masses of the two stars and the morphology of the system (for a review see @bon88). The distance of the object is 8-12.5 kpc with an absorption on the line of sight A$_V$ $\sim$ 20 mag [@ker96]. The flux modulation at a period of 4.8 hours, first discovered in X-rays [@par72], then at near infrared wavelengths [@bec73], and observed simultaneously at X-ray and near-IR wavelengths by @mas86, is believed to be the orbital period of the binary system. Following infrared spectroscopic measurements [@ker92], where WR-like features have been detected in I and K band spectra, the nature of the mass-donating star is suggested to be a Wolf-Rayet-like star, but an unambiguous classification, similar to the other WR stars, is still lacking. @mit96 and @van98 pointed out that it is not possible to find a model that meets all the observed properties of Cygnus X-3 where the companion star is a normal Population I Wolf-Rayet star with a spherically symmetric stellar wind. In the evolution model originally proposed by @heu73 a final period of the order of 4.8 h may result from a system with initial masses M${_1^0}$=15[[M$_{\odot}$]{}]{}, M${_2^0}$=1[[M$_{\odot}$]{}]{}, P$^0$=5d, the final system being a neutron star accreting at a limited rate of $\sim$ 10$^{-7}$ [[M$_{\odot}$]{}]{}.yr$^{-1}$, from the wind of a core He burning star of about 3.8[[M$_{\odot}$]{}]{}. @van98 proposed that the progenitor of Cygnus X-3 is a 50[[M$_{\odot}$]{}]{}+10[[M$_{\odot}$]{}]{} system with P$^0$=6d; after spiral-in of the black hole into the envelope of the companion, the hydrogen-rich layers are removed, and a 2-2.5 [[M$_{\odot}$]{}]{} Wolf-Rayet like star remains with P=0.2d. A system containing a black hole and an He core burning star is also favored by @erg98.
In addition, Cygnus X-3 undergoes giant radio bursts and there is evidence of jet-like structures moving away from Cygnus X-3 at 0.3-0.9 c [@mio98; @mio01; @mar01].\
The main objective of the Infrared Space Observatory (ISO) spectrophotometric measurements in the 2.4-12 $\mu$m range was to constrain further the nature of the companion star to the compact object: the expected strong He lines as well as the metallic lines in different ionization states are important clues, together with the spectral shape of the continuum in a wavelength range as large as possible. An additional motivation for the imaging photometry with ISOCAM was to provide spatial resolution to a possible extended emission feature as a remnant of the expected high mass loss from the system. The paper is laid out as follows. In Section 2 observational aspects are reviewed. Section 3 summarizes the results on the continuum and line emissions from Cygnus X-3 and four Wolf-Rayet stars of WN6, 7 and 8 types. Section 4 reviews the constraints set by the present observations on the wind and on the nature of the companion to the compact object in Cygnus X-3. Finally, Section 5 summarizes the conclusions of this paper.
-------------- ---------- ---------------- ---------- ---------- ------------
Instrument     TDTNUM     Wavelength       Aperture   time (s)   TU (start)
                          range ($\mu$m)   (arcsec)
-------------- ---------- ---------------- ---------- ---------- ------------
ISOCAM-LW10    14200701   8-15             1.5        1134       6:57:09
ISOPHOT-SS                2.4-4.9          24x24
ISOPHOT-SL                5.9-11.7         24x24
ISOPHOT-P3.6              2.9-4.1          10
ISOPHOT-P10               9-10.9           23
ISOPHOT-P25              20-29             52
ISOPHOT-P60              48-73             99
-------------- ---------- ---------------- ---------- ---------- ------------
Observations and data reduction\[sobserv\]
==========================================
We observed Cygnus X-3 with the Infrared Space Observatory (ISO, see @kes96) on April 7, 1996, corresponding to JD 2450180.8033 to 2450180.8519. The observing modes were, in sequence: ISOCAM imaging photometry at 11.5$\mu$m (LW10 filter, bandwidth 8 to 15$\mu$m); ISOPHOT-S spectrophotometry in the range 2.4-12$\mu$m, for 4096s, covering the orbital phases 0.83 to 1.04 (according to the parabolic ephemeris of @kit92); ISOPHOT multi-filter photometry at central wavelengths 3.6, 10, 25 and 60$\mu$m. Observing modes and observation times are summarized in Table 1. Preliminary results were presented in @koc01.\
ISOPHOT-S data reduction
------------------------
A low resolution mid-infrared spectrum of Cygnus X-3 was obtained with the ISOPHOT-S sub-instrument. The spectrum covered the 2.4-4.9 and 5.9-11.7$\mu$m wavelength ranges simultaneously with a spectral resolution of about 0.04 and 0.1$\mu$m, respectively. The observation was performed in the triangular chopped mode with two background positions located at $\pm$120$''$, and with a dwelling time of 128s per chopper position. The field of view is 24[”]{}x24[”]{}. The whole measurement consisted of 8 OFF1–ON–OFF2–ON cycles and lasted 4096s.
The ISOPHOT-S data were reduced in three steps. We first used the Phot Interactive Analysis (PIA[^2], @gab97) software (Version 8.2) to filter out cosmic glitches in the raw data and to determine signals by performing linear fits to the integration ramps. After a second deglitching step, performed on the signals, a dark current value appropriate to the satellite orbital position of the individual signal was subtracted. Finally we averaged all non-discarded (typically 3) signals in order to derive a signal per chopper step. Due to detector transient effects, at the beginning of the observation the derived signals were systematically lower than those in the consolidated part of the measurement. We then discarded the first $\sim$800sec (3 OFF-ON transitions), and determined an average \[ON–OFF\] signal for the whole measurement by applying a 1-dimensional Fast Fourier Transformation algorithm (for the application of FFT methods for ISOPHOT data reduction see @haa00). The \[ON–OFF\] difference signals were finally calibrated by applying a signal-dependent spectral response function dedicated to chopped ISOPHOT-S observations [@aco01], also implemented in PIA.
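Schematically, the chopped-mode averaging works as follows (this is a simplified sketch of the idea, not the PIA/FFT pipeline used above): each ON plateau is compared with the mean of its neighbouring OFF plateaus, which removes a slowly drifting background, and the differences are averaged.

```python
def chopped_on_off(signals):
    """Average [ON - OFF] difference from a chopped sequence of per-plateau
    signals OFF1, ON, OFF2, ON, ...  Each ON plateau is compared with the
    mean of the adjacent OFF plateaus (triangular chopping)."""
    offs = signals[0::2]
    ons = signals[1::2]
    diffs = []
    for k, on in enumerate(ons):
        right = offs[k + 1] if k + 1 < len(offs) else offs[k]
        diffs.append(on - 0.5 * (offs[k] + right))
    return sum(diffs) / len(diffs)
```

With a linearly drifting background, e.g. `chopped_on_off([1.0, 5.0, 3.0, 7.0, 5.0])` returns `3.0`: the drift cancels in the two-sided OFF average.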
In order to verify our data reduction scheme (which is not completely standard due to the application of the FFT algorithm) and to estimate the level of calibration uncertainties, we reduced HD184400, an ISOPHOT standard star observed in a similar way to Cygnus X-3. The results were very consistent with the model prediction for the star, and we estimate that the systematic uncertainty of our calibration is less than 10$\%$.
ISOPHOT spectral energy distribution
------------------------------------
The observed spectral energy distribution is shown in Fig. 1. The observed (not dereddened) continuum flux in the range 2.4-7$\mu$m is 20$\pm10$mJy, in good agreement with that observed by @ogl01 with ISOCAM on the same day (the dereddened fluxes are shown in Fig. 2); the observed flux decreases to about 10$\pm8$mJy around 9$\mu$m.
An unresolved line is observed at about 4.3 $\mu$m peaking at 57$\pm$10 mJy. The linewidth is 0.04$\mu$m, consistent with the instrumental response and corresponding to $\sim$2500 km.s$^{-1}$. Note that the measured line flux might be underestimated because the ISOPHOT-S pixels are separated by small gaps, and a narrow line might fall into a gap.
ISOPHOT-P data analysis and results
-----------------------------------
The data reduction in the multi-filter mode was performed using the Phot Interactive Analysis [@gab97] software. After corrections for non-linearities of the integration ramps, the signal was transformed to a
---
author:
- |
\
Physics Department, National Technical University, GR-15780 Athens, Greece\
E-mail:
- |
P. Manousselis\
Physics Department, National Technical University, GR-15780 Athens, Greece\
E-mail:
- |
G. Zoupanos\
Physics Department, National Technical University, GR-15780 Athens, Greece\
Institute of Theoretical Physics, D-69120 Heidelberg, Germany\
Max-Planck Institut für Physik, Fohringer Ring 6, D-80805 Munchen, Germany\
Laboratoire d’ Annecy de Physique Theorique, Annecy, France\
E-mail:
title: Noncommutative Gauge Theories and Gravity
---
Introduction
============
Three out of four interactions of nature are grouped together under a common description by the Standard Model, in which they are described by gauge theories. However, the gravitational interaction is not part of this picture, admitting a separate, geometric formulation, namely the theory of General Relativity. In order to make contact between the two pictures, there has been a long effort to give gravity a gauge-theoretic formulation besides the geometric one [@Utiyama:1956sy]-[@Witten:1988hc]. A pioneer in this field was Utiyama, whose work focused on describing the 4-d gravity of General Relativity as a gauge theory by localizing the Lorentz symmetry, SO(1,3) [@Utiyama:1956sy]. However, the results were not considered successful, since the inclusion of the vielbein did not happen in a convincing way. A few years later, Kibble [@Kibble:1961ba] modified the above consideration, adopting the inhomogeneous Lorentz (Poincaré) group, ISO(1,3), as the gauge group, in which, along with the spin connection, the vierbein were also identified as gauge fields of the theory. Nevertheless, the dynamics of General Relativity were not recovered, since there was no action of gauge-theoretic origin for the Poincaré gauge group that could be identified with the Einstein-Hilbert action. A solution to this problem was given by considering an SO(1,4) gauge invariant Yang-Mills action (instead of the Poincaré one) together with a scalar field in the fundamental representation of the gauge group SO(1,4) [@Stelle:1979aj] (see also [@MacDowell:1977jt; @Ivanov:1980tw; @Kibble:1985sn]). The gauge fixing of this scalar field led to a spontaneous symmetry breaking, recovering the Einstein-Hilbert action. Therefore, the 4-d gravitational theory of General Relativity was successfully described as a gauge theory with the presence of a scalar field.
Moreover, in the absence of cosmological constant, 3-d Einstein gravity can be also described as a gauge theory of the 3-d Poincaré group, ISO(1,2). In turn, the 3-d de Sitter and Anti de Sitter groups, SO(1,3) and SO(2,2), respectively are employed, in case a cosmological constant is present [@Witten:1988hc]. The first part of the construction, that is the calculation of the transformation of the gauge fields (dreibein and spin connection) and the curvature tensors is similar to the 4-d case. However, the dynamic part is less tedious than that of the 4-d case. The 3-d Einstein-Hilbert action is recovered after the consideration of a Chern-Simons action functional, which is, in fact, identical to the 3-d Einstein-Hilbert’s action. Thus, 3-d Einstein gravity is precisely equivalent to an ISO(1,2) Chern-Simons gauge theory.
Another contribution in this aspect is related with the gauge-theoretic approach of Weyl gravity (and supergravity) as a gauge theory of the 4-d conformal group [@Kaku:1977pa; @Fradkin:1985am][^1]. Proceeding in the same spirit as in the previous cases, the transformations of the gauge fields and the expressions of the various curvature tensors are obtained. The action is determined to be SO(2,4) gauge invariant of Yang-Mills type, as it is expected. Then, constraints are imposed on the curvature tensors and along with gauge fixing of the fields, the final action is actually the Weyl action. Therefore, it is understood that Weyl gravity admits a gauge-theoretic interpretation of the conformal group.
An appropriate framework for the construction of physical theories in the high-energy regime (Planck scale), in which commutativity of the coordinates cannot be naturally assumed, is that of noncommutative geometry [@connes] - [@Gavriil:2015lka]. A very important feature of this framework is the potential regularization of quantum field theories and the construction of finite theories. Nevertheless, building quantum field theories on noncommutative spaces is a tedious task, and problematic ultraviolet features have been encountered [@filk] (see also [@grosse-wulkenhaar] and [@grosse-steinacker]). Despite that, the framework of noncommutative geometry is considered a suitable background for accommodating particle physics models, formulated as noncommutative gauge theories [@connes-lott] (see also [@martin-bondia; @dubois-madore-kerner; @madorejz]).
Also, taking into account the above correspondence between gravity and (ordinary) gauge theories, the well-established formulation of noncommutative gauge theories [@Madore:2000en] allows one to use it as a methodology for the construction of models of noncommutative gravity. Such approaches have been considered before; see for example refs. [@Chamseddine:2000si]-[@Ciric:2016isg] and, specifically for 3-d models employing the Chern-Simons gauge theory formulation, refs. [@Cacciatori:2002gq]-[@Banados:2001xw]. The authors of the above works make use of constant noncommutativity (Moyal-Weyl) and also use the formulation of the $\star$-product and the Seiberg-Witten map [@Seiberg:1999vs].
However, besides the $\star$-product formulation, noncommutative gravitational models can be constructed using the noncommutative realization of matrix geometries [@Banks:1996vh; @Ishibashi:1996xs]. Such approaches, specifically for Yang-Mills matrix models, were proposed in the past few years; see refs. [@Aoki:1998vn]-[@Nair:2006qg]. Also, for alternative approaches on the subject see [@Buric:2006di; @Buric:2007zx; @Buric:2007hb], but also [@Aschieri:2003vy; @Aschieri:2004vh; @Aschieri:2005wm]. In general, the formulation of noncommutative gravity implies that the noncommutative deformations break Lorentz invariance. However, there exist specific noncommutative deformations which preserve the Lorentz invariance, and the corresponding background spaces are called covariant noncommutative spaces [@Snyder:1946qz; @Yang:1947ud]. Along these lines, in ref. [@Heckman:2014xha], a noncommutative deformation of a general conformal field theory defined on 4-d dS or AdS spacetime has been employed; see also [@Buric:2015wta]-[@Steinacker:2016vgf].
In this proceedings contribution, we summarize our recent work in the above field of noncommutative gravity. First, we briefly review our proposition for a matrix model of 3-d noncommutative gravity [@Chatzistavrakidis:2018vfi] (see also [@Manolakos:2018isw; @Manolakos:2018hvn]), in which the corresponding background space is the $\mathbb{R}_\lambda^3$, introduced in ref. [@Hammou:2001cc] (see also ref. [@Vitale:2014hca] for field theories on this space), which is actually the 3-d Euclidean space foliated by multiple fuzzy spheres of different radii. As explained in ref. [@Kovacik:2013yca], the above fuzzy space admits an SO(4) symmetry, which is in fact the gauge group we considered. Noncommutativity implies the enlargement of the SO(4) to the U(2)$\times$U(2) gauge group, in a fixed representation, in order that the anticommutators of the generators close. In the same spirit, the Lorentz analogue of the above construction was also explored, in which the corresponding noncommutative space is the $\mathbb{R}_\lambda^{1,2}$, that is the 3-d Minkowski spacetime foliated by fuzzy hyperboloids [@Jurman:2013ota]. In this case too, the initial gauge group, SO(1,3), is eventually extended to GL(2,$\mathbb{C}$) in a fixed representation, for the same reasons as in the Euclidean case. In both signatures, the action proposed is a functional of Chern-Simons type and its variation produces the equations of motion. In addition, the commutative limit is considered, retrieving the expressions of 3-d Einstein gravity.
Second, a 4-d gravity model is constructed as a noncommutative gauge theory [@Manolakos:2019fle]. Motivated by Heckman and Verlinde [@Heckman:2014xha], who built on Yang’s early work [@Yang:1947ud], we considered a noncommutative version of the 4-d de Sitter space, which
---
abstract: 'In several spectral sequences for (global and local) Iwasawa modules over (not necessarily commutative) Iwasawa algebras (mainly of $p$-adic Lie groups) over ${\mathbb Z}_p$ are established, which are very useful for determining certain properties of such modules in arithmetic applications. Slight generalizations of said results can be found in (for abelian groups and more general coefficient rings), (for products of not necessarily abelian groups, but with ${\mathbb Z}_p$-coefficients), and . Unfortunately, some of Jannsen’s spectral sequences for families of representations as coefficients for (local) Iwasawa cohomology are still missing. We explain and follow the philosophy that all these spectral sequences are consequences or analogues of local cohomology and duality à la Grothendieck (and Tate for duality groups).'
address: |
Mathematisches Institut\
Universität Heidelberg\
Im Neuenheimer Feld 205\
D-69120 Heidelberg
author:
- Oliver Thomas
- Otmar Venjakob
bibliography:
- 'bib.bib'
title: On Spectral Sequences for Iwasawa Adjoints à la Jannsen for Families
---
Introduction {#sec:introduction}
============
Let $\mathcal O$ be a complete discrete valuation ring with uniformising element $\pi$ and finite residue field. Consider furthermore an $\mathcal O$-algebra $R$, which is a complete Noetherian local ring with maximal ideal $\mathfrak m$, of dimension $d$ and finite residue field. We are mainly interested in the case of a ring of formal power series $R=\mathcal O[[X_1,\dots, X_t]]$ in $t$ variables, which is a complete regular local ring of dimension $d=t+1$. We now have a number of dualities at hand.
First, there is Matlis duality: Denote with $\mathcal E$ an injective hull of $R/\mathfrak m$ as an $R$-module. Then $T=\operatorname{Hom}_R(-,\mathcal E)$ induces a contravariant involutive equivalence between Noetherian and Artinian $R$-modules akin to Pontryagin duality.
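As a guiding example (the standard special case, stated here only for orientation): for $R = \mathcal O = \mathbb{Z}_p$, so $\mathfrak m = (p)$, an injective hull of $R/\mathfrak m = \mathbb{F}_p$ is $\mathcal E = \mathbb{Q}_p/\mathbb{Z}_p$, and Matlis duality is literally Pontryagin duality:

```latex
T(M) = \operatorname{Hom}_{\mathbb{Z}_p}\!\bigl(M, \mathbb{Q}_p/\mathbb{Z}_p\bigr),
\qquad
T(\mathbb{Z}_p) \cong \mathbb{Q}_p/\mathbb{Z}_p,
\qquad
T(\mathbb{Z}/p^k) \cong \mathbb{Z}/p^k .
```

So finitely generated (Noetherian) $\mathbb{Z}_p$-modules are exchanged with $p$-power torsion (Artinian) ones, as the general statement asserts.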
Second, there is local duality: If ${{\pmb{\mathrm{R}}}}\Gamma_{\underline{\mathfrak m}}$ denotes the right derived functor of $M \mapsto \varinjlim_k \operatorname{Hom}_R(R/\mathfrak m^k, M)$ in the derived category of $R$-modules, then $${{\pmb{\mathrm{R}}}}\Gamma_{\underline{\mathfrak m}} \cong [-d] \circ T \circ
\operatorname{{\pmb{\mathrm{R}}}Hom}_R(-, R)$$ on coherent $R$-modules.
Third, there is Koszul duality: The complex ${{\pmb{\mathrm{R}}}}\Gamma_{\underline{\mathfrak m}}$ can be computed by means of Koszul complexes $K^\bullet$ which are self-dual: $K^\bullet =
\operatorname{Hom}_R(K^\bullet, R)[d]$.
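For orientation (a standard example, not from the paper): for a single regular element $x \in R$, i.e. $d=1$, with $K^\bullet$ placed in degrees $-1$ and $0$,

```latex
K^\bullet:\quad 0 \longrightarrow R \xrightarrow{\;x\;} R \longrightarrow 0,
\qquad
\operatorname{Hom}_R(K^\bullet, R)[1]:\quad 0 \longrightarrow R \xrightarrow{\;x\;} R \longrightarrow 0,
```

so $K^\bullet \cong \operatorname{Hom}_R(K^\bullet,R)[1]$ up to sign, and for an $R$-module $M$ one has $H^0(K^\bullet\otimes_R M)=M/xM$ and $H^{-1}(K^\bullet\otimes_R M)=M[x]$, the $x$-torsion of $M$.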
Finally, there is Tate duality: Let $G$ be a pro-$p$ duality group of dimension $s$. Then for discrete $G$-modules $A$ we have $H^i(G, \operatorname{Hom}(A,I)) \cong H^{s-i}(G,A)^*$ for a dualizing module $I$.
Consider $\Lambda_R(G)=\varprojlim_U R[G/U]$ where $U$ runs through the open normal subgroups of $G$. It is well known that $\Lambda_R({\mathbb Z}_p^s)\cong R[[Y_1,\dots,Y_s]]$ and indeed $R=\Lambda_{\mathcal O}({\mathbb Z}_p^t)$. The maximal ideal of $\Lambda_R(G)$ is now generated by the regular sequence $(\pi, X_1, \dots, X_t, Y_1, \dots, Y_s)$, and no matter how we split this regular sequence into two parts, both parts remain regular. The Koszul complex then gives rise to a number of interesting spectral sequences, and these should (at least morally) recover the spectral sequences $$\operatorname{Tor}_n^{{\mathbb Z}_p}(D_m(M^\vee), {\mathbb Q}_p/{\mathbb Z}_p) \Longrightarrow
\operatorname{Ext}_{\Lambda_{{\mathbb Z}_p}(G)}^{n+m}(M, \Lambda_{{\mathbb Z}_p}(G))^\vee
\label{eqn:jannsen-pont-of-iw-adjoints-is-tor-tate-pont}$$ and $$\varinjlim_k D_n(\operatorname{Tor}^{{\mathbb Z}_p}_m( {\mathbb Z}_p/p^k, M)^\vee) \Longrightarrow
\operatorname{Ext}_{\Lambda_{{\mathbb Z}_p}(G)}^{n+m}(M, \Lambda_{{\mathbb Z}_p}(G))^\vee,
\label{eqn:jannsen-pont-of-iw-adjoints-is-colim-of-tate-of-pont-of-loc-coh}$$ which show up in Jannsen’s proof of [@MR1097615 2.1 and 2.2]. The functors $D_n$ stem from Tate’s spectral sequence and are a cornerstone in the theory of duality groups.
We will show in \[sec:iwasawa-adjoints\] that these spectral sequences (and many more) are consequences of the four duality principles laid out above. This also allows us to generalize Jannsen’s spectral sequences to more general coefficients. For example, generalisations of \[eqn:jannsen-pont-of-iw-adjoints-is-tor-tate-pont\] and \[eqn:jannsen-pont-of-iw-adjoints-is-colim-of-tate-of-pont-of-loc-coh\] are the subject of \[prop:pont-of-iw-adjoints-is-tor-tate-pont\] and of \[prop:pont-of-iw-adjoints-is-colim-of-tate-of-pont-of-loc-coh\] respectively. While another spectral sequence for Iwasawa adjoints has already been generalized to more general coefficients (cf. \[thm:lim-sharifi-spec-seq-local\]), the generalizations of the aforementioned spectral sequences are missing in the literature. We can even generalize an explicit calculation of Iwasawa adjoints (cf. [@MR1097615 corollary 2.6], [@MR2392026 (5.4.14)]) in \[prop:dual-of-iw-adj-is-twist-of-local-coh-of-coeff\].
Furthermore, we generalize Venjakob’s result on local duality for Iwasawa algebras ([@MR1924402 theorem 5.6]) to more general coefficients (cf. \[thm:local-duality-for-iw-alg\]). As an application we determine the torsion submodule of local Iwasawa cohomology generalizing a result of Perrin-Riou in the case $R={\mathbb Z}_p$ in \[thm:torsion-in-local-iw-coh-for-poinc-grps\].
Conventions
===========
A *ring* will always be unitary and associative, but not necessarily commutative. If not explicitly stated otherwise, “module” means left-module, “Noetherian” means left-Noetherian etc.
We will furthermore use the language of derived categories. If ${\pmb{\mathrm{A}}}$ is an abelian category, we denote with ${{\pmb{\mathrm{D}}}}({\pmb{\mathrm{A}}})$ the derived category of unbounded complexes, with ${{\pmb{\mathrm{D}}}}^+({\pmb{\mathrm{A}}})$ the derived category of complexes bounded below, with ${{\pmb{\mathrm{D}}}}^-({\pmb{\mathrm{A}}})$ the derived category of complexes bounded above and with ${{\pmb{\mathrm{D}}}}^b({\pmb{\mathrm{A}}})$ the derived category of bounded complexes.
As we simultaneously have to deal with left- and right-exact functors, both covariant and contravariant, recovering spectral sequences from isomorphisms in the derived category is a bit of a hassle regarding the indices. Suppose that ${\pmb{\mathrm{A}}}$ has enough injectives and projectives and that $M$ is a (suitably bounded) complex of objects of ${\pmb{\mathrm{A}}}$. Then for a covariant functor $F\colon
{\pmb{\mathrm{A}}}\ra{\pmb{\mathrm{A}}}$ we set ${{\pmb{\mathrm{R}}}}F=F(Q)$ and ${{\pmb{\mathrm{L}}}}F=F(P)$, with $Q$ a complex of injective objects quasi-isomorphic to $M$ and $P$ a complex of projectives quasi-isomorphic to $M$. If $F$ is contravariant, we set ${{\pmb{\mathrm{L}}}}F(M)=F(Q)$ and ${{\pmb{\mathrm{R}}}}F(M)=F(P)$. For indices, this implies the following: Assume that $M$ is concentrated in degree zero. Then for $F$ covariant, ${{\pmb{\mathrm{R}}}}F(M)$ has non-vanishing cohomology at most in non-negative degrees and ${{\pmb{\mathrm{L}}}}F(M)$ at most in non-positive degrees. For $F$ cont
---
abstract: 'Let $\K=\Q(\sqrt{d_1},\ldots, \sqrt{d_k})$ be a polyquadratic number field and $N$ be a squarefree positive integer with at least $k$ distinct factors. The Galois group $\Gal(\K/\Q)$ is an elementary abelian two-group generated by the $\sigma_i$ such that $\sigma_i(\sqrt{d_i})=-\sqrt{d_i}$. Let $\zeta:\Gal(\K/\Q) \rightarrow \Aut(X_0(N))$ be the cocycle that sends $\sigma_i$ to $w_{m_i}$ where $w_{m_i}$ are the Atkin-Lehner involutions of $\X$. In this paper, we study the $\Q_p$-rational points of the twisted modular curve $\Xc$ and give an algorithm to produce such curves which have $\Q_p$-rational points for all primes $p$. Then we investigate violations of the Hasse Principle for these curves and give an asymptotic for the number of such violations. Finally, we study the reasons for such violations.'
address: |
Department of Mathematics\
University of Texas-Austin\
Texas, USA
author:
- Ekin Ozman
title: 'On Polyquadratic Twists of $X_0(N)$'
---
Introduction
============
Given $(m_1, \ldots, m_k)$ pairwise relatively prime, squarefree, positive integers and $(d_1,\ldots, d_k)$ relatively prime squarefree integers, we construct a twisted modular curve $\Xc$ as follows: Let $N=\Pi_{i=1}^k m_i$ and $\K=\Q(\sqrt{d_1},\ldots, \sqrt{d_k})$. The Galois group of $\K/\Q$ is an elementary abelian $2$-group generated by $\sigma_i$ for $1 \leq i \leq k$. The automorphism group of the modular curve $\X$ is generated by the Atkin-Lehner involutions $w_{p_i}$ for each $p_i|N$. Let $\zeta:\Gal(\K/\Q) \rightarrow \Aut(X_0(N))$ be the cocycle that sends $\sigma_i$ to $w_{m_i}$. The curve $\Xc$ is the twist of $\X$ by $\zeta$. In particular, the rational points of $\Xc$ are the $\K$-rational points of $\X$ fixed by $\sigma_i \circ w_{m_i}$ for each $i$ between $1$ and $k$.
Like $\X$, the twisted curve $\Xc$ is also a moduli space. Rational points of $\Xc$ parametrize $\Q$-curves. Recall that a $\Q$-curve is an elliptic curve $E$ defined over a number field $\K$ which is isogenous to all of its Galois conjugates. It is a result of Elkies that every $\Q$-curve is geometrically isogenous to a $\Q$-curve defined over a polyquadratic field, i.e. a field that is generated by quadratic fields. Therefore understanding rational points of $\Xc$ gives information about $\Q$-curves as well. Then one can naturally ask the following question, a particular case of which was first stated in [@Ell] and answered in [@Ozman]:
\[Q1\] Given pairwise relatively prime, squarefree, positive integers $(m_1, \ldots, m_k)$, relatively prime squarefree integers $(d_1,\ldots, d_k)$ and a prime number $p$, what can be said about the points in $\Xc(\Q_p)$ where $N=\Pi_{i=1}^k m_i$, $\K=\Q(\sqrt{d_1},\ldots, \sqrt{d_k})$ and $\zeta$ is as described above?
Understanding local points is the first step towards understanding global points of any curve. If it happens to be the case that $\Xc(\Q_p)$ is empty for some $p$, then $\Xc(\Q)$ is also empty and there is no such $\Q$-curve. However, having $\Q_p$-points for every prime $p$ does not guarantee the existence of $\Q$-points unless the genus of the curve is zero. For instance, in the case of quadratic twists there are many examples which have local points but no global points. It is even possible to give an exact asymptotic formula for the number of such curves [@Ozman]. This raises the following question:
Is there an asymptotic for the number of twists $\Xc$ which violate the Hasse principle?
We address these two questions in the first five sections. In the last section, we discuss further directions and the reasons for violations of the Hasse principle. More precisely, in Sections 2 and 3 we give conditions for the existence of local points. These conditions depend on the splitting behavior of the prime $p$ in the given polyquadratic field $\K$. In some cases we are able to give necessary and sufficient conditions; in other cases we have only sufficient ones. However, this still allows us to give an algorithm which produces infinitely many $\Xc$ with local points everywhere, for any given $N$, as summarized in Section 4. In Section 5, we use this algorithm combined with the methods of [@Ozman] and [@Clark], and give an asymptotic formula for the number of biquadratic twists which have local points everywhere but no global points. In fact this can be generalized to higher degree twists as well. Section 6 is about further directions in this problem. For the twists with computationally feasible equations, we try to explain the lack of global points using the Mordell-Weil sieve. This is equivalent to the Brauer-Manin obstruction by the work of Scharaschkin [@sch]. One cannot apply the Mordell-Weil sieve if $\Pic^1(\Xc)(\Q)$ is empty. Understanding the Picard group is usually hard for a generic curve. Note that we do not even have equations for an arbitrary member of the twisted family. Given $N$, we give sufficient conditions for $\Pic^1(\Xc)(\Q)$ to be nonempty in the quadratic case and in some biquadratic cases. We also find families of cases where $\Pic^1(\Xc)(\Q_p)=\emptyset$ when $p$ and $N$ satisfy certain arithmetic conditions.
During the course of typing these results, the PhD thesis of Jim Stankewicz was brought to the author’s attention. Many of the results in Sections 2 and 3 can be concluded from this thesis. However, we have still included them here, since it may be hard for a reader unfamiliar with the subject to derive these results from [@Jim], and since this work was carried out independently.
The case of good reduction
==========================
$p$ is unramified in $K$
------------------------
In this case the extension is unramified, so we can use the theory of Galois descent. Since $p$ is a prime of good reduction, $\Xc$ has a smooth model over $\Z_p$. Therefore, by Hensel’s Lemma, $\Xc(\Q_p)$ is non-empty if and only if $\Xc(\F_p)$ is non-empty. Let $\mathcal{P}$ be a prime of $\K$ lying over $p$. In the notation of the introduction, each Galois map $\sigma_i$ is twisted by some $w_{m_i}$. Let $S$ be the set of indices $i$ such that $p$ is inert in $\Q(\sqrt{d_i})$. Then the decomposition group of $\mathcal{P}$ is $\{1, \prod\limits_{i \in S} \sigma_i \}$. Note that since $\K/\Q$ is an abelian Galois extension, it does not matter which $\mathcal{P}$ we choose. The map $\prod\limits_{i \in S} \sigma_i$ induces the Frobenius map on the level of residue fields. Note that the cocycle $\zeta$ twists the action of $\prod\limits_{i \in S} \sigma_i$ by $\prod\limits_{i \in S} w_{m_i}$. Therefore $\Xc(\F_p)$ consists of the $\F_{p^2}$-rational points of $\X$ that are fixed by $w_M \circ \frob$, where $M=\prod\limits_{i \in S}m_i$.
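To make the fixed-point condition concrete, here is a toy brute-force check in Python. It is purely illustrative: a small elliptic curve and the hyperelliptic involution $\iota:(x,y)\mapsto(x,-y)$ stand in for $\X$ and an Atkin-Lehner involution, and we enumerate the points over $\F_{p^2}$ fixed by $\iota \circ \frob$.

```python
# Toy illustration of "points over F_{p^2} fixed by (involution) o Frobenius".
# E: y^2 = x^3 + x + 1 over F_49, p = 7.  F_49 is modeled as pairs
# (a, b) = a + b*sqrt(3), with 3 a quadratic non-residue mod 7.
p, t = 7, 3

def add(u, v): return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)
def mul(u, v): return ((u[0]*v[0] + t*u[1]*v[1]) % p, (u[0]*v[1] + u[1]*v[0]) % p)
def neg(u):    return ((-u[0]) % p, (-u[1]) % p)
def frob(u):   return (u[0], (-u[1]) % p)   # x -> x^p conjugates sqrt(3)

def rhs(x):    # x^3 + x + 1 in F_49
    return add(add(mul(mul(x, x), x), x), (1, 0))

field = [(a, b) for a in range(p) for b in range(p)]
affine = [(x, y) for x in field for y in field if mul(y, y) == rhs(x)]
# points fixed by iota o Frobenius: (frob(x), -frob(y)) == (x, y)
fixed = [(x, y) for (x, y) in affine if frob(x) == x and neg(frob(y)) == y]
```

For these toy choices the fixed set is non-empty: it consists of the affine points with $x \in \F_7$ and $y$ an $\F_7$-multiple of $\sqrt{3}$.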
Note that if $S$ is empty, then $p$ splits completely in $\K/\Q$ and $\K \hookrightarrow \K_{\mathcal{P}} \cong \Q_p$. Therefore, $\Xc(\Q_p)=\X(\Q_p)\neq \emptyset$.
The other extreme case is when $p$ is inert in each $\Q(\sqrt{d_i})$, i.e. $S=\{1,2,\ldots,k\}$. We will show that in this case there are points in $\Xc(\F_p)$. Our strategy is to prove that there is a supersingular point fixed by $w_N \circ \frob$, or equivalently, by $w_N \circ w_p =w_{Np}$. This will be derived from well-known results in quaternion arithmetic, see [@Vig] page 152. Using more advanced tools of quaternion arithmetic we can give sufficient conditions for the existence of a $\Q_p$-point on $\Xc$ when $0<
---
abstract: 'We report a measurement of the branching fractions of $\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell}$ decays based on 417 fb$^{-1}$ of data collected at the $\Upsilon(4S)$ resonance with the BaBar detector at the PEP-II $e^+e^-$ storage rings. Events are selected by fully reconstructing one of the $B$ mesons in a hadronic decay mode. A fit to the invariant mass differences $m(D^{(*)}\pi)-m(D^{(*)})$ is performed to extract the signal yields of the different $D^{**}$ states. We observe the $\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell}$ decay modes corresponding to the four $D^{**}$ states predicted by Heavy Quark Symmetry with a significance greater than six standard deviations including systematic uncertainties.'
title: 'Measurement of the Branching Fractions of [$\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell}$ ]{} Decays in Events Tagged by a Fully Reconstructed [$B$]{} Meson'
---
Semileptonic $B$ decays to orbitally-excited P-wave charm mesons ($D^{**}$) are of interest for several reasons. Improved knowledge of the branching fractions for these decays is important to reduce the systematic uncertainty in the measurements of the Cabibbo-Kobayashi-Maskawa [@CKM] matrix elements $|V_{cb}|$ and $|V_{ub}|$. For example, one of the leading sources of systematic uncertainty on $|V_{cb}|$ measurements from $\Bbar \to D^* \ell^- \bar{\nu}_{\ell}$ decays [@ell] is the limited knowledge of the background due to $\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell}$ [@BaBarHQET].
The $D^{**}$ mesons contain one charm quark and one light quark with relative angular momentum $L=1$. According to Heavy Quark Symmetry (HQS) [@IW], they form one doublet of states with angular momentum $j \equiv s_q + L= 3/2$ $\left[D_1(2420), D_2^*(2460)\right]$ and another doublet with $j=1/2$ $\left[D^*_0(2400), D_1'(2430)\right]$, where $s_q$ is the light quark spin. Parity and angular momentum conservation constrain the decays allowed for each state. The $D_1$ and $D_2^*$ states decay through a D-wave to $D^*\pi$ and $D^{(*)}\pi$, respectively, and have small decay widths, while the $D_0^*$ and $D_1'$ states decay through an S-wave to $D\pi$ and $D^*\pi$ and are very broad.
$\Bbar \rightarrow D^{**} \ell^- \bar{\nu}_{\ell}$ decays constitute a significant fraction of $B$ semileptonic decays [@pdg] and may help to explain the discrepancy between the inclusive $\Bbar \to X\ell^- \bar{\nu}_{\ell}$ rate and the sum of the measured exclusive decay rates [@pdg; @babar-2; @babar-3]. The measured decay properties for $\Bbar \rightarrow D^{**} \ell^- \bar{\nu}_{\ell}$ can be compared with the predictions of the Heavy Quark Effective Theory (HQET) [@LLSW]. QCD sum rules [@uraltsev] imply the strong dominance of $B$ decays to the narrow $D^{**}$ states over those to the wide ones, while some experimental data show the opposite trend [@belle; @delphi2005].
In this letter, we present the observation of $B$ semileptonic decays into the four excited $D$ mesons predicted by HQS and measure the ${\cal B}(\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell})$ branching fractions. The analysis is based on data collected with the BaBar detector [@detector] at the PEP-II asymmetric-energy $e^+e^-$ storage rings at SLAC. The data consist of a total of 417 fb$^{-1}$ recorded at the $\Upsilon(4S)$ resonance, corresponding to approximately 460 million $B\bar{B}$ pairs. An additional 40 fb$^{-1}$, taken at a center-of-mass (CM) energy 40 MeV below the $\Upsilon(4S)$ resonance, is used to study background from $e^+e^- \to f\bar{f}~(f=u,d,s,c,\tau)$ continuum events. A detailed GEANT4-based Monte Carlo (MC) simulation [@Geant] of $B\bar{B}$ and continuum events is used to study the detector response, its acceptance, and to validate the analysis techniques. The simulation describes $\Bbar \to D^{**} \ell^- \bar{\nu}_{\ell}$ decays using the ISGW2 model [@ISGW], and non-resonant $\Bbar
\to D^{(*)} \pi \ell^- \bar{\nu}_{\ell}$ decays using the model of Goity and Roberts [@Goity].
We select semileptonic $\Bbar \to D^{**}\ell^-\bar{\nu}_{\ell}$ decays with $\ell=e, \mu$ in events containing a fully reconstructed $B$ meson ($B_\mathrm{tag}$), which allows us to constrain the kinematics, reduce the combinatorial background, and determine the charge and flavor of the signal $B$ meson. $D^{**}$ mesons are reconstructed in the $D^{(*)}\pi^{\pm}$ decay modes and the different $D^{**}$ states are identified by a fit to the invariant mass differences $m(D^{(*)}\pi) -
m(D^{(*)})$.
We first reconstruct the semileptonic $B$ decay, selecting a lepton with momentum $p^*_{\ell}$ in the CM frame larger than 0.6 GeV/$c$. We search for pairs of oppositely-charged tracks that form a vertex and remove those with an invariant mass consistent with a photon conversion or a $\pi^0$ Dalitz decay. Candidate $D^0$ mesons that have the correct charge correlation with the lepton are reconstructed in the $K^-\pi^+$, $K^- \pi^+ \pi^0$, $K^- \pi^+ \pi^+ \pi^-$, $K^0_S \pi^+ \pi^-$, $K^0_S \pi^+ \pi^- \pi^0$, $K^0_S \pi^0$, $K^+ K^-$, $\pi^+ \pi^-$, and $K^0_S K^0_S$ channels, and $D^+$ mesons in the $K^- \pi^+ \pi^+$, $K^- \pi^+ \pi^+ \pi^0$, $K^0_S \pi^+$, $K^0_S \pi^+ \pi^0$, $K^+ K^- \pi^+$, $K^0_S K^+$, and $K^0_S \pi^+ \pi^+ \pi^-$ channels. In events with multiple $D\ell^-$ combinations, the candidate with the best $D$-$\ell$ vertex fit is selected. Candidate $D^*$ mesons are reconstructed by combining a $D$ candidate with a pion or a photon in the $D^{*+} \rightarrow D^0 \pi^+ $, $D^{*+} \rightarrow D^+ \pi^0$, $D^{*0} \rightarrow D^0 \pi^0$, and $D^{*0} \rightarrow D^0 \gamma$ channels. In events with multiple $D^{*}\ell^-$ combinations, we choose the candidate with the smallest $\chi^2$ based on the deviations from the nominal values of the $D$ invariant mass and the invariant mass difference between the $D^*$ and the $D$, using the resolution measured in each mode.
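The $\chi^2$-based best-candidate choice just described can be sketched as follows. This is a hedged illustration only: the nominal masses and per-mode resolutions below are placeholder numbers, not the values used in the analysis.

```python
# Placeholder nominal masses (GeV) and resolutions for illustration only.
M_D0 = 1.865        # nominal m(D0)
DM_DSTAR = 0.1454   # nominal m(D*+) - m(D0)

def chi2(cand, sigma_m=0.007, sigma_dm=0.001):
    """Squared deviation of a D* candidate from the nominal masses,
    in units of the (per-mode) resolutions."""
    return ((cand["mD"] - M_D0) / sigma_m) ** 2 + \
           ((cand["dm"] - DM_DSTAR) / sigma_dm) ** 2

def best_candidate(candidates):
    """Among multiple D*-lepton combinations, keep the one with smallest chi2."""
    return min(candidates, key=chi2)
```

For example, `best_candidate([{"mD": 1.870, "dm": 0.1455}, {"mD": 1.864, "dm": 0.1454}])` selects the second candidate, whose masses sit closer to the nominal values.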
We reconstruct $B_\mathrm{tag}$ decays [@BrecoVub] in charmed hadronic modes $\Bbar \rightarrow D Y$, where $Y$ represents a collection of hadrons, composed of $n_1\pi^{\pm}+n_2 K^{\pm}+n_3 K^0_S+n_4\pi^0$, where $n_1+n_2 =1,3,5$, $n_3
\leq 2$, and $n_4 \leq 2$. Using $D^0(D^+)$ and $D^{*0}(D^{*+})$ as seeds for $B^-(\Bzb)$ decays, we reconstruct about 1000 different decay chains.
The kinematic consistency of a $B_\mathrm{tag}$ candidate with a $B$ meson decay is evaluated using two variables: the beam-energy substituted mass $m_{ES} \equiv \sqrt{s/4-|p^*_B|^2}$, and the energy difference $\Delta E \equiv E^*_B -\sqrt{s}/2$. Here $\sqrt{s}$ is the total CM energy, and $p^*_B$ and $E^*_B$ denote the momentum and energy of the $B_\mathrm{tag}$ candidate in the CM frame. For correctly identified $B_\mathrm{tag}$ decays, the $m_{ES}$ distribution peaks at the $B$ meson mass, while $\Delta E$ is consistent with zero. We select $B_\mathrm
---
abstract: |
A new symplectic N-body integrator is introduced, one designed to calculate the global $360^\circ$ evolution of a self-gravitating planetary ring that is in orbit about an oblate planet. This freely-available code is called [epi\_int]{}, and it is distinct from other such codes in its use of streamlines to calculate the effects of ring self-gravity. The great advantage of this approach is that the perturbing forces arise from smooth wires of ring matter rather than discrete particles, so there is very little gravitational scattering, and only a modest number of particles is needed to simulate, say, the scalloped edge of a resonantly confined ring or the propagation of spiral density waves.
The code is applied to the outer edge of Saturn’s B ring, and a comparison of Cassini measurements of the ring’s forced response to simulations of Mimas’ resonant perturbations reveals that the B ring’s surface density at its outer edge is $\sigma_0=195\pm 60$ gm/cm$^2$ which, if the same everywhere across the ring, would mean that the B ring’s mass is about $90\%$ of Mimas’ mass.
Cassini observations show that the B ring-edge has several free normal modes, which are long-lived disturbances of the ring-edge that are not driven by any known satellite resonances. Although the mechanism that excites or sustains these normal modes is unknown, we can plant such a disturbance at a simulated ring’s edge, and find that these modes persist without any damping for more than $\sim10^5$ orbits or $\sim100$ yrs despite the simulated ring’s viscosity $\nu_s=100$ cm$^2$/sec. These simulations also indicate that impulsive disturbances at a ring can excite long-lived normal modes, which suggests that an impact in the recent past, perhaps by a cloud of cometary debris, might have excited these disturbances, which are common to many of Saturn’s sharp-edged rings.
author:
- 'Joseph M. Hahn'
- 'Joseph N. Spitale'
- |
Submitted for publication in the\
[*Astrophysical Journal*]{} on December 28, 2012\
Revised April 26, 2013\
Accepted June 1, 2013
bibliography:
- 'biblio.bib'
title: |
An N-body Integrator for Gravitating Planetary Rings,\
and the Outer Edge of Saturn’s B Ring
---
Introduction {#intro_section}
============
A planetary ring is often coupled dynamically to a satellite via orbital resonances. The ring’s response to resonant perturbations varies with the forcing, and if the ring is for instance composed of low optical depth dust, then the ring’s response will vary with the satellite’s mass and its proximity. But in an optically thick planetary ring, such as Saturn’s main A and B rings or its many dense narrow ringlets, the ring is also interacting with itself via self gravity, so its response is also sensitive to the ring’s mass surface density $\sigma_0$ [@S84; @MB05; @HSP09]. So by measuring a dense ring’s response to satellite perturbations, and comparing that measurement to a model for the ring-satellite system, one can then infer the ring’s physical properties, such as its surface density $\sigma_0$, and perhaps other quantities too [@MB05; @TBN07; @HSP09]. Recently [@HSP09] developed a semi-analytic model of the outer edge of Saturn’s B ring, which is confined by an $m=2$ inner Lindblad resonance with the satellite Mimas. The resonance index $m$ also describes the ring’s anticipated equilibrium shape, with the ring-edge’s deviations from circular motion expected to have an azimuthal wavenumber of $m=2$. So the B ring’s expected shape is a planet-centered ellipse, which has $m=2$ alternating inward and outward excursions. The model of [@HSP09] also calculates the ring’s equilibrium $m=2$ response excited by Mimas, but that comparison between theory and observation was done during the early days of the Cassini mission when that spacecraft’s measurement of the ring-edge’s semimajor axis $a_{\mbox{\scriptsize edge}}$ was still rather uncertain. It turns out that the ring’s inferred surface density is very sensitive to how far the B ring’s outer edge extends beyond the resonance, which was quite uncertain then due to the uncertainty in $a_{\mbox{\scriptsize edge}}$, so the uncertainty in the ring’s inferred $\sigma_0$ was also relatively large. 
Now however $a_{\mbox{\scriptsize edge}}$ is known with much greater precision, so a re-examination of this system is warranted.
Cassini’s monitoring of the B ring also reveals that the ring’s outer edge exhibits several normal modes, which are unforced disturbances that are not associated with any known satellite resonances. Figure \[Bring\_fig\] illustrates this phenomenon with a mosaic of images that Cassini acquired of the B ring’s edge on 28 January 2008. [@SP10] have also fit a kinematic model to four years’ worth of Cassini images of the B ring; that model is composed of four normal modes having azimuthal wavenumbers $m=1,2,2,3$ that steadily rotate over time at distinct rates. In the best-fitting kinematic model there are two $m=2$ modes, one that is forced by and corotating with Mimas, as well as a free $m=2$ mode that rotates slightly faster. The amplitudes and orientations of all the modes as they appear in the 28 January 2008 data are also shown in Fig. \[Bring\_fit\_fig\]. Note that although the B ring’s outer edge, as seen in Fig. \[Bring\_fig\], might actually resemble a simple $m=2$ shape on 28 January 2008, at other times the ring-edge’s shape is much more complicated than a simple $m=2$ configuration, yet at other times the ring-edge is relatively smooth and nearly circular; see for example Fig. 1 of [@SP10]. This behavior is due to the superposition of the normal modes that are rotating relative to each other, which causes the B ring’s edge to evolve over time. Since this system is not in simple equilibrium, a time-dependent model of the ring that does not assume equilibrium is appropriate here.
So the following develops a new N-body method that is designed specifically to track the time evolution of a self-gravitating planetary ring, and that model is then applied to the latest Cassini results. Section \[method\_section\] describes in detail the N-body model that can simulate all $360^\circ$ of a narrow annulus in a self-gravitating planetary ring using a very modest number of particles. Section \[B ring\] then shows results from several simulations of the outer edge of Saturn’s B ring, and demonstrates how a ring’s observed epicyclic amplitudes and pattern speeds can be compared to N-body simulations to determine the ring’s physical properties. Results are then summarized in Section \[summary\].
Numerical method {#method_section}
================
The following briefly summarizes the theory of the symplectic integrator that [@DLL98] use in their [SYMBA]{} code and [@C99] use in the [MERCURY]{} integrator to calculate the motion of objects in nearly Keplerian orbits about a point-mass star. That numerical method is adapted here so that one can study the evolution of a self-gravitating planetary ring that is in orbit about an oblate planet.
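Before specializing to the splitting used by those codes, it may help to see the generic ingredient they build on: a second-order kick-drift-kick (leapfrog) step for a single particle orbiting a point mass. This is my own minimal sketch, not the [SYMBA]{}/[MERCURY]{} machinery itself; $GM=1$ and the circular test orbit are arbitrary illustrative choices.

```python
import math

def accel(x, y, GM=1.0):
    """Point-mass gravitational acceleration -GM r / |r|^3 (planar)."""
    r3 = (x*x + y*y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def leapfrog(x, y, vx, vy, dt, nsteps):
    """Kick-drift-kick leapfrog: symplectic, so energy errors stay bounded."""
    ax, ay = accel(x, y)
    for _ in range(nsteps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

def energy(x, y, vx, vy, GM=1.0):
    """Specific orbital energy; E = -1/2 for the unit circular orbit."""
    return 0.5 * (vx*vx + vy*vy) - GM / math.hypot(x, y)
```

Integrating the unit circular orbit ($x=1$, $v_y=1$) with `dt=0.01` over many steps leaves the energy within a small bounded oscillation of $-1/2$, the hallmark of a symplectic scheme.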
symplectic integrators {#symplectic}
----------------------
The Hamiltonian for a system of N bodies in orbit about a central planet is $$\begin{aligned}
H &=& \sum_{i=0}^{N}\frac{p_i^2}{2m_i} + \sum_{i=0}^{N}\sum_{j>i}^{N} V_{ij},\end{aligned}$$ where body $i$ has mass $m_i$ and momentum $\mathbf{p}_i = m_i\mathbf{v}_i$ where $\mathbf{v}_i=\mathbf{\dot{r}}_i$ is its velocity and $V_{ij}$ is the potential such that $\mathbf{f}_{ij}=-\nabla_{\mathbf{r}_i}V_{ij}$ is the force on $i$ due to body $j$ where $\nabla_{\mathbf{r}_i}$ is the gradient with respect to coordinate $\mathbf{r}_i$, and the index $i=0$ is reserved for the central planet whose mass is $m_0$. Next choose a coordinate system such that all velocities are measured with respect to the system’s barycenter, so $\mathbf{p}_0 = -\sum_{j=1}^N\mathbf{p}_j$, and the Hamiltonian becomes $$\begin{aligned}
H &=& \sum_{i=1}^{N}\left(\frac{p_i^2}{2m_i} + V_{i0}\right)
+ \sum_{i=1}^{N}\sum_{j>i}^NV_{ij}
+ \frac{1}{2m_0}\left(\sum_{i=1}^N\mathbf{p}_i\right)^2
\equiv H_A + H_B + H_C\end{aligned}$$ since $V_{ij} = V_{ji}$. This Hamiltonian has three parts,
\[H\] $$\begin{aligned}
H_A &=&
---
abstract: 'We investigate the motion of a thin rigid body in Stokes flow and the corresponding slender body approximation used to model sedimenting fibers. In particular, we derive a rigorous error bound comparing the rigid slender body approximation to the classical PDE for rigid motion in the case of a closed loop with constant radius. Our main tool is the slender body PDE framework established by the authors and D. Spirn in [@closed_loop; @free_ends], which we adapt to the rigid setting.'
author:
- |
Yoichiro Mori, Laurel Ohm [^1]\
*School of Mathematics, University of Minnesota, Minneapolis, MN 55455*
bibliography:
- 'rigid\_bib.bib'
title: 'An error bound for the slender body approximation of a thin, rigid fiber sedimenting in Stokes flow'
---
Introduction
============
Determining the motion of a three-dimensional rigid body sedimenting in a Stokesian fluid is an important problem in both theoretical and computational fluid mechanics. This motion is described by a classical PDE [@corona2017integral; @galdi1999steady; @weinberger1972variational], which we write below in the case of a thin rigid body. We use ${\mathcal{E}}({\bm{u}}) = \frac{1}{2}(\nabla{\bm{u}}+(\nabla{\bm{u}})^{\rm T})$ to denote the symmetric gradient, and $\bm{\sigma}=\bm{\sigma}({\bm{u}},p) = 2{\mathcal{E}}({\bm{u}})-p{\bf I}$ to denote the stress tensor. Let $\Sigma_\epsilon$ denote a closed loop slender body of radius $\epsilon>0$ (to be made precise in Section \[geometry\]) and let $\Omega_\epsilon ={\mathbb{R}}^3 \setminus \overline{\Sigma_\epsilon}$ and $\Gamma_\epsilon={\partial}\Sigma_\epsilon$ (see Figure \[fig:coord\_sys\]). The full PDE description of a slender body undergoing a rigid motion in Stokes flow may be written as follows: $$\label{rigid}
\begin{aligned}
-\Delta {\bm{u}}^{\rm r} +\nabla p^{\rm r} &=0 \hspace{2.6cm} \text{ in } \Omega_\epsilon \\
{{\rm{div}\,}}{\bm{u}}^{\rm r} &= 0 \hspace{2.6cm} \text{ in } \Omega_\epsilon \\
{\bm{u}}^{\rm r}({\bm{x}}) &= {\bm{v}}^{\rm r} + \bm{\omega}^{\rm r}\times {\bm{x}}, \qquad {\bm{x}}\in \Gamma_\epsilon \\
{\bm{u}}^{\rm r}({\bm{x}}) &\to 0 \hspace{2.6cm} \text{as }{\left\lvert {\bm{x}}\right\rvert}\to \infty
\end{aligned}$$ and $$\begin{aligned}
\int_{\Gamma_\epsilon} \bm{\sigma}^{\rm r}\bm{n} \; dS &= \bm{F}, \quad \int_{\Gamma_\epsilon} {\bm{x}}\times (\bm{\sigma}^{\rm r}\bm{n}) \; dS = \bm{T}.\end{aligned}$$ Here the total force $\bm{F}\in {\mathbb{R}}^3$ and torque $\bm{T}\in {\mathbb{R}}^3$ are given, and we aim to solve for the linear velocity ${\bm{v}}^{\rm r}\in {\mathbb{R}}^3$ and angular velocity $\bm{\omega}^{\rm r}\in{\mathbb{R}}^3$ of the body. Note that this boundary value problem is in fact valid for rigid bodies of arbitrary shape, but for the purposes of this paper we specifically consider a slender closed loop. Using the variational framework of [@galdi1999steady; @gonzalez2004dynamics; @weinberger1972variational], it can be shown that this boundary value problem is well-posed.\
On the computational side, there has been much recent interest in numerical simulations of rigid particle sedimentation [@guazzelli2006sedimentation; @guazzelli2011fluctuations], and various tools have been developed to facilitate these simulations [@corona2017integral; @jung2006periodic; @mitchell2015sedimentation].\
![The geometry of the rigid fiber may be parameterized with respect to the orthogonal frame ${\bm{e}}_t(s)$, ${\bm{e}}_{n_1}(s)$, ${\bm{e}}_{n_2}(s)$ defined in Section \[geometry\].[]{data-label="fig:coord_sys"}](SB_geometry_rigid "fig:")\
For a thin rigid body, a commonly-used tool for simplifying simulations is slender body theory, which exploits the thin geometry of the body by approximating the filament as a one-dimensional force density distributed along the fiber centerline. Slender body theory is a popular method for modeling sedimentation of thin fibers, both rigid [@butler2002dynamic; @park2010cloud; @saintillan2005smooth; @shin2009structure] and semi-flexible [@li2013sedimentation; @manikantan2014instability]. Here we will specifically consider the slender body theory established by Keller and Rubinow [@keller1976slender] and further developed in [@gotz2000interactions; @johnson1980improved; @tornberg2004simulating].\
Let ${\bm{X}}: {\mathbb{T}}\equiv {\mathbb{R}}/ {\mathbb{Z}}\to {\mathbb{R}}^3$ denote the coordinates of the slender body centerline, parameterized by arclength $s$ and defined more precisely in Section \[geometry\]. Given a line force density $\bm{f}^{\rm s}(s)$, $s\in {\mathbb{T}}$, the slender body approximation yields a direct expression approximating the velocity of the fiber, given by [@shelley2000stokesian]: $$\label{SBT_expr}
\begin{aligned}
{\bm{u}}^{\rm s}_{\rm C}(s) &= \bm{\Lambda}[\bm{f}^{\rm s}](s) + \bm{K}[\bm{f}^{\rm s}](s),\\
\bm{\Lambda}[\bm{f}](s) &:= \frac{1}{8\pi}\big[({\bf I}- 3{\bm{e}}_t{\bm{e}}_t^{\rm T})-2({\bf I}+{\bm{e}}_t{\bm{e}}_t^{\rm T}) \log(\pi\epsilon/4) \big]{\bm f}(s) \\
\bm{K}[\bm{f}](s) &:= \frac{1}{8\pi}\int_{{\mathbb{T}}} \left[ \left(\frac{{\bf I}}{|\bm{R}_0|}+ \frac{\bm{R}_0\bm{R}_0^{\rm T}}{|\bm{R}_0|^3}\right){\bm f}(s') - \frac{{\bf I}+{\bm{e}}_t(s){\bm{e}}_t(s)^{\rm T} }{|\sin (\pi(s-s'))/\pi|} {\bm f}(s)\right] \, ds'.
\end{aligned}$$ Here ${\bm{e}}_t(s)$ is the unit tangent vector to ${\bm{X}}(s)$ and $\bm{R}_0(s,s') = {\bm{X}}(s) - {\bm{X}}(s')$. The slender body approximation generally allows for bending and flexing of the filament along its centerline and requires specifying the one-dimensional force density over the length of the fiber centerline. If the fiber is constrained to be fully rigid, only the total force $\bm{F}$ and torque $\bm{T}$ must be specified, where $$\label{SBT_cond1}
\int_{{\mathbb{T}}} \bm{f}^{\rm s}(s) \, ds = \bm{F}, \quad \int_{{\mathbb{T}}} {\bm{X}}(s)\times \bm{f}^{\rm s}(s) \, ds = \bm{T}.$$ Additionally, we constrain the motion of the fiber centerline to be rigid, i.e. $$\label{SBT_cond2}
{\bm{u}}^{\rm s}_{\rm C}(s) = {\bm{v}}^{\rm s}+\bm{\omega}^{\rm s}\times {\bm{X}}(s)$$ for constant vectors ${\bm{v}}^{\rm s}$, $\bm{\omega}^{\rm s}$. These constraints give rise to a system of integral equations which must be solved to obtain the line force density along the slender body (see [@gustavsson2009gravity; @tornberg2006numerical]).\
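Once the centerline is discretized, the rigid constraints turn the slender body approximation into a square linear system for the nodal force densities together with $({\bm{v}}^{\rm s},\bm{\omega}^{\rm s})$. The sketch below (numpy; function names are ours) illustrates this structure under a simplifying assumption: only the local operator $\bm{\Lambda}$ is kept and the nonlocal integral $\bm{K}$ is dropped, so it is an illustration of the constrained solve rather than the full method of the references above.

```python
import numpy as np

def skew(a):
    """Matrix [a]_x such that skew(a) @ b = a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def rigid_sbt_local(X, et, eps, F, T):
    """Solve the rigid slender-body system keeping only the local
    operator Lambda (the nonlocal term K is omitted for brevity).
    Unknowns: f(s_i) (3N values), v (3), w (3).  Equations:
    Lambda f_i = v + w x X_i at each node, plus the total force
    and torque constraints."""
    N = X.shape[0]
    ds = 1.0 / N                      # uniform arclength spacing, unit-length loop
    c = -2.0 * np.log(np.pi * eps / 4.0)
    I = np.eye(3)
    n = 3 * N + 6
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(N):
        tt = np.outer(et[i], et[i])
        Lam = ((I - 3.0 * tt) + c * (I + tt)) / (8.0 * np.pi)
        r = slice(3 * i, 3 * i + 3)
        A[r, r] = Lam                        # Lambda f_i ...
        A[r, 3 * N:3 * N + 3] = -I           # ... - v ...
        A[r, 3 * N + 3:] = skew(X[i])        # ... + X_i x w  (= -(w x X_i)) = 0
        A[3 * N:3 * N + 3, r] = ds * I       # sum_i f_i ds = F
        A[3 * N + 3:, r] = ds * skew(X[i])   # sum_i X_i x f_i ds = T
    b[3 * N:3 * N + 3] = F
    b[3 * N + 3:] = T
    sol = np.linalg.solve(A, b)
    return sol[:3 * N].reshape(N, 3), sol[3 * N:3 * N + 3], sol[3 * N + 3:]
```

For a horizontal circular loop under a downward total force, symmetry gives broadside sedimentation with zero angular velocity, which provides a simple sanity check on the solve.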
The aim of this paper is to establish a rigorous error bound between the slender body approximation for rigid motion and the classical PDE describing the sedimentation of a rigid fiber immersed in Stokes flow. We show the following theorem.
\[rigid\_theorem\] Let $\Sigma_\epsilon$ be a slender body as defined in Section \[geometry\]. Suppose the total force $\bm{F}\in {\mathbb{R}}^3$ and torque $\bm{T}\in {\mathbb{R}}^3$ are given, and assume that the rigid slender body approximation is satisfied by some $\bm{f}^{\rm s}\in C^1({\mathbb{T}})$. Then the difference ${\bm{v}}^{\rm r}-{\bm{v}}^{\rm s}$, $\bm{\omega}^{\rm r}-\bm{\omega}^{\rm s}$ between the linear and angular velocities of true rigid motion and
---
abstract: 'If a long chain is held in a pot elevated a distance $h_1$ above the floor, and the end of the chain is then dragged over the rim of the pot and released, the chain flows under gravity down into a pile on the floor. Not only does the chain flow out of the pot, it also leaps above the pot in a “chain-fountain”. I predict and observe that the steady state shape of the fountain is an inverted catenary, and discuss how to apply boundary conditions to this solution. In the case of a level pot, the fountain shape is completely vertical. In this case I predict and observe both how fast the fountain grows to its steady state height, and how it grows $\propto t^2$ if there is no floor. The fountain is driven by an unexpected push force from the pot that acts on the link of chain about to come into motion. I confirm this by designing two new chains, one consisting of hollow cylinders threaded on a string and one consisting of heavy beads separated by long flexible threads. The former is predicted to produce a pot-push and hence a fountain, while the latter will not. I confirm these predictions experimentally. Finally I directly observe the anomalous push in a horizontal chain-pick up experiment.'
author:
- John S Biggins
title: Growth and Shape of a Chain Fountain
---
The mechanics of chains is one of the oldest fields in physics. Galileo observed that hanging chains approximate parabolas, particularly when the curvature is small[@galilei1974two], while the true shape was proved to be a catenary by Huygens, Leibniz and John Bernoulli[@lockwood1971book]. A chain hanging in a catenary is a structure supporting its weight with pure tension. In 1675 Hooke discovered that a thin arch supporting its own weight with pure compression must follow the inverted shape of a hanging chain[@hookedescription], that is, an inverted catenary. Ever since, architects from Wren to Gaudi have incorporated inverted catenary arches into their buildings and even used hanging strings to build inverted architectural prototypes. We might expect such a venerable and technologically important field to have few remaining surprises, but chain mechanics has recently produced several. A chain falling onto a table accelerates faster than $g$, leading inexorably to the conclusion that the table must pull down on the falling chain[@grewal2011chain; @hamm2010weight]. If a pile of chain rests on a surface, and the end is then pulled in the plane of the surface to deploy the chain, an unexpected noisy chain arch has been observed to form perpendicular to the surface, in the chain immediately beyond the pile[@Santagelochainarch], that is, in the portion of chain that has just come into motion. There is also recent work on the rich dynamics of whips and free ends[@shapeofawhip; @HannaSantangelofreeend; @tomaszewski2006motion].
The most recent surprise comes via Mould’s videos of a chain fountain[@mouldwebsite], shown in fig. \[photoanddiagram\]a, in which a chain not only flows from an elevated pot to the floor under gravity but leaps above the pot. These videos have surprised and delighted almost 3.5 million viewers. In this letter I demonstrate that the chain in such a fountain traces Hooke’s inverted catenary, but as a structure of pure tension stabilized by the motion of the chain.
![a) Steve Mould demonstrating a chain fountain. Photo courtesy of J. Sanderson. b) Diagram of a chain fountain. A chain with mass per unit length $\lambda$ flows at speed $v$ along a curved trajectory from a pot tilted to an angle $\theta_p$ and elevated to an height $h_1$, to the floor. The fountain has height $h_2$ and width $w$. At each point $x$ the chain has a height $y(x)$ a tension $T(x)$ and makes an angle $\theta(x)$ with the vertical.[]{data-label="photoanddiagram"}](photoanddiagram.pdf){width="30.00000%"}
In a chain fountain, the leaping of the chain above the pot requires that when a link of the chain is brought into motion, it must not only be pulled into motion by the moving chain but also pushed into motion by the pot[@BigginsWarnerChain]. This anomalous push is expected to arise whenever a pile of chain is deployed and, as such, has a wide range of potential applications. However, the analysis in [@BigginsWarnerChain] infers the existence of the anomalous push from a simplified model of a zero-width steady-state fountain, leading to questions about whether the anomalous push is an artifact of these assumptions. In this letter I consider fountains of finite width and the dynamics of fountain growth. The extended theory does not remove the need for an anomalous force, and explains the observed fountain behavior well. I also confirm the anomalous force hypothesis experimentally, both by direct observation in a horizontal pickup geometry, and by comparing the fountains made by radically different sorts of chain.
A non-vertical chain fountain is sketched in fig. \[photoanddiagram\]b. We expect that, after the fountain reaches the floor, it will tend to a steady shape. To find this equilibrium curve, consider an element of chain with horizontal extent $\mathrm{d}x$, which has length $\mathrm{d}s=\mathrm{d}x/\sin{(\theta)}$ and mass $\lambda \mathrm{d}s$. Tangentially there is no acceleration so the tension gradient balances gravity, $$T'(x)=\frac{\lambda g}{\sin{\theta}} \cos{\left(\theta\right)} .$$ Since $\cot{\left(\theta\right)}=y'(x)$ this can be integrated to give $$T(x)=\lambda g y +\lambda(v^2-c g),$$ where we have written the constant of integration as $\lambda(v^2-c g)$ and $c$ is a constant. Perpendicularly, there is the inward force $T(x)/r(x)$ (where $r(x)$ is the radius of curvature), a Laplace-pressure-like term that arises whenever a tension acts along a curved path. This force and gravity supply the centripetal acceleration: $$\frac{T(x)}{ r(x)}-\lambda g \sin{\left(\theta\right)} =\lambda \frac{v^2}{r(x)}.$$ Recalling that in Cartesians $1/r=y''(x)/(1+y'(x)^2)^{3/2}$ and $\sin{\left(\theta\right)}=1/\sqrt{1+y'(x)^2}$, this simplifies to $$(T(x)-\lambda v^2)y''(x)=g \lambda (1+y'(x)^2).\label{cateqn1}$$ Substituting in our result for $T(x)$ and solving for $y(x)$ reveals that a chain moving along its own length under gravity in an unchanging shape must trace a catenary [@Tripos_1854; @airy1858mechanical; @perkins1989theoretical]. Curiously, this result was first published as a question in the 1854 Cambridge University maths examination[@Tripos_1854]. In the case of the fountain, this catenary must be an inverted one, viz. $$y(x)=-a \cosh{\left(\left(x-b\right)/a\right)}+c$$ where $a$, $b$ are new constants of integration.
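The last step can be checked directly: substituting $T(x)=\lambda g y+\lambda(v^2-cg)$ into the perpendicular balance and plugging in the inverted catenary, the residual vanishes identically because $\cosh^2 u-\sinh^2 u=1$. A small numerical sketch of this check (parameter values are illustrative):

```python
import numpy as np

def catenary_residual(x, a, b, c, lam, g, v):
    """Residual of (T - lam v^2) y'' = g lam (1 + y'^2) for
    y = -a cosh((x-b)/a) + c and T = lam g y + lam (v^2 - c g)."""
    u = (x - b) / a
    y = -a * np.cosh(u) + c
    yp = -np.sinh(u)            # y'(x)
    ypp = -np.cosh(u) / a       # y''(x)
    T = lam * g * y + lam * (v**2 - c * g)
    return (T - lam * v**2) * ypp - g * lam * (1.0 + yp**2)
```

Evaluated on any grid, the residual is zero to machine precision, confirming that the inverted catenary solves the steady-state equations.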
The simplest inverted catenary is $y(x)=-\cosh{(x)}$. The above solution is simply this curve translated and “zoomed” by a factor $a$ equal to the radius of curvature of the catenary at its apex. All steady-state chain fountains should therefore produce shapes that, after zooming and translating, collapse onto $-\cosh(x)$. To test this, a 50m long brass ball-chain was put in a 1L beaker, elevated to 1.72m above the ground and tilted by an angle $\theta_p$. The end of the chain was then pulled over the rim and released, initiating a chain fountain. The experiment was repeated with different tilt angles, resulting in different fountain shapes, which were photographed towards the end of each run to ensure the fountain was in its steady state. Runs with significant tangles were disregarded. Two examples are shown in fig. \[chaincats\]a, one thin and one wide. The fountains undulate locally but macroscopically trace a catenary. In fig. \[chaincats\]b many chain fountains with different widths are rescaled onto a single inverted catenary, demonstrating that the chain fountain is well described by Hooke’s inverted catenary.
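Collapsing digitized fountain shapes onto $-\cosh(x)$ requires estimating $a$, $b$ and $c$ from the photographed centerline; a nonlinear least-squares fit is one simple way to do this. A sketch with synthetic “digitized” points (scipy assumed; the noise level and initial guess are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def inverted_catenary(x, a, b, c):
    return -a * np.cosh((x - b) / a) + c

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
a0, b0, c0 = 0.25, 0.5, 0.6                     # "true" shape parameters
y = inverted_catenary(x, a0, b0, c0) + rng.normal(0.0, 1e-3, x.size)

# fit a, b, c, then rescale onto the universal curve -cosh
(a, b, c), _ = curve_fit(inverted_catenary, x, y, p0=(0.2, 0.4, 0.5))
xs, ys = (x - b) / a, (y - c) / a               # should collapse onto -cosh(xs)
```

After rescaling, `ys` lies on $-\cosh(xs)$ up to the digitization noise, which is the collapse shown in fig. \[chaincats\]b.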
To determine the five parameters $a$, $b$, $c$, $v$ and $w$ (the width of the fountain, see fig. \[photoanddiagram\]b) we require five boundary conditions. The first two, $y(0)=0$ and $y(w)=-h_1$, fix the coordinate origin and the fountain drop. To find the remaining three we must examine the pickup and putdown processes.
In a time $\mathrm{d}t$ a length of chain $v\mathrm{d}t$ is picked up, acquiring a momentum $\lambda v^2 \mathrm{d}t$. If the links are accelerated solely by the tension, then the third boundary condition is $T(0)=\lambda v^2$. However, inspecting eqn. (\[cateqn1\]), we see that the right-hand side is
---
abstract: 'The $SL(2)$-type of any smooth, irreducible and unitarizable representation of $GL_n$ over a $p$-adic field was defined by Venkatesh. We provide a natural way to extend the definition to all smooth and irreducible representations. For unitarizable representations we show that the $SL(2)$-type of a representation is preserved under base change with respect to any finite extension. The Klyachko model of a smooth, irreducible and unitarizable representation $\pi$ of $GL_n$ depends only on the $SL(2)$-type of $\pi.$ As a consequence we observe that the Klyachko model of $\pi$ and of its base-change are of the same type.'
author:
- Omer Offen and Eitan Sayag
date:
-
-
-
title: 'The $SL(2)$-type and base change'
---
introduction
============
Let $F$ be a finite extension of ${\mathbb{Q}}_{p}$. In [@MR2133760], Venkatesh assigned a partition of $n,$ the *$SL(2)$-type* of $\pi,$ to any smooth, irreducible and unitarizable representation $\pi$ of $GL_n(F).$ For a representation of Arthur type the $SL(2)$-type encodes the combinatorial data in the Arthur parameter. In general, the $SL(2)$-type is defined in terms of Tadic’s classification of the unitary dual.
The reciprocity map for $GL_n(F)$ is a bijection from the set of isomorphism classes of smooth irreducible representations of $GL_n(F)$ to the set of isomorphism classes of $n$-dimensional Weil-Deligne representations (cf. [@MR1876802] and [@MR1738446]). Applying the reciprocity map we observe that there is a natural way to extend the definition of the $SL(2)$-type to all smooth and irreducible representations of $GL_n(F)$ (see Theorem \[thm: SL(2)-type\] and Remark \[rmk: SL(2)-type\]). The reciprocity map also allows the definition of base change with respect to any finite extension $E$ of $F.$ It is a map ${\operatorname{bc}}_{E/F}$ from isomorphism classes of smooth irreducible representations of $GL_n(F)$ to isomorphism classes of smooth irreducible representations of $GL_n(E)$ that is the ‘mirror image’ of restriction with respect to $E/F$ of Weil-Deligne representations. The content of Theorem \[thm: main\], our main result, is that for any smooth, irreducible and unitarizable representation $\pi$ of $GL_n(F)$ the representations $\pi$ and ${\operatorname{bc}}_{E/F}(\pi)$ have the same $SL(2)$-type.
In [@MR2332593], [@os], [@OS3] we studied the Klyachko models of smooth irreducible representations of $GL_{n}(F),$ that is, distinction of a representation with respect to certain subgroups that are a semidirect product of a unipotent group and a symplectic group. Our results are also described in terms of Tadic’s classification and depend, in fact, only on the $SL(2)$-type of a representation. For example, a smooth, irreducible and unitarizable representation $\pi$ of $GL_{2n}(F)$ is $Sp_{2n}(F)$-distinguished, i.e. it satisfies ${\operatorname{Hom}}_{Sp_{2n}(F)}(\pi,{\mathbb{C}}) \ne 0,$ if and only if the $SL(2)$-type of $\pi$ consists entirely of even parts (and in this case ${\operatorname{Hom}}_{Sp_{2n}(F)}(\pi,{\mathbb{C}})$ is one dimensional [@MR1078382 Theorem 2.4.2]). For unitarizable representations, our results on Klyachko models are reinterpreted here in terms of the $SL(2)$-type. As a consequence we show that Klyachko models are preserved under base-change with respect to any finite extension. In particular, we have
\[thm: symplectic main\] Let $E/F$ be a finite extension of $p$-adic fields. A smooth, irreducible and unitarizable representation $\pi$ of $GL_{2n}(F)$ is $Sp_{2n}(F)$-distinguished if and only if ${\operatorname{bc}}_{E/F}(\pi)$ is $Sp_{2n}(E)$-distinguished.
The rest of this note is organized as follows. After setting some general notation in Section \[sec: notation\], in Section \[sec: bc\] we recall the definition of the reciprocity map. In Section \[sec: SL(2)-type\] we recall the definition of Venkatesh for the $SL(2)$-type of a unitarizable representation and extend it to all smooth irreducible representations. We recall (and reformulate in terms of the $SL(2)$-type) our results on symplectic (and more generally on Klyachko) models in Section \[sec: Klyachko\]. Our main observation Theorem \[thm: main\] and its application to Klyachko models Corollary \[cor: main\] are stated in Section \[sec: statements\] and proved in Section \[sec: proofs\]. The main theorem says that base change respects $SL(2)$-types and its corollary says that base change respects Klyachko types. Theorem \[thm: symplectic main\] is a special case where the Klyachko type is purely symplectic.
Notation {#sec: notation}
========
Let $F$ be a finite extension of ${\mathbb{Q}}_p$ for some prime number $p$ and let ${\left|{\,\cdot}\right|}_F:F^\times \to {\mathbb{C}}^\times$ denote the standard absolute value normalized so that inverses of uniformizers are mapped to the size of the residue field. Denote by $W_F$ the Weil group of $F$ and by $I_F$ the inertia subgroup of $W_F.$ We normalize the reciprocity map $T_F:W_F \to F^\times,$ given by local class field theory, so that geometric Frobenius elements are mapped to uniformizers. The map $T_F$ defines an isomorphism from the abelianization $W_F^{ab}$ of $W_F$ to $F^\times$ (this is the inverse of the Artin map). Let ${\left|{\,\cdot}\right|}_{W_F}={\left|{\,\cdot}\right|}_{F}\circ T_F$ denote the associated absolute value on $W_F.$
Denote by ${{\bf 1}}_\Omega$ the characteristic function of a set $\Omega.$ Let ${\operatorname{MS}}_{{\operatorname{fin}}}(\Omega)$ be the set of finite multisets of elements in $\Omega,$ that is, the set of functions $f:\Omega \to
{\mathbb{Z}}_{\ge 0}$ of finite support. When convenient we will also denote $f$ by $\{\omega_1,\dots,\omega_1,\omega_2,\dots,\omega_2,\dots\}$ where $\omega\in \Omega$ is repeated $f(\omega)$ times. Let ${\mathcal{P}}={\operatorname{MS}}_{{\operatorname{fin}}}({\mathbb{Z}}_{>0})$ be the set of partitions of positive integers and let $${\mathcal{P}}(n)=\{f \in {\mathcal{P}}: \sum_{k=1}^\infty k\,f(k)=n\}$$ denote the subset of partitions of $n.$ For $n,\,m \in {\mathbb{Z}}_{>0}$ let $(n)_m=m\,{{\bf 1}}_n=\{n,\dots,n\}$ be the partition of $nm$ with ‘$m$ parts of size $n$’. Let ${\operatorname{odd}}:{\mathcal{P}}\to {\mathbb{Z}}_{\ge 0}$ be defined by $${\operatorname{odd}}(f)=\sum_{k=0}^\infty f(2k+1),$$ i.e. ${\operatorname{odd}}(f)$ is the number of odd parts of the partition $f.$
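The multiset formalism above (partitions as functions of finite support, the rectangle $(n)_m$ and the counter ${\operatorname{odd}}$) can be made concrete in a few lines of Python; this is an illustrative sketch with our own helper names, also exposing the parity condition on parts that governs symplectic distinction in Section \[sec: Klyachko\].

```python
from collections import Counter

def partition(*parts):
    """A finite multiset f: Z_{>0} -> Z_{>=0}, e.g. partition(3, 1, 1)."""
    return Counter(parts)

def size(f):
    """The n with f in P(n): the sum over parts k of k * f(k)."""
    return sum(k * m for k, m in f.items())

def odd(f):
    """odd(f): the number of odd parts of the partition f."""
    return sum(m for k, m in f.items() if k % 2 == 1)

def rect(n, m):
    """(n)_m = {n, ..., n}: m parts of size n, a partition of n*m."""
    return Counter({n: m})
```

In this encoding the symplectic criterion recalled in Section \[sec: Klyachko\] reads `odd(f) == 0`.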
Reciprocity and base-change for $GL_n(F)$ {#sec: bc}
=========================================
Weil-Deligne representations
----------------------------
An $n$-dimensional *Weil-Deligne* representation is a pair $((\rho,V),N)$ where $(\rho,V)$ is an $n$-dimensional representation of $W_{F}$ that decomposes as a direct sum of irreducible representations and $N:V \to V$ is a linear operator such that $${\left|{w}\right|}_{W_F}\,N \circ \rho(w)=\rho(w)\circ N,\ w \in
W_F.$$ The map $((\rho,V),N) \mapsto ([\rho],f),$ where $[\rho]$ denotes the isomorphism class of the $n$-dimensional representation $(\rho,V)$ of $W_F$ and $f \in {\mathcal{P}}(n)$ is the partition of $n$ associated to the Jordan decomposition of $N,$ defines an injective map on isomorphism classes of Weil-Deligne representations. Denote its image by ${\mathcal{G}}_{F}(n).$ In this way we identify the set ${\mathcal{G}}_F(n)$ with the set of isomorphism classes of $n$-dimensional Weil-Deligne representations. Let $P_{F,n}: {\mathcal{G}}_F(n) \to {\mathcal{P}}(n)$ be the projection to the second
---
abstract: |
A gambler moves on the vertices $1, \ldots, n$ of a graph using the probability distribution $p_{1}, \ldots, p_{n}$. A cop pursues the gambler on the graph, only being able to move between adjacent vertices. What is the expected number of moves that the gambler can make until the cop catches them?
Komarov and Winkler proved an upper bound of approximately $1.97n$ for the expected capture time on any connected $n$-vertex graph when the cop does not know the gambler’s distribution. We improve this upper bound to approximately $1.95n$ by modifying the cop’s pursuit algorithm.
author:
- |
Jesse Geneson\
geneson@gmail.com
title: 'An anti-incursion algorithm for unknown probabilistic adversaries on connected graphs'
---
Introduction
============
Games with cops and robbers on graphs, which can be applied for designing anti-incursion programs, have been studied for several decades [@T1; @T2; @T3; @T4; @NW; @Q]. We investigate a version of the game where the adversary moves among the vertices $1, \ldots, n$ following a probability distribution $p_{1}, \ldots, p_{n}$. Before the game starts, the cop picks and occupies a vertex from $G$. In each round of the game, the cop selects and moves to an adjacent vertex or stays at the same vertex. The gambler chooses to occupy a vertex randomly based on a time-independent distribution, not restricted to only adjacent vertices.
Whenever both players occupy the same vertex at the same time, the cop wins. The gambler is called a known gambler if the cop knows their probability distribution. Otherwise the gambler is called unknown.
Gambler-pursuit games model anti-incursion programs navigating a linked list of ports, trying to minimize interception time for enemy packets. Komarov and Winkler proved that the expected capture time on any connected $n$-vertex graph is exactly $n$ for a known gambler [@KW], assuming that both players use optimal strategies. For an unknown gambler, Komarov and Winkler proved an upper bound of approximately $1.97n$ [@KW].
Komarov and Winkler conjectured that the general upper bound for the unknown gambler on a connected $n$-vertex graph can be improved from about $1.97n$ to $3n/2$, and that the star is the worst case for this bound. In Sections \[unkn\] and \[unkn1\], we improve the upper bound for the unknown gambler’s expected capture time to approximately $1.95n$ by using a different strategy for the cop.
Unknown Gambler Pursuit Algorithm {#unkn}
=================================
Let $G$ be a connected $n$-vertex graph. As in [@KW], let $T$ be a spanning subtree of $G$. We describe the cop’s pursuit algorithm, and then we prove an upper bound of approximately $1.95n$ on the expected capture time.
Suppose that the cop performs a depth first search of $T$, except the cop stays at some leaves for two turns instead of one. Specifically, the cop uniformly at random selects a subset $U$ of ${\lceil0.72912 n\rceil}$ vertices and stays at the vertices in $U$ for an extra turn if the vertices are leaves. If there is a vertex $v$ in $U$ that is not a leaf, then the depth first search would already go twice through $v$, so the cop does not need to stay an extra turn at $v$. After the proof, we explain the reason for using the number $0.72912$.
The cop flips a coin to decide whether to perform the depth first search forward or backward. Thus the total number of turns in a single depth first search (including the extra turns for the leaves in $U$) is at most $1+2(n-1)+{\lceil0.72912 n\rceil} \leq 2.72912 n$. The search is repeated until capture. Since the cop flips a coin to decide whether to search forward or backward, the expected number of turns in the successful depth first search is at most $1.36456n$.
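The turn count above is elementary arithmetic: one starting turn, two traversals of each of the $n-1$ tree edges, and one extra stay for each of the ${\lceil0.72912 n\rceil}$ selected vertices. A quick numeric sketch of the bound:

```python
import math

def max_turns_single_search(n):
    """Upper bound on the number of turns in one depth-first search of
    the spanning tree: one starting turn, two traversals of each of
    the n-1 tree edges, and one extra stay at each of the
    ceil(0.72912 n) vertices chosen for U."""
    return 1 + 2 * (n - 1) + math.ceil(0.72912 * n)

# 1 + 2(n-1) + ceil(0.72912 n) <= 2.72912 n for every n >= 1,
# since ceil(x) <= x + 1 cancels the -1 from 1 + 2(n-1) = 2n - 1.
```

Averaging over the forward/backward coin flip then halves the expected position of the capture within the final search, giving the $1.36456n$ figure.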
Analysis {#unkn1}
========
Let the vertices of the graph be named $1, \ldots, n$. Suppose that the unknown gambler chooses vertex $i$ with probability $p_{i}$. We split the proof into two cases to show that the probability of evasion in a single depth first search is at most $0.17745$.
If there are two vertices $i$ and $j$ that the cop visits at least twice each such that $p_{i}+p_{j} \geq 0.732$, then the probability of evasion in a single depth first search is less than $0.162$.
If the cop visits $i$ and $j$ both at least twice, then the probability of evasion is at most $(1-p_{i})^{2}(1-p_{j})^{2} \leq (1-p_{i})^{2}(0.268+p_{i})^{2}$, which has a maximum value of approximately $0.16157$ on the interval $[0,1]$ at $p_{i} = 0.366$.
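The numerical constants in this case are easy to reproduce by direct evaluation; a quick sketch (the value $0.17745$ reappears in the complementary case below):

```python
import math

# Maximize h(p) = (1-p)^2 (0.268+p)^2 on [0, 1].  h = g^2 with
# g(p) = (1-p)(0.268+p), so the maximum is where g'(p) = 0,
# i.e. p = (1 - 0.268)/2 = 0.366.
p_star = (1.0 - 0.268) / 2.0
h_max = ((1.0 - p_star) * (0.268 + p_star)) ** 2   # approximately 0.16157

# Evasion bound used in the complementary case.
evasion_bound = math.exp(-1.72912)                 # below 0.17745
```

A grid search over $[0,1]$ confirms that $p=0.366$ is indeed the maximizer.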
Next we show that the probability of evasion is at most $e^{-1.72912} < 0.17745$ when there are no vertices $i$ and $j$ that the cop visits at least twice each such that $p_{i}+p_{j} \geq 0.732$.
\[didj\] Suppose that there are no vertices $x$ and $y$ that the cop visits at least twice each such that $p_{x}+p_{y} \geq 0.732$. Then the probability of evasion for a single depth first search is at most $(1-\frac{1}{n})^{1.72912 n}$.
Let $i, j$ be any two vertices of $G$, suppose that $p_{i}+p_{j} = a$, and let $t_{1}, \ldots, t_{n-2}$ denote the vertices of $G$ not equal to $i$ or $j$. Given that there are no vertices $x$ and $y$ that the cop visits at least twice each such that $p_{x}+p_{y} \geq 0.732$, the probability of evasion for a single depth first search can be bounded by performing the following reduction to obtain shorter searches called $S$ and $S'$.
First we define $S'$. If the cop visits a vertex $v$ not in $U$ more than once in the original depth first search, skip the cop’s visits to $v$ after the first visit to $v$ in $S'$; if the cop visits a vertex $v$ in $U$ more than twice in the original depth first search, skip the cop’s visits to $v$ after the second visit to $v$ in $S'$. Note that with $S'$, vertices in $U$ are visited exactly twice, and vertices not in $U$ are visited exactly once. The number of turns in $S'$ is thus $n+{\lceil0.72912n\rceil}={\lceil1.72912n\rceil}$. To obtain $S$ from $S'$, skip all visits to vertices $i$ and $j$.
Note that the reduction can only increase the probability of evasion, so the probability of evasion for the original depth first search is at most the probability of evasion for $S'$, which is at most the probability of evasion for $S$. Note also that the searches $S$ or $S'$ could be impossible for the cop to perform, since consecutive vertices in $S$ or $S'$ might not be adjacent. The searches $S$ and $S'$ are only used in this proof to obtain an upper bound on the probability of evasion for the original depth first search.
For $c, d \in \left\{1,2\right\}$, define $f_{c,d}(p_{t_{1}}, \ldots, p_{t_{n-2}})$ to be the probability that the gambler evades the cop in search $S$ and that the cop makes $c$ visits to vertex $i$ and $d$ visits to vertex $j$ in search $S'$, conditioned on the fact that there are no vertices $x$ and $y$ that the cop visits at least twice each such that $p_{x}+p_{y} \geq 0.732$ in the original depth first search. Then the probability of evasion in search $S'$ is $p = p(p_{i},p_{j},p_{t_{1}},\ldots,p_{t_{n-2}})$ of the form $(1-p_{i})(1-a+p_{i}) f_{1,1}(p_{t_{1}}, \ldots, p_{t_{n-2}})+(1-p_{i})^{2}(1-a+p_{i}) f_{2,1}(p_{t_{1}}, \ldots, p_{t_{n-2}})+(1-p_{i})(1-a+p_{i})^{2} f_{1,2}(p_{t_{1}}, \ldots, p_{t_{n-2}}
---
bibliography:
- 'BrainrefRMTGFP.bib'
---
[**Organization and hierarchy of the human functional brain network lead to a chain-like core** ]{}
Rossana Mastrandrea$^{*1}$, Andrea Gabrielli$^{1,2}$, Fabrizio Piras$^{3,4}$, Gianfranco Spalletta$^{4,5}$, Guido Caldarelli$^{1,2}$, Tommaso Gili$^{3,4}$\
**[1]{} IMT School for Advanced Studies, Lucca, piazza S. Ponziano 6, 55100 Lucca, Italy\
**[2]{} Istituto dei Sistemi Complessi (ISC) - CNR, UoS Sapienza, Dipartimento di Fisica, Universitá Sapienza; P.le Aldo Moro 5, 00185 - Rome, Italy\
**[3]{} Enrico Fermi Center, Piazza del Viminale 1, 00184 Rome, Italy\
**[4]{} IRCCS Fondazione Santa Lucia, Via Ardeatina 305, 00179 Rome, Italy\
**[5]{} Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, Tx, USA\
$*$ E-mail: rossana.mastrandrea@imtlucca.it\
**********
Abstract {#abstract .unnumbered}
========
The brain is a paradigmatic example of a complex system, as its functionality emerges as a global property of local mesoscopic and microscopic interactions. Complex network theory allows one to elicit the functional architecture of the brain in terms of links (correlations) between nodes (grey matter regions) and to extract information out of the noise. Here we present the analysis of functional magnetic resonance imaging data from forty healthy humans during the resting condition, for the investigation of the basal scaffold of the functional brain network organization. We show how brain regions tend to coordinate by forming a highly hierarchical chain-like structure of homogeneously clustered anatomical areas. A maximum spanning tree approach revealed the centrality of the occipital cortex and the peculiar aggregation of cerebellar regions to form a closed core. We also report the hierarchy of network segregation and the level of cluster integration as a function of the connectivity strength between brain regions.
Introduction {#introduction .unnumbered}
============
The intrinsic functional architecture of the brain and its changes due to cognitive engagement, ageing and diseases are nodal topics in neuroscience, attracting considerable attention from many disciplines of scientific investigation. Complex network theory [@caldarelli2007scale; @barabasi2009scale; @newman2010networks] provides tools representing the state-of-the-art of multivariate analysis of local cortical and subcortical mesoscopic interactions. The growing amount of data available from a variety of sources has shown clearly that the complex network description of both the structural and the functional organization of the brain demonstrates a repertoire of unexpected properties of brain connectomics [@bullmore2009complex]. Accordingly, network theory allows a description of brain architecture in terms of patterns of communication between brain regions, treated as evolving networks, and associates this evolution with behavioral outcomes [@bassett2011dynamic]. Brain networks are characterized by a balance of dense relationships among areas highly engaged in processing roles, as well as sparser relationships between regions belonging to systems with different processing roles. This segregation facilitates communication among brain areas that may be distributed anatomically but are needed for sets of processing operations [@sporns2013network]. Along with that, the integrated functional organization of the brain involves each network component executing a discrete cognitive function mostly autonomously or with the support of other components, where the computational load in one is not heavily influenced by processing in the others [@bertolero2015modular]. In this quest for a constantly improving quantitative description of the brain and of the cardinal features of its functioning, complex network theory plays a crucial role [@rubinov2010complex; @deco2011emerging; @van2011rich; @sporns2012simple].
Specifically, it proved to be able to elicit both the scaffold of the mutual interactions among different areas in healthy brains [@bullmore2009complex; @bassett2016small] and the local failure of the global functioning in diseased brains [@Bassett2009; @rosazza2011resting; @aerts2016brain]. Network representation describes the brain as a graph with a set of nodes - a variable number of brain areas (from $10^{2}$ to $10^{4}$) - connected by links representing functional, structural or effective interactions. The use of complex network theory passed progressively from an initial assessment of basic topological properties [@salvador2005neurophysiological; @van2008small; @telesford2010reproducibility] to a more sophisticated description of global features of the brain, such as small-worldness [@achard2006resilient], rich-club organization [@van2011rich] and topology [@petri2014homological; @tiz2016]. However, a clear understanding of the functional advantage of a network organization in the brain, the characterization of its substrate and a description of the network structure as a function of the level of interaction between brain regions are still missing. In this paper, we investigate resting state functional networks, where links represent the strength of correlation between time series of spontaneous neural activity as measured by blood oxygen-level-dependent (BOLD) functional MRI (fMRI) [@eguiluz2005]. Specifically, we interpret a functional connection between two nodes as the magnitude of the synchrony of their low-frequency oscillations, which is associated with the modulation of activity in one node by another [@wang2012electrophysiological; @honey2012slow], largely constrained by anatomical connectivity [@hagmann2008mapping; @honey2009predicting]. A percolation analysis of the functional network [@gallos2012small; @tiz2016] is used to highlight the progressive engagement of brain regions in the whole network as a function of the connectivity strength.
Subsequently, by means of the maximum spanning forest (MSF) and the maximum spanning tree (MST) representations [@caldarelli2007scale; @caldarelli2012network] we obtain the basal scaffold of the brain network, which proves to be characterized by a linear backbone in which a few nodes (cerebellar and occipital regions) play a central role, even as the spatial resolution is progressively increased or the node size reduced.
Results {#results .unnumbered}
=======
Percolation Analysis {#percolation-analysis .unnumbered}
--------------------
The representative human functional brain network is often analyzed by introducing specific thresholds to map the fully-connected correlation matrix into a sparse binary matrix [@bullmore2009complex]. Here, to avoid any arbitrary assumption, we perform a percolation analysis [@gallos2012small; @tiz2016] on the whole network. We rank correlations in increasing order; one at a time, we remove from the network the link corresponding to the observed value and explore the global organization of the remaining network. Specifically, in fig.\[perc\] (a) we show the number of connected components, updated each time a new link is removed. The emergence and the number of *plateaux* shed light on the hierarchical structure of the network; their length unveils the intrinsic stability of certain network configurations after the removal of links. For the human functional brain network a remarkable hierarchy in the disaggregation process emerges from the comparison with a proper null model. The same percolation analysis is performed on the ensemble of 100 randomizations (Methods) of the observed correlation matrix, showing a faster disaggregation into disconnected components, with plateaux either absent or short.
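The percolation procedure described here — remove links in increasing order of correlation and track the number of connected components — is straightforward to reproduce. A sketch assuming `networkx`, with an illustrative correlation matrix; plateaux correspond to runs of equal consecutive values in the returned list:

```python
import numpy as np
import networkx as nx

def percolation_curve(C):
    """Given a symmetric correlation matrix C (n x n), remove edges
    one at a time in increasing order of |correlation| and record the
    number of connected components after each removal."""
    n = C.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    edges = [(i, j, C[i, j]) for i in range(n) for j in range(i + 1, n)]
    G.add_weighted_edges_from(edges)
    counts = []
    for i, j, _ in sorted(edges, key=lambda e: abs(e[2])):
        G.remove_edge(i, j)
        counts.append(nx.number_connected_components(G))
    return counts
```

On a toy matrix with two strongly correlated blocks joined by weak links, the curve stays at one component while the weak cross-links are removed, then jumps to two — a miniature plateau of the kind discussed above.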
We compute the distribution of plateaux lengths by looking at the increment of correlation values when the network passes from $n$ to $n+1$ components and show it in fig.\[perc\] (b). In the same figure we also report the average plateau length computed on the ensemble of randomizations. A great variability characterizes the percolation curve of the real network, with significant deviations of the plateaux lengths from the random case.
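The removal procedure can be sketched compactly. The following Python sketch (our illustration, not the authors' code) computes the percolation curve; rather than recomputing components after each removal, it runs the equivalent reverse process, adding links strongest-first with a union-find structure and reading the component counts backwards:

```python
def percolation_curve(corr):
    """Entry k = number of connected components after the k weakest
    links of the fully connected correlation graph are removed."""
    n = len(corr)
    edges = sorted(((corr[i][j], i, j) for i in range(n)
                    for j in range(i + 1, n)), reverse=True)
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    counts, ncomp = [], n
    for _, i, j in edges:          # add links strongest-first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            ncomp -= 1
        counts.append(ncomp)
    return counts[::-1] + [n]      # reverse: weakest-first removal
```

Long runs of equal values in the returned curve correspond to the *plateaux* discussed above.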
Chain-like modules {#chain-like-modules .unnumbered}
------------------
The percolation analysis highlights the existence of a non-trivial functional organization of the human brain network. Here, we investigate it considering a filtered network where, for each area, all weighted links but the strongest are discarded. This approach gives rise to 36 disconnected components forming a Maximum Spanning Forest (Methods) with new information on link directionality. It simply indicates that each source points toward its maximally correlated brain area, which is not necessarily reciprocated. Figure \[MSF\] shows the abundance of modules with size 2 and reciprocated links: these are mainly mirror areas of the right and left brain hemispheres or adjacent/very close ROIs belonging to the same hemisphere. Nodes are coloured according to the anatomical regions reported in figure \[MSF\] (a) (detailed names of ROIs in table 1 in SI). A noteworthy result concerns groups of size greater than 3 exhibiting a chain-like structure, sometimes very long, as for the Cerebellum. This implies that most of the nodes in the MSF have in-degree[^1] equal to one, few equal to 2, and very few greater than 2. Furthermore, nodes tend to connect with nodes belonging to the same anatomical region. The only exception is represented by the Temporal Lobe: ROIs in this region are linked with all the other anatomical areas except for the Cerebellum and the Deep Grey Matter ones.
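The filtering step can be sketched in a few lines (an illustrative Python sketch of our own, assuming `corr` is a symmetric correlation matrix given as a nested list with zero diagonal; not the authors' implementation):

```python
def maximum_spanning_forest(corr):
    """Keep, for each node, only the directed link toward its
    maximally correlated partner (the MSF filtering step)."""
    n = len(corr)
    return {i: max((j for j in range(n) if j != i), key=lambda j: corr[i][j])
            for i in range(n)}

def reciprocated(links):
    """Size-2 modules: pairs of nodes that choose each other."""
    return {tuple(sorted((i, j))) for i, j in links.items() if links[j] == i}
```

On a toy 4-node matrix, a chain 3 → 2 → 1 ↔ 0 appears, matching the chain-like organization described above.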
We build the MSF for each of the 100 randomizations of the real network, obtaining a number of components varying in the range $[12,24]$. All randomized correlation matrices exhibit a star-like organization of components in the MSF, while the number of modules of size 2 is dramatically reduced. Moreover, the colors of linked nodes are completely random. In figure S1
---
abstract: 'We study the production of three gauge bosons at the next generation of linear $e^+e^-$ colliders operating in the $\gamma\gamma$ mode. The processes $\gamma\gamma \rightarrow W^+W^-V$ ($V=Z^0$, or $\gamma$) can provide direct information about the quartic gauge-boson couplings. We analyze the total cross section as well as several dynamical distributions of the final state particles including the effect of kinematical cuts. We find that a linear $e^+e^-$ machine operating in the $\gamma\gamma$ mode will produce 5–10 times more three-gauge-boson states compared to the standard $e^+e^-$ mode at high energies.'
address: |
$^a$ Instituto de Física, Universidade de São Paulo,\
C.P. 20516, 01498-970 São Paulo, Brazil\
$^b$ Instituto de Física Teórica, Universidade Estadual Paulista,\
Rua Pamplona 145, 01405-900 São Paulo, Brazil.
author:
- |
F. T. Brandt $^a$, O. J. P. Éboli $^a$, E. M. Gregores $^b$,\
M. B. Magro $^a$, P. G. Mercadante $^a$, and S. F. Novaes $^b$
title: 'Triple Vector Boson Processes in $\gamma\gamma$ Colliders'
---
Introduction {#sec:int}
============
The multiple vector-boson production will be a crucial test of the gauge structure of the Standard Model since the triple and quartic vector-boson couplings involved in this kind of reaction are strictly constrained by the $SU(2)_L \otimes U(1)_Y$ gauge invariance. Any small deviation from the Standard Model predictions for these couplings spoils the delicate cancellations among the various diagrams in the high energy behaviour, giving rise to an anomalous growth of the cross section with energy. It is important to measure the vector-boson self-couplings and look for deviations from the Standard Model, which would provide indications of new physics.
The production of several vector bosons is the ideal place to search directly for any anomalous behaviour of the triple and quartic couplings. The reaction $e^+ e^- \rightarrow W^+ W^-$ will be accessible at LEP200, and important information about the $WW\gamma$ and $WWZ$ vertices will be available in the near future [@ano:ee]. Nevertheless, due to the limited center of mass energy available, we will have to wait for colliders with higher center of mass energy in order to produce a final state with three or more gauge bosons and to test the quartic gauge-boson coupling. The measurement of the three-vector-boson production cross section can provide a non-trivial test of the Standard Model that is complementary to the analyses of the production of vector-boson pairs. Previously, the cross sections for triple gauge boson production in the framework of the Standard Model were presented for $e^+e^-$ colliders [@bar:plb; @bar:num; @gunion] and hadronic colliders [@bar:plb; @golden].
An interesting option that is attracting a lot of attention nowadays is the possibility of transforming a linear $e^+e^-$ collider into a $\gamma\gamma$ collider. By using the old idea of Compton laser backscattering [@las0], it is possible to obtain very energetic photons from an electron or positron beam. The scattering of a laser of a few eV against an electron beam is able to give rise to a scattered photon beam carrying almost all the parent electron energy, with a luminosity similar to that of the electron beam [@laser]. This mechanism can be employed in the next generation of $e^+e^-$ linear colliders [@pal; @bur] (NLC), which will reach a center of mass energy of 500–2000 GeV with a luminosity of $\sim 10^{33}$ cm$^{-2}$ s$^{-1}$. Such machines operating in the $\gamma\gamma$ mode will be able to study multiple vector boson production with high statistics.
In this work, we examine the production of three vector bosons in $\gamma\gamma$ collisions through the reactions $$\eqnum{I}
\label{z}
\gamma + \gamma \rightarrow W^+ + W^- + Z^0 \; ,$$ $$\eqnum{II}
\label{g}
\gamma + \gamma \rightarrow W^+ + W^- + \gamma \; .$$ These processes involve only interactions between the gauge bosons, making more evident any deviation from the predictions of the Standard Model gauge structure. Besides that, there is no tree-level contribution involving the Higgs boson, which avoids all the uncertainties coming from the scalar sector, such as the Higgs boson mass. Nevertheless, the production of multiple longitudinal gauge bosons can shed light on the symmetry breaking mechanism even when there is no contribution coming from the standard Higgs boson. For instance, in models where the electroweak-symmetry breaking sector is strongly interacting there is an enhancement of this production [@golden; @strong].
We analyze the total cross section of the processes above, as well as the dynamical distributions of the final state vector bosons. We concentrate on final states where the $W$ and $Z^0$ decay into identifiable final states. We conclude that for a center of mass energy $\sqrt{s} \gtrsim 500$ GeV and an annual integrated luminosity of 10 fb$^{-1}$, there will be a promising number of fully reconstructible events. Moreover, we find that a linear $e^+e^-$ machine operating in the $\gamma\gamma$ mode will produce 5–10 times more three-gauge-boson states compared to the standard $e^+e^-$ mode at high energies.
The outline is as follows. In Sec. \[sec:res\], we introduce the laser backscattering spectrum, and present the details of the calculational method. Section \[cs:dis\] contains our results for the total cross section and the kinematical distributions of the final state gauge bosons for center of mass energies $\sqrt{s} = 0.5$ and $1$ TeV. This paper is supplemented by an appendix which gives the invariant amplitudes for the above processes.
Calculational Method {#sec:res}
====================
The cross section for the triple-vector-boson production via $\gamma\gamma$ fusion can be obtained by folding the elementary cross section for the subprocesses $\gamma\gamma
\rightarrow WWV$ ($V= Z^0,~ \gamma$) with the photon luminosity ($dL_{\gamma\gamma}/dz$), $$d\sigma (e^+e^-\rightarrow \gamma\gamma \rightarrow WWV)(s) =
\int_{z_{\text{min}}}^{z_{\text{max}}} dz ~ \frac{dL_{\gamma\gamma}}{dz} ~
d \hat\sigma (\gamma\gamma \rightarrow WWV) (\hat s=z^2 s) \; ,$$ where $\sqrt{s}$ ($\sqrt{\hat{s}}$) is the $e^+e^-$ ($\gamma\gamma$) center of mass energy and $z^2= \tau \equiv
\hat{s}/s$. Assuming that the whole electron beam is converted into photons via the laser backscattering mechanism, the relation connecting the photon structure function $F_{\gamma/e} (x,\xi)$ to the photon luminosity is $$\frac{d L_{\gamma\gamma}}{dz} = 2 ~ \sqrt{\tau} ~
\int_{\tau/x_{\text{max}}}^{x_{\text{max}}} \frac{dx}{x}
F_{\gamma/e} (x,\xi)F_{\gamma/e} (\tau/x,\xi) \; .
\label{lum}$$ For unpolarized beams the photon-distribution function [@laser] is given by $$F_{\gamma/e} (x,\xi) \equiv \frac{1}{\sigma_c} \frac{d\sigma_c}{dx} =
\frac{1}{D(\xi)} \left[ 1 - x + \frac{1}{1-x} - \frac{4x}{\xi (1-x)} +
\frac{4
x^2}{\xi^2 (1-x)^2} \right] \; ,
\label{f:l}$$ with $$D(\xi) = \left(1 - \frac{4}{\xi} - \frac{8}{\xi^2} \right) \ln (1 + \xi) +
\frac{1}{2} + \frac{8}{\xi} - \frac{1}{2(1 + \xi)^2} \; ,$$ where $\sigma_c$ is the Compton cross section, $\xi \simeq 4
E\omega_0/m_e^2$, $m_e$ and $E$ are the electron mass and energy respectively, and $\omega_0$ is the laser-photon energy. The fraction $x$ represents the ratio between the scattered photon and initial electron energy for the backscattered photons traveling along the initial electron direction. The maximum value of $x$ is $$x_{\text{max}} = \frac{\omega_{\text{max}}}{E}
= \frac{\xi}{1+\xi} \; ,$$ with $\omega_{\text{max}}$ being the maximum scattered photon energy.
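The folding of eqs. (\[lum\])–(\[f:l\]) is straightforward to evaluate numerically. The sketch below (our illustration; the midpoint quadrature and the sample value $\xi = 4.8$ used in the test are our own choices, not taken from the text) implements $F_{\gamma/e}$ and $dL_{\gamma\gamma}/dz$:

```python
import math

def D(xi):
    """Normalization D(xi) of the backscattered-photon spectrum."""
    return ((1 - 4/xi - 8/xi**2) * math.log(1 + xi)
            + 0.5 + 8/xi - 1/(2 * (1 + xi)**2))

def F(x, xi):
    """Photon distribution function F_{gamma/e}(x, xi); zero above x_max."""
    if x <= 0 or x >= xi / (1 + xi):
        return 0.0
    return (1 - x + 1/(1 - x) - 4*x/(xi*(1 - x))
            + 4*x**2 / (xi**2 * (1 - x)**2)) / D(xi)

def dL_dz(z, xi, steps=2000):
    """Photon luminosity dL_{gamma gamma}/dz by midpoint quadrature."""
    tau = z * z
    xmax = xi / (1 + xi)
    lo, hi = tau / xmax, xmax
    if lo >= hi:            # no pair of photons can reach this z
        return 0.0
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += F(x, xi) * F(tau / x, xi) / x
    return 2 * z * total * h     # the factor 2*sqrt(tau) = 2z of eq. (lum)
```

Since $F_{\gamma/e} = (1/\sigma_c)\, d\sigma_c/dx$, it integrates to unity over $[0, x_{\text{max}}]$, which provides a simple numerical check of the implementation.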
The fraction of photons with energy close to the maximum value grows with $\sqrt{s}$ and $\omega_0$. Nevertheless, the bound
---
abstract: 'We study the waves at the interface between two thin horizontal layers of immiscible fluids subject to high-frequency horizontal vibrations. Previously, the variational principle for the energy functional, which can be adopted for the treatment of quasi-stationary states of the free interface in fluid dynamical systems subject to vibrations, revealed the existence of standing periodic waves and solitons in this system. However, this approach does not provide regular means for dealing with evolutionary problems: neither stability problems nor ones associated with propagating waves. In this work, we rigorously derive the evolution equations for long waves in the system, which turn out to be identical to the ‘plus’ (or ‘good’) Boussinesq equation. With these equations one can find all time-independent-profile solitary waves (standing solitons are a specific case of these propagating waves), which exist below the linear instability threshold; the standing and slow solitons are always unstable while fast solitons are stable. Depending on initial perturbations, unstable solitons either grow in an explosive manner, which means layer rupture in a finite time, or fall apart into stable solitons. The results are derived within the long-wave approximation since the linear stability analysis for the flat-interface state \[D.V. Lyubimov, A.A. Cherepanov, Fluid Dynamics [**21**]{}, 849–854 (1987)\] reveals the instabilities of thin layers to be long-wavelength.'
author:
- 'D. S. Goldobin'
- 'A. V. Pimenova'
- 'K. V. Kovalevskaya'
- 'D. V. Lyubimov'
- 'T. P. Lyubimova'
title: 'Running interfacial waves in two-layer fluid system subject to longitudinal vibrations'
---
Introduction {#sec_intro}
============
In [@Wolf-1961; @Wolf-1970] Wolf reported experimental observations of the occurrence of steady wave patterns on the interface between immiscible fluids subject to horizontal vibrations. The build-up of the theoretical basis for these experimental findings was initiated with the linear instability analysis of the flat state of the interface [@Lyubimov-Cherepanov-1987; @Khenner-Lyubimov-Shotz-1998; @Khenner-etal-1999] (see Fig.\[fig1\] for the sketch of the system considered in these works). Specifically, it was found that in thin layers the instability is a long-wavelength one [@Lyubimov-Cherepanov-1987]. In [@Khenner-Lyubimov-Shotz-1998; @Khenner-etal-1999], the linear stability was determined for the case of arbitrary frequency of vibrations.
In spite of the substantial advance in theoretical studies, the problem proved to require subtle approaches; a comprehensive straightforward weakly-nonlinear analysis of the system subject to high-frequency vibrations still remains lacking in the literature (as well as the long-wavelength one). The approach employed in [@Lyubimov-Cherepanov-1987] can be (and was) used for analysis of time-independent quasi-steady patterns (including non-linear ones) only, but not the evolution of these patterns over time. This “restricted” analysis of the system revealed that quasi-steady patterns can occur both via sub- and supercritical pitchfork bifurcations, depending on the system parameters. Later on, specifically for thin layers, which will be the focus of our work, the excitation of patterns was shown to be always subcritical [@Zamaraev-Lyubimov-Cherepanov-1989] (paper [@Zamaraev-Lyubimov-Cherepanov-1989] is published only in Russian, although the result can be derived from [@Lyubimov-Cherepanov-1987] as well). Within the approach of [@Lyubimov-Cherepanov-1987; @Zamaraev-Lyubimov-Cherepanov-1989] neither time-dependent patterns nor the stability of time-independent patterns can be analyzed. Specifically for the case of subcritical excitation, time-independent patterns may belong to the stability boundary between the attraction basins of the flat-interface state and the finite-amplitude pattern state in the phase space. [^1]
In this work we accomplish the task of deriving the governing equations for the dynamics of patterns on the interface of a two-layer fluid system within the approximation of inviscid fluids. In Wolf’s experiments [@Wolf-1961; @Wolf-1970], the viscous boundary layer in the most viscous liquid was an order of magnitude thinner than the liquid layer, meaning the approximation of inviscid liquid is relevant. The layer is assumed to be thin enough for the evolving patterns to be long-wavelength [@Lyubimov-Cherepanov-1987]. With the governing equations we analyze the dynamics of the system below the linear instability threshold, where the system turns out to be governed by the ‘plus’ Boussinesq equation. The system admits soliton solutions, parameterized by a single parameter, the soliton speed. The maximal speed of solitons equals the minimal group velocity of linear waves in the system; the soliton waves always move slower than packets of linear waves. Stability analysis reveals that the standing and slow solitons are unstable while fast solitons are stable. The system, as the ‘plus’ Boussinesq equation, is known to be fully integrable.
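For concreteness, in suitably rescaled variables (our notation; the coefficients of the actual amplitude equation derived below may differ) the ‘plus’ Boussinesq equation and its one-parameter family of solitary waves read $$u_{tt} = u_{xx} - u_{xxxx} + \left(u^2\right)_{xx}, \qquad
u(x,t) = -\frac{3}{2}\,(1-c^2)\,\operatorname{sech}^2\!\left[\tfrac{1}{2}\sqrt{1-c^2}\,(x-ct)\right],
\qquad c^2 < 1.$$ Substituting the traveling-wave ansatz $u = f(x-ct)$ and integrating twice under decaying boundary conditions gives $f'' = (1-c^2)f + f^2$, which the $\operatorname{sech}^2$ profile solves. The linearized dispersion relation $\omega^2 = k^2 + k^4$ yields group velocities $(1+2k^2)/\sqrt{1+k^2} \ge 1$, so solitary waves ($|c|<1$) indeed travel slower than any linear wave packet.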
Recently, the problem of stability of a liquid film on a horizontal substrate subject to tangential vibrations was addressed in the literature [@Shklyaev-Alabuzhev-Khenner-2009]. The stability analysis for space-periodic patterns and solitary waves for the latter system was reported in [@Benilov-Chugunova-2010]. The similarity of this problem to the problem we consider, and the expected similarity of results, are illusory. Firstly, for the problem of [@Shklyaev-Alabuzhev-Khenner-2009] the liquid film is set into oscillating motion only by viscosity; an inviscid liquid would remain motionless over the tangentially vibrating substrate, while in the system we consider the inviscid fluid layers oscillate due to the motion of the lateral boundaries of the container and fluid incompressibility [@Lyubimov-Cherepanov-1987; @Khenner-Lyubimov-Shotz-1998; @Khenner-etal-1999]. Secondly, the single-film case corresponds to the case of zero density of the upper layer in a two-layer system; in the system we consider this is a very specific case. These dissimilarities are reflected in the resulting mathematical models; the governing equations for long-wavelength patterns derived in [@Shklyaev-Alabuzhev-Khenner-2009] are of the 1st order with respect to time and the 4th order with respect to the space coordinate and describe purely dissipative patterns in the viscous fluid, while the equation we will report is of 2nd order in time, 4th order in space and describes non-dissipative dynamics.
The paper is organized as follows. In Sec.\[sec\_statement\] we provide a physical description and mathematical model for the system under consideration. In Sec.\[sec\_deriv\] the governing equations for long-wavelength patterns are derived and discussed. In Sec.\[sec\_solitons\] soliton solutions are presented and their stability properties are analyzed. Conclusions are drawn in Sec.\[sec\_concl\].
Problem statement and governing equations {#sec_statement}
=========================================
We consider a system of two horizontal layers of immiscible inviscid fluid, confined between two impermeable horizontal boundaries (see Fig.\[fig1\]). The system is subject to high-frequency longitudinal vibrations of linear polarization; the velocity of vibrational motion of the system is $be^{i\omega
t}+c.c.$ (here “$c.c.$” stands for complex conjugate). For simplicity, we consider the case of equal thickness, say $h$, of two layers, which is not expected to change the qualitative picture of the system behavior [^2] but makes calculations simpler. The density of upper liquid $\rho_1$ is smaller than the density of the lower one $\rho_2$. We choose the horizontal coordinate $x$ along the direction of vibrations, the $z$-axis is vertical with origin at the unperturbed interface between layers.
In this system, at the limit of infinitely extensive layers, the state with flat interface $z=\zeta(x,y)=0$ is always possible. In real layers of finite extent, the oscillating lateral boundaries enforce liquid waves perturbing the interface; however, at a distance from these boundaries the interface will be nearly flat as well. For inviscid fluids, this state (the ground state) features spatially homogeneous pulsating velocity fields $\vec{v}_{j0}$ in both layers; $$\begin{array}{c}
\displaystyle
\vec{v}_{j0}=a_j(t)\vec{e}_x,\qquad
a_j(t)=A_je^{i\omega t}+c.c.,\\[10pt]
\displaystyle
A_1=\frac{\rho_2 b}{\rho_1+\rho_2},\qquad
A_2=\frac{\rho_1 b}{\rho_1+\rho_2},
\end{array}
\label{eq01}$$ where $j=1,2$ and $\vec{e}_x$ is
---
author:
- |
Michele Pepe, [^1], Uwe-Jens Wiese\
Bern University, Switzerland\
E-mail: , ,
- |
Bernard B. Beard\
Christian Brothers University, Memphis, USA\
E-mail:
title: 'An Efficient Cluster Algorithm for CP(N-1) Models '
---
Standard Formulation of $CP(N-1)$ Models
========================================
The manifold $CP(N-1) = SU(N)/U(N-1)$ is a $(2N-2)$-dimensional coset space relevant in the context of the spontaneous breakdown of an $SU(N)$ symmetry to a $U(N-1)$ subgroup. In particular, in more than two space-time dimensions ($d > 2$) the corresponding Goldstone bosons are described by $N \times N$ matrix-valued fields $P(x) \in CP(N-1)$ which obey $$P(x)^2 = P(x), \ P(x)^\dagger = P(x), \ \mbox{Tr} P(x) = 1.$$ For $d = 2$ the Hohenberg-Mermin-Wagner-Coleman theorem implies that the $SU(N)$ symmetry cannot break spontaneously. Correspondingly, similar to 4-dimensional non-Abelian gauge theories, the fields $P(x)$ develop a mass-gap nonperturbatively. Motivated by these observations, D’Adda, Di Vecchia, and Lüscher [@DAd78] introduced $CP(N-1)$ models as interesting toy models for QCD. The corresponding Euclidean action is given by $$\label{CPNaction}
S[P] = \int d^2x \ \frac{1}{g^2} \mbox{Tr}[\partial_\mu P \partial_\mu P],$$ where $g^2$ is the dimensionless coupling constant. Note that this action is invariant under global $\Omega \in SU(N)$ transformations $$P(x)' = \Omega P(x) \Omega^\dagger,$$ and under charge conjugation $C$ which acts as $^CP(x) = P(x)^*$.
D-Theory Formulation of $CP(N-1)$ Models
========================================
In this section we describe an alternative formulation of field theory in which the $2$-dimensional $CP(N-1)$ model emerges from the dimensional reduction of discrete variables — in this case $SU(N)$ quantum spins in $(2+1)$ space-time dimensions. The dimensional reduction of discrete variables is the key ingredient of D-theory, which provides an alternative nonperturbative regularization of field theory. In D-theory we start from a ferromagnetic system of $SU(N)$ quantum spins located at the sites $x$ of a $2$-dimensional periodic square lattice. The $SU(N)$ spins are represented by Hermitean operators $T_x^a = \frac{1}{2} \lambda_x^a$ (Gell-Mann matrices for the triplet representation of $SU(3)$) that generate the group $SU(N)$ and thus obey $$[T_x^a,T_y^b] = i \delta_{xy} f_{abc} T_x^c, \
\mbox{Tr}(T_x^a T_y^b) = \frac{1}{2} \delta_{xy} \delta_{ab}.$$ In principle, these generators can be taken in any irreducible representation of $SU(N)$. However, as we will see later, not all representations lead to spontaneous symmetry breaking from $SU(N)$ to $U(N-1)$ and thus to $CP(N-1)$ models. The Hamilton operator for an $SU(N)$ ferromagnet takes the form $$H = - J \sum_{x,i} T_x^a T_{x+\hat i}^a,$$ where $J>0$ is the exchange coupling. By construction, the Hamilton operator is invariant under the global $SU(N)$ symmetry, i.e. it commutes with the total spin given by $$T^a = \sum_x T_x^a.$$
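The normalization $\mbox{Tr}(T^a T^b) = \frac{1}{2}\delta_{ab}$ can be checked numerically for any $N$. The following Python sketch (our illustration) constructs the generalized Gell-Mann generators of $SU(N)$ and verifies the trace normalization:

```python
import math

def su_generators(N):
    """Generalized Gell-Mann generators T^a of SU(N), normalized so that
    Tr(T^a T^b) = (1/2) delta_ab; returned as nested lists of complex."""
    def unit(j, k):  # matrix unit E_{jk}
        return [[1.0 if (r, c) == (j, k) else 0.0 for c in range(N)]
                for r in range(N)]
    gens = []
    for j in range(N):              # off-diagonal generators
        for k in range(j + 1, N):
            Ejk, Ekj = unit(j, k), unit(k, j)
            gens.append([[(Ejk[r][c] + Ekj[r][c]) / 2
                          for c in range(N)] for r in range(N)])
            gens.append([[-1j * (Ejk[r][c] - Ekj[r][c]) / 2
                          for c in range(N)] for r in range(N)])
    for l in range(1, N):           # diagonal (Cartan) generators
        norm = 1.0 / math.sqrt(2 * l * (l + 1))
        gens.append([[(norm * (1.0 if r < l else -float(l)) if r <= l else 0.0)
                      if r == c else 0.0 for c in range(N)] for r in range(N)])
    return gens
```

For $N = 3$ this reproduces the $8 = N^2 - 1$ Gell-Mann matrices of the triplet representation mentioned above (up to ordering).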
The Hamiltonian $H$ describes the evolution of the quantum spin system in an extra dimension of finite extent $\beta$. In D-theory this extra dimension is not the Euclidean time of the target theory, which is part of the $2$-dimensional lattice. Instead, it is an additional compactified dimension which ultimately disappears via dimensional reduction. The quantum partition function $$Z = \mbox{Tr} \exp(- \beta H)$$ (with the trace extending over the Hilbert space) gives rise to periodic boundary conditions in the extra dimension.
The ground state of the quantum spin system has a broken global $SU(N)$ symmetry. The choice of the $SU(N)$ representation determines the symmetry breaking pattern. We choose a totally symmetric $SU(N)$ representation corresponding to a Young tableau with a single row containing $n$ boxes. It is easy to construct the ground states of the $SU(N)$ ferromagnet, and one finds spontaneous symmetry breaking from $SU(N)$ to $U(N-1)$. Consequently, there are $(N^2 - 1) - (N-1)^2 = 2N - 2$ massless Goldstone bosons described by fields $P(x)$ in the coset space $SU(N)/U(N-1) = CP(N-1)$. In the leading order of chiral perturbation theory the Euclidean action for the Goldstone boson fields is given by $$\label{ferroaction}
S[P] = \int_0^\beta dt \int d^2x \ \mbox{Tr}
[\rho_s \partial_\mu P \partial_\mu P - \frac{2 n}{a^2} \int_0^1 d\tau \
P \partial_t P \partial_\tau P].$$ Here $\rho_s$ is the spin stiffness, which is analogous to the pion decay constant in QCD. The second term in eq.(\[ferroaction\]) is a Wess-Zumino-Witten term which involves an integral over an interpolation parameter $\tau$.
For $\beta = \infty$ the system then has a spontaneously broken global symmetry and thus massless Goldstone bosons. However, as soon as $\beta$ becomes finite, due to the Hohenberg-Mermin-Wagner-Coleman theorem, the symmetry can no longer be broken, and, consequently, the Goldstone bosons pick up a small mass $m$ nonperturbatively. As a result, the corresponding correlation length $\xi = 1/m$ becomes finite and the $SU(N)$ symmetry is restored over that length scale. The question arises if $\xi$ is bigger or smaller than the extent $\beta$ of the extra dimension. When $\xi \gg \beta$ the Goldstone boson field is essentially constant along the extra dimension and the system undergoes dimensional reduction. Since the Wess-Zumino-Witten term vanishes for field constant in $t$, after dimensional reduction the action reduces to $$\label{targetaction}
S[P] = \beta \rho_s \int d^2x \ \mbox{Tr}[\partial_\mu P \partial_\mu P],$$ which is just the action of the 2-d target $CP(N-1)$ model. The coupling constant of the 2-d model is determined by the extent of the extra dimension and is given by $$\frac{1}{g^2} = \beta \rho_s.$$ Due to asymptotic freedom of the 2-d $CP(N-1)$ model, for small $g^2$ the correlation length is exponentially large, i.e.$$\xi \propto \exp(4 \pi \beta \rho_s/N).$$ Here $N/4 \pi$ is the 1-loop coefficient of the perturbative $\beta$-function. Indeed, one sees that $\xi \gg \beta$ as long as $\beta$ itself is sufficiently large. In particular, somewhat counter-intuitively, dimensional reduction happens in the large $\beta$ limit because $\xi$ then grows exponentially. In D-theory one approaches the continuum limit not by varying a bare coupling constant but by increasing the extent $\beta$ of the extra dimension. This mechanism of dimensional reduction of discrete variables is generic and occurs in all asymptotically free D-theory models [@Bro99; @Bro04]. It should be noted that (just like in the standard approach) no fine-tuning is needed to approach the continuum limit.
Path Integral Representation of $SU(N)$ Quantum Spin Systems
============================================================
Let us construct a path integral representation for the partition function $Z$ of the $SU(N)$ quantum spin ferromagnet introduced above. In an intermediate step we introduce a lattice in the Euclidean time direction, using a Trotter decomposition of the Hamiltonian. However, since we are dealing with discrete variables, the path integral is completely well-defined even in continuous Euclidean time. Also the cluster algorithm to be described in the following section can operate directly in the Euclidean time continuum [@Beard96]. Hence, the final results are completely independent of the Trotter decomposition. In $2$ spatial dimensions (with an even extent) we decompose the Hamilton operator into $4$ terms $$H = H_1 + H_2 + H_3 + H_4,$$ with $$H_{1,2} = \!\! \sum_{\stackrel{x = (x_1,x_2)}{x_i
---
abstract: 'Statistical divergence is widely applied in multimedia processing, basically due to the regularity and explainable features displayed in such data. However, in a broader range of data realms these advantages may not stand out, and therefore a more general approach is required. In data detection, statistical divergence can be used as a similarity measure based on collective features. In this paper, we present a collective detection technique based on statistical divergence. The technique extracts distribution similarities among data collections, and then uses the statistical divergence to detect collective anomalies. Our technique continuously evaluates metrics as evolving features and calculates an adaptive threshold to meet the best mathematical expectation. To illustrate the details of the technique and explore its efficiency, we present a case study of a real-world problem: detecting click farming by malicious online sellers. The evaluation shows that these techniques provide efficient classifiers. They were also sensitive to data alterations of much smaller magnitude than real-world malicious behaviours. Thus, the technique is applicable in the real world.'
author:
- |
[Ruoyu Wang[$^{1,2}$]{}, Daniel Sun[$^{2,3}$]{}, Guoqiang Li[$^{1*}$]{} ]{}\
*$^{1}$School of Software, Shanghai Jiao Tong University, China\
$^{2}$School of Computer Science and Engineering, University of New South Wales, Australia\
$^{3}$Data61, CSIRO, Australia\
{ruoyu.wang,li.g}@sjtu.edu.cn, daniel.sun@data61.csiro.au*
title: Statistical Detection of Collective Data Fraud
---
Introduction
============
Statistical divergence is widely applied in multimedia processing. Prevalent applications include multimedia event detection [@amid2014unsupervised], content classification [@moreno2004kullback; @park2005classification] and qualification [@pheng2016kullback; @goldberger2003efficient]. It has been attracting more attention since the dawn of the big data era, basically due to the regularity and interpretable features displayed in the data. However, in a broader range of data realms (e.g. online sales records) these advantages may not stand out, which calls for a more general approach.
Currently, there are more than 2.7ZB of data in the digital universe [@bigDataStatistics] and the volume is doubling every two years. Harnessing this exploding volume of data is already hard and will become much harder in the future; it has resulted in many problems in data management and engineering, threatening the trustworthiness and reliability of data flows inside working systems. The data error rate in enterprises is approximately 1% to 5%, and for some even above 30% [@saha2014data]. These data anomalies may arise for both internal and external reasons.
On one hand, components inside systems may generate problematic source data. For example, in a sensor network, some sensors may generate erroneous data when they experience power failures or other extreme conditions [@rassam2014adaptive]. Data packages will be lost if sensor nodes fail to connect to the network or some sensor hubs break down [@herodotou2014scalable]. Human operators are also highly vulnerable to bugs and mistakes. Malicious insiders may even deliberately modify system configurations to cause fatal compromises [@schuster2015vc3]. A study shows that 65% of organizations state that human errors are the main cause of data problems [@humanError].
On the other hand, data manipulation [@dataManipulation] by outside hackers poses another potential threat to data quality and reliability. *Data Manipulation* here, according to an NSA definition, means that “hackers can infiltrate networks via any attack vector, find their way into databases and applications and change information contained in those systems, rather than stealing data and holding it for ransom”. If data is compromised, it will severely affect mining and learning algorithms and further change the final decision driven by the data. In 2013, hackers from Syria put up fake reports via Associated Press’ Twitter account and caused a 150-point drop in the Dow [@SyriaHacker].
It is hard to detect a single record that has been altered but still remains within its correct value range; however, if enough records are altered to change a final decision, we can still detect malicious data manipulation behaviours. According to our observation, typical manipulations of numerical data lead to a drift or distortion of its original distribution. To measure this reshaping, we can group data collections with similar distribution patterns and filter out the strangely shaped ones. To address problems caused by data manipulation, we propose a novel technique which sorts out manipulated data collections from normal ones by adopting statistical divergence. In this paper, we focus on a concrete data manipulation problem, click farming in online shops, and try to apply our technique to pick out dishonest sellers. Our technique maps data collections to points in distribution spaces and reduces the problem to classical point anomaly detection. Optimizations estimate the ground truth, mapping each data collection to a single real number within a definite interval. Then a Gaussian classifier can be applied to detect outliers derived from manipulated data. To automatically calculate an adaptive threshold for the classifier, we keep two evidence sets, one for normal points and one for anomalies, taking advantage of the property provided by statistical divergence. In dynamic environments, these evidence sets are modified after every data collection is checked; in this manner they act intuitively as sliding windows and keep up with the evolving features of dynamic scenarios.
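As a minimal illustration of this framework (a Python sketch of our own: a smoothed discrete Kullback–Leibler divergence plays the role of the statistical divergence, and a fixed z-score stands in for the adaptive evidence-set threshold described above):

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-9):
    """Smoothed discrete KL divergence D(P || Q) between two count maps."""
    sp, sq = sum(p.values()), sum(q.values())
    return sum((p.get(k, 0) / sp + eps)
               * math.log((p.get(k, 0) / sp + eps) / (q.get(k, 0) / sq + eps))
               for k in set(p) | set(q))

def detect(collections, reference, z_thresh=3.0):
    """Map each collection to its divergence from a reference distribution
    and flag Gaussian outliers whose z-score exceeds z_thresh."""
    divs = [kl_divergence(Counter(c), reference) for c in collections]
    mu = sum(divs) / len(divs)
    sd = math.sqrt(sum((d - mu) ** 2 for d in divs) / len(divs)) or 1e-12
    return [i for i, d in enumerate(divs) if (d - mu) / sd > z_thresh]
```

A collection whose empirical distribution is strongly distorted (e.g. all mass concentrated on one value) is mapped far from the bulk of divergences and flagged.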
Our contributions include: 1) a brief review of data anomaly detection and a study of the click farming problem; 2) a detailed description of both the basic and the optimized framework of our technique, resolving several technical difficulties such as automated adaptive thresholding; 3) experiments on real-world and synthetic data that test the efficiency of our technique, together with a comparison with previous work on the same topic.
The rest of the paper is organised as follows: Section \[sec:related-work\] reviews related work on data anomaly detection and describes a real-world problem. Section \[sec:preliminaries\] introduces statistical distance. Details of the proposed technique are given in Section \[sec:algorithm-details\]. Section \[sec:evaluation\] presents evaluation results and further findings about the algorithm. Finally, the paper is concluded in Section \[sec:conclusion\].
Related Work {#sec:related-work}
============
Data Anomaly Detection
----------------------
Statistical divergence has been applied mainly in classifiers for multimedia content [@park2005classification], especially as kernels in SVMs [@moreno2004kullback]. As a similarity measurement, it can also be used for qualitative and quantitative analysis in image evaluation [@pheng2016kullback; @goldberger2003efficient]. [@amid2014unsupervised] adopted divergence to detect events in multimedia streams.
Anomaly detection, also known as outlier detection, has been studied for a long time in diverse research domains, such as fraud detection, intrusion detection, system monitoring, fault detection and event detection in sensor networks [@chandola2009anomaly]. Anomaly detection algorithms deal with input data in the form of points (or records), sequences, graphs, and spatial or geographical relationships. According to the relationships within data records, outliers can be classified into *point anomalies*, *contextual (or conditional) anomalies* and *collective anomalies* [@goldberger2000components].
Currently, distance-based [@cao2014scalable; @cao2017multi] and feature-evolving algorithms [@masud2013classification; @li2015discovery; @shao2014prototype] attract the most attention. Others have adopted tree isolation [@zhang2017lshiforest], model-based [@yin2016model] and statistical methods [@zhu2002statstream] in certain applications.
To detect collective anomalies, [@caudell1993adaptive] adopts *ART (Adaptive Resonance Theory)* neural networks to detect time-series anomalies. *Box Modeling* is proposed in [@chan2005modeling]. The *Longest Common Subsequence* was leveraged in [@budalakoti2006anomaly] as a similarity metric for symbolic sequences. Markovian modeling techniques are also popular in this domain [@ye2000markov; @warrender1999detecting; @pavlov2003sequence]. [@yu2015glad] depicts groups in social media as combinations of different “roles” and compares groups according to the proportion of each role within each group.
Wang et al. proposed *Multinomial Goodness-of-Fit* (MGoF), a technique that analyzes the likelihood ratio of distributions via Kullback-Leibler divergence and is fundamentally a hypothesis test on distributions [@wang2011statistical]. MGoF divides the observed data sequence into several windows, quantizes the data in each window into a histogram, and checks these estimated distributions against several hypotheses. If the target distribution rejects all provided hypotheses, it is considered an anomaly and preserved as a new candidate null hypothesis. If the target distribution fails to reject some hypothesis, it is counted as supporting evidence for the hypothesis that yields the greatest similarity. Furthermore, if the amount of supporting evidence is larger than a threshold $c_{th}$, the distribution is classified as a non-anomaly.
Among these similar techniques, MGoF is the strongest competitor, and we use it as the baseline for our approach.
Real World Problem: Click Farming Detection {#sec:related-realworld}
-------------------------------------------
Taobao held a market share of 50.6% to 56.2% in China as of 2016 [@iresearch2016b2c]. Currently, there are more than 9.4 million sellers on Taobao, providing more than 1 billion different products. Under the heavy pressure of massive competition, a number of sellers choose to use cheating techniques to inflate their reputation and sales volumes
---
abstract: 'Given a generalized Robertson-Walker spacetime $\overline M^{n+1}=I\times_{\phi}F^n$, we classify strongly stable spacelike hypersurfaces with constant mean curvature whose warping function verifies a certain convexity condition. More precisely, we show that if $x:M^n\rightarrow\overline M^{n+1}$ is a closed spacelike hypersurface of $\overline M^{n+1}$ with constant mean curvature $H$, the warping function $\phi$ satisfies $\phi''\geq\max\{H\phi',0\}$, and $M^n$ is strongly stable, then $M^{n}$ is either minimal or a spacelike slice $M_{t_0}=\{t_0\}\times F$, for some $t_0\in I$.'
author:
- 'A. Barros, A. Brasil and A. Caminha'
title: Stability of Spacelike Hypersurfaces in Foliated Spacetimes
---
Introduction
============
Spacelike hypersurfaces with constant mean curvature in Lorentz manifolds have been objects of great interest in recent years, both from physical and mathematical points of view. In [@ABC:03], the authors studied the uniqueness of spacelike hypersurfaces with constant mean curvature (CMC) in generalized Robertson-Walker (GRW) spacetimes, namely, Lorentz warped products with 1-dimensional negative definite base and Riemannian fiber. They proved that in a GRW spacetime obeying the timelike convergence condition (i.e., the Ricci curvature is non-negative on timelike directions), every compact spacelike hypersurface with CMC must be umbilical. Recently, Alías and Montiel obtained, in [@AM:01], a more general condition on the warping function $f$ that is sufficient to guarantee uniqueness. More precisely, they proved the following
Let $f:I\rightarrow\mathbb R$ be a positive smooth function defined on an open interval, such that $ff^{''}-(f^{'})^{2}\leq 0$, that is, such that $-\log f$ is convex. Then, the only compact spacelike hypersurfaces immersed into a generalized Robertson-Walker spacetime $I\times_fF^{n}$ and having constant mean curvature are the slices $\{t\}\times F$, for a [(]{}necessarily compact[)]{} Riemannian manifold $F$.
Stability questions concerning compact CMC hypersurfaces in Riemannian space forms began with Barbosa and do Carmo in [@BdC:84], and Barbosa, do Carmo and Eschenburg in [@BdCE:88]. In the former paper, they introduced the notion of stability and proved that spheres are the only stable critical points of the area functional for volume-preserving variations. In the setting of spacelike hypersurfaces in Lorentz manifolds, Barbosa and Oliker proved in [@Barbosa:93] that CMC spacelike hypersurfaces are critical points of the area functional for volume-preserving variations. Moreover, by computing the second variation formula they showed that CMC embedded spheres in the de Sitter space $S_1^{n+1}$ maximize the area functional for such variations. In this paper, we give a characterization of [*strongly stable*]{} CMC spacelike hypersurfaces in GRW spacetimes, the essential tool for the proof being a formula for the Laplacian of a new support function. More precisely, it is our purpose to show the following
Let $\overline M^{n+1}=I\times_{\phi}F^n$ be a generalized Robertson-Walker spacetime, and $x:M^n\rightarrow\overline M^{n+1}$ be a closed spacelike hypersurface of $\overline M^{n+1}$, having constant mean curvature $H$. If the warping function $\phi$ satisfies $\phi''\geq\max\{H\phi',0\}$ and $M^n$ is strongly stable, then $M^{n}$ is either minimal or a spacelike slice $M_{t_0}=\{t_0\}\times F$, for some $t_0\in I$.
Stable spacelike hypersurfaces
==============================
In what follows, $\overline M^{n+1}$ denotes an orientable, time-oriented Lorentz manifold with Lorentz metric $\overline g=\langle\,\,,\,\,\rangle$ and semi-Riemannian connection $\overline\nabla$. If $x:M^n\rightarrow\overline M^{n+1}$ is a spacelike hypersurface of $\overline M^{n+1}$, then $M^n$ is automatically orientable ([@O'Neill:83], p. 189), and one can choose a globally defined unit normal vector field $N$ on $M^n$ having the same time-orientation as the timelike vector field $V$ that time-orients $\overline M^{n+1}$, that is, such that $$\langle V,N\rangle<0$$ on $M$. One says that such an $N$ [*points to the future*]{}.
A [*variation*]{} of $x$ is a smooth map $$X:M^n\times(-\epsilon,\epsilon)\rightarrow\overline M^{n+1}$$ satisfying the following conditions:
1. For $t\in(-\epsilon,\epsilon)$, the map $X_t:M^n\rightarrow\overline M^{n+1}$ given by $X_t(p)=X(t,p)$ is a spacelike immersion such that $X_0=x$.
2. $X_t\big|_{\partial M}=x\big|_{\partial M}$, for all $t\in(-\epsilon,\epsilon)$.
The [*variational field*]{} associated to the variation $X$ is the vector field $\frac{\partial X}{\partial t}$. Letting $f=-\langle\frac{\partial X}{\partial t},N\rangle$, we get $$\frac{\partial X}{\partial t}\Big|_M=fN+\left(\frac{\partial X}{\partial t}\right)^T,$$ where $T$ stands for tangential components. The [*balance of volume*]{} of the variation $X$ is the function $\mathcal V:(-\epsilon,\epsilon)\rightarrow\mathbb R$ given by $$\mathcal V(t)=\int_{M\times[0,t]}X^*(d\overline M),$$ where $d\overline M$ denotes the volume element of $\overline M$.
The [*area functional*]{} $\mathcal A:(-\epsilon,\epsilon)\rightarrow\mathbb R$ associated to the variation $X$ is given by $$\mathcal A(t)=\int_MdM_t,$$ where $dM_t$ denotes the volume element of the metric induced in $M$ by $X_t$. Note that $dM_0=dM$ and $\mathcal A(0)=\mathcal A$, the volume of $M$. The following lemma is classical:
\[lemma:first variation\] Let $\overline M^{n+1}$ be a time-oriented Lorentz manifold and $x:M^n\rightarrow\overline M^{n+1}$ a spacelike closed hypersurface having mean curvature $H$. If $X:M^n\times(-\epsilon,\epsilon)\rightarrow\overline M^{n+1}$ is a variation of $x$, then $$\frac{d\mathcal V}{dt}\Big|_{t=0}=\int_MfdM,\ \ \ \ \frac{d\mathcal A}{dt}\Big|_{t=0}=\int_MnHfdM.$$
Set $H_0=\frac{1}{\mathcal A}\int_M H\,dM$ and let $\mathcal J:(-\epsilon,\epsilon)\rightarrow\mathbb R$ be given by $$\mathcal J(t)=\mathcal A(t)-nH_0\mathcal V(t).$$ $\mathcal J$ is called the [*Jacobi functional*]{} associated to the variation, and it is a well known result [@BdCE:88] that $x$ has constant mean curvature $H_0$ if and only if $\mathcal J'(0)=0$ for all variations $X$ of $x$.
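For the reader's convenience, this criticality claim follows at once from Lemma \[lemma:first variation\]: writing $H_0$ for the mean value of the mean curvature $H$ over $M$, one computes $$\mathcal J'(0)=\frac{d\mathcal A}{dt}\Big|_{t=0}-nH_0\frac{d\mathcal V}{dt}\Big|_{t=0}=\int_M nHf\,dM-nH_0\int_M f\,dM=n\int_M(H-H_0)f\,dM,$$ which vanishes for every choice of variational function $f$ if and only if $H\equiv H_0$.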
We wish to study here immersions $x:M^n\rightarrow\overline M^{n+1}$ that maximize $\mathcal J$ for all variations $X$. Since $x$ must be a critical point of $\mathcal J$, it thus follows from the above discussion that $x$ must have constant mean curvature. Therefore, in order to examine whether or not some critical immersion $x$ is actually a maximum for $\mathcal J$, one certainly needs to study the second variation $\mathcal J''(0)$. We start with the following
Let $x:M^n\rightarrow\overline M^{n+1}$ be a closed spacelike hypersurface of the time-oriented Lorentz manifold $\overline M^{n+1}$, and $X:M^n\times(-\epsilon,\epsilon)\rightarrow\overline M^{n+1}$ be a variation of $x$. Then, $$\label{eq:fundamental relation}
n\frac{\partial H}{\partial t}=\Delta f-\left\{\overline{Ric}(N,N)+|A|^2\right\}f-n\langle\left(\frac{\partial X}{\partial t}\right)^T,\nabla H\rangle.$$
Although the above proposition is known to be true, we believe the literature lacks a clear proof of it in this degree of generality, so we present a simple proof here.
Let $p\in M$ and $\{e_k\}$ be a moving frame on a neighborhood $U\subset M$ of $p$, geodesic at $p$ and diagonalizing $A$ at $p$, with $Ae
---
abstract: '[Multisymplectic systems, partial differential equations, fluid dynamics, conservation laws, potential vorticity]{} We construct multisymplectic formulations of fluid dynamics using the inverse of the Lagrangian path map. This inverse map – the “back-to-labels” map – gives the initial Lagrangian label of the fluid particle that currently occupies each Eulerian position. Explicitly enforcing the condition that the fluid particles carry their labels with the flow in Hamilton’s principle leads to our multisymplectic formulation. We use the multisymplectic one-form to obtain conservation laws for energy, momentum and an infinite set of conservation laws arising from the particle-relabelling symmetry and leading to Kelvin’s circulation theorem. We discuss how multisymplectic numerical integrators naturally arise in this approach.'
author:
- 'C. J. Cotter, D. D. Holm and P. E. Hydon'
title: Multisymplectic formulation of fluid dynamics using the inverse map
---
\[firstpage\]
Introduction
============
A system of partial differential equations (PDEs) is said to be *multisymplectic* if it is of the form $$K^{\alpha}_{ij}(\MM{z})z^j_{,\alpha} = {\frac{\partial H}{\partial z^i}},$$ where each of the two-forms $$\kappa^\alpha = \frac{1}{2}K_{ij}^\alpha(\MM{z})\, {\mathrm{d}}z^i\wedge{\mathrm{d}}z^j$$ is closed. Here $\MM{z}$ is an ordered set of dependent variables, total differentiation with respect to each independent variable $q^\alpha$ is denoted by the subscript $\alpha$ after a comma, and the Einstein summation convention is used.
The closed two-form $\kappa^\alpha$ is associated with the independent variable $q^\alpha$; it is analogous to the symplectic two-form for a Hamiltonian ordinary differential equation. Hence there is a symplectic structure associated with each independent variable. In the first of a series of papers, Bridges (1997) pioneered the development of multisymplectic systems, showing that the rich geometric structure that is endowed by the symplectic two-forms can be used to understand the interaction and stability of nonlinear waves. For many important PDEs, the multisymplectic formulation has revealed hidden features that are important in stability analysis. In order to preserve at least some of these features in numerical simulations, Bridges & Reich (2001) introduced multisymplectic integrators, which generalise the symplectic methods that have been widely used in numerical Hamiltonian dynamics. Hydon (2005) showed that multisymplectic systems of PDEs may be derived from Hamilton’s principle whenever the Lagrangian is affine in the first-order derivatives and contains no higher-order derivatives. This can usually be achieved by introducing auxiliary variables to eliminate the derivatives.
The aim of this paper is to provide a unified approach to producing multisymplectic formulations of fluid dynamics, based on the inverse map. Our approach covers all fluid dynamical equations that are written in Euler-Poincaré form (Holm *et al.*, 1998), *i.e.* all equations which arise due to the advection of fluid material. First we use the inverse map to form a canonical Euler-Lagrange equation (following the Clebsch representation given in Holm & Kupershmidt, 1983). Then the Lagrangian is made affine in the space and time derivatives by using constraints that introduce additional variables. Following Hydon (2005), we obtain a one-form quasi-conservation law which, when it is pulled back to the space and time coordinates, gives conservation laws for momentum and energy. We also obtain a two-form conservation law that represents conservation of symplecticity; when this is pulled back to the spatial coordinates, it leads to a conservation law for vorticity. The multisymplectic version of Noether’s Theorem yields an infinite space of conservation laws from the particle-relabelling symmetry for fluid dynamics; these conservation laws imply Kelvin’s circulation theorem. The conserved momentum that is canonically conjugate to the back-to-labels map plays a key role in the derivation of the conservation laws. The corresponding velocity is the convective velocity, whose geometric properties are discussed in Holm *et al.* (1986).
In this paper we show how the above constructions are made in general, illustrating this with examples. We also discuss how multisymplectic integrators can be constructed using these methods. Sections \[review\] and \[inverse map sec\] review the relations among multisymplectic structures, the Clebsch representation and the momentum map associated with particle relabelling. Section \[inverse map EPDiff\] shows how to construct a multisymplectic formulation of the Euler-Poincaré equation for the diffeomorphism group (EPDiff), and derives the corresponding conservation laws, including the infinite set of conservation laws that yield Kelvin’s circulation theorem. Section \[advected\] extends this formulation to the Euler-Poincaré equation with advected quantities. This is illustrated by the incompressible Euler equation, showing how the circulation theorem arises in the multisymplectic formulation. Section \[numerics\] sketches numerical issues in the multisymplectic framework. Finally, Section \[summary\] summarises and outlines directions for future research.
Review of multisymplectic structures {#review}
====================================
This section reviews the formulation of multisymplectic systems and their conservation laws, following Hydon (2005).
A system of partial differential equations (PDEs) is multisymplectic provided that it can be represented as a variational problem with a Lagrangian that is affine in the first derivatives of the dependent variables: $$\label{mslag}
L = L_j^\alpha(\MM{z})z^j_{,\alpha} - H(\MM{z}).$$ The Euler-Lagrange equations are then $$\label{Eul-Lag-eqns}K^{\alpha}_{ij}(\MM{z})z^j_{,\alpha} =
{\frac{\partial H}{\partial z^i}},$$ where the functions $$\label{Msymp-struct-matrix} K^{\alpha}_{ij}(\MM{z}) =
{\frac{\partial L^\alpha_j}{\partial z^i}}-{\frac{\partial L^\alpha_i}{\partial z^j}}$$ are coefficients of the multisymplectic structure matrix. We define the (closed) symplectic two-forms $$\label{kappa} \kappa^\alpha = \frac{1}{2}K_{ij}^\alpha(\MM{z})\, {\mathrm{d}}z^i\wedge{\mathrm{d}}z^j,$$ and obtain the structural conservation law (Bridges, 1997): $$\label{kappa law} \kappa^\alpha_{,\alpha} = 0.$$ Hydon showed that the Poincaré Lemma leads to a one-form quasi-conservation law $$(L^{\alpha}_jdz^j)_{,\alpha} =
{\mathrm{d}}(L^\alpha_jz^j_{,\alpha}-H(\MM{z}))= {\mathrm{d}}{L}, \label{ofcl}$$ whose exterior derivative is (\[kappa law\]).
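As a simple illustration of this structure (our example, with one common sign convention; it is not part of the fluid-dynamical development below), consider the semilinear wave equation $u_{,tt}-u_{,xx}=-V'(u)$. Introducing auxiliary variables $\MM{z}=(u,v,w)$ and taking the affine Lagrangian $$L = v\,u_{,t}-w\,u_{,x}-H(\MM{z}),\qquad H(\MM{z})=\tfrac{1}{2}(v^2-w^2)+V(u),$$ the Euler-Lagrange equations (\[Eul-Lag-eqns\]) become $$u_{,t}=v,\qquad u_{,x}=w,\qquad -v_{,t}+w_{,x}=V'(u),$$ which recover the wave equation upon eliminating $v$ and $w$. The associated two-forms (\[kappa\]) are $\kappa^t={\mathrm{d}}v\wedge{\mathrm{d}}u$ and $\kappa^x={\mathrm{d}}u\wedge{\mathrm{d}}w$, so the structural conservation law (\[kappa law\]) reads $({\mathrm{d}}v\wedge{\mathrm{d}}u)_{,t}+({\mathrm{d}}u\wedge{\mathrm{d}}w)_{,x}=0$.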
Every one-parameter Lie group of point symmetries of the multisymplectic system (\[Eul-Lag-eqns\]) is generated by a differential operator of the form $$\label{X}
X = Q^i(\MM{q},\MM{z}){\frac{\partial }{\partial z^i}} + (Q^i(\MM{q},\MM{z}))_{,\alpha}
{\frac{\partial }{\partial z^i_{,\alpha}}}.$$ Noether’s Theorem implies that if $X$ generates variational symmetries, that is, if $$\label{varsym}
XL = B^\alpha_{,\alpha}$$ for some functions $B^\alpha$, then the interior product of $X$ with the one-form quasi-conservation law yields the conservation law $$\label{noethm}
(L_j^{\alpha}Q^j-B^{\alpha})_{,\alpha}=0.$$ This is the multisymplectic form of Noether’s theorem.
Every multisymplectic system is invariant under translations in the independent variables $\MM{q}$. For each of these symmetries, Noether’s theorem yields a conservation law $$(L_j^\alpha z^j_{,\beta}-L\delta^\alpha_\beta)_{,\alpha}=0.$$ Such conservation laws can equally well be obtained by pulling back the quasi-conservation law (\[ofcl\]) to the base space of independent variables. Commonly, the independent variables are spatial position $\MM{x}$ and time $t$. Pulling back (\[ofcl\]) to these base coordinates yields the energy conservation law from the ${\mathrm{d}}{t}$ component, and the momentum conservation law from the remaining components. We shall see the form of these conservation laws for fluid dynamics in later sections.
The inverse map and Clebsch representation {#inverse map sec}
==========================================
Lagrangian fluid dynamics and the inverse map
---------------------------------------------
Lagrangian fluid dynamics provides evolution equations for particles moving with a fluid flow. This is typically done by writing down a flow map $\Phi$ from some reference configuration to the fluid domain $\Omega$ at each instant in time. As the fluid particles cannot cavitate, superimpose or jump, this map must be a diffeomorphism.
For an $n$-dimensional fluid flow, the flow map $\Phi:\,\mathbb{R}^n\times\mathbb{R}\mapsto\mathbb{R}^n$ given by $\MM{x}=\Phi(\MM{l},t)$ specifies the spatial position at time $t$ of the fluid
---
abstract: 'In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and by maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.'
author:
- 'Jim Jing-Yan Wang, Majed Alzahrani and Xin Gao'
title: |
\
**Large Margin Image Set Representation and Classification [^1] [^2]**
---
Introduction
============
Traditionally, visual information processing and understanding is based on single images. The classification of a visual target, such as a human face, is also conducted based on a single image, where each training or test sample is an individual image [@Zhao2003399; @JonathonPhillips20001090; @wang2012scimate; @wang2012adaptive; @zhou2010region]. With the rapid development of video technologies, sequences of images are more commonly available than single images. Consequently, the visual target classification task can be upgraded from image-based to image-set-based. The image set classification problem has been proposed and studied recently [@MMD2008; @Chu20111567; @Fan2006177; @Kim2007; @Chin2006461; @Wang20091161; @MMD2008; @Arandjelovic2005581; @sun2012unsupervised]. In image set classification, each sample is a set of images instead of one single image, and each class is represented by one or more training samples. The classification problem is to assign a given test sample to one of the known classes. For example, in the image-set-based face recognition problem, each sample is a set of facial images with different poses, illuminations, and expressions. Compared to traditional single image classification, image set classification has the potential of achieving higher accuracy, because image sets usually contain more information than single images. Even if the images in an image set are of low quality, we can still exploit the temporal relationship and the complementarity between the images to improve the classification accuracy.
Although image set classification provides a novel and promising scheme for the visual classification problem, it has also brought challenges to the machine learning and computer vision communities. Traditional single-image-based representation and classification methods, such as principal component analysis (PCA) [@Wold198737], support vector machine (SVM) [@SVM2013], and K-nearest neighbors (KNN) [@KNN2013], are not suitable for this problem. To handle the image set classification problem, a number of methods have been proposed. In [@Kim2007], Kim et al. developed discriminant canonical correlations (DCC) for image set classification, proposing a linear discriminant function that simultaneously maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. The classification is done by transforming the image sets using the discriminant function and comparing them by canonical correlations. In [@MMD2008], Wang et al. formulated the image set classification problem as the computation of a manifold-manifold distance (MMD), i.e., calculating the distance between nonlinear manifolds, each of which represents one image set. In [@MDA2009], Wang et al. presented manifold discriminant analysis (MDA) for the image set classification problem, modeling each image set as a manifold and formulating the problem as a classification-oriented multi-manifold learning problem. Cevikalp and Triggs later introduced the affine hull-based image set distance (AHISD) for image-set-based face recognition [@AHISD2010], representing images as points in a linear or affine feature space and characterizing each image set by a convex geometric region spanned by its feature points. Set dissimilarity is measured by geometric distances between convex models. Hu et al.
proposed to represent each image set as both the image samples of the set and their affine hull model [@Hu2011; @Hu2012], and introduced a novel between-set distance called sparse approximated nearest point (SANP) distance for the image set classification. In [@Wang2012], Wang et al. modeled the image set with its natural second-order statistic, i.e. covariance matrix (COV), and proposed a novel discriminative learning approach to image set classification based on the covariance matrix.
Among all these methods, affine subspace-based methods, including AHISD [@AHISD2010] and SANP [@Hu2011; @Hu2012], have shown their advantage over the other methods. However, all these methods are unsupervised, ignoring the class labels of the image sets in the training set. Moreover, most image-set-based classification methods follow the framework of pairwise image set comparison [@MMD2008; @MDA2009; @AHISD2010; @Hu2011; @Hu2012]. A similar approach has been successfully adopted for pairwise comparison of individual samples by Mu et al. [@mu2013local], which exploits abundant discriminative information in each local manifold. In the set classification case, a test image set is compared to all the training image sets one by one, and then the nearest neighbor rule is utilized to decide which class the test image set belongs to. The disadvantages of this strategy are twofold:
- When the number of training image sets is large, this strategy is quite time-consuming.
- When a pair of image sets is compared, all other image sets are ignored, and thus the global structure of the image set dataset is lost.
To overcome these issues, in this paper, we propose a novel image set representation and classification method. Similar to SANP [@Hu2011; @Hu2012], we also use the image samples of an image set and their affine hull model to represent the image set. To utilize the class label of each image set, inspired by the large margin framework for feature selection [@sun2010local], we propose to maximize the margin of each image set. Based on this representation and its corresponding pairwise distance measure, we define two types of nearest neighboring image sets for each image set: the nearest neighbor from the same class (the nearest hit) and the nearest neighbor from a different class (the nearest miss). The margin of an image set is defined as the difference between its distances to the nearest miss and the nearest hit, and the representation parameter is learned by maximizing the margins of the image sets. To classify a test image set, we assign it to the class which achieves the largest margin for it. The contributions of the proposed Large Margin Image Set (LaMa-IS) representation and classification method are threefold:
1. Using the class labels, we define the margin of the image sets, such that the discriminative ability can be improved in a supervised manner.
2. The global structure of the image sets can also be explored by searching the nearest hit and nearest miss from the entire database for each images set.
3. To classify a test image set, we only need to compare it to every class instead of every training image set, which could reduce the time complexity of the online classification procedure significantly, especially when the number of training image sets is much larger than the number of classes.
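The nearest-hit/nearest-miss margin described above can be sketched as follows. This is an illustrative reconstruction only: it substitutes a generic minimum pairwise Euclidean distance for the learned affine-hull-based distance used in the paper, and all names are our own.

```python
import numpy as np

def set_distance(X, Y):
    """Placeholder set-to-set distance: smallest pairwise Euclidean distance.
    The paper instead uses a learned affine-hull-based distance."""
    diffs = X[:, None, :] - Y[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).min())

def margins(sets, labels):
    """Margin of each image set: distance to its nearest miss (closest set of a
    different class) minus distance to its nearest hit (closest set of the same
    class). A large positive margin means the set is well separated."""
    n = len(sets)
    D = np.array([[set_distance(sets[i], sets[j]) if i != j else np.inf
                   for j in range(n)] for i in range(n)])
    out = []
    for i in range(n):
        same = [j for j in range(n) if j != i and labels[j] == labels[i]]
        diff = [j for j in range(n) if labels[j] != labels[i]]
        out.append(D[i, diff].min() - D[i, same].min())
    return np.array(out)
```

The learning step then maximizes the sum of these margins over the representation parameters, and classification assigns a test set to the class yielding the largest margin.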
The rest of the paper is organized as follows: in Section \[sec:Method\], we propose the novel LaMa-IS algorithm; in Section \[sec:experiment\], the experimental results on several image-set-based face recognition problems are given; and finally, in Section \[sec:conclusion\], we draw conclusions.
Methods {#sec:Method}
=======
In this section we will introduce the proposed LaMa-IS method for image set representation and classification.
Objective Function
------------------
Suppose we have a database of image sets denoted as $\{(X_i,y_i)_{i=1}^N\}$, where $X_i$ is the data matrix of the $i$-th image set, and $y_i\in \{1,\cdots,C\}$ is its corresponding class label. In the data matrix $X_i = [{{\textbf{x}}}_{i,1},\cdots,{{\textbf{x}}}_{i,N_i}]\in \mathbb{R}^{D\times N_i}$, the $n$-th column, ${{\textbf{x}}}_{i,n}\in \mathbb{R}^D$, is the $D$-dimensional visual feature vector of the $n$-th image of the $i$-th image set, and $N_i$ is the number of images in the $i$-th image set. Note that the feature vector for an image can be the original pixel values or some other visual features extracted from the image, such as the local binary pattern (LBP) [@LBP2013]. To represent an image set, two linear models have been employed to approximate the structure of the image set, following [@Hu2011; @Hu2012]:
- Using the images in the image set, we can model the $i$-th image set as a linear combination of the images in the $i$-th set as $$\begin{aligned}
{{\textbf{x}}}= \sum_{n=1}^{N_i} {{\textbf{x}}}_{i,n} \alpha_{i
---
abstract: 'Let $p_{1}<p_2<\dots <p_{\nu}<\cdots$ be the sequence of prime numbers and let $m$ be a positive integer. We give a strong asymptotic formula for the distribution of the set of integers having prime factorizations of the form $p_{m^{k_1}}p_{m^{k_{2}}} \cdots p_{m^{k_{n}}}$ with $k_{1}\le k_{2}\le \dots \le k_{n}$. Such integers originate in various combinatorial counting problems; when $m=2$, they arise as Matula numbers of certain rooted trees.'
address: 'Department of Mathematics, Ghent University, Krijgslaan 281 Gebouw S22, B 9000 Gent, Belgium'
author:
- Hans Vernaeve
- Jasson Vindas
- Andreas Weiermann
title: Asymptotic distribution of integers with certain prime factorizations
---
Introduction
============
Let $\left\{p_\nu\right\}_{\nu=1}^{\infty}$ be the sequence of all prime numbers arranged in increasing order and let $m>1$ be a fixed positive integer. We shall consider the class of integers only admitting prime factors from the subsequence $\left\{p_{m^{k}}\right\}_{k=0}^{\infty}$, that is, the set $$\label{Matulaeq1}
A_{m}=\left\{p_{m^{k_1}}p_{m^{k_{2}}} \cdots p_{m^{k_{n}}}\in\mathbb{N}:\ 0\leq k_{1}\le k_{2}\leq \cdots \le k_{n}\right\}\:.$$ The aim of this article is to provide an asymptotic formula for the distribution of $A_{m}$, namely, the following counting function $$M_{2,m}(x)=\underset{n\in A_{m}}{\sum_{n\le x}} 1\: .$$
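For small $x$, the counting function $M_{2,m}(x)$ can be evaluated directly by enumerating products of the allowed primes with non-decreasing indices. The following brute-force sketch (illustrative only; function names are ours) uses a simple sieve and a depth-first search:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def M2m(x, m=2):
    """Count integers <= x whose prime factors all lie in {p_{m^k} : k >= 0}."""
    primes = primes_up_to(max(x, 3))
    allowed, k = [], 0
    # p_j <= x iff j <= pi(x) = len(primes), so this collects every usable p_{m^k}
    while m ** k <= len(primes) and primes[m ** k - 1] <= x:
        allowed.append(primes[m ** k - 1])   # p_{m^k}, 1-indexed
        k += 1

    def count(prod, start):
        # non-decreasing factor sequences count each product exactly once
        total = 0
        for i in range(start, len(allowed)):
            if prod * allowed[i] > x:
                break                        # allowed[] is increasing
            total += 1 + count(prod * allowed[i], i)
        return total

    return count(1, 0)
```

For instance, with $m=2$ the allowed primes below 20 are $p_1=2$, $p_2=3$, $p_4=7$ and $p_8=19$, giving the twelve elements $2,3,4,6,7,8,9,12,14,16,18,19$ of $A_2$ up to 20.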
The function $M_{2,2}$ arises in various interesting combinatorial counting problems; particularly, in connection with rooted trees. In 1968 Matula gave an enumeration of (non-planar) rooted trees by prime factorization [@Matula1968], the so-called Matula numbers. Number theoretic aspects of this rooted tree coding have been investigated in detail in [@b-t; @g-ivic1996]. Such numbers may be used to deduce many intrinsic properties of rooted trees [@Deutsch; @g-ivic1994; @g-y]. The set $A_{2}$ in fact corresponds to a class of Matula numbers. In Section \[rooted trees\] we review Matula coding of rooted trees and give the interpretation of $M_{2,2}$ as the counting function of rooted trees with height less than or equal to 2, under Matula’s enumeration. It is worth mentioning that the significance of Matula numbers comes from applications in organic chemistry, as they can be employed to develop efficient nomenclatures for representing molecules of a variety of organic compounds (cf. [@elk1989; @elk1990; @elk1994; @elk2011; @g-ivic-elk1993]). As explained in Section \[rooted trees\], $M_{2,2}$ might also be regarded as a “transfinite counting function” for the ordinal $\omega^{\omega}$ in a certain complexity norm [@Weiermann2010].
In [@Weiermann2010] Weiermann found the weak asymptotics of the counting function $M_{2,2}$. Using a Tauberian theorem by Kohlbecker for partitions [@kohlbecker1958], he showed that
$$\label{Matulaeq3}
\log M_{2,2}(x)\sim \pi \sqrt{\frac{2\log x}{3\log 2}}\,.$$
The asymptotic relation (\[Matulaeq3\]) resembles the one obtained by Hardy and Ramanujan in 1917 for the celebrated (unrestricted) partition function, $$\label{Matulaeq4}
\log p(n)\sim \pi \sqrt{\frac{2n}{3}}\:,$$ which they [@hardy-ramanujan1918], and independently Uspensky [@Uspensky1920], later greatly refined to $$\label{Matulaeq5}
p(n)\sim \frac{e^{\pi \sqrt{\frac{2n}{3}}}}{(4\sqrt{3})n}\: .$$ Naturally, the transition from (\[Matulaeq4\]) to (\[Matulaeq5\]) consists in finding missing asymptotic terms. The problem we address here is of similar nature. We shall fill the gap between (\[Matulaeq3\]) and the strong asymptotics by exhibiting hidden lower order terms in the approximation (\[Matulaeq3\]), as stated in the following theorem.
\[Matulath1\] The function $M_{2,m}$ has asymptotic behavior $$\label{Matulaeq6}
M_{2,m}(x)\sim \frac{e^{K_m}\sqrt{3}\log m}{2 \pi^{2}\log 2}(\log x)^{\frac{\log\left(\frac\pi {\sqrt{6\log m}}\right)}{2\log m}}
\exp\left(\pi\sqrt{\frac{2\log x}{3\log m}} - \frac {(\log\log x)^{2}}{8\log m}\right)\: ,$$ where $$K_m=\frac{1}{2\log m}\left((\log\log m)^2+\gamma^{2}-2\gamma\log\log m-\frac{\pi^{2}}{6}-\log^2\left(\frac\pi{\sqrt{6\log m}}\right)\right)- C_{2,m}\:,$$ $\gamma$ is the Euler-Mascheroni constant, and $C_{2,m}$ is given by the convergent series $$C_{2,m}=\sum_{k=1}^\infty \left(\log\log p_{m^k} - \log k - \log\log m - \frac{\log k}{k \log m} - \frac{\log\log m}{k \log m}\right)\:.$$
We will provide a proof of Theorem \[Matulath1\] in Section \[proof\]. The proof is based on Ingham’s method from [@Ingham]; however, it turns out that Ingham’s original theorem for partitions [@Ingham Thm. 2] is not directly applicable to our context. In Section \[Ingham theorem\], we shall slightly extend his result. It is likely that such an extension of Ingham’s theorem might be useful for treating partition problems other than the one dealt with in this article.
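As a quick numerical aside (not part of the paper's argument), the refinement (\[Matulaeq5\]) can be checked against exact values of $p(n)$ computed from Euler's pentagonal-number recurrence:

```python
import math

def partitions_upto(N):
    """Exact p(0..N) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

def hr_leading(n):
    """Right-hand side of the Hardy-Ramanujan-Uspensky asymptotic (Matulaeq5)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * math.sqrt(3) * n)

p = partitions_upto(200)
# p[100] == 190569292; the ratio hr_leading(200) / p[200] is about 1.03
```

The slow approach of the ratio to $1$ illustrates why lower-order terms, of the kind exhibited in Theorem \[Matulath1\], matter numerically.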
Two counting problems and $M_{2,m}$ {#rooted trees}
===================================
Rooted trees {#Matula Numbers}
------------
Matula’s coding of (non-planar) rooted trees in terms of prime factorizations provides a bijection between such trees and the positive integers. The same rooted tree enumeration was rediscovered by Göbel in [@gobel1980]. It is defined as follows. If we denote the trivial one-vertex tree by $\bullet$, then its Matula number is $n(\bullet):=1$. Inductively, if $T_1$, $T_2$, …, $T_l$ are trees and $T$ is given as $$\begin{tikzpicture}[thick,auto,baseline]
\coordinate[vertex] (r) at (0,0);
\coordinate[tree,label=left:$T_1$] (n1) at (-1,1);
\coordinate[tree,label=left:$T_2$] (n2) at (-.5,1.5);
\coordinate[tree,label=right:$T_{l-1}$] (n3) at (.5,1.5);
\coordinate[tree,label=right:$T_{l}$] (n4) at (1,1);
\coordinate[label=right:$\dots$] (dots) at (-.4,1.5);
\draw (r) to (n1);
\draw (r) to (n2);
\draw (r) to (n3);
\draw (r) to (n4);
\end{tikzpicture}$$ then its Matula number is defined as $n(T) := p_{n(T_1)}\cdots p_{n(T_l)}$.
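The recursive definition translates directly into code. A minimal sketch (not from the paper), representing a rooted tree as the nested tuple of its root's subtrees, so that `()` is the one-vertex tree $\bullet$; the helper `nth_prime` is a naive illustration:

```python
def nth_prime(n):
    """n-th prime p_n (1-indexed, p_1 = 2); trial division, fine for small n."""
    count, cand = 0, 1
    while count < n:
        cand += 1
        if all(cand % d for d in range(2, int(cand ** 0.5) + 1)):
            count += 1
    return cand

def matula(tree):
    """Matula number: n(one-vertex tree) = 1, n(T) = product of p_{n(T_i)}
    over the subtrees T_i attached to the root."""
    result = 1
    for subtree in tree:
        result *= nth_prime(matula(subtree))
    return result
```

For example, `matula(())` returns 1, a height-one tree with $k$ leaves gives $2^k$ since $p_1=2$, and the height-two tree whose root carries $T_{1,1}$ and $T_{1,2}$ receives $p_{2}\,p_{4}=3\cdot 7=21$, i.e. `matula((((),), ((), ())))` returns 21.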
If $T_{1,k}$ is the tree of height one with $k$ nodes above the root, then $n(T_{1,k})= p_1^k = 2^k$. If $T$ has height two, then $$n(T) = p_{n(T_{1,k_1})}p_{n(T_{1,k_2})} \cdots p_{n(T_{1,k_{\nu-1}})} p_{n(T_{1,k_\nu})} = p_{2^{k_1}}p_{2^{k_{2}}}\cdots p_{2^{k_{\nu-1}}} p_{2^{k_\nu}}\:,$$ where the $j$-th node connected to the root carries a tree $T_{1,k_j}$. $$\begin{tikzpicture}[thick,auto,baseline]
\coordinate[vertex] (r) at (0,0);
\coordinate[vertex,label=left:$T_{1,k_1}$] (n
---
abstract: 'We have studied the energy gap of the 1D AF-Heisenberg model in the presence of both uniform ($H$) and staggered ($h$) magnetic fields using the exact diagonalization technique. We have found that the opening of the gap in the presence of a staggered field scales with $h^{\nu}$, where $\nu=\nu(H)$ is the critical exponent and depends on the uniform field. With respect to the range of the staggered magnetic field, we have identified two regimes through which the $H$-dependence of the real critical exponent $\nu(H)$ can be numerically calculated. Our numerical results are in good agreement with the results obtained by theoretical approaches.'
address: 'Institute for Advanced Studies in Basic Sciences, Zanjan 45195-1159, Iran'
author:
- 'S. Mahdavifar'
title: ' Scaling behavior of the energy gap of spin-$\frac{1}{2}$ AF-Heisenberg chain in both uniform and staggered fields '
---
Introduction
============
The effect of external magnetic fields on the quantum properties of low-dimensional magnets has been of much interest in recent years. Experimental and theoretical studies of these systems have revealed a plethora of quantum fluctuation phenomena, not usually observed in higher dimensions. The magnetization processes in antiferromagnetic (AF) spin chains and ladders have been under intensive investigation using novel numerical techniques. Progress on the experimental front has been achieved by the introduction of high-field neutron scattering studies and the synthesis of magnetic quasi-one-dimensional systems such as the spin-$\frac{1}{2}$ antiferromagnet Cu benzoate [@dender; @asano1; @asano2] and $Yb_{4}As_{3}$ [@oshikawa1; @shiba; @kohgi]. Due to these developments we can now observe the effect of a staggered magnetic field (or even more complicated interactions) on the low-energy behavior of a one-dimensional quantum model in the laboratory.
There exist different mechanisms for generating a staggered field in a real magnet [@oshikawa2; @wang; @sato]. In Cu benzoate the alternating crystal axes are the source of such a field. Dender et al. [@dender] showed that an effective staggered field can be generated by the alternating g-tensor. Theoretically, Affleck et al. [@oshikawa2] have studied how an effective staggered field is generated by the Dzyaloshinskii-Moriya (DM) interaction if the crystal symmetry is sufficiently low. They showed that in the presence of DM interaction along the AF chain, an applied uniform field $\overrightarrow{H}$ generates an effective staggered field $\overrightarrow{h}$. Ignoring small residual anisotropies, they obtained an effective Hamiltonian where a one-dimensional Heisenberg AF chain is placed in perpendicular uniform ($H$) and staggered ($h$) fields $$\hat{H}= \sum_{j} [J \overrightarrow{S}_{j}.
\overrightarrow{S}_{j+1}-H S_{j}^{x}+h (-1)^{j}
S_{j}^{z}]\label{Hamiltonian}$$ It is expected [@oshikawa2; @alcaraz1] that the staggered field induces an excitation gap in the $S=\frac{1}{2}$ Heisenberg antiferromagnetic (AF) chain, which should be otherwise gapless. Such an excitation gap caused by the staggered field is indeed found in real magnets [@dender; @kohgi; @feyerherm].
In the absence of the staggered magnetic field ($h=0$) and the uniform magnetic field ($H=0$), the spectrum is gapless. In the ground state, the system is in the spin-fluid phase, where the decay of correlations follows a power law. When a uniform magnetic field is applied, the spectrum of the system remains gapless until a critical field $H_{c}=2 J$ is reached. Here a phase transition of the Pokrovsky-Talapov type [@pokrovsky] occurs and the ground state becomes a completely ordered ferromagnetic state [@griffiths]. Since the uniform magnetic field does not destroy the exact integrability of the Heisenberg model, the eigenspectrum is exactly solvable. Applying a staggered magnetic field, the integrability is lost. The application of a staggered magnetic field when $H=0$ produces an antiferromagnetically ordered (Néel order) ground state and induces a gap in the spectrum of the model. The Heisenberg model in both staggered and uniform fields has recently been studied [@lou] using the density matrix renormalization group (DMRG). It is shown that bound midgap states generally exist in open-boundary AF-Heisenberg chains. The gap and midgap energies in the thermodynamic limit are obtained by extrapolating numerical results of small chain sizes up to 200 sites. It is revealed that some of the gap and midgap energies for the half-integer spin chains fit well to a scaling function derived from the quantum Sine-Gordon model, but other low-energy excitations do not fit equally well.
In this paper, we present the numerical results obtained on the low-energy states of the 1D AF-Heisenberg model in both uniform and staggered fields using an exact diagonalization technique for finite systems. We calculate the spin gap as a function of the applied staggered field in the presence of a small uniform field ($0\leq H<0.1$). With respect to the range of the staggered magnetic field, we show that there are two regimes in which we can compute the real critical exponent of the energy gap, and it is important to note to which one of these regimes the numerical data are related. In Sec. II we discuss the scaling behavior of the gap using the available limiting behaviors. The leading exponent of the staggered field $h$ depends on $H$ both in the finite-size and thermodynamic limits. In Sec. III, we explain how, in certain limits, the numerical calculations may produce incorrect results for the critical exponent. We apply a perturbative approach [@langari] to find the correct critical exponent in the small-$x$ ($x=N h^{\nu(H)}$) regime. In Sec. IV, we increase the scaling parameter $x$ and find the correct critical exponent in the large-$x$ regime. Finally, the summary and discussion are presented in Sec. V.
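As a concrete illustration of such a finite-system computation (a minimal sketch, not the authors' code), the Hamiltonian (\[Hamiltonian\]) for a small periodic chain can be assembled from Kronecker products and diagonalized densely; the gap is then the difference of the two lowest eigenvalues:

```python
import numpy as np
from functools import reduce

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def site_op(single, site, N):
    """Embed a single-site operator at position `site` of an N-site chain."""
    mats = [I2] * N
    mats[site] = single
    return reduce(np.kron, mats)

def hamiltonian(N, J=1.0, H=0.0, h=0.0):
    """H = sum_j [J S_j.S_{j+1} - H S^x_j + h (-1)^j S^z_j], periodic boundaries."""
    dim = 2 ** N
    Ham = np.zeros((dim, dim), dtype=complex)
    for j in range(N):
        k = (j + 1) % N  # periodic boundary condition
        for s in (sx, sy, sz):
            Ham += J * site_op(s, j, N) @ site_op(s, k, N)
        Ham += -H * site_op(sx, j, N) + h * (-1) ** j * site_op(sz, j, N)
    return Ham

def gap(N, **kwargs):
    """Energy gap E_1 - E_0 from dense exact diagonalization."""
    E = np.linalg.eigvalsh(hamiltonian(N, **kwargs))
    return E[1] - E[0]
```

Even at $h=0$ such small chains have a nonzero finite-size gap, which is precisely why extracting the true exponent $\nu(H)$ requires the scaling analysis of the following sections.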
The Scaling Behavior of the Gap
===============================
In the high-field neutron-scattering experiment on Cu benzoate [@dender], which is a quasi-one-dimensional $S=\frac{1}{2}$ antiferromagnet, the magnetic field induces a gap in the excitation spectrum of the magnet. The observed gap is proportional to $H_{0}^{0.65}$, where $H_{0}$ is the magnitude of the applied field. This exponent of about $\frac{2}{3}$ describing the field dependence of the gap, obtained in different experiments [@feyerherm; @kohgi], identifies the source of this gap as the staggered field.
Using bosonization techniques, Affleck et al. showed that the gap scales as $$\begin{aligned}
\Delta(h, H) \sim h^{\nu(H)}, \label{e5}\end{aligned}$$ where $\nu(H)$ is the critical exponent of the gap and, when $H$ is strictly $0$, $\nu(H=0)=\frac{2}{3}$. The $H$-dependence of the exponent $\nu(H)$ is studied numerically in Ref. \[15\]. Their approach is based on the $\eta$-exponent, defined through the static structure factor of the model in the absence of a staggered field ($h=0$). They show that there is a relation between the critical exponent of the gap and the $\eta$-exponent. Then, by computing the $\eta$-exponent of the structure factor of the model, they predict the $H$-dependence of $\nu(H)$. Similarly, in an interesting recent work [@chernyshev], the effect of an external field on the gap of the 2D AF Heisenberg model with DM interaction has been studied. It is shown that the effect of the external field on the gap can be predicted by investigating the on-site magnetization of the model.
Here we study the evolution of the gap, using the conformal estimates of the small perturbation $h\ll1$, and the finite size scaling estimates of the energy eigenvalues of the small chains in the presence of the staggered field ($h\neq0$). We argue that there are two regimes in which the real critical exponent can be numerically calculated and it is very important to note to which one of these regimes the numerical data are related.
Let us rewrite the Hamiltonian (1) in the form $$\begin{aligned}
\hat{H}&=&\hat{H}_{0}+V \nonumber \\
\hat{H}_{0}&=& \sum_{j} [J \overrightarrow{S}_{j}.
\overrightarrow{S}_{j+1}-H S_{j}^{x}] \nonumber \\
V&=&h\sum_{j} (-1)^{j} S_{j}^{z}, \label{efh6}\end{aligned}$$ where $\hat{H}_{0}$ is exactly solvable by the Bethe ansatz and the staggered field $h\ll1$ is very small. For a small perturbation V, we can use conformal estimates. The large distance asymptotic of the correlation function of the model in the absence of the staggered field $(h=0)$ is obtained [@bogoliubovn] as $$\begin{aligned}
\langle S_{j}^{z} S_{j+n}^{z}\rangle \sim
\frac{(-1)^{n}}{n^{\alpha(H)}}\,,\end{aligned}$$ where $\alpha(H)$ is a function of the uniform field ($H$) and
---
abstract: 'We prove the theorem: The second order quasi-linear differential operator as a second rank divergence free tensor in the equation of motion for gravitation could always be derived from the trace of the Bianchi derivative of the fourth rank tensor, which is a homogeneous polynomial in curvatures. The existence of such a tensor for each term in the polynomial Lagrangian is a new characterization of the Lovelock gravity.'
author:
- 'Naresh Dadhich[^1]'
title: Characterization of the Lovelock gravity by Bianchi derivative
---
The Bianchi differential identity involving the purely anti-symmetric derivative of a derivative ($D^2=0$) was famously interpreted by John Wheeler [@mtw] as a statement of the fact that the [*boundary of a boundary is zero*]{}. The familiar examples of it are the curl of a gradient and the divergence of a curl being zero. The former signifies a scalar while the latter a vector field. However, its contraction in the two cases is vacuous and hence does not lead to a non-trivial statement$^{[1]}$. When we go beyond a vector to a tensor field, it becomes interesting.\
Gravity is the universal force, which means it couples to all particles regardless of whether their mass is non-zero or zero. It is its coupling to massless particles which leads to the profound realization that it could only be described by the curvature of spacetime [@nd1; @nd2]. This means the dynamics of gravity resides in the spacetime curvature, which must fully and entirely determine it. That is, the gravitational dynamics must follow from the curvature, which is described by the fourth rank Riemann curvature tensor (it is defined as $A^i_{;lk} - A^i_{;kl} = R^i{}_{mlk}A^m$, a generalized “curl”). It involves second and first derivatives of the spacetime metric, $g_{ab}$. The Bianchi identity is given by the anti-symmetric derivative of $R^i{}_{mlk}$, $$R^i{}_{m[lk;n]} = 0 .$$ If the gravitational dynamics has to follow from the curvature, it has to follow from this identity, which is the only available geometric relation. The only thing we can do to it is to contract on the available indices, which does lead, unlike for scalar and vector fields, to a non-vacuous relation, $$G^{a}{}_{b;a} = 0, ~~~~ G_{ab} = R_{ab} - {1\over2}Rg_{ab}$$ where $R_{ab}$ is the Ricci tensor, the contraction of Riemann, while $R$ is the trace of Ricci. Now the trace (contraction) of the Bianchi identity yields a non-trivial differential identity from which we can deduce the following relation $$G_{ab} = \kappa T_{ab} - \Lambda g_{ab}, ~~~ T^{a}{}_{b;a} = 0$$\
where $T_{ab}$ is a second rank symmetric tensor with vanishing divergence and $\kappa$ and $\Lambda$ are constants. The left-hand side of the equation is a second order differential operator on the metric $g_{ab}$. For this equation to describe the dynamics of gravity, the tensor $T_{ab}$ should describe the source/charge for gravity, which should also be universal. It should be shared by all particles and hence $T_{ab}$ should represent the energy-momentum distribution. Thus we obtain the Einstein equation for gravitation, which entirely follows from the spacetime curvature. We have however two constants, of which one, $\kappa$, is to be determined by experimentally measuring the strength of the force and is identified with Newton’s constant, $\kappa = -8 \pi G/c^2$. Why is there a new constant $\Lambda$, even though it arises in the equation as naturally as the energy-momentum tensor $T_{ab}$? It is perhaps because of the absence of a fixed spacetime background, which exists for the rest of physics, and the new constant may be a signature of this fact. It is the universal character of gravity which makes spacetime itself dynamic. The force-free state would however be characterized by homogeneity and isotropy of space and homogeneity of time, which will in general be described by a spacetime of constant curvature and not necessarily of zero curvature. The new constant $\Lambda$ is the measure of the constant curvature of spacetime and it identifies the most general spacetime for the force-free state. It may in some deep and fundamental sense be related to the basic structure of spacetime.\
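The contracted Bianchi identity, and hence the vanishing divergence of $G^{a}{}_{b}$, can be verified symbolically for any particular metric. A sketch with SymPy for a spatially flat FLRW metric with arbitrary scale factor $a(t)$ (purely illustrative; the identity of course holds for every metric):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)               # arbitrary scale factor
coords = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)     # flat FLRW metric, c = 1
ginv = g.inv()
N = 4
D = lambda f, mu: sp.diff(f, coords[mu])

# Christoffel symbols Gamma^l_{mn}
Gam = [[[sp.simplify(sum(ginv[l, s] * (D(g[s, m], n) + D(g[s, n], m)
                                       - D(g[m, n], s)) for s in range(N)) / 2)
         for n in range(N)] for m in range(N)] for l in range(N)]

# Riemann tensor R^r_{smn}; Ricci tensor R_{ij} = R^l_{ilj}
def riem(r, s, m, n):
    return (D(Gam[r][s][n], m) - D(Gam[r][s][m], n)
            + sum(Gam[r][m][l] * Gam[l][s][n] - Gam[r][n][l] * Gam[l][s][m]
                  for l in range(N)))

Ric = [[sp.simplify(sum(riem(l, i, l, j) for l in range(N)))
        for j in range(N)] for i in range(N)]
Rsc = sp.simplify(sum(ginv[i, j] * Ric[i][j] for i in range(N) for j in range(N)))

# Einstein tensor with mixed indices: G^i_j = g^{ik} (R_{kj} - R g_{kj} / 2)
G = [[sp.simplify(sum(ginv[i, k] * (Ric[k][j] - Rsc * g[k, j] / 2)
                      for k in range(N))) for j in range(N)] for i in range(N)]

# covariant divergence: nabla_i G^i_j = d_i G^i_j + Gam^i_{ik} G^k_j - Gam^k_{ij} G^i_k
for j in range(N):
    div = (sum(D(G[i][j], i) for i in range(N))
           + sum(Gam[i][i][k] * G[k][j] for i in range(N) for k in range(N))
           - sum(Gam[k][i][j] * G[i][k] for i in range(N) for k in range(N)))
    assert sp.simplify(div) == 0      # Bianchi: vanishes for arbitrary a(t)
```

The check goes through for any $a(t)$, reflecting that $G^{a}{}_{b;a}=0$ is an identity rather than a field equation.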
We also know that the complete contraction of Riemann gives the scalar curvature, $R$, the Einstein-Hilbert Lagrangian, which on variation leads to the divergence-free Einstein tensor, $G_{ab}$, and subsequently the Einstein equation. We thus have two distinct but equivalent derivations of the gravitational dynamics. The former is simply driven by the geometry while the latter is in the spirit of every dynamical equation following from an action. It is always possible to write an action constructed from the Riemann curvature for higher derivative gravity and derive the corresponding equation of motion. Similarly, is it possible to derive an analogue of $G_{ab}$, a divergence-free differential operator, from the Bianchi derivative of the higher order curvature polynomial? This is the question we wish to address, and we show that the answer is in the affirmative. It would give yet another characterization of the Lovelock gravity.\
We believe that gravitational dynamics following from the spacetime curvature should be a general principle, holding for higher order theories as well. So comes the question of going beyond linear order in Riemann. Let us consider the quadratic tensor, $${\mbox{$\mathcal{R}$}}_{abcd} = R_{abmn}R_{cd}{}^{mn} + \alpha R_{[a}{}^{m}R_{b]mcd} + \beta R R_{abcd} \label{gb1}$$ where $\alpha, \beta$ are constants. We consider the Bianchi derivative, ${\mbox{$\mathcal{R}$}}_{ab[cd;e]}$, which on contraction gives $$g^{ac} g^{bd} {\mbox{$\mathcal{R}$}}_{ab[cd;e]} = (-2{\mbox{$\mathcal{R}$}}_e{}^c + {\mbox{$\mathcal{R}$}} \delta_e{}^c)_{;c} \label{bd1}$$ where ${\mbox{$\mathcal{R}$}}_{ac} = g^{bd} {\mbox{$\mathcal{R}$}}_{abcd}$ and $ {\mbox{$\mathcal{R}$}} = g^{ab} {\mbox{$\mathcal{R}$}}_{ab}$. It turns out that for $\alpha=4$ and $\beta=1$, we obtain $$\begin{aligned}
{\mbox{$\mathcal{R}$}}^{cd}{}{}_{[cd;e]}&=& (-2{\mbox{$\mathcal{R}$}}_e{}^c + {\mbox{$\mathcal{R}$}} \delta_e{}^c)_{;c} \nonumber \\
&=& (-H_e{}^c + {1\over2} L_{GB} \delta_e{}^c)_{;c}. \end{aligned}$$ That is $${\mbox{$\mathcal{R}$}}^{cd}{}{}_{[cd;e]} - {1\over2} (L_{GB} \delta_e{}^c)_{;c} = -H_e{}^c{}_{;c} = 0.\label{bd2}$$ The tensor $H_{ab}$ is divergence free, $H_a{}^b{}_{;b} = 0$, and is given by $$\begin{aligned}
H_{ab}& =& 2(RR_{ab} - 2R_{a}{}^{m}R_{bm} - 2R^{mn}R_{ambn}\nonumber \\
&+& R_{a}{}^{mnl}R_{bmnl}) - \frac{1}{2} L_{GB} g_{ab}.\end{aligned}$$ It results from the variation of the well-known Gauss-Bonnet Lagrangian $L_{GB} = R_{abcd}^2 - 4R_{ab}^2 + R^2$ where we have written $R_{abcd}^2 = R_{abcd}R^{abcd}$. That is, we can write $$H_{ab} = 2{\mbox{$\mathcal{R}$}}_{ab} - \frac{1}{2}{\mbox{$\mathcal{R}$}} g_{ab} \label{hab}$$ where ${\mbox{$\mathcal{R}$}} = L_{GB}$.\
Though $H_{ab}$ can be written in terms of ${\mbox{$\mathcal{R}$}}_{abcd}$, it does not follow directly from it as $G_{ab}$ does from $R_{abcd}$. We note that the Bianchi derivative vanishes only for the Riemann curvature, signifying the fact that it can be written as a generalized curl of a vector. No other tensor will have a vanishing Bianchi derivative. However, to find the analogue of $G_{ab}$, we do not require the vanishing of the Bianchi derivative of the quadratic tensor ${\mbox{$\mathcal{R}$}}_{abcd}$; instead the vanishing of its trace would suffice. We see that even the trace of the Bianchi derivative does not vanish but is equal to ${1\over2} (L_{GB} \delta_e{}^c)_{;c}$. It suggests that the curvature polynomial should also include a term involving its own trace, which would make no contribution in the linear case. Before we do that, let us write ${\mbox{$\mathcal{R}$}}_{abcd}$ and $H_{ab}$ for a general order $n$ in the Lovelock polynomial [@lov]. In the context of higher derivative gravity theories, a unified scheme for writing the Lagrangian is given [@paddy] as an invariant product, $Q^{abcd}R_{abcd}$, where $Q^{abcd}$ has the same symmetry properties as the Riemann tensor. It is constructed from metric
---
abstract: 'We report a femtosecond response in photoinduced magnetization rotation in the ferromagnetic semiconductor GaMnAs, which allows for detection of a four-state magnetic memory at the femtosecond time scale. The temporal profile of this cooperative magnetization rotation exhibits a discontinuity that reveals two distinct temporal regimes, marked by the transition from a highly non-equilibrium, carrier-mediated regime within the first 200 fs, to a thermal, lattice-heating picosecond regime.'
author:
- 'J. Wang'
- 'I. Cotoros'
- 'X. Liu'
- 'J. Chovan'
- 'J. K. Furdyna'
- 'I. E. Perakis'
- 'D. S. Chemla'
title: |
Memory Effect in the Photoinduced Femtosecond Rotation of Magnetization\
in the Ferromagnetic Semiconductor GaMnAs
---
![(Color online) Static magnetic memory. (a)-(b): Sweeping a slightly tilted [*B*]{} field (5$^o$ from the [*Z*]{}-axis and 33$^o$ from the [*X*]{}-axis) up (dashed line) and down (solid line) leads to consecutive 90$^o$ magnetization switchings between the [*XZ*]{} and [*YZ*]{} planes, manifesting as a “major" hysteresis loop in the Hall magneto-resistivity. (c)-(d): “Minor" hysteresis loop with [*B*]{} field sweeping in the vicinity of 0T. The magnetic memory state X$-$(0) or Y$+$(0) is parallel to one of the easy axis directions in the [*XY*]{} plane. ](Fig1f.eps)
\[dep\]
Magnetic materials displaying [*carrier-mediated*]{} ferromagnetic order offer fascinating opportunities for non-thermal, potentially [*femtosecond*]{} manipulation of magnetism. A model system of such materials is Mn-doped III-V ferromagnetic semiconductors that have received a lot of attention lately [@ohno1998]. On the one hand, their magnetic properties display a strong response to excitation with light or electrical gate and current via carrier density tuning [@Koshiharaetal97PRL; @ohnoetal2000; @wangetalPRL2007]. On the other hand, the strong coupling ($\sim$1 eV in GaMnAs) between carriers (holes) and Mn ions inherent in carrier-mediated ferromagnetism could enable a [*femtosecond*]{} cooperative magnetic response induced by photoexcited carriers. Indeed, the existence of a very early non-equilibrium, non-thermal femtosecond regime of collective spin rotation in (III,Mn)Vs has been predicted theoretically [@ChovanetalPRL2006]. In addition, a coherent driving mechanism for femtosecond spin rotation via [*virtual*]{} excitations has also been recently demonstrated in antiferro- and ferri-magnets [@coherent]. Nevertheless, all prior studies of photoexcited magnetization rotation in ferromagnetic (III,Mn)Vs showed dynamics on the few picosecond timescale, which accesses the quasi-equilibrium, quasi-thermal, lattice-heating regime [@psRotationGaMnAs]. Up to now in these materials, the main observation on the femtosecond time scale has been photoinduced demagnetization [@wangetalPRL2005; @wangetalreview2006; @wang2008; @Cywinski_PRB07].
Custom-designed (III,Mn)V hetero- and nano-structures show rich magnetic memory effects. One prominent example is GaMnAs-based four-state magnetic memory, where “giant” magneto-optical and magneto-transport effects allow for ultrasensitive magnetic memory readout [@fourstate]. However, all detection schemes demonstrated so far have been static measurements. Achieving an understanding of collective magnetic phenomena on the femtosecond time scale is critical for terahertz detection of magnetic memory and therefore essential for developing realistic “spintronic” devices and large-scale functional systems.
In this Letter, we report on photoinduced [*femtosecond*]{} collective magnetization rotation that allows for femtosecond detection of magnetic memory in GaMnAs. Our time-resolved magneto-optical Kerr effect (MOKE) technique directly reveals a photoinduced four-state magnetic hysteresis via a quasi-instantaneous magnetization rotation. We observe for the first time a distinct initial temporal regime of the magnetization rotation within the first $\sim$200 fs, during the photoexcitation and highly non-equilibrium, non-thermal carrier redistribution times. We attribute the existence of such a regime to a [*carrier-mediated*]{} effective magnetic field pulse, arising without assistance from either lattice heating or demagnetization.
The main sample studied was grown by low-temperature molecular beam epitaxy (MBE), and consisted of a 73-nm Ga$_{0.925}$Mn$_{0.075}$As layer on a 10 nm GaAs buffer layer and a semi-insulating GaAs \[100\] substrate. The Curie temperature and hole density were 77 K and $3 \times 10^{20}$ cm$^{-3}$, respectively. As shown in Fig. 1, our structure exhibits a four-state magnetic memory functionality. By sweeping an external magnetic field B aligned nearly parallel to the sample normal, with small components in both the [*X*]{} and [*Y*]{} directions in the sample plane, one can sequentially access four magnetic states, X$+\rightarrow$Y$-\rightarrow$X$-\rightarrow$Y$+$, via abrupt 90$^o$ magnetization ($\mathbf{M}$) switchings between the [*XZ*]{} and [*YZ*]{} planes \[Fig. 1(a)\]. In these magnetic states, $\mathbf{M}$ aligns along a direction arising as a combination of the external B field and the anisotropy fields, which point along the in-plane easy axes \[100\] and \[010\]. The multistep magnetic switchings manifest themselves as abrupt jumps in the four-state hysteresis in the Hall magneto-resistivity $\rho _{Hall}$ \[Fig. 1(b)\] (planar Hall effect [@fourstate]). The continuous slopes of $\rho _{Hall}$ indicate a coherent out-of-plane $\mathbf{M}$ rotation during the perpendicular magnetization reversal (anomalous Hall effect [@ohno1998]). Fig. 1(c)-(d) show the B scans in the vicinity of 0T, with the field turning points between the coercivity fields, i.e., $B_{c1}<\left|B\right|<B_{c2}$. This leads to a “minor” hysteresis loop, accessing two magnetic memory states at $B=$0T: X$-$(0) and Y$+$(0).
We now turn to the transient magnetic phenomena. We performed time-resolved MOKE spectroscopy [@wangetalreview2006] using 100 fs laser pulses. The linearly polarized ($\sim$12 degrees from the crystal axis \[100\]) UV pump beam was chosen at 3.1 eV, with peak fluence $\sim$ 10$\mu$J/cm$^2$. A NIR beam at 1.55 eV, kept nearly perpendicular to the sample ($\sim$ 0.65 degrees from the normal), was used as probe. The signal measured in this polar geometry reflects the out-of-plane magnetization component, M$_z$.
![(Color) Photoinduced femtosecond four-state magnetic hysteresis. (a) B field scans of $\triangle \theta _{k}$ at 5K for time delays $\triangle t=$ -1 ps, 600 fs, and 3.3 ps. The traces are vertically offset for clarity. Inset (left): temporal profiles of normalized Kerr ($\theta _{k}$) and ellipticity ($\eta _{k}$) angle changes at 1.0T; Inset (right): static magnetization curve at 5K ($\sim$4 mrad), measured in the same experimental condition (but without the pump pulse). (b) Temporal profiles of photoinduced $\triangle \theta _{k}$ for the four magnetic states. Shaded area: pump–probe cross–correlation.[]{data-label="mag-dep"}](Fig2f.eps)
Fig. 2(a) shows the B field scan traces of the photoinduced change, $\Delta\theta_K$, in the Kerr rotation angle at three time delays, $\triangle t=$ -1 ps, 600 fs, and 3.3 ps. The magnetic origin of this femtosecond MOKE response [@KoopmansetAl00PRL] was confirmed by control measurements showing a complete overlap of the pump–induced rotation ($\theta _{k}$) and ellipticity ($\eta _{k}$) changes \[left inset, Fig. 2(a)\]. $\Delta\theta_K$ is negligible at $\triangle t=$-1 ps. However, a mere $\triangle t=$600 fs after photoexcitation, a clear photoinduced four-state magnetic hysteresis is observed in the magnetic field dependence of $\Delta\theta_K$ (and therefore $\Delta M_z$), with four abrupt switchings at $\left|B_{c1}\right|=$0.074T and $\left|B_{c2}\right|=$0.33T due to the magnetic memory effects. As marked by the arrows in Fig. 2(a), the four magnetic states X$+$, X$-$, Y$-$, Y$+$ for $\left|B\right|=$0.2T give different photoinduced $\Delta\theta_K$. It is critical to note that the steady-state MOKE curve, i.e. $\theta_K$ without pump field, does not show any sign of magnetic switching or memory behavior \[right inset, Fig. 2(a)\]; these arise from the pump photoexcitation. The B field scans also show a saturation behavior at
|
{
"pile_set_name": "ArXiv"
}
| null |
---
abstract: 'We propose the generalized uncertainty principle (GUP) with an additional term of quadratic momentum, motivated by string theory and black hole physics, as a quantum mechanical framework for the minimal length uncertainty at the Planck scale. We demonstrate that the GUP parameter, $\beta_0$, could be best constrained by gravitational-wave observations, namely the GW170817 event. Also, we suggest another proposal based on the modified dispersion relations (MDRs) in order to calculate the difference between the group velocity of gravitons and that of photons. We conclude that the upper bound reads $\beta_0 \simeq 10^{60}$. Utilizing features of the UV/IR correspondence and the obvious similarities between GUP (including non-gravitating and gravitating impacts on the Heisenberg uncertainty principle) and the discrepancy between the theoretical and the observed cosmological constant $\Lambda$ (apparently manifesting gravitational influences on the vacuum energy density), known as the [*catastrophe of non-gravitating vacuum*]{}, we suggest a possible solution for this long-standing physical problem, $\Lambda \simeq 10^{-47}~$GeV$^4/\hbar^3 c^3$.'
author:
- Abdel Magied Diab
- Abdel Nasser Tawfik
bibliography:
- 'upperBoundofGUPParameterV1.bib'
title: A Possible Solution of the Cosmological Constant Problem based on Minimal Length Uncertainty and GW170817 and PLANCK Observations
---
Introduction {#intro}
============
The cosmological constant, $\Lambda$, an essential ingredient of the theory of general relativity (GR) [@Einstein1917As], was guided by the idea that the Universe should be static [@Tawfik:2011mw; @Tawfik:2008cd]. This model was subsequently refuted and accordingly the $\Lambda$-term was abandoned from the Einstein field equation (EFE), especially after the confirmation of the celebrated Hubble observations in 1929 [@Hubble:1929ig], which also verified the consequences of the Friedmann solutions of the EFE with vanishing $\Lambda$ [@Friedman:1922kd]. Nearly immediately after the publication of GR, a matter-free solution of the EFE with finite $\Lambda$-term was obtained by de Sitter [@deSitter:1917zz]. Later on, when the Einstein [*static*]{} Universe was found to be unstable against small perturbations [@Mulryne:2005ef; @Wu:2009ah; @delCampo:2011mq], it was argued that the inclusion of the $\Lambda$-term remarkably contributes to the stability and simultaneously supports the expansion of the Universe, especially since the initial singularity of the Friedmann-Lemaître-Robertson-Walker (FLRW) models could be improved as well [@Weinberg1972AA; @Misner1984B]. Furthermore, the observations of type-Ia high-redshift supernovae in the late nineties of the last century [@Riess:1998cb; @Perlmutter:1998np] indicated that the expanding Universe is also accelerating, especially at a small $\Lambda$-value, which obviously contributes to the cosmic negative pressure [@Garriga:1999bf; @Martel:1997vi]. In this regard, we recall that the cosmological constant can be related to the vacuum energy density, $\rho$, as $\Lambda=8\pi G \rho/c^2$, where $c$ is the speed of light in vacuum and $G$ is the gravitational constant. In 2018, the PLANCK observations provided us with a precise estimation of $\Lambda$, namely $\Lambda_{\mbox{Planck}} \simeq 10^{-47}$GeV$^4/\hbar^3 c^3$ [@Aghanim:2018eyx].
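As a back-of-the-envelope check of the quoted magnitude, one can insert the PLANCK value $\Lambda \simeq 1.1\times 10^{-52}\,\mathrm{m}^{-2}$ into $\Lambda = 8\pi G \rho/c^{2}$ and convert the resulting vacuum energy density to natural units (the numerical constants below are standard values, inserted here purely for illustration):

```python
import math

# standard SI values (CODATA / PLANCK 2018), for illustration only
G_N  = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8        # speed of light, m s^-1
hbar = 1.055e-34      # reduced Planck constant, J s
GeV  = 1.602e-10      # 1 GeV in joules
Lam  = 1.11e-52       # cosmological constant, m^-2

rho_mass   = Lam * c**2 / (8 * math.pi * G_N)   # mass density, kg m^-3
rho_energy = rho_mass * c**2                    # energy density, J m^-3

# one GeV^4 / (hbar c)^3 expressed in J m^-3
unit = GeV / (hbar * c / GeV) ** 3
rho_natural = rho_energy / unit                 # vacuum energy density in GeV^4
# rho_natural is of order 1e-47, reproducing Lambda_Planck ~ 10^-47 GeV^4/hbar^3 c^3
```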
When comparing this tiny value with the theoretical estimation based on quantum field theory in weakly- or non-gravitating vacuum, $\Lambda_{\mbox{QFT}} \simeq 10^{74}$GeV$^4/\hbar^3 c^3$, there is, at least, a $121$-orders-of-magnitude-difference to be fixed [@Adler:1995vd; @Weinberg:1988cp; @Zeldovich:1968ehl].
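The size of this discrepancy can be verified with a one-line computation; the sketch below simply compares the two order-of-magnitude estimates quoted above (both in units of GeV$^4/\hbar^3 c^3$).

```python
import math

# Order-of-magnitude estimates quoted in the text (GeV^4 / hbar^3 c^3).
lambda_planck = 1e-47   # PLANCK 2018 observational value
lambda_qft = 1e74       # quantum-field-theoretic vacuum estimate

# Number of orders of magnitude separating the two values.
gap = math.log10(lambda_qft / lambda_planck)
print(round(gap))  # 121
```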
The disagreement between both values is one of the greatest mysteries in physics, known as the cosmological constant problem or the [*catastrophe of non-gravitating vacuum*]{}. Here, we present an attempt to solve this problem. To this end, we utilize the generalized uncertainty principle (GUP), an extended version of the Heisenberg uncertainty principle (HUP) in which a correction term encompassing the gravitational impacts is added, so that an alternative quantum gravity approach emerges [@Tawfik:2014zca; @Tawfik:2015rva]. To summarize, the present attempt is motivated by the similarity between GUP (including non-gravitating and gravitating impacts on HUP) and the disagreement between the theoretical and observed estimations of $\Lambda$ (manifesting gravitational influences on the vacuum energy density), and by the remarkable impacts of $\Lambda$ on the early and late evolution of the Universe [@Tawfik:2019jsa; @Tawfik:2011mw; @Tawfik:2008cd]. Various quantum gravity approaches presenting quantum descriptions of different physical phenomena in the presence of gravitational fields are to be acknowledged here [@Tawfik:2014zca; @Tawfik:2015rva].
The GUP offers a quantum mechanical framework for a potential minimal length uncertainty in terms of the Planck scale [@Tawfik:2017syy; @Tawfik:2016uhs; @Dahab:2014tda; @Ali:2013ma]. The minimal length uncertainty, as proposed by GUP, exhibits some features of the UV/IR correspondence [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj], which has been performed in viewpoint of local quantum field theory. Thus, it is argued that the UV/IR correspondence is relevant to revealing several aspects of short-distance physics, such as, the cosmological constant problem [@Weinberg:1988cp; @Banks:2000fe; @Cohen:1998zx; @ArkaniHamed:2000eg]. Therefore, a precise estimation of the minimal length uncertainty strongly depends on the proposed upper bound of the GUP parameter, $\beta_0$ [@Dahab:2014tda; @Tawfik:2013uza].
Various ratings for the upper bound of $\beta_0$ have been proposed, for example, by comparing quantum gravity corrections to various quantum phenomena with electroweak [@Das:2008kaa; @Das:2009hs] and astronomical [@Scardigli:2014qka; @Feng:2016tyt] observations. Accordingly, $\beta_0$ ranges between $10^{33}$ to $10^{78}$ [@Scardigli:2014qka; @Feng:2016tyt; @Walker:2018muw]. As a preamble of the present study, we present a novel estimation for $\beta_0$ from the binary neutron stars merger, the gravitational wave event GW170817 reported by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Advanced Virgo collaborations [@TheLIGOScientific:2017qsa]. With this regard, there are different efforts based on the features of the UV/IR correspondence in order to interpret the $\Lambda$ problem [@Chang:2001bm; @Chang:2011jj; @Miao:2013wua; @Shababi:2017zrt; @Vagenas:2019wzd] with Liouville theorem in the classical limit [@Fityo:2008zz; @Chang:2001bm; @Wang:2010ct]. Having a novel estimation of $\beta_0$, a solution of the $\Lambda$ problem, [*catastrophe of non-gravitating vacuum*]{}, could be best proposed.
The present paper is organized as follows. Section \[MDRGUP\] reviews the basic concepts of the GUP approach with quadratic momentum. The associated modifications of the energy-momentum dispersion relations related to GR and rainbow gravity are also outlined in this section. In section \[GUPparameter\], we show that the dimensionless GUP parameter, $\beta_0$, can be constrained, for instance, by the gravitational wave event GW170817. Section \[LamdaProblem\] is devoted to calculating the vacuum energy density of states and shows how this contributes to understanding the cosmological constant problem within a quantum gravity approach, the GUP. The final conclusions are outlined in section \[conclusion\].
Generalized Uncertainty Principle and Modified Dispersion Relations \[MDRGUP\]
==============================================================================
Several approaches to quantum gravity, such as the GUP, predict a minimal length uncertainty that could be related to the Planck scale [@Tawfik:2015rva; @Tawfik:2014zca]. Various laboratory experiments have been conducted to examine the GUP effects [@Bawaj:2014cda; @Marin:2013pga; @Pikovski:2011zk; @Khodadi:2018kqp]. In this section, we focus the discussion on the GUP with a quadratic momentum uncertainty [@Tawfik:2015rva; @Tawfik:2014zca]. This version of the GUP was obtained from black hole physics [@Gross:1987kza] and supported by [*gedanken*]{} experiments [@Maggiore:1993zu]; it was proposed by Kempf, Mangano, and Mann (KMM) [@Kempf:1994su], $$\Delta x \, \Delta p \geq \frac{\hbar}{2}\left[1+\beta \left(\Delta p\right)^{2}\right], \label{GUPunc}$$
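As a quick numerical illustration (ours, not part of the original derivation): the quadratic KMM relation $\Delta x\, \Delta p \geq \frac{\hbar}{2}\left[1+\beta(\Delta p)^2\right]$ implies a minimal position uncertainty $\Delta x_{\min}=\hbar\sqrt{\beta}$, obtained by minimizing the saturated bound over $\Delta p$. The sketch below checks this in units with $\hbar=1$ and an arbitrary illustrative value of $\beta$.

```python
import numpy as np

hbar = 1.0   # work in units with hbar = 1
beta = 2.5   # illustrative (dimensionful) GUP parameter, arbitrary value

# Saturated KMM bound: dx(dp) = (hbar/2) * (1/dp + beta*dp)
dp = np.linspace(1e-3, 10.0, 200_001)
dx = 0.5 * hbar * (1.0 / dp + beta * dp)

# The numerical minimum agrees with the analytic result dx_min = hbar*sqrt(beta)
dx_min_num = dx.min()
dx_min_ana = hbar * np.sqrt(beta)
print(abs(dx_min_num - dx_min_ana) < 1e-4)  # True
```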
---
abstract: 'We introduce higher-order support varieties for pairs of modules over a commutative local complete intersection ring, and give a complete description of which varieties occur as such support varieties. In the context of a group algebra of a finite elementary abelian group, we also prove a higher-order Avrunin-Scott-type theorem, linking higher-order support varieties and higher-order rank varieties for pairs of modules.'
address:
- |
Petter Andreas Bergh\
Institutt for matematiske fag\
NTNU\
N-7491 Trondheim\
Norway
- |
David A. Jorgensen\
Department of mathematics\
University of Texas at Arlington\
Arlington\
TX 76019\
USA
author:
- 'Petter Andreas Bergh & David A. Jorgensen'
title: 'Realizability and the Avrunin-Scott theorem for higher-order support varieties'
---
[^1]
Introduction {#sec:intro}
============
Support varieties for modules over commutative local complete intersections were introduced in [@Avramov] and [@AvramovBuchweitz], inspired by the cohomological varieties of modules over group algebras of finite groups. These geometric invariants encode several homological properties of the modules. For example, the dimension of the variety of a module equals its complexity. In particular, a module has finite projective dimension if and only if its support variety is trivial.
In this paper, we define higher-order support varieties for pairs of modules over complete intersections. These varieties are defined in terms of Grassmann varieties of subspaces of the canonical vector space associated to the defining regular sequence of the complete intersection. Thus, for a fixed dimension $d$, the support varieties of order $d$ are subsets of the Grassmann variety of $d$-dimensional subspaces of the canonical vector space, under a Plücker embedding into $\mathbb P^{{c \choose d}-1}$. For $d=1$, we recover the classical support varieties: the varieties of order $1$ are precisely the projectivizations of the support varieties defined in [@AvramovBuchweitz].
We show that several of the results that hold for classical support varieties also hold for the higher-order varieties. Among these is the realizability result: we give a complete description of the closed subsets of the Grassmann variety that occur as higher-order support varieties. We also prove a higher-order Avrunin-Scott result for group algebras of finite elementary abelian groups. Namely, we extend the notion of $r$-rank varieties from [@CarlsonFriedlanderPevtsova] to higher-order rank varieties of pairs of modules and show that these varieties are isomorphic to the higher-order support varieties.
In Section 2 we give our definition of higher-order support varieties, and prove some of their elementary properties. In particular, we show that they are well-defined, independent of the choice of corresponding intermediate complete intersection, and are in fact closed subsets of the Grassmann variety. In Section 3 we discuss the realizability question, and in Section 4 we prove the higher-order Avrunin-Scott result.
Higher-order support varieties {#sec:hdsv}
==============================
In this section and the next, we fix a regular local ring $(Q, {\operatorname{\mathfrak{n}}\nolimits}, k)$ and an ideal $I$ generated by a regular sequence of length $c$ contained in ${\operatorname{\mathfrak{n}}\nolimits}^2$. We denote by $R$ the complete intersection ring $$R = Q/I,$$ and by $V$ the $k$-vector space $$V=I/{\operatorname{\mathfrak{n}}\nolimits}I.$$ For an element $f\in I$, we let $\overline f$ denote its image in $V$.
If the codimension of the complete intersection $R=Q/I$ is at least 2, then $V$ has dimension at least 2, and it makes sense to consider subspaces $W$ of $V$. Each such subspace has many corresponding complete intersections, in the following sense: if $W$ is a subspace of $V$, then choosing preimages in $I$ of a basis of $W$ we obtain another regular sequence [@BrunsHerzog Theorem 2.1.2(c,d)]; let $J\subseteq I$ denote the ideal it generates. We thus get natural projections of complete intersections $Q\to Q/J\to R$. We call $Q/J$ a *complete intersection intermediate to $Q$ and $R$*, or when the context is clear, simply an *intermediate complete intersection*.
We now give our definition of higher-order support variety. We fix a basis of $V$, and let ${\operatorname{G}\nolimits}_d(V)$ denote the Grassmann variety of $d$th order subspaces of $V$ under the Plücker embedding into $\mathbb P^{{c \choose d}-1}$ with respect to the chosen basis of $V$.
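For readers unfamiliar with the Plücker embedding, here is a small numerical illustration of ours (not from the paper): for $c=4$ and $d=2$, a $2$-dimensional subspace $W\subseteq k^4$ maps to the point of $\mathbb P^{5}$ whose homogeneous coordinates are the $2\times 2$ minors of a basis matrix, and these coordinates satisfy the single Plücker relation cutting ${\operatorname{G}\nolimits}_2(k^4)$ out of $\mathbb P^{5}$.

```python
from itertools import combinations

import numpy as np

# A 2-dimensional subspace W of k^4, given by a basis as the rows of B.
B = np.array([[1.0, 0.0, 2.0, 3.0],
              [0.0, 1.0, 4.0, 5.0]])

c, d = 4, 2
# Pluecker coordinates: one d x d minor per choice of d columns.
p = {cols: np.linalg.det(B[:, cols]) for cols in combinations(range(c), d)}
print(len(p))  # 6 = binom(4, 2), i.e. a point of P^5

# The single Pluecker relation p01*p23 - p02*p13 + p03*p12 = 0:
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(abs(rel) < 1e-9)  # True
```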
We set $$V_R^d(M,N)=\{p_W\in{\operatorname{G}\nolimits}_d(V)\mid {\operatorname{Ext}\nolimits}_{Q/J}^i(M,N)\ne 0 \text{ for infinitely many $i$}\},$$ where $W$ is a $d$th order subspace of $V$, $p_W$ is the corresponding point in the Grassmann variety ${\operatorname{G}\nolimits}_d(V)$, and $Q/J$ is an intermediate complete intersection corresponding to $W$. We also define ${\operatorname{V}\nolimits}_R^d(M)={\operatorname{V}\nolimits}_R^d(M,k)$.
We note that ${\operatorname{V}\nolimits}_R^1(M,N)$ is the projectivization of the affine support variety ${\operatorname{V}\nolimits}_R(M,N)$ defined in [@AvramovBuchweitz].
There are two aspects of the definition which warrant further discussion.
1. \[independent\] The definition is independent of the chosen intermediate complete intersection $Q/J$ corresponding to $W$, and
2. \[closed\] ${\operatorname{V}\nolimits}_R^d(M,N)$ is a closed set in $G_d(V)$.
We next give proofs of these two statements.
Let $Q/J$ and $Q/J'$ be two complete intersections intermediate to $Q$ and $R$. The condition that $$(J+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I=(J'+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I$$ in $V$ defines an equivalence relation on the set of such intermediate complete intersections. The following result addresses (\[independent\]) above.
Suppose that $Q/J$ and $Q/J'$ are equivalent complete intersections intermediate to $Q$ and $R$, that is, $(J+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I=(J'+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I$ in $V$. Then for all finitely generated $R$-modules $M$ and $N$ one has ${\operatorname{Ext}\nolimits}_{Q/J}^i(M,N)=0$ for all $i\gg 0$ if and only if ${\operatorname{Ext}\nolimits}_{Q/J'}^i(M,N)=0$ for all $i\gg 0$.
Let $W=(J+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I$ and consider the natural map of $k$-vector spaces $\varphi_{J}:J/{\operatorname{\mathfrak{n}}\nolimits}J\to W\subseteq V$ defined by $f+{\operatorname{\mathfrak{n}}\nolimits}J \mapsto f+{\operatorname{\mathfrak{n}}\nolimits}I$. This is an isomorphism: it is onto by construction, and one-to-one since $J\cap {\operatorname{\mathfrak{n}}\nolimits}I={\operatorname{\mathfrak{n}}\nolimits}J$. The condition that $(J+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I=(J'+{\operatorname{\mathfrak{n}}\nolimits}I)/{\operatorname{\mathfrak{n}}\nolimits}I$ is equivalent to $\varphi_J(J/{\operatorname{\mathfrak{n}}\nolimits}J)=\varphi_{J'}(J'/{\operatorname{\mathfrak{n}}\nolimits}J')$. By [@BerghJorgensen Proposition 3.2], one has the equality $\varphi_J({\operatorname{V}\nolimits}_{Q/J}(M,N))=\varphi_{J'}({\operatorname{V}\nolimits}_{Q/J'}(M,N))$, where ${\operatorname{V}\nolimits}_{Q/J}(M,N)$ denotes the affine support variety of $M$ and $N$ over the complete intersection $Q/J$. By [@AvramovBuchweitz Proposition 2.4(1) and Theorem 2.5] one has that ${\operatorname{Ext}\nolimits}_{Q/J}^i(M,N)=0$ for all $i\gg 0$ if and only if ${\operatorname{V}\nolimits}_{Q/J}(M,N)=\{0\}$. The same holds over $Q/J'$, and thus the result follows by the injectivity of $\varphi_{J}$ and $\varphi_{J'}$.
---
abstract: 'We study the phase diagram of a Dirac semimetal in a magnetic field at a nonzero charge density. It is shown that there exists a critical value of the chemical potential at which a first-order phase transition takes place. At subcritical values of the chemical potential the ground state is a gapped state with a dynamically generated Dirac mass and a broken chiral symmetry. The supercritical phase is the normal (gapless) phase with a nontrivial chiral structure: it is a Weyl semimetal with a pair of Weyl nodes for each of the original Dirac points. The nodes are separated by a dynamically induced chiral shift. The direction of the chiral shift coincides with that of the magnetic field and its magnitude is determined by the quasiparticle charge density, the strength of the magnetic field, and the strength of the interaction. The rearrangement of the Fermi surface accompanying this phase transition is described.'
author:
- 'E. V. Gorbar'
- 'V. A. Miransky'
- 'I. A. Shovkovy'
date: 'July 26, 2013'
title: Engineering Weyl nodes in Dirac semimetals by a magnetic field
---
Introduction {#1}
============
During the past decades, a remarkable overlap of such seemingly different areas in physics as condensed matter and relativistic physics took place (for a recent review, see Ref. ). It was especially clearly manifested in studying graphene.[@graphene] Such well known phenomena as the Klein paradox, the dynamics of a supercritical charge, and the dynamical generation of a Dirac mass in a magnetic field (magnetic catalysis), revealed in relativistic field theory, were first observed in studies of graphene (see Refs. ). In the present paper, we will consider manifestations in condensed matter of another relativistic phenomenon: the rearrangement of the Fermi surface in three dimensional relativistic matter in a magnetic field. The original motivation for studying this phenomenon was its possible realization in magnetars and pulsars, and in heavy ion collisions.[@Gorbar:2009bm; @Gorbar:2011ya] But as we discuss below, it could also be relevant for such new materials as Dirac and Weyl semimetals.
Dirac and Weyl semimetals possess low-energy quasiparticles near the Fermi surface, which are described by the Dirac and Weyl equation, respectively.[@review] As was established long ago, an example of a semimetal whose low-energy effective theory includes three dimensional Dirac fermions is yielded by bismuth (for reviews, see Refs. ). On the other hand, examples of the realization of the Weyl semimetals have been considered only recently.[@Wan; @Burkov1; @Burkov2] Weyl semimetals, which are three dimensional analogs of graphene, present a new class of materials with nontrivial topological properties.[@Volovik] Since their electronic states in the vicinity of Weyl nodes have a definite chirality, this leads to quite unique transport and electromagnetic properties of these materials.
The most interesting signatures of Dirac and Weyl semimetals discussed in the literature [@Burkov1; @1210.6352; @Franz; @Carbotte; @Basar; @Abanin; @Landsteiner] are connected with different nondissipative transport phenomena intimately related to the axial anomaly.[@anomaly] Many of them were previously suggested in studies of heavy ion collisions (for a review, see Ref. ).
In this paper, we will consider a different signature of Dirac semimetals: a dynamical rearrangement of their Fermi surfaces in a magnetic field. As we show below, this rearrangement is quite spectacular: a Dirac semimetal is transformed into a Weyl one. The resulting Weyl semimetal has a pair of Weyl nodes for each of the original Dirac points. Each pair of the nodes is separated by a dynamically induced (axial) vector $2\mathbf{b}$, whose direction coincides with the direction of the magnetic field. The magnitude of the vector $\mathbf{b}$ is determined by the quasiparticle charge density, the strength of the magnetic field, and the strength of the interaction. This phenomenon of the dynamical transformation of Dirac into Weyl semimetals is a condensed matter analog of the previously studied dynamical generation of the chiral shift parameter in magnetized relativistic matter in Refs. .
This paper is organized as follows. In Sec. \[section2\] we introduce the model and set up the notations. The gap equation for the fermion propagator in the model is derived in Sec. \[section3\]. We show that, at a nonzero charge density, a pair of Weyl nodes necessarily arises in the normal phase of a Dirac metal as soon as a magnetic field is turned on. In Sec. \[section4\] a perturbative solution of the gap equation describing the normal phase of the model is analyzed. A nonperturbative solution with a dynamical gap that spontaneously breaks the chiral symmetry is analyzed in Sec. \[section5\]. A phase transition between the normal phase and the phase with chiral symmetry breaking is revealed and described. In Sec. \[section6\] we compare the dynamics in Dirac semimetals and graphene. A deep connection of the normal phase of a Dirac metal in a magnetic field with the quantum Hall state with the filling factor $\nu = 2$ in graphene is pointed out. The discussion of the results and conclusions is given in Sec. \[section7\]. For convenience, throughout this paper, we set $\hbar=1$.
Model {#section2}
=====
As stated in the Introduction, the main goal of this paper is to show that a dynamical transformation of Dirac semimetals into Weyl ones can be achieved by applying an external magnetic field to the former. It is convenient, however, to start our discussion from writing down the general form of the low-energy Hamiltonian for a Weyl semimetal, $$H^{\rm (W)}=H^{\rm (W)}_0+H_{\rm int},
\label{Hamiltonian-model-Weyl}$$ where $$H^{\rm (W)}_0=\int d^3r \left[\,v_F \psi^{\dagger} (\mathbf{r})\left(
\begin{array}{cc} \bm{\sigma}\cdot(-i\bm{\nabla}-\mathbf{b}) & 0\\ 0 &
-\bm{\sigma}\cdot(-i\bm{\nabla}+\mathbf{b}) \end{array}
\right)\psi(\mathbf{r})-\mu_{0}\, \psi^{\dagger} (\mathbf{r})\psi(\mathbf{r})
\right]
\label{free-Hamiltonian}$$ is the Hamiltonian of the free theory, which describes two Weyl nodes of opposite (as required by the Nielsen–Ninomiya theorem [@NN]) chirality separated by vector $2\mathbf{b}$ in momentum space. In the rest of this paper, following the terminology of Refs. , we will call $\mathbf{b}$ the chiral shift parameter. There are two reasons for choosing this terminology. First, as Eq. (\[free-Hamiltonian\]) implies, vector $\mathbf{b}$ shifts the positions of Weyl nodes from the origin in the momentum space and, secondly, the shift has opposite signs for fermions of different chiralities. The other notations are: $v_F$ is the Fermi velocity, $\mu_{0}$ is the chemical potential, and $\bm{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ are Pauli matrices associated with the conduction-valence band degrees of freedom in a generic low-energy model.[@Burkov3] Based on the similarity of the latter to the spin matrices in the relativistic Dirac equation, we will call them pseudospin matrices.
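To make the role of $\mathbf{b}$ concrete, one can diagonalize the free part of the Hamiltonian numerically and verify that its spectrum is gapless exactly at $\mathbf{p}=\pm\mathbf{b}$. Below is a minimal sketch of ours (not from the paper), with $v_F=1$, $\mu_0=0$, and an illustrative $\mathbf{b}$ along $z$.

```python
import numpy as np

# Pauli (pseudospin) matrices and an illustrative chiral shift along z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
b = np.array([0.0, 0.0, 0.3])

def H(p):
    """4x4 free Weyl Hamiltonian with v_F = 1 and mu_0 = 0."""
    h_plus = (p[0] - b[0]) * sx + (p[1] - b[1]) * sy + (p[2] - b[2]) * sz
    h_minus = -((p[0] + b[0]) * sx + (p[1] + b[1]) * sy + (p[2] + b[2]) * sz)
    zero = np.zeros((2, 2), dtype=complex)
    return np.block([[h_plus, zero], [zero, h_minus]])

gap = lambda p: np.min(np.abs(np.linalg.eigvalsh(H(p))))
print(gap([0.0, 0.0, 0.3]) < 1e-12)   # True: Weyl node at p = +b
print(gap([0.0, 0.0, -0.3]) < 1e-12)  # True: Weyl node at p = -b
print(gap([0.0, 0.0, 0.0]) > 0.1)     # True: gapped away from the nodes
```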
The interaction part of the Hamiltonian describes the Coulomb interaction, i.e., $$H_{\rm int} = \frac{1}{2}\int d^3rd^3r^{\prime}\,\psi^{\dagger}(\mathbf{r})\psi(\mathbf{r})U(\mathbf{r}-\mathbf{r}^{\prime})
\psi^{\dagger}(\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime}).
\label{int-Hamiltonian}$$ In order to present our results in the most transparent way, in this study we will utilize a simpler model with a contact four-fermion interaction, $$U(\mathbf{r}) = \frac{e^2}{\kappa |\mathbf{r}|} \rightarrow g\, \delta^3(\mathbf{r}),
\label{model-interaction}$$ where $\kappa$ is a dielectric constant and $g$ is a dimensionful coupling constant. As we argue in Sec. \[section7\], such a model interaction should at least be sufficient for a qualitative description of the effect of the dynamical generation of the chiral shift parameter by a magnetic field in Dirac semimetals.
Before proceeding further with the analysis, we find it very convenient to introduce the four-dimensional Dirac matrices in the chiral representation: $$\gamma^0 = \left( \begin{array}{cc} 0 & -I\\ -I & 0 \end{array} \right),\qquad
\bm{\gamma} = \left( \begin{array}{cc} 0& \bm{\sigma} \\ - \bm{\sigma} & 0 \end{array} \right),
\label{Dirac-matrices}$$ where $I$ is the two-dimensional unit matrix, and rewrite our model Hamiltonian in a relativistic form, $$H^{\rm (W)} = \int d^3 r\, \bar{\psi} (\mathbf{r})\left[
-i v_F (\bm{\gamma}\cdot \bm{\nabla})-(\mathbf{b}\cdot \bm{\gamma})\gamma^5-\mu_{0}\gamma^0
\right]\psi(\mathbf{r})
+H_{\rm int}.$$
---
abstract: 'Superlinear scaling in cities, which appears in sociological quantities such as economic productivity and creative output relative to urban population size, has been observed but not been given a satisfactory theoretical explanation. Here we provide a network model for the superlinear relationship between population size and innovation found in cities, with a reasonable range for the exponent.'
author:
- Samuel Arbesman
- 'Jon M. Kleinberg'
- 'Steven H. Strogatz'
bibliography:
- 'cityscaling.bib'
title: Superlinear Scaling for Innovation in Cities
---
Introduction
============
It has been known for nearly a hundred years that living things obey scaling relationships. Max Kleiber first recognized that the metabolic rates of different mammals scale according to their masses raised to a $3/4$-power [@ARB:Kle32]. More recently, Geoffrey West and his colleagues have provided a theoretical explanation for this scaling law, as well as for many other allometric laws found in biology [@ARB:Wes97; @ARB:Whi2006]. Their theory is based upon the fractal branching networks (such as circulatory systems) found in all living things, whose function is to convey energy and nutrients to all parts of the organism. They argue that the larger the organism, the more efficient the system that can be constructed to provide energy, thereby yielding the observed sublinear exponent of $3/4$.
More recently, West and his team examined a variety of properties of cities. They found that cities, which have long been compared to living things [@ARB:Zipf49; @ARB:Jacobs01; @ARB:Aristotle98], obey scaling relationships as well [@ARB:Bet2007]. Similar to living things, cities have economies of scale, yielding sublinear scaling for such quantities as the number of gas stations within a city as a function of its population. In other words, a bigger city needs fewer gas stations per person. Examples of such scaling laws are shown in the upper portion of table \[table\_1\].
On the other hand, cities also exhibit superlinear scaling, which appears in relation to sociological quantities. As shown in the lower part of table \[table\_1\], properties of cities related to economic productivity and creative output have exponents that are all found to cluster between 1 and 1.5, with the mean around 1.2. Thus, the productivity *per person* increases as a city gets larger. However, this superlinear scaling has not been given a satisfactory mathematical explanation.
  Urban Indicators ($y$)          Exponent
  ------------------------------ ----------
  Gasoline stations                 0.77
  Gasoline sales                    0.79
  Length of electrical cables       0.87
  Road surface                      0.83
  New patents                       1.27
  Inventors                         1.25
  Private R & D employment          1.34
  ‘Supercreative’ employment        1.15
  R & D establishments              1.19
  R & D employment                  1.26
  Total wages                       1.12
  Total bank deposits               1.08
  GDP                               1.15

  \[table\_1\]
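An exponent like those in table \[table\_1\] is typically obtained as the slope of a least-squares fit in log-log coordinates. Below is a minimal sketch on synthetic data; the population range, prefactor, and noise level are illustrative choices of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
N = np.logspace(4, 7, 50)                             # city populations
Y = 3.0 * N**1.2 * rng.lognormal(0.0, 0.05, N.size)   # noisy indicator, beta = 1.2

# Slope of the log-log regression recovers the scaling exponent.
beta_hat, log_y0 = np.polyfit(np.log(N), np.log(Y), 1)
print(abs(beta_hat - 1.2) < 0.05)  # True, up to the injected noise
```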
Here we suggest a theoretical explanation for the superlinear relationship between population size and innovation found in cities, with a reasonable range for the exponent. Due to the sociological nature of the variables being measured, it is natural to use a network model of a city, since it is reasonable to assume that network effects must underlie the superlinear scaling, as West and his colleagues have suggested [@ARB:Bet2007; @ARB:Bet2008]. For this, we draw on a recent class of models that derives superlinear scaling and [*densification properties*]{} from hierarchically organized networks [@ARB:Les2005], adapting these models to the present question of productivity.
Model and Results
=================
We first assume that all social interactions and relationships are arranged in a hierarchical tree structure [@ARB:Kle2001; @ARB:Les2005; @ARB:Wat2002]. Picture a binary tree, or in general, a tree where each branch splits into $b$ new branches. For example, in a city, each person is in a household, and there are many households on a block, and many blocks in a neighborhood, and so forth. Or the grouping could be based on your family tree, or corporations, or many other ways to group individuals. While in reality each individual belongs to many independent hierarchies [@ARB:Wat2002], here we simplify it as a single hierarchy, with branching number $b \geq 2$. We define the [*distance*]{} $d$ between two individuals in this hierarchy to be the height of their lowest common ancestor. We view the total system as a city, meaning that a city of population $N$ represents a single tree that contains $N$ leaves. On top of the tree structure, which serves to determine the social distance among nodes, a random graph is placed showing the social connections — who actually knows whom.
![\[fig\_1\]Network representation of the social structure of a city’s inhabitants. For example, if we are determining distance from individual A, then B is at distance 1, while G is at a distance of 3.](fig_1){width="3"}
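The social distance $d$ is easy to compute for a complete $b$-ary tree: label the leaves $0,\dots,N-1$ and repeatedly pass to parents until the two labels coincide. A minimal sketch of ours, reproducing the distances quoted in the figure caption above (leaves A-H labelled $0$-$7$):

```python
def hierarchy_distance(i, j, b=2):
    """Height of the lowest common ancestor of leaves i and j
    in a complete b-ary tree."""
    d = 0
    while i != j:
        i, j, d = i // b, j // b, d + 1
    return d

# Leaves A..H of the binary tree in the figure, labelled 0..7:
print(hierarchy_distance(0, 1))  # 1: A and B share a parent
print(hierarchy_distance(0, 6))  # 3: A and G only meet at the root
```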
Our modeling strategy is to use these social connections as the basis for a city’s productivity: at a high level, we assume that each interaction between a pair of people contributes to the overall productivity, in a way that depends on the distance $d$ as measured within the hierarchy. More concretely, our procedure for generating networks will produce a directed graph, and we will account for the productivity benefits of each edge $(v,w)$ by allocating it to $v$’s overall productivity. This allocation to $v$ rather than to both nodes is essentially for purposes of analysis, since we will be focusing in the end on the total population’s productivity rather than any one individual’s, and for determining this total we will see that it does not matter to which individual we allocate the benefits of the edge $(v,w)$.
The total creative productivity of the city is defined to be the sum of the productivities of each individual, and so we first consider how to compute individual productivities. To calculate the total productivity of a single person, three separate effects must be considered: (1) the probability of connecting to an individual at distance $d$; (2) the number of available people at distance $d$; and (3) the creative output that is obtained by linking to a single person at distance $d$. Multiplying these together gives the productivity due to one person linking to all of his collaborators at distance $d$, as seen below: $$\left[ \dfrac{\text{\# contacts at }d}{\text{\# people at }d} \right] \left[ \text{\# people at }d \right] \left[ \dfrac{\text{output at }d}{\text{contacts at }d} \right]$$ By summing this term over all distances, the total creative contribution of a single individual is obtained. The functional form of each term in the above recipe for calculating the productivity of a single individual is discussed below.
Taking the first term, the social connections between collaborators are constructed such that the likelihood of forming a connection at a certain social distance drops off exponentially fast with distance [@ARB:Kle2001; @ARB:Les2005; @ARB:Wat2002]. That is, the probability of a connection being made between nodes of a social distance $d$ (where $d$ is the height of the first common internal node) is assumed proportional to $b^{-\alpha d}$, where $\alpha$ is a tunable parameter greater than or equal to zero.
It is natural that the connection probability should decay with social distance, but why exponentially? We have assumed that the social network tree is self-similar at all levels (values of $d$). Since the tree is self-similar, it makes sense to have the function also be self-similar (scale-free) with respect to the value of $d$, and doing this yields an exponential function (this assumption is relaxed in the next section).
Since at each increase in $d$ there are exponentially more potential contacts to interact with, we multiply the above function by a second term, $b^{d}$, which means that as we increase $d$, while the likelihood of making a connection decays, there are exponentially more contacts to make. To keep things simple, we suppose connections are only made between residents of the city (connections outside a city are viewed as contributing less directly to the city’s total productivity, and are ignored).
Lastly, the usefulness of a social connection within a city is assumed to vary with its social distance. For example, one could assume that there is a productivity benefit as social distance increases. This can be explained as being due to the fact that individuals that are socially distant are exposed to different ideas and experiences, and that collaboration between two more socially distant individuals is more productive than interaction between ones that are closer. However, the value of a social connection is left open, and simply assumed to be proportional to $b^{\beta d}$, where $\beta$ is a tunable parameter that can hold any value (even negative values, allowing the value of a connection to decrease with distance). An exponential function is reasonable here as well, if we assume that a connection’s innovation potential depends on the number of individuals that lie between the two endpoints of the connection in social space. This assumption is also relaxed in the next section.
The total productivity of the social connections within an $N$-person city is now a random variable equal to the sum of all the individual productivities.
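Under the three exponential assumptions above, the expected per-person contribution is the geometric sum $\sum_d b^{-\alpha d}\, b^{d}\, b^{\beta d}$, so (when $1-\alpha+\beta>0$) the total productivity grows roughly as $N^{2-\alpha+\beta}$. A numeric sketch of this expectation, with illustrative parameter values of our own choosing:

```python
import numpy as np

b, alpha, beta = 2, 1.0, 0.2   # illustrative parameters

def total_output(N):
    """N people, each summing b^(-alpha*d) * b^d * b^(beta*d) over distances d."""
    h = int(round(np.log(N) / np.log(b)))   # tree height, N = b^h
    d = np.arange(1, h + 1)
    per_person = np.sum(float(b) ** ((1.0 - alpha + beta) * d))
    return N * per_person

# Log-log slope between two city sizes approaches 2 - alpha + beta = 1.2:
n1, n2 = 2**12, 2**20
slope = np.log(total_output(n2) / total_output(n1)) / np.log(n2 / n1)
print(abs(slope - (2 - alpha + beta)) < 0.05)  # True
```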
---
title: 'Measurement of $A_{\Gamma}$'
---
The measurement of the charm CP violation observable $A_{\Gamma}$ using $pp$ collisions at $\sqrt{s}=7$ TeV recorded by the LHCb detector in 2011 is presented. This new result is the most accurate to date.
Introduction
============
CP violation in charm meson decays is expected to be small in the Standard Model (SM) and any significant enhancement would be a signal of New Physics (NP). Thus far no CP violation has been unambiguously observed in the charm system.
The CP violation observable $A_{\Gamma}$ is defined as the asymmetry of the effective lifetimes of $\Dz$ and $\Dzb$ mesons decaying to the same CP eigenstate, $\Kp\Km$ or $\pi^+\pi^-$, $$A_{\Gamma} = \frac{\hat{\Gamma}(\Dz\to\Kp\Km) - \hat{\Gamma}(\Dzb\to\Kp\Km)}{\hat{\Gamma}(\Dz\to\Kp\Km) + \hat{\Gamma}(\Dzb\to\Kp\Km)} \approx \frac{A_{m}+A_{d}}{2}y\cos\phi-x\sin\phi,$$ where $A_{m}$ and $A_{d}$ are the asymmetries due to CP violation in mixing and decay respectively, $\phi$ is the interference phase between mixing and decay and $x$ and $y$ are the charm mixing parameters.
In the Standard Model [$A_{\Gamma}$]{} is expected to be small [@Lenz] ($\sim$10$^{-4}$) and roughly independent of the final state. New Physics (NP) models may introduce larger CP violation and some final-state dependence of the phase $\phi$, leading to a difference in [$A_{\Gamma}$]{} between the $\Kp\Km$ and $\pi^+\pi^-$ final states [@Sokoloff], $$\Delta A_{\Gamma} = A_{\Gamma}(KK) - A_{\Gamma}(\pi\pi) = \Delta A_{D} y\cos\phi + (A_{M}+A_{D})y\Delta\cos\phi - x\Delta\sin\phi.$$
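The approximate expression for [$A_{\Gamma}$]{} is simple enough to evaluate directly. A minimal sketch (the parameter values below are purely illustrative, not measured averages):

```python
import math

def a_gamma(x, y, phi, a_m, a_d):
    """A_Gamma ~ (a_m + a_d)/2 * y*cos(phi) - x*sin(phi); angles in radians."""
    return 0.5 * (a_m + a_d) * y * math.cos(phi) - x * math.sin(phi)

# With no CP violation in mixing or decay and a vanishing phase, A_Gamma = 0:
print(a_gamma(x=0.005, y=0.006, phi=0.0, a_m=0.0, a_d=0.0))  # 0.0

# A per-cent-level mixing asymmetry and a small phase keep A_Gamma at the
# sub-per-mille level, consistent with the SM expectation quoted above:
val = a_gamma(x=0.005, y=0.006, phi=0.02, a_m=-0.02, a_d=0.0)
print(abs(val) < 1e-3)  # True
```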
The experimental status of the measurement of [$A_{\Gamma}$]{}, including the Heavy Flavour Averaging Group (HFAG)[@HFAG] average and excluding the results presented here, is shown in Fig. \[fig:agamma\].
![Experimental status of [$A_{\Gamma}$]{}.[]{data-label="fig:agamma"}](a_gamma_14may12.pdf){width="48.00000%"}
Presented here are new results for the measurement of [$A_{\Gamma}$]{} using 1 fb$^{-1}$ of $pp$ collisions at a centre-of-mass energy of 7 TeV recorded by the LHCb detector in 2011 [@paper].
Analysis Method
===============
The mean lifetimes of the $\Dz$ and $\Dzb$ are extracted via a fit to their decay times. The data to be fitted are broken into eight subsets. The splits are motivated by the two detector magnet polarities with which data were taken and two separate data-taking periods, to account for known differences in detector alignment and calibration. Finally, the $\Dz$ and $\Dzb$ candidates are fitted separately.
The initial flavour of the $\Dz$ is determined by searching for the decay $\Dstarp\to\Dz{\HepParticle{\pi}{s}{+} }$, where the charge of the pion indicates the flavour. Due to the small $Q$ value of this decay, the pion is referred to as slow.
The procedure is carried out in two stages. In the first, the $\Dz$ mass and the difference between the $\Dstarp$ and $\Dz$ masses ([$\Delta m$ ]{}) are fitted simultaneously. This allows the separation of the signal and background components and the determination of the background probability density functions used in the subsequent fits. Example mass and [$\Delta m$ ]{}fit results for the $\Kp\Km$ final state can be seen in Fig. \[fig:massfit\].
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Fit of the mass (left) and [$\Delta m$ ]{}(right) for subset of data containing $\Dzb\to\Kp{}\Km$ candidates with magnet polarity down for the earlier run period.[]{data-label="fig:massfit"}](Massfit_D0bar_KK_log.pdf "fig:"){width="48.00000%"} ![Fit of the mass (left) and [$\Delta m$ ]{}(right) for subset of data containing $\Dzb\to\Kp{}\Km$ candidates with magnet polarity down for the earlier run period.[]{data-label="fig:massfit"}](Deltamfit_D0bar_KK_log.pdf "fig:"){width="48.00000%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The second stage fits the decay times and the natural logarithm of the impact parameter $\chi^{2}$ ($\ln(\mathrm{IP}\,\chi^{2})$). Candidates originating from $b$-hadron decays (secondary) have longer measured lifetimes than those originating at the primary vertex (prompt), as the $b$ hadron has not been reconstructed. It is therefore necessary to separate these in the fit to avoid biasing the lifetime measurement. This is done using the $\ln(\mathrm{IP}\,\chi^{2})$ variable: due to the flight distance of the $b$ hadron, the impact parameter of a secondary $\Dz$ is larger than that of prompt candidates, as shown in Fig. \[fig:secondary\]. Example fits are shown in Fig. \[fig:timefit\].
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The separation of prompt (left) and secondary (right) decays by considering their impact parameters.[]{data-label="fig:secondary"}](prompt.pdf "fig:"){width="30.00000%"} ![The separation of prompt (left) and secondary (right) decays by considering their impact parameters.[]{data-label="fig:secondary"}](secondary.pdf "fig:"){width="30.00000%"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Fit of the (left) and decay time (right) for data subset containing the $\Dzb\to\Kp{}\Km$ candidates with magnet polarity down for the earlier run period.[]{data-label="fig:timefit"}](LogIPfit_D0bar_KK_log.pdf "fig:"){width="48.00000%"} ![Fit of the (left) and decay time (right) for data subset containing the $\Dzb\to\Kp{}\Km$ candidates with magnet polarity down for the earlier run period.[]{data-label="fig:timefit"}](Timefit_D0bar_KK_log.pdf "fig:"){width="48.00000%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Lifetime biases due to the acceptance of the trigger and selections are corrected for using the “swimming” method. The primary vertex is moved along the $\Dz$ flight direction and the trigger rerun to find the point in lifetime at which the candidate changes from being rejected to accepted. One can thus construct an acceptance function in lifetime for each event, as shown in Fig. \[fig:swimming\]. An average acceptance function for the whole data set can then be constructed and folded into the fit. For a complete description see [@Vava].
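A minimal sketch of the per-event acceptance construction described above: each swum candidate yields a decay-time "turning point" above which it is accepted, and averaging these step functions over the sample gives the acceptance folded into the fit. The single turning point per event and the numbers below are simplifying assumptions for illustration, not LHCb data.

```python
def event_acceptance(t, turning_point):
    """Per-event acceptance: 0 before the trigger turn-on time, 1 after."""
    return 1.0 if t >= turning_point else 0.0

def average_acceptance(t, turning_points):
    """Sample-averaged acceptance at decay time t."""
    return sum(event_acceptance(t, tp) for tp in turning_points) / len(turning_points)

# Hypothetical per-event turning points (arbitrary time units).
tps = [0.2, 0.3, 0.3, 0.5]
for t in (0.1, 0.25, 0.4, 0.6):
    print(t, average_acceptance(t, tps))  # rises from 0.0 to 1.0 in steps
```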
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The swimming method. The primary vertex is ‘swum’ along the direction (from left to right). The trigger is rerun for each position and the lifetime of the candidate at which it becomes accepted by the trigger is found (middle).[]{data-label="fig:swimming"}](swim1.pdf "fig:"){width="30.00000%"} ![The swimming method. The primary vertex is ‘swum’ along the direction (from left to right). The trigger is rerun for each position and the lifetime of the candidate at which it becomes accepted by the trigger is found (middle).[]{data-label="fig:swimming"}](swim2.pdf "fig:"){width="30.00000%"} ![The swimming method. The primary vertex is ‘swum’ along the direction (from left to right). The trigger is rerun for each position and the lifetime of the candidate at which it becomes accepted by the trigger is found (middle).[]{data-label="fig:swimming"}](swim3.pdf "fig:"){width="30.00000%"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Summary of systematic uncertainties
===================================
The systematic uncertainties of the method are evaluated through a mixture of studies on simplified simulated data and variations of the fit. Table \[tab:systematics\] summarises the results of these studies. Additionally, further effects such as detector resolution and track reconstruction efficiency (amongst others) are examined and found to have a negligible effect on the measurement. The data are also split into bins of various kinematic variables (for example momentum $p$, transverse momentum $p_{T}$ and flight direction) and no systematic variation in the result is found.
The dominant systematic uncertainty comes from the acceptance function. This includes the uncertainty of the turning point positions determined by the swimming method and their subsequent utilisation in the fit procedure.
Effect $A_{\Gamma}(KK)\times 10^{-3}$ $A_{\Gamma}(\pi\pi)\times 10^{-3}$
------------------------ -------------------- --------------------
Mis-reconstructed bkg. $\pm 0.02$ $\pm 0.00$
Charm from $b$-hadron decays $\pm 0.07$ $\pm0.07$
Other backgrounds $\pm0.02$ $\pm0.04$
Acceptance function $\pm0.09$ $\pm0.11$
Total $\pm0.12$ $\pm0.14$
: Summary of the systematic uncertainties on the measurement of [$A_{\Gamma}$]{} for the two final states.[]{data-label="tab:systematics"}
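The totals in Table \[tab:systematics\] are consistent with adding the individual sources in quadrature, i.e. treating them as uncorrelated (an assumption in this quick check):

```python
import math

def quadrature(*errs):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

# Entries from the table, in units of 10^-3.
kk = quadrature(0.02, 0.07, 0.02, 0.09)
pipi = quadrature(0.00, 0.07, 0.04, 0.11)
print(round(kk, 2), round(pipi, 2))  # → 0.12 0.14
```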
Results
=======
The results of the measurement for the $\Kp\Km$ and $\pi^{+}\pi^{-}$ final states are: $$A_{\Gamma}(KK) = (0.35\pm0.62_{stat}\pm0.
---
abstract: 'We present the results from a multiwavelength study of the flaring activity in the HBL 1ES 1959+650 during January 2015-June 2016. The source underwent significant flux enhancements, showing two major outbursts (March 2015 and October 2015) in optical, UV, X-rays and gamma-rays. Normally, HBLs are not very active, but 1ES 1959+650 has shown exceptional outburst activity across the whole electromagnetic spectrum (EMS). We used the data from Fermi-LAT, Swift-XRT $\&$ UVOT and optical data from Mt. Abu InfraRed Observatory (MIRO), along with archival data from Steward Observatory, to look for possible connections between emissions at different energies and the nature of variability during the flaring state. During the October 2015 outburst, thirteen nights of optical follow-up observations showed the brightest and faintest nightly averaged V-band magnitudes to be 14.45(0.03) and 14.85(0.02), respectively. The source showed a hint of intra-night optical variability during the outburst. A significant short-term variability in optical during MJD 57344 to MJD 57365 and in gamma-rays during MJD 57360 and MJD 57365 was also noticed. The multiwavelength study suggests the flaring activity at all frequencies to be correlated in general, albeit with diverse flare durations. We estimated the strength of the magnetic field as 4.21 G using the time-lag between optical and UV bands as the synchrotron cooling time scale (2.34 hrs). The upper limits on the sizes of both the emission regions, gamma-ray and optical, are estimated to be of the order of $10^{16}$cm using the shortest variability time scales. The quasi-simultaneous flux enhancements in 15 GHz and VHE gamma-ray emissions indicate a fresh injection of plasma into the jet, which interacts with a standing sub-mm core, resulting in co-spatial emissions across the EMS. The complex and prolonged behavior of the second outburst in October 2015 is discussed in detail.'
author:
- 'Navpreet Kaur$^{1, 2}$, S. Chandra$^3$, Kiran S Baliyan$^1$, Sameer$^{1,4}$, & S. Ganesh$^1$'
bibliography:
- 'reference.bib'
title: 'Multi-wavelength study of flaring activity in HBL 1ES 1959+650 during 2015-16 '
---
Introduction
============
Blazars are a sub-class of Active Galactic Nuclei (AGN), with a relativistic jet pointed at small angles ($<$ 15$^{\circ}$) to our line of sight [@urry1995]. The emission in blazars is mostly dominated by the highly variable non-thermal continuum flux, with variability time scales ranging from tens of minutes to a few years, across the whole electromagnetic spectrum (EMS) [@bland1979; @wagner1995; @fan2005; @fan2009]. Their spectral energy distribution (SED) has two characteristic broad peaks, implying two different emission processes at work: the synchrotron process, from radio to UV/X-ray energies [@urrymush1982], and the inverse-Compton (IC) process, in which high energy emission (X-ray to TeV $\gamma$-rays) is produced via up-scattering of low energy seed photons by the relativistic electrons that gave rise to the synchrotron emission. The origins of the seed photons are still under debate [@baliyan2005; @bottcher2005]. According to the leptonic scenario, the IC photons may either be generated by the up-scattering of the synchrotron photons by the same population of leptons ($e^-/e^+$) [@konigl1981; @marschergear1985; @ghis-tavec2009], under the Synchrotron Self-Comptonization (SSC) process, or by photons from external regions (e.g., the torus, accretion disk, line-emitting regions) serving as seeds for the up-scattering to higher energies, under the External Comptonization (EC) process [@bottcher2007]. On the other hand, in hadronic models, the high energy emission in blazars is mainly produced by proton synchrotron radiation and pion decay in the jet plasma [@mannheim1989]. Blazars comprise two kinds of objects: (1) Flat Spectrum Radio Quasars (FSRQs), identified by emission lines in the optical/UV spectra, and (2) BL Lac objects, identified by extremely weak lines or a featureless optical/UV continuum [@stickel1993].
The classification based on the broad-band SEDs divides the BL Lac objects into three sub-categories, namely; high energy peaked BL Lac objects (HBLs; $\nu_{s} \textgreater 10^{15} $ Hz ), Intermediate energy peaked BL Lac objects (IBLs; $ 10^{14} Hz \textless \nu_{s} \textless 10^{15} $ Hz) and Low energy peaked BL Lac objects (LBLs; $ \nu_{s} \textless 10^{14} $ Hz). The high energy emission in the HBLs is generally well explained by the SSC models, with a possible EC component in a few exceptional flaring states [@bottcher2007].
Since blazars are extremely variable across the EMS, their variability can serve as a tool to understand AGN structure and emission processes, as their central engines are too compact to be resolved [@ciprini2003; @marscher2008]. HBLs are generally less variable than LBLs [@jannuzi1994], but some of them are very active, with flares and outbursts detected over almost the complete accessible EM spectrum, ranging from radio to TeV $\gamma$-rays [@acciari2011; @furniss2015].
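Variability time scales probe the compact central engine through the standard causality (light-crossing) bound $R \lesssim c\,\delta\,t_{\rm var}/(1+z)$ on the emission-region size. The sketch below evaluates this bound; the variability time scale and Doppler factor used here are illustrative assumptions, not measured values from this study.

```python
C_CM_PER_S = 2.998e10  # speed of light in cm/s

def size_upper_limit(t_var_hours, delta, z):
    """Causality bound R <~ c * delta * t_var / (1 + z), in cm."""
    return C_CM_PER_S * delta * t_var_hours * 3600.0 / (1.0 + z)

# Hours-scale variability with an assumed Doppler factor delta ~ 10,
# and z = 0.048 for 1ES 1959+650:
print(f"{size_upper_limit(10.0, 10.0, 0.048):.1e} cm")  # order 10^16 cm
```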
While almost all TeV flares are witnessed to have a counterpart in optical and X-rays, barring some orphan flares, the GeV energy region might show weaker activity. Blazars also show significant polarization in optical [@chandra2011 and references there-in] and radio wavelengths, which is a measure of the alignment and the strength of the magnetic field. Changes in the degree of optical polarization (DP) and position angle (PA) are commonly seen during flares in blazars. Such rapid variations in DP and PA during a flare have been modeled for many sources (e.g., for 1ES 1011+496 - @aleksic2016, Mrk 421 - @zhang2015, for 3C 279 - @kiehlmann2016 [@hayashida2012; @abdoPol2010]). The outbursts in blazars are mostly thought to be a manifestation of shock formation and movement down the jet [@orienti2015; @marscher2010], internal inhomogeneities and their interaction with shocks, re-collimation of shocks downstream of the jet causing re-acceleration [@spada2001], or a new population of relativistic plasma injected into the jet. In spite of considerable efforts until now, none of the proposed models is able to explain all blazar phenomena. Our understanding of the geometry of the jet, the emission processes responsible for different flaring activities and the behavior of the objects during their quiescent phase is limited by the sample size and the scarcity of simultaneous data over a broad energy range. Therefore, there is a need for extensive multi-wavelength studies of a large sample of blazars to enable a comprehensive understanding of the emission processes in general.
The HBL 1ES 1959+650, redshift z=0.048 [@perlman1996], was first detected in radio band using NRAO Green Bank Telescope [@gregorycondon1991] and observed in X-rays during the slew survey by the Einstein Imaging Proportional Counter [@elvis1992]. The first TeV detection of this source was reported by the Seven Telescope Array group in 1999 [@nishiyama1991]. This source was identified as an optical BL Lac object by @schachter1993. Later, a bright ($m_{R}$ = 14.9) elliptical galaxy was confirmed by @scarpa2000 as the host. The source has undergone various outburst stages, including intense activity at very high energies (GeV-TeV). @krawczynski2004 reported an “orphan” flare at VHE during an outburst in 2002, in a multi-wavelength campaign (WHIPPLE and HEGRA for TeV, RXTE for X-rays, Boltwood and Abastumani observatory for optical, UMRAO for radio 14.5 GHz) from May 18 - August 14, 2002. The authors reported a correlation between the $\gamma$-ray and X-ray fluxes but during orphan TeV flare, no enhancement in X-ray flux was seen and there was no correlation between optical and X-ray/$\gamma$-ray emissions. @bottcher2005 explained 2002 orphan TeV flare using a hadronic synchrotron mirror model in which the orphan TeV photons originated from the interaction of relativistic protons with an external photon field supplied by synchrotron radiation reflected off a dilute reflector. Another intense flaring activity was seen during 2012 April- June covered in a multi wavelength campaign by @aliu2014. During the outburst, 1ES 1959+650 emitted enhanced flux in gamma-rays without any significant simultaneous rise at X-ray energies. The authors proposed a reflected emission model to explain elevated $\gamma$-ray flux, via pion production with very high energy protons (10-100 TeV).
1ES 1959+650 was reported to be in an unprecedentedly high flux state across all energies (gamma-ray, X-ray, UV, optical and radio) during 2015, extending into 2016 as well. The source underwent two major outbursts; the first in X-rays during March 2015
---
abstract: 'A real Bott manifold is the total space of a sequence of $\R P^1$ bundles starting with a point, where each $\R P^1$ bundle is the projectivization of a Whitney sum of two real line bundles. A real Bott manifold is a real toric manifold which admits a flat riemannian metric. An upper triangular $(0,1)$ matrix with zero diagonal entries uniquely determines such a sequence of $\R P^1$ bundles but different matrices may produce diffeomorphic real Bott manifolds. In this paper we determine when two such matrices produce diffeomorphic real Bott manifolds. The argument also proves that any graded ring isomorphism between the cohomology rings of real Bott manifolds with $\Z/2$ coefficients is induced by an affine diffeomorphism between the real Bott manifolds. In particular, this implies the main theorem of [@ka-ma08] which asserts that two real Bott manifolds are diffeomorphic if and only if their cohomology rings with $\Z/2$ coefficients are isomorphic as graded rings. We also prove that the decomposition of a real Bott manifold into a product of indecomposable real Bott manifolds is unique up to permutations of the indecomposable factors.'
address: 'Department of Mathematics, Osaka City University, Sumiyoshi-ku, Osaka 558-8585, Japan.'
author:
- Mikiya Masuda
title: Classification of real Bott manifolds
---
[^1]
Introduction
============
A [*real Bott tower*]{} of height $n$, which is a real analogue of a Bott tower introduced in [@gr-ka94], is a sequence of $\R P^1$ bundles $$\label{tower}
M_n\stackrel{\R P^1}\longrightarrow M_{n-1}\stackrel{\R P^1}\longrightarrow
\cdots\stackrel{\R P^1}\longrightarrow M_1
\stackrel{\R P^1}\longrightarrow M_0=\{\textrm{a point}\}$$ such that $M_j\to M_{j-1}$ for $j=1,\dots,n$ is the projective bundle of the Whitney sum of a real line bundle $L_{j-1}$ and the trivial real line bundle over $M_{j-1}$, and we call $M_n$ a [*real Bott manifold*]{}. A real Bott manifold naturally supports an action of an elementary abelian 2-group and provides an example of a real toric manifold which admits a flat riemannian metric invariant under the action. Conversely, it is shown in [@ka-ma08] that a real toric manifold which admits a flat riemannian metric invariant under an action of an elementary abelian 2-group is a real Bott manifold.
As is well known, real line bundles are classified by their first Stiefel-Whitney classes, and $H^1(M_{j-1};\Z/2)$, where $\Z/2=\{0,1\}$, is isomorphic to $(\Z/2)^{j-1}$ through a canonical basis, so the line bundle $L_{j-1}$ is determined by a vector $A_j$ in $(\Z/2)^{j-1}$. We regard $A_j$ as a column vector in $(\Z/2)^n$ by adding zeros and form an $n\times n$ matrix $A$ by putting $A_j$ as the $j$-th column. This gives a bijective correspondence between the set of real Bott towers of height $n$ and the set $\T(n)$ of $n\times n$ upper triangular $(0,1)$ matrices with zero diagonal entries. For this reason, we may denote the real Bott manifold $M_n$ by $M(A)$.
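The correspondence can be made concrete by enumerating $\T(n)$ directly; there are $2^{n(n-1)/2}$ such matrices. A small sketch (0-indexed, for illustration only):

```python
import itertools

def bott_matrices(n):
    """Yield all n x n upper triangular (0,1) matrices with zero diagonal,
    i.e. the set T(n) indexing real Bott towers of height n."""
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for bits in itertools.product((0, 1), repeat=len(positions)):
        A = [[0] * n for _ in range(n)]
        for (i, j), bit in zip(positions, bits):
            A[i][j] = bit
        yield A

print(sum(1 for _ in bott_matrices(3)))  # → 8, i.e. 2^(3*2/2)
```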
Although $M(A)$ is determined by the matrix $A$, it happens that two different matrices in $\T(n)$ produce (affinely) diffeomorphic real Bott manifolds. In this paper we introduce three operations on $\T(n)$ and say that two elements in $\T(n)$ are [*Bott equivalent*]{} if one is transformed to the other through a sequence of the three operations. Our first main result is the following.
\[main\] The following are equivalent for $A,B$ in $\T(n)$:
1. $A$ and $B$ are Bott equivalent.
2. $M(A)$ and $M(B)$ are affinely diffeomorphic.
3. $H^*(M(A);\Z/2)$ and $H^*(M(B);\Z/2)$ are isomorphic as graded rings.
Moreover, any graded ring isomorphism from $H^*(M(A);\Z/2)$ to $H^*(M(B);\Z/2))$ is induced by an affine diffeomorphism from $M(B)$ to $M(A)$.
In particular, we obtain the following main theorem of [@ka-ma08].
\[maincoro\] Two real Bott manifolds are diffeomorphic if and only if their cohomology rings with $\Z/2$ coefficients are isomorphic as graded rings.
It is asked in [@ka-ma08] whether Corollary \[maincoro\] holds for any real toric manifolds but a counterexample is given in [@masu08].
We say that a real Bott manifold is *indecomposable* if it is not diffeomorphic to a product of more than one real Bott manifolds. Using Corollary \[maincoro\] together with an idea used to prove Theorem \[main\], we are able to prove our second main result.
\[main1\] The decomposition of a real Bott manifold into a product of indecomposable real Bott manifolds is unique up to permutations of the indecomposable factors.
In particular, we have
\[main1coro\] Let $M$ and $M'$ be real Bott manifolds. If $S^1\times M$ and $S^1\times M'$ are diffeomorphic, then $M$ and $M'$ are diffeomorphic.
It would be interesting to ask whether Theorem \[main1\] and Corollary \[main1coro\] hold for any real toric manifolds.
The author learned from Y. Kamishima that Corollary \[main1coro\] can also be obtained from the method developed in [@ka-na08] and [@nazr08] and that the cancellation property above fails to hold for general compact flat riemannian manifolds, see [@char65-1].
This paper is organized as follows. In Section \[sect:rbott\] we describe $M(A)$ and its cohomology rings explicitly in terms of the matrix $A$. In Section \[sect:matrix\] we introduce the three operations on $\T(n)$. To each operation we associate an affine diffeomorphism between real Bott manifolds in Section \[sect:affine\], which implies the implication (1) $\Rightarrow$ (2) in Theorem \[main\]. The implication (2) $\Rightarrow$ (3) is trivial. In Section \[sect:cohom\] we prove the latter statement in Theorem \[main\]. The argument also establishes the implication (3) $\Rightarrow$ (1). In the proof we introduce a notion of eigen-element and eigen-space in the first cohomology group of a real Bott manifold using the multiplicative structure of the cohomology ring and they play an important role on the analysis of isomorphisms between cohomology rings. Using this notion, we prove Theorem \[main1\] in Section \[sect:decom\].
Real Bott manifolds and their cohomology rings {#sect:rbott}
==============================================
As mentioned in the Introduction, a real Bott manifold $M(A)$ of dimension $n$ is associated to a matrix $A\in\T(n)$. In this section we give an explicit description of $M(A)$ and its cohomology ring.
We set up some notation. Let $S^1$ denote the unit circle consisting of complex numbers with unit length. For elements $z\in S^1$ and $a\in
\Z/2$ we use the following notation $$z(a):=\begin{cases} z \quad&\text{if $a=0$}\\
\bar z\quad&\text{if $a=1$}.
\end{cases}$$ For a matrix $A$ we denote by $A^i_j$ the $(i,j)$ entry of $A$ and by $A^i$ (resp. $A_j$) the $i$-th row (resp. $j$-th column) of $A$.
Now we take $A$ from $\T(n)$ and define involutions $a_i$’s on $T^n:=(S^1)^n$ by $$\label{ai}
a_i(z_1,\dots,z_n):=(z_1,\dots,z_{i-1},-z_i,z_{i+1}(A^i_{i+1}),\dots,
z_n(A^i_n))$$ for $i=1,\dots,n$. These involutions $a_i$’s commute with each other and generate an elementary abelian 2-group of rank $n$, denoted by $G(A)$. The action of $G(A)$ on $T^n$ is free and the orbit space is the desired real Bott manifold $M(A)$.
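The claimed properties of the $a_i$ (each is an involution, and they pairwise commute) are easy to verify numerically for a sample matrix. The matrix $A$ and the point on $T^3$ below are arbitrary choices, and $z(a)$ is complex conjugation when $a=1$, as defined above.

```python
def make_involution(A, i):
    """The map a_i on T^n: negate the i-th coordinate and conjugate the
    k-th coordinate whenever A[i][k] == 1 (0-indexed version of eq. (ai))."""
    n = len(A)
    def a_i(z):
        w = list(z)
        w[i] = -w[i]
        for k in range(i + 1, n):
            if A[i][k] == 1:
                w[k] = w[k].conjugate()
        return tuple(w)
    return a_i

# A sample matrix in T(3) and a sample point on T^3 = (S^1)^3.
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
z = (complex(0.6, 0.8), complex(0.0, 1.0), complex(1.0, 0.0))
a0, a1, a2 = (make_involution(A, i) for i in range(3))
print(a0(a0(z)) == z, a0(a1(z)) == a1(a0(z)))  # → True True
```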
$M(A)$ is a flat riemannian manifold. In fact, Euclidean motions $s_i$’s $(i=1,\dots,n)$ on $\R^n$ defined by $$s_i(u_1,\dots,u
---
abstract: 'The two time-dependent Schrödinger equations in a potential $V(s,u)$, $u$ denoting time, can be interpreted geometrically as two moving interacting curves whose Fermi-Walker phase density is given by $-(\partial V/\partial s)$. The Manakov model appears as two moving interacting curves via the extended da Rios system and two Hasimoto transformations.'
address: |
$^{1}$ Institute of Electronics, Bulgarian Academy of Sciences, 72 Tsarigradsko chaussee, 1784 Sofia, Bulgaria\
$^{2}$ Université de Cergy-Pontoise, 2 avenue, A. Chauvin, F-95302, Cergy-Pontoise Cedex, France\
$^{3}$Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 72 Tsarigradsko chaussee, 1784 Sofia, Bulgaria
author:
- 'N. A. KOSTOV$^{1}$, R. DANDOLOFF$^{2}$, V. S. GERDJIKOV$^{3}$ and G. G. GRAHOVSKI$^{2,3}$'
title: THE MANAKOV SYSTEM AS TWO MOVING INTERACTING CURVES
---
Introduction
============
In recent years, there has been a large interest in the applications of the Frenet-Serret equations [@e60; @RogSchief] for a space curve in various contexts, and interesting connections between geometry and integrable nonlinear evolution equations have been revealed. The subject of how space curves evolve in time is of great interest and has been investigated by many authors. Hasimoto [@h72] showed that the evolution of a thin vortex filament regarded as a moving space curve can be mapped to the nonlinear Schrödinger equation (NLSE): $$\begin{aligned}
\label{eq:1}
i \Psi_{u}+\Psi_{ss}+\frac{1}{2}|\Psi|^{2}\Psi=0.\end{aligned}$$ Here $u$ and $s$ are the time and space variables, respectively; subscripts denote partial derivatives.
Preliminaries {#sec:2}
=============
The Manakov model {#sec:2.1}
-----------------
The time-dependent Schrödinger equation in a potential $V(s,u)$, $$\begin{aligned}
i\Psi_{u}+\Psi_{ss}-V(s,u)\Psi=0,\end{aligned}$$ goes into the NLS eq. (\[eq:1\]) if the potential $V(s,u)=-\frac{1}{2}|\Psi(s,u)|^{2}$. Similarly, a set of two time-dependent Schrödinger equations, $$\begin{aligned}
i\Psi_{1,u}+\Psi_{1,ss}-V(s,u)\Psi_{1}=0,\quad
i\Psi_{2,u}+\Psi_{2,ss}-V(s,u)\Psi_{2}=0,\end{aligned}$$ where $V(s,u)=-|\Psi_{1}|^{2} -|\Psi_{2}|^{2}$, can be viewed as the Manakov system: $$\begin{aligned}
&&i\Psi_{1,u}+\Psi_{1,ss}+(|\Psi_{1}|^{2}
+|\Psi_{2}|^{2})\Psi_{1}=0,\label{ManakovSys1}\\
&&i\Psi_{2,u}+\Psi_{2,ss}+(|\Psi_{1}|^{2}+|\Psi_{2}|^{2})\Psi_{2}=0.
\label{ManakovSys2}\end{aligned}$$ It is convenient to use two Hasimoto transformations [@h72] $$\begin{aligned}
\Psi_{i}=\kappa_{i}(s,u) \exp\left[i\int^{s}\tau_{i}(s',u)d
s'\right],\quad i=1,2,\end{aligned}$$ in Eqs. (\[ManakovSys1\]), (\[ManakovSys2\]). Equating imaginary and real parts leads to the coupled partial differential equations (the extended da Rios system [@r1906]) $$\begin{aligned}
&&\kappa_{i,u}=-(\kappa_{i}\tau_{i})_{s}-\kappa_{i,s}\tau_{i},\quad
i=1,2 ,\label{daRios1}\\
&&\tau_{i,u}=\left[\frac{\kappa_{i,ss}}{\kappa_{i}}-\tau_{i}^{2}\right]_{s}-V(s,u)_{s},
\label{daRios2}\end{aligned}$$ where $$\begin{aligned}
V(s,u)=-|\Psi_{1}|^{2}
-|\Psi_{2}|^{2}=-\kappa_{1}^{2}-\kappa_{2}^2.\end{aligned}$$
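As a sanity check of eqs. (\[ManakovSys1\])–(\[ManakovSys2\]), the sketch below verifies numerically, by central finite differences, that the standard vector soliton $\Psi_j = c_j\,\mathrm{sech}(s)\,e^{iu}$ with $c_1^2+c_2^2=2$ satisfies the Manakov system. This particular solution is a textbook example used here only as a check, not a result taken from the text above.

```python
import cmath
import math

C1, C2 = 1.0, 1.0  # any real c1, c2 with c1^2 + c2^2 = 2 works here

def psi(j, s, u):
    """Vector soliton component Psi_j = c_j * sech(s) * exp(i*u)."""
    c = C1 if j == 1 else C2
    return c * (1.0 / math.cosh(s)) * cmath.exp(1j * u)

def manakov_residual(j, s, u, h=1e-3):
    """|i Psi_u + Psi_ss + (|Psi_1|^2 + |Psi_2|^2) Psi| via central differences."""
    p = psi(j, s, u)
    p_u = (psi(j, s, u + h) - psi(j, s, u - h)) / (2.0 * h)
    p_ss = (psi(j, s + h, u) - 2.0 * p + psi(j, s - h, u)) / h**2
    v = abs(psi(1, s, u)) ** 2 + abs(psi(2, s, u)) ** 2
    return abs(1j * p_u + p_ss + v * p)

# The residual stays at the finite-difference error level (~1e-6) on a grid.
print(max(manakov_residual(1, k / 10.0, 0.3) for k in range(-30, 31)))
```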
Soliton curves {#sec:2.2}
--------------
Three-dimensional space curves are described in parametric form by position vectors ${\bf r}_{i}(s)$, $i=1,2$, where $s$ is the arclength. Let ${\bf t}_{i}=(\partial {\bf r}_{i}/\partial s)$,
$i=1,2$, be the unit tangent vectors along the two curves. At a given instant of time the triads of orthonormal unit vectors $({\bf t}_{i},{\bf n}_{i},{\bf b}_{i})$, where ${\bf n}_{i}$ and ${\bf b}_{i}$ denote the normals and binormals, respectively, satisfy the Frenet-Serret equations for the two curves $$\begin{aligned}
\label{FSeq}
{\bf t}_{i,s}=\kappa_{i} {\bf n_{i}},\quad {\bf
n}_{i,s}=-\kappa_{i} {\bf t}_{i}+ \tau_{i}{\bf b}_{i},\quad {\bf
b}_{i,s}=-\tau_{i} {\bf n}_{i},\quad i=1,2,\end{aligned}$$ where $\kappa_{i}$ and $\tau_{i}$ denote, respectively, the curvatures and torsions of the curves. Moving curves are described by ${\bf r}_{i}(s,u)$, where $u$ denotes time. The temporal evolution of the two triads corresponding to a given value of $s$ can be written in the general form $$\begin{aligned}
{\bf t}_{i,u}=g_{i} {\bf n}_{i}+h_{i} {\bf b}_{i},\quad {\bf
n}_{i,u}=-g_{i} {\bf t}_{i} + \tau^{0}_{i}{\bf b}_{i},\quad {\bf
b}_{i,u}=-h_{i} {\bf t}_{i} - \tau^{0}_{i}{\bf n}_{i},\end{aligned}$$ where the coefficients $g_{i}$, $h_{i}$ and $\tau_{i}^{0}$, as well as $\kappa_{i}$ and $\tau_{i}$, are functions of $s$ and $u$. In matrix form, the spatial and temporal evolution equations of the triads read $$\begin{aligned}
\left(%
\begin{array}{c}
{\bf t}_{i} \\
{\bf n}_{i} \\
{\bf b}_{i} \\
\end{array}%
\right)_{s}
=\left(%
\begin{array}{ccc}
0 & \kappa_{i} & 0 \\
-\kappa_{i} & 0 & \tau_{i} \\
0 & -\tau_{i} & 0 \\
\end{array}%
\right) \left(%
\begin{array}{c}
{\bf t}_{i} \\
{\bf n}_{i} \\
{\bf b}_{i} \\
\end{array}%
\right),\quad
\left(%
\begin{array}{c}
{\bf t}_{i} \\
{\bf n}_{i} \\
{\bf b}_{i} \\
\end{array}%
\right)_{u}
=\left(%
\begin{array}{ccc}
0 & g_{i} & h_{i} \\
-g_{i} & 0 & \tau^{0}_{i} \\
-h_{i} & -\tau^{0}_{i} & 0 \\
\end{array}%
\right) \left(%
\begin{array}{c}
{\bf t}_{i} \\
{\bf n}_{i} \\
{\bf b}_{i} \\
\end{array}%
\right).\end{aligned}$$
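As a numerical sanity check on the Frenet-Serret system (\[FSeq\]) (our own sketch, not part of the original text), one can integrate the triad for constant curvature $\kappa=1$ and zero torsion, which traces a planar unit circle: after arclength $2\pi$ the tangent returns to its initial direction and the curve closes.

```python
import numpy as np

def frenet_rhs(frame, kappa, tau):
    """Right-hand side of the Frenet-Serret equations; `frame` stacks
    the triad (t, n, b) as rows."""
    t, n, b = frame
    return np.array([kappa * n, -kappa * t + tau * b, -tau * n])

def integrate_frame(kappa, tau, s_max, steps):
    """Classical RK4 integration of the triad; the curve itself is
    accumulated from r' = t with a simple Euler rule."""
    h = s_max / steps
    frame = np.eye(3)                  # initial orthonormal triad (t, n, b)
    r = np.zeros(3)
    for _ in range(steps):
        k1 = frenet_rhs(frame, kappa, tau)
        k2 = frenet_rhs(frame + 0.5 * h * k1, kappa, tau)
        k3 = frenet_rhs(frame + 0.5 * h * k2, kappa, tau)
        k4 = frenet_rhs(frame + h * k3, kappa, tau)
        r = r + h * frame[0]           # r' = t, left Riemann accumulation
        frame = frame + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return r, frame

# constant curvature, zero torsion: a planar unit circle
r_end, frame_end = integrate_frame(1.0, 0.0, 2.0 * np.pi, 20000)
```

The RK4 step also keeps the tangent very close to unit length over the whole integration, a useful check that the antisymmetric structure of the system is respected.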
---
abstract: 'We study the phase behavior of bowl-shaped particles using computer simulations. These particles were found experimentally to form a meta-stable worm-like fluid phase in which the bowl-shaped particles have a strong tendency to stack on top of each other \[M. Marechal *et al.*, Nano Letters **10**, 1907 (2010)\]. In this work, we show that the transition from the low-density fluid to the worm-like phase has an interesting effect on the equation of state. The simulation results also show that the worm-like fluid phase transforms spontaneously into a columnar phase for bowls that are sufficiently deep. Furthermore, we describe the phase behavior as obtained from free energy calculations employing Monte Carlo simulations. The columnar phase is stable for bowl shapes ranging from infinitely thin bowls to surprisingly shallow bowls. Aside from a large region of stability for the columnar phase, the phase diagram features four novel crystal phases and a region where the stable fluid contains worm-like stacks.'
author:
- Matthieu Marechal
- Marjolein Dijkstra
bibliography:
- 'bowls.bib'
title: 'Phase behavior and structure of colloidal bowl-shaped particles: simulations'
---
Introduction
============
The concept of a mesogenic particle in the form of a bowl is relatively old in the molecular liquid crystal community. Such molecules are expected to form a columnar phase, which can be ferroelectric, i.e., a phase with a net electric dipole moment, when the particles possess a permanent dipole moment. Ferroelectric phases have potential applications for optical and electronic devices. In fact, crystalline (as opposed to liquid crystalline) ferroelectrics are already applied in sensors, electromechanical devices and non-volatile memory [@FerroApp]. A columnar ferroelectric phase may have the advantage over a crystal that grain boundaries and other defects anneal out faster due to the partially fluid nature of the columnar phase. In reality, columnar phases of conventional disc-like particles often exhibit many defects, as flat thin discs can diffuse out of a column and columns can split up. The presence of these defects limits their potential use for industrial applications [@simulation-bowls]. Fewer defects are expected in a columnar phase of bowl-shaped mesogens, where the particles are expected to be more confined in the lateral directions. A whole variety of bowl-like molecules have already been synthesized and investigated experimentally [@Sawamura2002; @simpson2004; @xu1993rbl; @malthete1987icc]. In addition, buckybowlic molecules, *i.e.* fragments of $C_{60}$ whose dangling bonds have been saturated with hydrogen atoms, have been shown to crystallize in a columnar fashion [@Rabideau1996; @Forkey1997; @Matsuo2004; @Sakurai2005; @Kawase2006]. However, the number of theoretical studies is very limited as it is difficult to model the complicated particle shape in theory and simulations. In a recent simulation study, the attractive-repulsive Gay-Berne potential generalized to bowl-shaped particles has been used to investigate the stacking of bowl-like mesogens as a function of temperature [@simulation-bowls]. The authors reported a nematic phase and a columnar phase.
This columnar phase did not exhibit overall ferroelectric order, although polar regions were found. In another very recent simulation study [@Cinacchi2010] of hard contact lenses (infinitely thin, shallow bowls), a new type of fluid phase was found in which the particles cluster on a spherical surface for bowls which are not too shallow. No columnar phase was found since the focus was on rather shallow bowls at relatively low densities.
Recently, a procedure has been developed to synthesize bowl-shaped colloidal particles [@Carmen]. This method starts with the preparation of highly uniform oil-in-water emulsion droplets. Subsequently, the droplets were used as templates around which a solid shell with tunable thickness is grown. In the next step of the synthesis, the oil in the droplets is dissolved and finally, during drying, the shells collapse into hemispherical double-walled bowls.
In addition to these larger, more easily imaged colloids, a whole variety of bowl-shaped nanoparticles and smaller colloids have been synthesized and characterized [@Charnay2003; @Wang2004; @Liu2005; @Jagadeesan2008; @Love2002; @Hosein2007], and possible applications of these systems have been put forward. We also note that recently hemispherical particles were synthesized at an air-solution interface [@higuchi] and on a substrate [@Xia]. These hemispherical particles are intended to be used as microlense arrays, but they can also serve as a new type of shape-anisotropic colloidal particle.
In our simulations, we model the particles as the solid of revolution of a crescent (see Fig. \[fig:particles\]a). The diameter $\sigma$ of the particle and the thickness $D$ are defined as indicated in Fig. \[fig:particles\]a. We define the shape parameter of the bowls by a reduced thickness $D/\sigma$, such that the model reduces to infinitely thin hemispherical surfaces for $D/\sigma=0$ and to solid hemispheres for $D/\sigma=0.5$. The advantages of this simple model are that it interpolates continuously between an infinitely thin bowl and a hemispherical solid particle (the two colloidal model systems, which we discussed above), and that we can derive an algorithm that tests for overlaps between pairs of bowls, which is a prerequisite for Monte Carlo simulations of hard-core systems.
In a recent combined experimental and simulation study (for which we performed the simulations), the phase behavior of repulsive bowl-shaped colloids was investigated [@Marechal2010bowls]. The colloids were shown to form a worm-like fluid phase, in which the particles form long curved stacks running in random directions. By comparing the distribution of stack lengths, the simulation model was shown to describe the colloidal particles well. No evidence of columnar ordering was found in the experiments and in simulations of bowls of the corresponding depth, which was explained by the glassy behavior of the particles preventing rearrangements. The phase behavior of the model particles is expected to also describe other repulsive bowl-shaped particles well, provided that the dimensions of the simulation particle are chosen such that the diameter of a stack and the inter-particle distance in the stack are the same as for the particles to be modeled.
In this work, we expand the simulation results on the hard bowl-shaped particles. First, we elaborate on the model for the collapsed shells; the overlap algorithm is described in the appendix. Also, the (free energy) methods are explained in more detail than in Ref. [@Marechal2010bowls]. In the results section, we study the properties of the isotropic phase. We investigate the nature and the location of the transition between the homogeneous fluid phase and the fluid phase that contains the worm-like stacks. Furthermore, we show the packing diagram and the phase diagram with a tentative homogeneous–to–worm-like fluid transition line. In the last section we summarize and discuss the results.
![ (a) The theoretical model of the colloidal bowl is the solid of revolution of a crescent around the axis indicated by the dashed line. The thickness of the double-walled bowl is denoted by $D$ and the diameter of the bowl by $\sigma$. (b) The bowls are defined using two spheres of radii $R_1$ and $R_2$, that are a distance of $L$ apart. The direction vector, $\mathbf{u}_i$ and the reference point of the particle, $\mathbf{r}_i$, (the dot in the center of the smaller sphere) are indicated. \[fig:particles\]](particles){width="45.00000%"}
Methods
=======
Model
-----
We describe the model that we use to represent the bowls in more detail. Consider a sphere with a radius $R_1$ at the origin and a second sphere with radius $R_2>R_1$ at position $-L\mathbf{u}_i$, where $\mathbf{u}_i$ is the unit vector denoting the orientation of the bowl and $L>0$. The bowl is represented by that part of the sphere with radius $R_1$ that has no overlap with the larger sphere, see Fig. \[fig:particles\]b. We have chosen the values for $L$ and $R_2$ such that the bowls are hemispherical (see appendix for explicit expressions for $L$ and $R_2$). We define the thickness of the bowls by $D=L-(R_2-R_1)$, such that the model reduces to the surface of a hemisphere for $D=0$ and to a solid hemisphere for $D=R_1$. The volume of the particle is $\frac{\pi}{4}\, D\,
(\sigma^2 - D\sigma + \frac{2}{3} D^2)$, where $\sigma\equiv 2R_1$ is our unit of length. The algorithm to determine overlap between our bowls is described in the appendix.
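The particle-volume formula above is easy to check in its two limiting cases (an illustration we add here, not from the original): $D=0$ gives a zero-volume infinitely thin shell, while $D=\sigma/2$ recovers the volume of a solid hemisphere of radius $\sigma/2$, namely $\frac{2}{3}\pi(\sigma/2)^3=\pi\sigma^3/12$.

```python
from math import pi

def bowl_volume(D, sigma):
    """Particle volume V = (pi/4) * D * (sigma^2 - D*sigma + (2/3)*D^2)."""
    return (pi / 4.0) * D * (sigma ** 2 - D * sigma + (2.0 / 3.0) * D ** 2)

# limiting cases for sigma = 1: infinitely thin bowl and solid hemisphere
v_thin = bowl_volume(0.0, 1.0)    # infinitely thin shell: zero volume
v_solid = bowl_volume(0.5, 1.0)   # solid hemisphere of radius 1/2: pi/12
```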
Fluid phase
-----------
We employ standard $NPT$ MC simulations to obtain the equation of state (EOS) for the fluid phase. In addition, we obtain the compressibility by measuring the fluctuations in the volume: $$\frac{\langle V^2\rangle - \langle V\rangle^2}{\langle V\rangle}=\frac{k_B T}{\rho} \, \left(\frac{\partial\rho}{\partial P}\right)_T,$$ where $\rho=N/V$ is the number density and the
---
abstract: 'The ALHAMBRA survey aims to cover 4 square degrees using a system of 20 contiguous, equal width, medium-band filters spanning the range 3500 $\AA$ to 9700 $\AA$ plus the standard JHKs filters. Here we analyze deep near-IR number counts of one of our fields (ALH08) for which we have a relatively large area (0.5 square degrees) and faint photometry (J=22.4, H=21.3 and K=20.0 at the 50% of recovery efficiency for point-like sources). We find that the logarithmic gradient of the galaxy counts undergoes a distinct change to a flatter slope in each band: from 0.44 at $[17.0,18.5]$ to 0.34 at $[19.5,22.0]$ for the J band; for the H band 0.46 at $[15.5,18.0]$ to 0.36 at $[19.0,21.0]$, and in Ks the change is from 0.53 in the range $[15.0,17.0]$ to 0.33 in the interval $[18.0,20.0]$. These observations together with faint optical counts are used to constrain models that include density and luminosity evolution of the local type-dependent luminosity functions. Our models imply a decline in the space density of evolved early-type galaxies with increasing redshift, such that only 30$\%$ - 50$\%$ of the bulk of the present day red-ellipticals was already in place at $z\sim1$.'
author:
- 'D. Cristóbal-Hornillos,J. A. L. Aguerri, M. Moles, J. Perea, F. J. Castander, T. Broadhurst, E. J. Alfaro, N. Benítez, J. Cabrera-Caño, J. Cepa, M. Cerviño, A. Fernández-Soto, R. M. González Delgado, C. Husillos, L. Infante , I. Márquez, V. J. Martínez, J. Masegosa, A. del Olmo, F. Prada, J. M. Quintana, and S. F. Sánchez'
title: 'Near-IR Galaxy Counts and Evolution from the Wide-Field ALHAMBRA survey[^1]'
---
Introduction
============
It is well understood that the stellar masses of galaxies are better examined with near-IR (NIR) observations compared to shorter wavelengths mainly because the near-IR light is relatively less affected by recent episodes of star formation and by internal dust extinction. Moreover, the K-corrections are also smaller and better constrained in the NIR and hence massive high redshift objects are relatively prominent in the NIR. Despite this relative insensitivity to luminosity evolution and the effects of dust, the hope of using the NIR counts to constrain the cosmological parameters has not proved feasible because evolution in the space density of galaxies was soon understood to be of comparable significance for the NIR counts as the cosmological curvature [@1992Natur.355...55B].
Disentangling the effects of cosmology from evolution is not straightforward even in the NIR, and now it has become more appropriate to turn the question around and make use of the impressive constraints on the cosmological parameters from WMAP [@2003ApJS..148..175S], and type Ia supernovae [@1998AJ....116.1009R; @1999ApJ...517..565P], in order to measure more carefully the rate of evolution . In addition, imaging in the NIR has progressed well with fully cryogenic wide-field imagers now available on several 4m class telescopes. We also have at our disposal now much better estimates of the luminosity functions of different classes of galaxies from the large local redshift surveys in particular the SDSS [@2003ApJ...592..819B; @2003AJ....125.1682N], or 2dFGRS [@2001MNRAS.326..255C].
The evolution of the luminosity functions has been addressed making use of spectroscopic redshift surveys. However, the results of those studies differ due to limitations in terms of low statistics, or the small fields probed, which lead to uncertainties due to large-scale structure. A mild (20-30%) evolution in the number density of massive objects since $z\sim1$ has been found. A roughly constant number density for red galaxies out to $z=0.8$ is also found in [@2002ApJ...571..136I]. Using COMBO-17 photometric redshift information and the rest-frame color bimodality at each redshift, [@2004ApJ...608..752B], with a sample of $\sim$25000 galaxies, concluded that the stellar mass in red galaxies has increased by a factor of 2-3 from $z\sim1$ to the present. Studies combining spectroscopic with photometric redshift data point to an increase by a factor of $\sim2.7$ in the density of red bulge-dominated galaxies between $z=1$ and $z=0.6$. [@2007ApJ...665..265F] found a different evolution since $z\sim1$ in the number densities of the red and blue galaxy populations: the number density is constant for the blue galaxies, while that of the red galaxies increases by a factor of 3. Wide-field imaging, with larger covered areas and greater numbers of galaxies selected to uniform faint limits, is complementary to the redshift surveys in examining statistical models proposed for evolution.
In practical terms it is most useful to combine faint NIR counts with deep blue counts when examining models of evolution, to contrast the effects of luminosity and density evolution, which affect these two spectral ranges in different ways. In [@2000MNRAS.311..707M; @2001MNRAS.323..795M] the authors use non-evolving models with a higher $\phi^*$ normalization in the B-band, even though this leads to an over-prediction of bright galaxies relative to what is observed. It has also been pointed out that both the optical and NIR counts present an excess over the no-evolution models, passive evolution models being more suitable to match the distributions. The authors nevertheless emphasize their disappointment with the fact that in the passive evolution models the faint number counts are dominated by early-type galaxies, whereas the real data show that spiral and Sd/Irr galaxies are the main contributors to the faint counts even in the K-band.
A characteristic feature of the NIR galaxy counts reported in several works is the change of slope at $17 \leq Ks \leq
18$. This distinctive flattening is not observed in the B-band counts. This effect has been interpreted in terms of a change in the dominant galaxy population, becoming increasingly dominated by an intrinsically bluer population [@1993ApJ...415L...9G; @2006ApJ...639..644E]. In the model described in [@2003ApJ...595...71C] a delay in the formation of the bulk of the early-type galaxies to $z_{form}<2$ and the presence of a dwarf star-forming population are invoked to match the Ks-band counts. A similar dwarf star-forming population at $z>1$, that is not present at lower redshift was found compatible in [@1996Natur.383..236M; @2001MNRAS.323..795M] but that work uses a $q_0=0.5, \Lambda=0.0$ cosmology, requiring some revision.
Alternatively, an increase of $\phi^*$ for late-type galaxies, driven via mergers, can produce similar results without introducing an ad hoc population that is unseen in the local LFs [@2006ApJ...639..644E]. In any case, a low $z_{form}\sim1.5$ for the ellipticals remains necessary to generate a significant decrease in the number of red galaxies and to account for the distinctive break in the NIR count slope at Ks$\sim$17.5.
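The logarithmic gradient of the counts discussed above can be estimated directly from binned number counts. The sketch below is purely illustrative (synthetic data following a single power law, not survey data) and simply recovers the input slope in dex per magnitude:

```python
import numpy as np

def count_slope(mag, counts):
    """Logarithmic gradient d(log10 N)/dm of differential galaxy counts."""
    return np.gradient(np.log10(counts), mag)

# synthetic counts with a single power-law slope of 0.4 dex/mag
m = np.arange(15.0, 22.5, 0.5)
N = 10.0 ** (0.4 * m)
slope = count_slope(m, N)
```

With real counts the same estimator would show the break as a step in `slope` between the bright and faint magnitude ranges.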
Here we use the NIR data from the first completed ALHAMBRA field, hereafter ALH08 (details of the project can be found in [@MOLES2008] and http://www.iaa.es/alhambra). The limiting magnitudes (at S/N=5 in an aperture diameter of 2$\times$FWHM) reached in the 3 NIR bands are, on average over the eight frames in ALH08, J=22.6, H=21.5 and Ks=20.1, with an rms of 0.3 mag (Vega system), and the total area covered amounts to $\sim 0.5$ square degrees. The completed survey will cover 8 independent fields with a total area of 4 square degrees. The ALHAMBRA survey occupies a middle ground in terms of the product of depth and area in all three standard NIR filters. The bright end of the counts is well constrained by our relatively large area, allowing a careful examination of the location and size of the break in the count-slopes in J, H and Ks at fainter magnitudes. We have paid special attention to S/G separation, which, at intermediate magnitudes, is effectively achieved using optical-NIR color indices by combining our ALH08 NIR data with the corresponding Sloan DR5 data.
Unless specified otherwise, all magnitudes here are presented in the Vega system, and the favored cosmological model, with H$_{0} = 70$ , $\Omega_{M} = 0.3$, $\Omega_{\Lambda} = 0.7$, was adopted throughout this paper.
Observing Strategy and Data Reduction
=====================================
The ALHAMBRA survey is collecting data in 23 optical-NIR filters using the Calar Alto 3.5m telescope with the cameras LAICA for the
---
abstract: |
For each ${\small b\in\left( 0,\,\infty\right) }$ we intend to generate a decreasing sequence of subsets $\left( \mathcal{Y}_{b}^{\left( n\right)
}\right) \subset Y_{\mathrm{conc}}$ depending on $b$ such that whenever $n\in\mathbb{N}$, then $\mathcal{A}\cap\mathcal{Y}_{b}^{\left( n\right) }%
$ is dense in $\mathcal{Y}_{b}^{\left( n\right) }$ and the following four sets $\mathcal{Y}_{b}^{\left( n\right) }$, $\mathcal{Y}_{b}^{\left(
n\right) }\backslash\left( \mathcal{A}\cap\mathcal{Y}_{b}^{\left( n\right)
}\right) $, $\mathcal{A}\cap\mathcal{Y}_{b}^{\left( n\right) }$ and $\mathcal{Y}_{\mathrm{conc}}$ are pairwise equinumerous. Among others we also show that if $f$ is any measurable function on a measure space $\left(
\Omega,\mathcal{F},\lambda\right) $ and $p\in\left[ 1,\infty\right) $ is an arbitrary number then the quantities $\left\Vert f\right\Vert _{L^{p}}$ and $\sup_{\Phi\in\widetilde{\mathcal{Y}_{\mathrm{conc}}}}\left( \Phi\left(
1\right) \right) ^{-1}\left\Vert \Phi\circ\left\vert f\right\vert
\right\Vert _{L^{p}}$ are equivalent, in the sense that they are both either finite or infinite at the same time.
address: |
Institute of Mathematics\
University of Miskolc\
H-3515 Miskolc–Egyetemváros\
Hungary
author:
- 'N. K. Agbeko'
date: 'March 24th, 2006'
title: 'Bijections and metric spaces induced by some collective properties of concave Young-functions'
---
Introduction
============
We know that concave functions play major roles in many branches of mathematics, for instance probability theory ([@BURK1973], [@GARS1973], [@MOGY1981], say), interpolation theory (cf. [@TRIEB1978], say), weighted norm inequalities (cf. [@GARFRAN1985], say), and function spaces (cf. [@SINN2002], say), as well as in many other branches of science. In line with [@BURK1973], [@GARS1973] and [@MOGY1981], the present author also obtained in martingale theory some results in connection with certain collective properties or behaviors of concave Young-functions (cf. [@AGB1986], [@AGB1989]). The study presented in [@AGB2005] was mainly motivated by the question of why strictly concave functions possess so many properties worth characterizing using appropriate tools that await discovery.
We say that a function $\Phi:\left[ 0,\,\infty\right) \rightarrow\left[
0,\,\infty\right) $ belongs to the set $\mathcal{Y}_{\mathrm{conc}}$ (and is referred to as a concave Young-function) if and only if it admits the integral representation$$\Phi\left( x\right) =\int\nolimits_{0}^{x}\varphi\left( t\right) dt,
\label{id1}%$$ (where $\varphi:\left( 0,\,\infty\right) \rightarrow\left( 0,\,\infty
\right) $ is a right-continuous and decreasing function such that it is integrable on every finite interval $\left( 0,\,x\right) $) and $\Phi\left(
\infty\right) =\infty$. It is worth noting that every function in $\mathcal{Y}_{\mathrm{conc}}$ is strictly concave.
We now recall some of the results obtained in [@AGB2005].
We shall say that a concave Young-function $\Phi$ satisfies the *density-level property* if $A_{\Phi}\left( \infty\right) <\infty$, where $A_{\Phi}\left( \infty\right) :=\int\nolimits_{1}^{\infty}%
\frac{\varphi\left( t\right) }{t}dt$. All the concave Young-functions possessing the density-level property will be grouped in a set $\mathcal{A}$.
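As a concrete example (ours, not taken from [@AGB2005]): the generator $\varphi(t)=t^{-1/2}$ is right-continuous, decreasing and integrable on every finite interval $(0,x)$; it gives $\Phi(x)=2\sqrt{x}$ with $\Phi(\infty)=\infty$, and $A_{\Phi}(\infty)=\int_{1}^{\infty}t^{-3/2}dt=2<\infty$, so this $\Phi$ belongs to $\mathcal{A}$. A short numerical check:

```python
import math
import numpy as np

def phi(t):
    """Generator phi(t) = t**-0.5: right-continuous, decreasing,
    integrable on every finite interval (0, x)."""
    return t ** -0.5

def Phi(x):
    """Induced concave Young-function Phi(x) = int_0^x phi(t) dt = 2*sqrt(x)."""
    return 2.0 * math.sqrt(x)

def A_Phi(upper=1e8, n=10 ** 6):
    """A_Phi(inf) = int_1^inf phi(t)/t dt, truncated at `upper` (an
    arbitrary cutoff) and evaluated by trapezoids on a logarithmic grid."""
    t = np.logspace(0.0, math.log10(upper), n)
    f = phi(t) / t
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

a_phi = A_Phi()   # exact value is 2, so the density-level property holds
```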
In Theorems \[theo1\] and \[theo2\] (cf. [@AGB2005]), we showed that the composition of any two concave Young-functions satisfies the density-level property if and only if at least one of them satisfies it. These two theorems show that concave Young-functions with the density-level property behave like a left and right ideal with respect to the composition operation.
We also proved ([@AGB2005], Lemma 5, page 12) that if $\Phi\in
\mathcal{Y}_{\mathrm{conc}}$, then there are constants $C_{\Phi}>0$ and $B_{\Phi}\geq0$ such that$$A_{\Phi}\left( \infty\right) -B_{\Phi}\leq\int_{0}^{\infty}\frac{\Phi\left( t\right) }{\left( t+1\right) ^{2}}dt\leq C_{\Phi}+A_{\Phi}\left( \infty\right) .$$ This led us to the idea of searching for a Lebesgue measure (described below) with respect to which every concave Young-function turns out to be square integrable ([@AGB2005], Lemma 6, page 13), i.e. $\mathcal{Y}%
_{\mathrm{conc}}\subset L^{2}:=L^{2}\left( \left[ 0,~\infty\right)
,~\mathcal{M},~\mu\right) $, where $\mathcal{M}$ is a $\sigma$-algebra (of $\left[ 0,~\infty\right) $) containing the Borel sets and $\mu
:\mathcal{M}\rightarrow\left[ 0,~\infty\right) $ is a Lebesgue measure defined by $\mu\left( \left[ 0,~x\right) \right) =\frac{1}{3}\left(
1-\frac{1}{\left( x+1\right) ^{3}}\right) $ for all $x\in\left[
0,~\infty\right) $. The mapping $d:L^{2}\times L^{2}\rightarrow\left[
0,~\infty\right) $, defined by$$\operatorname*{d}\left( f,g\right) =\sqrt{\int_{\left[ 0,~\infty\right) }\left( f-g\right) ^{2}d\mu}=\sqrt{\int_{0}^{\infty}\frac{\left( f\left( x\right) -g\left( x\right) \right) ^{2}}{\left( x+1\right) ^{4}}dx}, \label{dist}$$ is known to be a semi-metric.
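The semi-metric (\[dist\]) is straightforward to approximate by quadrature on a truncated domain (an illustration we add here; the truncation point is an arbitrary choice). For $f=\Phi_{\operatorname{id}}$ and $g=0$ the exact value is $\sqrt{\int_{0}^{\infty}x^{2}/(x+1)^{4}\,dx}=1/\sqrt{3}$:

```python
import math
import numpy as np

def d(f, g, upper=1e4, n=200001):
    """Semi-metric d(f,g) = sqrt( int_0^inf (f(x)-g(x))^2/(x+1)^4 dx ),
    approximated by the trapezoidal rule on the truncated domain [0, upper]."""
    x = np.linspace(0.0, upper, n)
    integrand = (f(x) - g(x)) ** 2 / (x + 1.0) ** 4
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))
    return math.sqrt(integral)

phi_id = lambda x: x            # the identity function Phi_id
zero = lambda x: 0.0 * x        # the zero function

d_id_zero = d(phi_id, zero)     # exact value: 1/sqrt(3) ~ 0.5774
```

The $(x+1)^{-4}$ weight makes the tail of the integrand decay like $x^{-2}$, so the hypothetical cutoff at $10^4$ costs only $\sim10^{-4}$ of the integral.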
Further on, we proved in ([@AGB2005], Theorem 8, page 16) that $\mathcal{A}$ is a dense set in $\mathcal{Y}_{\mathrm{conc}}$.
Throughout this communication $\Phi_{\operatorname{id}}$ will denote the identity function defined on the half line $\left[ 0,\,\infty\right) $ and we write $\left\Vert \Phi\right\Vert :=\sqrt{\int_{\left[ 0,\,\infty\right) }\Phi^{2}d\mu}$ whenever $\Phi\in\mathcal{Y}_{\mathrm{conc}}$.
We intend to generate a decreasing sequence of subsets $\left( \mathcal{Y}%
_{b}^{\left( n\right) }\right) \subset\mathcal{Y}_{\mathrm{conc}}$ depending on $b$ such that whenever $n\in\mathbb{N}$, then $\mathcal{A}%
\cap\mathcal{Y}_{b}^{\left( n\right) }$ is dense in $\mathcal{Y}%
_{b}^{\left( n\right) }$ and the following four sets $\mathcal{Y}%
_{b}^{\left( n\right) }$, $\mathcal{Y}_{b}^{\left( n\right) }%
\backslash\left( \mathcal{A}\cap\mathcal{Y}_{b}^{\left( n\
---
abstract: |
We report spectroscopic observations of the 2.63 day, detached, F-type main-sequence eclipsing binary . We use our observations together with existing $uvby$ photometric measurements to derive accurate absolute masses and radii for the stars good to better than 1.5%. We obtain masses of $M_1 = 1.269 \pm 0.017~M_{\sun}$ and $M_2 =
0.7542 \pm 0.0059~M_{\sun}$, radii of $R_1 = 1.477 \pm 0.012~R_{\sun}$ and $R_2 = 0.7232 \pm 0.0091~R_{\sun}$, and effective temperatures of $6770 \pm 150$ K and $5020 \pm 150$ K for the primary and secondary stars, respectively. Both components appear to have their rotations synchronized with the motion in the circular orbit. A comparison of the properties of the primary with current stellar evolution models gives good agreement for a metallicity of ${\rm [Fe/H]} = -0.17$, which is consistent with photometric estimates, and an age of about 2.2 Gyr. On the other hand, the K2 secondary is larger than predicted for its mass by about 4%. Similar discrepancies are known to exist for other cool stars, and are generally ascribed to stellar activity. The system is in fact an X-ray source, and we argue that the main site of the activity is the secondary star. Indirect estimates give a strength of about 1 kG for the surface magnetic field on that star. A previously known close visual companion to is shown to be physically bound, making the system a hierarchical triple.
author:
- 'Jane C. Bright and Guillermo Torres,'
title: 'Absolute dimensions of the F-type eclipsing binary V2154 Cygni'
---
Introduction {#sec:introduction}
============
(also known as HD 203839, HIP 105584, BD+47 3386, and TYC 3594-1060-1; $V = 7.77$) is a 2.63 day eclipsing binary discovered by the [*Hipparcos*]{} team [@Perryman:1997], and found independently in 1996 by [@Martin:2003] in the course of a search for variable stars in the open cluster M39. Light curves in the $uvby$ Strömgren system were published by [@Rodriguez:2001], but the physical properties of the components were not derived by them because spectroscopy was lacking. The only spectroscopic work we are aware of consists of brief reports by [@Kurpinska:2000] listing preliminary values for the velocity amplitudes, and by [@Oblak:2004] giving preliminary masses and radii, though details of those analyses are unavailable. The very unequal depths of the eclipses ($\sim$0.3 mag for the primary and $\sim$0.05 mag for the secondary) suggest stars of rather different masses, making it an interesting object for followup because of the increased leverage for the comparison with stellar evolution models. This motivated us to carry out our own high-resolution spectroscopic observations of this star, which we report here. is known from [*Tycho-2*]{} observations to have a close, 0.47 arcsec visual companion about two magnitudes fainter than the binary [$\Delta B_T = 2.18$ mag, $\Delta V_T = 2.15$ mag; @Fabricius:2000]. We show below that it is physically associated, making the system a hierarchical triple.
While the primary of the eclipsing pair is an early F star, the secondary is a much smaller K star in the range where previous observations have shown discrepancies with models [see, e.g., @Torres:2013]. The measured radii of such stars are sometimes larger than predicted, and their temperatures cooler than expected, both presumably due to the effects of magnetic activity and/or spots [e.g., @Chabrier:2007; @Morales:2010]. therefore presents an opportunity to determine accurate physical properties of the stars in a system with a mass ratio significantly different from unity, and to investigate any discrepancies with theory in connection with measures of stellar activity.
The layout of our paper is as follows. Our new spectroscopic observations are reported in Section \[sec:spectroscopy\], followed by a brief description in Section \[sec:photometry\] of the [@Rodriguez:2001] photometric measurements we incorporate into our analysis. The light curve fits are presented in Section \[sec:analysis\], along with consistency checks to support the accuracy of the results. With the spectroscopic and photometric parameters we then derive the physical properties of the system, given in Section \[sec:dimensions\], and compare them with current models of stellar structure and stellar evolution (Section \[sec:models\]). We discuss the results in the context of available activity measurements in Section \[sec:discussion\], and conclude with some final thoughts in Section \[sec:conclusions\].
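Two quick consistency checks on the quoted parameters are easy to script (our own sketch with standard constants, not part of the original analysis): Kepler's third law gives the orbital separation from the period and total mass, and the mass and radius give the surface gravity, which for the K2 secondary comes out near the $\log g = 4.5$ adopted for it later in the text.

```python
import math

G = 6.674e-11        # gravitational constant, SI
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def semi_major_axis(P_days, M_total_solar):
    """Kepler's third law, a = (G M P^2 / 4 pi^2)^(1/3), in solar radii."""
    P = P_days * 86400.0
    a = (G * M_total_solar * M_SUN * P ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return a / R_SUN

def log_g(M_solar, R_solar):
    """Surface gravity log10(g) in cgs units from mass and radius in solar units."""
    g_si = G * M_solar * M_SUN / (R_solar * R_SUN) ** 2
    return math.log10(g_si * 100.0)   # convert m/s^2 to cm/s^2

a = semi_major_axis(2.63, 1.269 + 0.7542)   # quoted period and masses
logg_secondary = log_g(0.7542, 0.7232)      # quoted secondary mass and radius
```

With the quoted values the separation is about 10 solar radii and the secondary's surface gravity is about 4.6 dex.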
Spectroscopic observations and analysis {#sec:spectroscopy}
=======================================
was placed on our spectroscopic program in October of 2001, and observed through June of 2007 with two nearly identical echelle instruments [Digital Speedometer; @Latham:1992] on the 1.5m telescope at the Oak Ridge Observatory in the town of Harvard (MA), and on the 1.5m Tillinghast reflector at the Fred L. Whipple Observatory on Mount Hopkins (AZ). Both instruments (now decommissioned) used intensified photon-counting Reticon detectors providing spectral coverage in a single echelle order 45 Å wide centered on the b triplet at 5187 Å. The resolving power delivered by these spectrographs was $R \approx 35,\!000$, and the signal-to-noise ratios achieved for the 80 usable observations of range from about 20 to 67 per resolution element of 8.5 . Wavelength solutions were carried out by means of exposures of a thorium-argon lamp taken before and after each science exposure, and reductions were performed with a custom pipeline. Observations of the evening and morning twilight sky were used to place the observations from the two instruments on the same velocity system and to monitor instrumental drifts [@Latham:1992].
Visual inspection of one-dimensional cross-correlation functions for each of our spectra indicated the presence of a star much fainter than the primary that we initially assumed was the secondary in . However, subsequent analysis with the two-dimensional cross-correlation algorithm TODCOR [@Zucker:1994] showed those faint lines to be stationary, while a third set of even weaker lines was noticed that moved in phase with the orbital period. This is therefore the secondary in the eclipsing pair, and the stationary lines correspond to the visual companion mentioned in the Introduction, as we show later, which falls within the 1 slit of the spectrograph. Consequently, for the final velocity measurements we used an extension of TODCOR to three dimensions [referred to here as TRICOR; @Zucker:1995] that uses three different templates, one for each star. In the following we refer to the binary components as stars 1 and 2, and to the tertiary as star 3. The templates were selected from a large library of synthetic spectra based on model atmospheres by R. L. Kurucz [see @Nordstrom:1994; @Latham:2002], computed for a range of temperatures ($T_{\rm eff}$), surface gravities ($\log g$), rotational broadenings ($v \sin i$, when seen in projection), and metallicities (\[m/H\]).
We selected the optimum parameters for the templates as follows, adopting solar metallicity throughout. For the primary star we ran a grid of one-dimensional cross-correlations against synthetic spectra over a wide range of temperatures and $v \sin i$ values [see @Torres:2002], for a fixed $\log g$ of 4.0 that is sufficiently close to our final estimate presented later. The best match, as measured by the cross-correlation coefficient averaged over all exposures, was obtained for interpolated values of $T_{\rm eff} = 6770 \pm 150$ K and $v \sin i = 26 \pm 2$ km/s. The secondary and tertiary stars are faint enough (fainter by factors of 25 and 9, respectively; see below) that they do not affect these results significantly. For the secondary, the optimal $v \sin i$ from grids of TRICOR correlations was $12 \pm 2$ km/s. However, due to its faintness we were unable to establish its temperature from the spectra themselves, so we relied on results from the light curve analysis described later in Section \[sec:analysis\]. The central surface brightness ratio $J$ provides an accurate measure of the temperature ratio between stars 1 and 2. Using the primary temperature from above, the $J$ value for the $y$ band, and the visual flux calibration by [@Popper:1980], we obtained $T_{\rm eff} = 5020 \pm 150$ K. The surface gravity was adopted as $\log g = 4.5$, appropriate for a main-sequence star of this temperature. For the tertiary we again adopted $\log g = 4.5$, and grids of correlations with TRICOR for a range of temperatures indicated a preference for a value of 5500 K, to which we assign a conservative uncertainty of 200 K. Similar correlation grids varying $v \sin i$ indicated no measurable line broadening for the tertiary, so we adopted $v \sin i = 0$ km/s, with an estimated upper limit of 2 km/s.
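The step from the surface brightness ratio $J$ to a secondary temperature can be illustrated with a simplified calculation that replaces the empirical flux calibration of [@Popper:1980] with blackbody surface brightnesses at an assumed effective wavelength of the $y$ band (547 nm here). This is a sketch of the idea only, not the calibration actually used:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K_B = 1.380649e-23   # Boltzmann constant (J/K)

def planck(lam_m, temp_k):
    """Planck spectral radiance (up to a constant factor)."""
    x = H * C / (lam_m * K_B * temp_k)
    return 1.0 / (lam_m ** 5 * (math.exp(x) - 1.0))

def secondary_teff(t_primary, j_ratio, lam_m=547e-9):
    """Solve planck(T2)/planck(T1) = J for T2 by bisection, assuming
    the secondary is the cooler star (T2 between 500 K and T1)."""
    target = j_ratio * planck(lam_m, t_primary)
    lo, hi = 500.0, t_primary
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if planck(lam_m, mid) < target:
            lo = mid   # mid is too cool: brightness below target
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the Planck function is monotonic in temperature at fixed wavelength, the bisection converges to a unique $T_2$; feeding the routine $J = 1$ returns the primary temperature itself, a useful sanity check.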
Radial velocities were then measured with TR
---
abstract: 'Avalanche control by explosion is a widely applied method to minimize the avalanche risk to infrastructure in snow-covered mountain areas. However, the mechanisms involved leading from an explosion to the release of an avalanche are not well understood. Here we test the hypothesis that weak layers fail due to the stress caused by propagating acoustic waves. The underlying mechanism is that the stress induced by the acoustic waves exceeds the strength of the snow layers. We compare field measurements to a numerical simulation of acoustic wave propagation in a porous material. The simulation consists of an acoustic domain for the air above the snowpack and a poroelastic domain for the dry snowpack. The two domains are connected by a wave field decomposition and open pore boundary conditions. Empirical relations are used to derive a porous model of the snowpack from density profiles of the field experiment. Biot’s equations are solved in the poroelastic domain to obtain simulated accelerations in the snowpack and a time dependent stress field. Locations of snow failure were identified by comparing the principal normal and shear stress fields to snow strength which is assumed to be a function of snow porosity. One air pressure measurement above the snowpack was used to calibrate the pressure amplitude of the source in the simulation. Additional field measurements of air pressure and acceleration measurements inside the snowpack were compared to individual field variables of the simulation. The acceleration of the air flowing inside the pore space of the snowpack was identified to have the highest correlation to the acceleration measurements in the snowpack.'
address:
- 'Department of Earth Sciences, Simon Fraser University, 8888 University Drive, BC V5A1S6 Burnaby, Canada'
- 'WSL Institute for Snow and Avalanche Research SLF, Flüelastrasse 11, 7260 Davos Dorf, Switzerland'
- 'Institute of Mechanical Systems, ETH Zurich, 8092 Zürich, Switzerland'
author:
- Rolf Sidler
- Stephan Simioni
- Jürg Dual
- Jürg Schweizer
bibliography:
- '0-references.bib'
title: Numerical simulation of wave propagation and snow failure from explosive loading
---
snow, acoustic wave propagation, explosives, avalanche control, porous medium, field experiments
Introduction
============
During the last decades, the number of people living and recreating in, or travelling through, mountainous terrain has substantially increased. To ensure the reliability of infrastructure, extensive engineering works such as supporting structures and snow sheds have been built to prevent damage from large avalanches. Whereas these permanent protection measures are highly effective, they are also costly. Therefore, less expensive temporary preventive measures have become increasingly popular over the last decade. In particular, artificial avalanche release by explosion is among the key preventive measures. The aim is to trigger avalanches when their size is still small enough to not cause any damage and no people are exposed in the path of the avalanche [@mcclung:2006].
Releasing avalanches with explosives by hand or helicopter charging is, however, limited to locations and weather conditions that allow tolerably safe access for avalanche control personnel. This limitation has been overcome by fixed avalanche control installations, which trigger avalanches by the effect of explosions and allow remote operation even under the most adverse weather conditions or during nighttime. The basic physical mechanisms that cause slab avalanches to release, whether from explosives or other triggers, are well known and have been used to choose optimal locations of blast installations for years. What is lacking is a quantitative model incorporating the “known” physics of initiating slab avalanche failure that can be used to examine the processes, improve understanding of the physical mechanisms, and make predictions that can be tested in the future.
Historically, research on avalanche control has focused on experimental evidence of waveforms, charge type and placement to support the work of avalanche control operations [@gubler:1976; @mellor:1973; @ueland:1993; @bones:2012; @binger:2015]. The most extensive measuring campaigns were performed by @gubler:1977. However, many of the more recent studies focused on short-range effects [@bones:2012; @wooldridge:2012; @johnson:1993]. A more detailed review of past research on snow and explosions is given by @simioni:2015. A model considering the porous character of snow based on Biot’s [-@biot:1962] equations was proposed by @johnson:1982, but has rarely been applied to snow since [@albert:2013]. A mixed stress-energy failure criterion including simplified effects of explosive loading was developed by @cardu:2008. Only recently have numerical tools been used to support a theoretical framework for the physical mechanisms that lead to the release of an avalanche. @miller:2011 considered the non-linear effects of an explosion and the non-linear compaction of the snowpack at close range using the finite element method.
Here we compare the measurements from field experiments on the wave propagation caused by an explosion to the results of a numerical simulation considering the porous character of snow. We tested the hypothesis that the stresses induced by the acoustic wave propagating through the snowpack locally exceed the snow strength and lead to failure. In the winter 2013-2014 we performed multiple field experiments with avalanche control explosives triggered at different elevations above the snow surface and measured the air pressure above the snowpack as well as acceleration at different depths within the snowpack and distances from the point of explosion [@simioni:2015]. In addition, we recorded weak layer failure with cameras.
In the following we describe the numerical model that was used to perform the simulations. We focus on a specific experiment as a showcase for the test series, build a layered porous model for the prevailing snowpack, and evaluate the numerical results against the measured air pressure and acceleration. Finally, we compare locations where the stress in the numerical simulation exceeds the strength of the snowpack to the observed locations in the field.
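The final comparison step — flagging locations where the simulated stress exceeds a porosity-dependent snow strength — can be sketched as follows. The power-law strength relation and its parameters below are placeholders for illustration, not the parameterization used in the study:

```python
def snow_strength(porosity, sigma_ice=1.0e6, exponent=2.0):
    """Hypothetical strength-porosity relation: strength scaled from an
    assumed solid-ice reference stress (Pa) by a power of the ice
    volume fraction. Both parameters are illustrative, not fitted."""
    return sigma_ice * (1.0 - porosity) ** exponent

def failure_map(stress_field, porosity_field):
    """Flag grid cells where the local stress magnitude (Pa) exceeds
    the local snow strength; returns a boolean grid."""
    return [[abs(s) > snow_strength(p) for s, p in zip(srow, prow)]
            for srow, prow in zip(stress_field, porosity_field)]
```

In the actual simulation the same comparison is made for both the principal normal and the shear stress fields at every time step, so the flagged region evolves as the wave propagates.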
Methods
=======
Field experiment
----------------
We chose the first experiment from a day with eight experiments on 27 February 2014 as a showcase to compare with the numerical results. The geometry of the experiment is shown in Figure \[fig:layout\]. A 4.3 kg explosive charge was taped to a wooden stick and placed 1 m above the snow surface. Three snow pits were excavated at horizontal distances of 12.3 m, 17.3 m, and 22.5 m from the explosive charge. Microphones were placed 0.05 m above the undisturbed snow surface next to the snow pits. Three accelerometers were installed in cavities at 0.13 m, 0.48 m, and 0.83 m below the snow surface in each snow pit [@simioni:2015]. Special care was taken to match the diameters of the horizontal holes exactly to the diameters of the accelerometers to ensure the coupling of the sensors to the snowpack. Snowpack failure was recorded with compact cameras [@simioni:2015].
![\[fig:layout\] Longitudinal section of the measuring layout of the experiments from 27 February 2014 [@simioni:2015].](measuring_layout_paper_revP3){width="100.00000%"}
The snowpack on the investigated day was 187 cm deep and consisted of a 45 cm thick layer of recently deposited snow (decomposing and fragmented precipitation particles), including two melt-freeze crusts, above a well-consolidated base. The base was composed of layers of small rounded grains interspersed with several melt-freeze crusts and ice layers, above hard layers of faceted crystals near the bottom of the snowpack. A potential weak layer was identified at a height of 85 cm above the ground. The snowpack was still dry but relatively warm, with a minimum temperature of -1°C. The point snow stability based on the snow profile was rated as good [@schweizer:2001]. An extended column test [@Simenhois:2009] indicated that the potential weak layer was very hard to trigger, as it was buried below a 1 m thick, well-consolidated slab. The densities obtained by capacitive measurements in the three snow pits are shown in Figure \[fig:snow-profile\] [@denoth:1989; @eller:1996]. To localize weak layer failure during the experiments, compact cameras were installed in each snow pit and recorded the pit wall during the explosion. The individual video stills allowed us to visually identify weak layer failure through the movement of the snowpack overlying the weak layer [@simioni:2015].
![\[fig:snow-profile\] Density profiles measured with the capacitive Denoth probe in the three pits at distances of 12, 17 and 22 m from the point of triggering.](140227_densities){width="100.00000%"}
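The first of the empirical relations used to build the porous model from these density profiles is essentially geometric: for dry snow the porosity follows directly from the measured bulk density under the standard assumption of a pure-ice skeleton. A minimal sketch:

```python
RHO_ICE = 917.0  # density of ice in kg/m^3

def porosity_from_density(rho_snow):
    """Porosity of dry snow from its bulk density (kg/m^3), treating
    the solid frame as pure ice: phi = 1 - rho_snow / rho_ice."""
    if not 0.0 <= rho_snow <= RHO_ICE:
        raise ValueError("density outside the physical range for dry snow")
    return 1.0 - rho_snow / RHO_ICE
```

The remaining poroelastic parameters (frame moduli, permeability, tortuosity) require additional empirical relations beyond this geometric one; those are not reproduced here.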
Numerical model
---------------
Seasonal snow is a highly porous material in which air often takes up the larger part of the volume. @johnson:1982 showed that Biot’s [-@biot:1956] theory of wave propagation in porous materials can be successfully applied to snow. Acoustic wave propagation in such porous materials is characterized by a compressional and a shear wave in the ice skeleton, and by an additional second compressional wave, often called the “slow” wave, that propagates in the pore fluid. Due to the high porosity of snow and the relative proportions of its material properties, this second compressional wave mode is propagative in snow [@oura:1952; @ishida:1965] and can be recorded. This stands in contrast to other natural porous materials, such as sediments, where this wave is diffusive and cannot be recorded. The energy dissipation mechanism in Biot’s theory is physically modeled by the viscosity of the pore fluid, which moves relative to the skeleton as acoustic waves propagate through the material. Biot’
---
abstract: 'We highlight differences in spectral types and intrinsic colors observed in pre-main sequence (pre-MS) stars. Spectral types of pre-MS stars are wavelength-dependent, with near-infrared spectra being 3-5 spectral sub-classes later than the spectral types determined from optical spectra. In addition, the intrinsic colors of young stars differ from that of main-sequence stars at a given spectral type. We caution observers to adopt optical spectral types over near-infrared types, since Hertzsprung-Russell (H-R) diagram positions derived from optical spectral types provide consistency between dynamical masses and theoretical evolutionary tracks. We also urge observers to deredden pre-MS stars with tabulations of intrinsic colors specifically constructed for young stars, since their unreddened colors differ from that of main sequence dwarfs. Otherwise, $V$-band extinctions as much as $\sim$0.6 mag erroneously higher than the true extinction may result, which would introduce systematic errors in the H-R diagram positions and thus bias the inferred ages.'
bibliography:
- 'pecaut.bib'
title: Anomalous Spectral Types and Intrinsic Colors of Young Stars
---
Introduction
============
Two of the most fundamental parameters of a star – the effective temperature () and luminosity – are based on simple, easy-to-understand data such as the spectral type, extinction, and bolometric corrections. Determining these should be relatively error-free, right? Experience has taught observers that special care must be taken when characterizing pre-main sequence (pre-MS) stars, as these young stars require more attention than main-sequence dwarfs.
When observers desire to characterize a star’s properties, they normally start with the spectral type. The spectral type of the star is determined by comparing characteristics of the spectrum with spectral standard stars. In doing this we can obtain an estimate of the temperature and a gross estimate of the surface gravity of the star. To quantify the extinction and reddening, the observer will compare the target star’s observed colors to tabulated intrinsic colors of stars of the same spectral type to determine a color excess and use a total-to-selective extinction ratio value to estimate the extinction. Once the extinction has been quantified, a distance can be estimated (if not known), by assuming an age and consulting a theoretical isochrone, or assuming the star is on the main-sequence and calculating a main-sequence distance. Alternatively, if a trigonometric or kinematic parallax is known, we may compute the luminosity of the star. Normally, the point of these calculations is to place the star on the Hertzsprung-Russell (H-R) diagram and compare it to theoretical evolutionary models to estimate an age and mass.
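The dereddening and distance steps described above reduce to two one-line formulas. The sketch below assumes $B-V$ photometry and the commonly adopted diffuse-ISM total-to-selective extinction ratio $R_V = 3.1$:

```python
def visual_extinction(observed_bv, intrinsic_bv, r_v=3.1):
    """A_V from the B-V color excess: A_V = R_V * E(B-V)."""
    return r_v * (observed_bv - intrinsic_bv)

def distance_pc(apparent_v, absolute_v, a_v=0.0):
    """Distance in pc from the extinction-corrected distance modulus,
    m - M = 5 log10(d / 10 pc) + A_V."""
    return 10.0 ** ((apparent_v - absolute_v - a_v + 5.0) / 5.0)
```

Note how sensitive the chain is to the adopted intrinsic color: with $R_V = 3.1$, an error of only $\sim$0.2 mag in the assumed intrinsic $B-V$ already propagates to $\sim$0.6 mag in $A_V$, roughly the size of the effect quoted in the abstract.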
Though this process seems fairly straightforward, there are many assumptions and systematic effects which can creep in and result in systematic errors in the fundamental parameters, such as temperature and luminosity (which are relatively model-free), and therefore the parameters derived from the evolutionary models. Assuming we are able to determine the spectral type with perfect fidelity, there are uncertainties in the tables which relate the spectral type, , intrinsic color and bolometric correction. There are also many different varieties of such tables, with slight variations in their intrinsic colors, underlying temperature scale, and bolometric corrections. Some tables even contain self-inconsistent values for the bolometric corrections and the bolometric luminosity of the Sun [@torres2010][^1]!
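The self-consistency issue with bolometric corrections can be made concrete: a luminosity inferred from $M_V$ and a BC is only meaningful if the adopted solar bolometric magnitude matches the zero point of the BC table being used. A minimal sketch, adopting the IAU 2015 zero point $M_{\rm bol,\odot} = 4.74$ purely for illustration:

```python
def luminosity_lsun(abs_mag_v, bc_v, mbol_sun=4.74):
    """Luminosity in solar units from M_V and a bolometric correction:
    M_bol = M_V + BC_V, then L/Lsun = 10^(-0.4 (M_bol - Mbol_sun)).
    The result is self-consistent only if mbol_sun matches the zero
    point implied by the adopted BC scale."""
    mbol = abs_mag_v + bc_v
    return 10.0 ** (-0.4 * (mbol - mbol_sun))
```

Mixing a BC table tied to one solar zero point with a different $M_{\rm bol,\odot}$ shifts every luminosity by a constant multiplicative factor, which is exactly the kind of hidden systematic described above.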
For populations of young stars this presents many problems because systematic errors in the fundamental properties of young, pre-MS stars will propagate into systematic errors in masses and ages (see @soderblom2014 for a full discussion of ages of young stars). This can skew the inferred evolutionary lifetime of gas-rich disks. Since gas giant planets can only form when gas is present in the circumstellar disk, these ages are also used to constrain giant planet formation timescales. Problems are also present when individual stars are mischaracterized. If a young star hosts a directly-imaged substellar object, the mass of the substellar object is estimated by comparing the luminosity and assumed age of the object with evolutionary models. Systematic effects in assumed ages can then propagate to wrong assumptions about the model-derived masses (e.g., $\kappa$ And b; @hinkley2013 [@bonnefoy2014]), which may misdirect planet formation theories. Thus, systematic errors in individual parameters, such as spectral types and intrinsic colors, can propagate down and fundamentally limit our ability to test star and planet formation theories.
Spectral Types
==============
The spectral type of a target is one of the most useful measurements, since many other stellar properties are usually derived with some dependence on the spectral type. Young stars, like most stars, are typed by comparing their spectra with that of spectral standards, and this has historically been performed using optical spectra. However, many low-mass stars are brighter in the near-infrared (NIR) and so it seems completely reasonable to perform this same measurement with NIR spectra as well. Two interesting cases are that of TW Hya and V4046 Sgr.
TW Hya, one of the most well-studied classical T-Tauri stars and a member of the youngest nearby moving group that bears its name (the TW Hydra Association), has typically been assigned a temperature type of K7 using optical spectra (K8IVe, @pecaut2013; K6Ve, @torres2006; K6e, @hoff1998; K7e, @delareza1989; K7 Ve, @herbig1978). However, @vacca2011 assigned a type of M2.5V using NIR spectra from SpeX, which implied a very young age of $\sim$3 Myr instead of the much older, more often quoted age of $\sim$ 10 Myr [@barrado2006]. This discrepancy is a source of great confusion – which spectral type should one adopt when characterizing young stars?
V4046 Sgr, a young binary member of the $\beta$ Pictoris moving group harboring a gas-rich disk of its own, has also been typed in both the optical and NIR and the same effect is observed - the NIR spectral type is about 3-5 subtypes later than the optical spectral type [@kastner2014]. However, unlike TW Hya, V4046 Sgr has dynamical mass constraints from radial velocities and gas dynamics [@rosenfeld2012]. @kastner2014 have placed these two young stars on the H-R diagram assuming the optical spectral types in one case and the NIR spectral types in another case. Comparing these two sets of H-R diagram positions with theoretical evolutionary models, the NIR spectral types are inconsistent with the dynamical mass constraints and @kastner2014 thus urged caution against using the NIR spectral types on young stars.
@stauffer2003 have studied this wavelength-dependent spectral type effect among the zero-age main sequence K dwarfs in the Pleiades [$\sim$135 Myr; @bell2014]. The @stauffer2003 study found that the spectral type of Pleiades K-type stars were systematically $\sim$1 subtype later in the red optical spectra than the blue optical spectra. Furthermore, they did not observe this spectral type anomaly in members of the older Praesepe cluster [$\sim$650-800 Myr; @gaspar2009; @bell2014; @brandt2015]. @stauffer2003 argued that spots were a major factor in this effect, and concluded that there must be more than one photospheric temperature present in the Pleiades K-dwarfs. Pre-main sequence stars are magnetically very active as well, and observations indicate large filling factors on their surfaces [@berdyugina2005], consistent with this effect.
Intrinsic Colors
================
Many studies in the past two decades have pointed out that stellar intrinsic colors of young stars are different than that of typical main sequence dwarfs. @gullbring1998 had noted this for both classical and weak-lined T-Tauri stars in Taurus. @dario2010 have noted this in the Orion Nebula Cluster (ONC) and dereddened the very young ONC cluster members using a specially constructed spectral-type color sequence for young stars. However, their spectral type-color sequence was not published as part of their study. @luhman1999b and @luhman2010 [@luhman2010e] went much further and constructed a color-spectral type sequence for pre-MS late K- and M-type stars and brown dwarfs. Most recently, the @pecaut2013 study released a comprehensive tabulation of the intrinsic colors for pre-MS stars from F-type to late M-type with the most popular photometric bands – Johnson–Cousins $BVI_C$, 2MASS $JHK_S$ and the recently available WISE $W1$, $W2$, $W3$ and $W4$ bands. In addition, we fit the observed spectral energy distributions to Phoenix–NextGen synthetic spectra [@allard2012] to infer an effective temperature () and bolometric correction (BC) scale for pre-MS stars. This young spectral type–color––BC scale was constructed using all the known (as of July 2013) members of the nearest moving groups – the $\eta$ Cha Cluster, the TW Hydra Association (TWA), the $\beta$ Pictoris moving group and the Tucana–Horologium (Tuc-Hor) moving group.
Two color-color plots for young stars are shown in Figure \[fig:color-color\]. We plot $V$–$K_S$ on the horizontal axis as a proxy for against $J$–$H$ (left) and $K_S$–$W
---
abstract: 'We have measured radial velocities and metallicities of 16 RR Lyrae stars, from the QUEST survey, in the Sagittarius tidal stream at 50 kpc from the galactic center. The distribution of velocities is quite narrow ($\sigma=25$ km/s) indicating that the structure is coherent also in velocity space. The mean heliocentric velocity in this part of the stream is 32 km/s. The mean metallicity of the RR Lyrae stars is \[Fe/H\]$=-1.7$. Both results are consistent with previous studies of red giant stars in this part of the stream. The velocities also agree with a theoretical model of the disruption of the Sagittarius galaxy.'
author:
- 'A. Katherina Vivas'
- Robert Zinn
- Carme Gallart
title: Velocities of RR Lyrae Stars in the Sagittarius Tidal Stream
---
Introduction
============
Numerous observations have shown that the Sagittarius dwarf spheroidal galaxy (Sgr) is being disrupted by the tidal forces of the Milky Way. A long stream of its tidal debris has been observed multiple times in different parts of the sky. Many of these observations are described elsewhere in these proceedings. They include an all-sky view of M giant stars (Majewski et al. 2003), RR Lyrae stars (Vivas et al. 2001, Ivezic et al. 2000), A stars (Yanny et al. 2000), halo turnoff stars (Newberg et al. 2002) and main-sequence stars in color-magnitude diagrams (Mart[í]{}nez-Delgado et al. 2001). Each of these observations detects the stream as an over-density of the tracer above the halo background. Simulations of the disruption of satellite galaxies by the Milky Way show that tidal streams should be seen not only as over-densities but also as coherent structures in velocity space (e.g., Harding et al. 2001).
The QUEST survey for RR Lyrae stars (Vivas et al. 2001, 2003, see also Zinn et al. in this volume) has observed part of the Sgr stream in a long, $2.3^\circ$-wide strip near the celestial equator. We present here a study of the radial velocities of a sub-sample of 16 RR Lyrae stars in the Sgr tidal stream. RR Lyrae stars stand out as one of the best tracers of the old halo stellar population because they are bright standard candles. Thus, they can provide excellent views of the stream in both the three-dimensional spatial distribution and the radial velocity distribution.
The Data
========
The 16 RR Lyrae stars belong to the clump located at $\sim50$ kpc from the galactic center, which has been related to the leading arm of the Sgr tidal stream. The QUEST survey found 84 stars in the Sgr stream, a factor of 10 above the background of halo stars. The spatial distribution of the clump indicates that it is quite wide in right ascension, about $36^\circ$, from $13\fh 0$ to $15\fh 4$. We included in this study stars from along the whole stream in order to confirm its true extent. All stars have mean magnitudes of $V \sim 19.2$.
Because RR Lyrae stars are pulsating stars with periods of $\sim0.5$ days, exposure times of spectra should be kept short ($\la 30$ min) in order to avoid excessive broadening of the spectral lines by the changing pulsational velocity. Given the faintness of the stars in the clump, a large telescope was needed. Spectra of the 16 stars were taken with FORS2 at the VLT-Yepun in Paranal, Chile, during June-Aug 2002. We used grating 600B which gives a resolution of $\sim 6$Å, and covers a spectral range from 3400-6300Å. Exposure times varied between 20 and 30 minutes. For each star we obtained two spectra taken at random times on different nights. This allowed us to make measurements at two different phases during the pulsation cycle. A few radial velocity standards were also observed with the same instrumental setup.
Radial Velocities
=================
Radial velocities of RR Lyrae stars change during the pulsation cycle by up to $\sim 100$ km/s. Thus, it is important to know the exact phase at which each spectrum was taken in order to separate the systemic velocity of the star from the velocity due to the pulsations. We obtained phase information from the QUEST RR Lyrae catalog (Vivas et al. 2003), which provides accurate ephemerides for all the stars in our sample. We determined the radial velocities by cross-correlation with each of the observed radial velocity standards. The error in a single measurement is estimated to be $\sim 20$ km/s. For each star we fitted a radial velocity curve template following the procedure described in Layden (1994). In a few cases, the spectrum was taken near the phase of maximum brightness of the light curve. We did not use these observations since there is a strong discontinuity in the radial velocity curve at this point.
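The bookkeeping just described — converting an observation time to a pulsation phase via the catalog ephemerides, then removing the pulsational velocity at that phase — can be sketched as below. The linear template in the example is a toy stand-in for the empirical Layden (1994) velocity-curve templates:

```python
def pulsation_phase(jd, epoch_jd, period_days):
    """Pulsation phase in [0, 1) from the star's ephemeris (epoch of
    maximum light and pulsation period)."""
    return ((jd - epoch_jd) / period_days) % 1.0

def systemic_velocity(v_observed, phase, template):
    """Systemic velocity: measured radial velocity minus the
    pulsational velocity offset predicted by a template at that
    phase (template is a callable, km/s as a function of phase)."""
    return v_observed - template(phase)
```

Averaging the systemic velocities recovered from the two spectra of each star (taken at different random phases) then gives one velocity per star, with the discontinuity near maximum light avoided by discarding those phases.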
The results are shown in Figure 1a. The histogram shows the distribution of the heliocentric radial velocities of the 16 stars. Excluding the two obvious outliers, the distribution is quite narrow, with a mean of 32 km/s and a standard deviation of only 25 km/s. The distribution does not resemble the one expected for a random sample of halo stars, which is shown as the dashed Gaussian curve in Figure 1a. For comparison we also show the distribution of velocities of four red giant stars from the Spaghetti survey (Dohm-Palmer et al. 2001), which also seem to be associated with the Sgr stream in a region of the sky very close to ours. The presence of two outliers is not surprising: if the Sgr stream lies within a smooth distribution of halo stars following a $r^{-3}$ power law, we expect 1-2 halo RR Lyrae stars in this volume of the sky.
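The separation of a kinematically cold stream from halo outliers, as done above by eye, can also be automated with an iterative sigma-clipped mean and dispersion. A minimal sketch (the velocity values in the test are synthetic, not the measured sample):

```python
import math

def clipped_stats(values, clip=2.0, max_iter=5):
    """Iteratively sigma-clipped mean and sample standard deviation:
    recompute the statistics, drop points beyond clip*sigma, and
    repeat until no points are removed or max_iter is reached."""
    vals = list(values)
    for _ in range(max_iter):
        n = len(vals)
        mean = sum(vals) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
        kept = [v for v in vals if abs(v - mean) <= clip * std]
        if len(kept) == len(vals):
            break
        vals = kept
    return mean, std
```

For a small sample with a few extreme interlopers, a fairly tight clip (2-3 sigma) is needed, since the outliers themselves inflate the first dispersion estimate.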
There is also good agreement between our observations and the predictions of the models of the disruption of Sgr. We compare with one of these models (Mart[í]{}nez-Delgado et al 2003) in Figure 1b. The red giants from the Spaghetti survey are also included.
Metal Abundances
================
The metal abundances of the stars were measured using the modified $\Delta S$ technique described by Layden (1994), which is based on the equivalent widths of the Ca II K line and the Balmer lines. The error in a single measurement of \[Fe/H\] is 0.2 dex. The distribution of metallicities of the 16 stars of our sample is shown in Fig 2. Our results are in very good agreement with the 4 red giants from the Spaghetti survey. The mean metallicity of the RR Lyrae stars is \[Fe/H\]$= -1.7$. Notice however that the mean metallicity of the stream is significantly lower than in the core of the Sgr galaxy. This could be explained if Sgr once had a radial gradient in metallicity, an age-metallicity relation (since the RR Lyrae variables are exclusively very old stars) or a combination of both.
Conclusions
===========
We have measured VLT spectra for 16 RR Lyrae stars belonging to a part of the Sgr tidal stream, located at 50 kpc from the galactic center. The distribution of radial velocities is quite narrow, indicating that the Sgr clump is a coherent structure in velocity space. We do not find significant gradients of radial velocities or metallicities along the stream.
This work is based on observations collected at the European Southern Observatory, Chile. The data were obtained as part of an ESO Service Mode run. This research project was partially supported by the National Science Foundation under grant AST-0098428.
Dohm-Palmer, R. C. et al. 2001, , 555, 37
Harding, P. et al. 2001, , 122, 1397
Ivezic, Z. et al. 2000, , 120, 9631
Layden, A. C. 1994, , 108, 1016
Layden, A. C. & Sarajedini, A. 2000, , 119, 1760
Majewski, S. R. et al. 2003, , submitted
Mart[í]{}nez-Delgado, D. et al. 2001, , 549, L199
Mart[í]{}nez-Delgado, D. et al. 2003, , submitted
Newberg, H. J. et al. 2002, , 569, 245
Vivas, A. K. et al. 2001, , 554, L33
Vivas, A. K. et al. 2003, , submitted
Yanny, B. et al. 2000, , 540, 825
---
abstract: 'We will describe radiative corrections to bremsstrahlung, focusing on applications to luminosity, fermion pair production, and radiative return at high-energy $e^+ e^-$ colliders. A precise calculation of the Bhabha luminosity process was essential at SLC and LEP, and will be equally important in ILC physics. We will review the exact results for two-photon radiative corrections to Bhabha scattering which led to the precision estimates for the BHLUMI MC. We will also compare the implementation of the virtual photon correction to bremsstrahlung for fermion pair production in the ${\cal KK}$ MC to similar exact expressions developed for other purposes, and discuss applications to radiative return in high energy $e^+ e^-$ colliders.'
author:
- 'S. Yost, S. Majhi, and B. F. L. Ward'
title: 'Virtual Corrections to Bremsstrahlung with Applications to Luminosity Processes and Radiative Return'
---
INTRODUCTION
============
In the 1990s, S. Jadach, B. F. L. Ward and S. A. Yost calculated the two-real-photon corrections[@2real] and, with M. Melles, the real-plus-virtual-photon corrections[@real+virt] to the small-angle Bhabha scattering process. These corrections were used to bring the theoretical uncertainty in the luminosity measurement, as calculated by the BHLUMI Monte Carlo (MC) program [@bhlumi], to within a 0.06% precision level for LEP1 parameters and 0.122% for LEP2 parameters[@precision].
A key component of the two-photon radiative corrections calculated for BHLUMI was the virtual photon contribution to hard photon bremsstrahlung. This radiative correction is also an important contribution to the fermion pair production process in $e^+ e^-$ annihilation implemented in the ${\cal KK}$ MC[@kkmc]. Comparisons of these results[@JMWY] to similar expressions obtained by other authors are reviewed. In particular, we focus attention on recent results for the virtual correction to hard photon radiation in radiative return applications[@KR; @phokhara; @epiphany2].
BHABHA LUMINOSITY PROCESS
=========================
BHLUMI was developed into a high-precision tool for calculating the Bhabha luminosity process at SLC and LEP, and it can continue to be developed to meet the requirements of a future linear collider such as the ILC. A key advantage of the program is its exact treatment of the multi-photon emission phase space using a YFS-exponentiation procedure[@yfs], so that IR singularities are canceled exactly to all orders and the leading soft photon effects are exponentiated, leaving only well-behaved YFS residuals to be calculated exactly to the order needed.
Table 1 shows a summary of the contributions to the theoretical uncertainty of the Bhabha luminosity process calculated by BHLUMI4.04, which includes the complete second order leading log (${\cal O}(\alpha^2 L^2)$) photonic radiative corrections[@precision]. The LEP1 CMS energy is taken to be the $Z$ mass, with an angular range between $1^\circ$ and $3^\circ$, while the LEP2 result is calculated at a CMS energy of 176 GeV and an angular range of $3^\circ - 6^\circ$. The portion of the error budget of interest here is the missing photonic ${\cal O}(\alpha^2 L)$ contribution, which is due to all two-photon radiative corrections at next-to-leading log (NLL) order. For ILC physics, it is desirable to reach $0.01\%$ precision. This is in reach for BHLUMI. Here, we will concentrate on the photonic contributions.
The photonic part of the error estimate in Table 1 comes from comparing to an exact ${\cal O}(\alpha^2)$ calculation. There are three contributions: a double real emission term[@2real] which can reach 0.012%, a real plus virtual photon term[@real+virt] which is bounded by 0.02%, and a two-loop pure virtual correction[@precision; @BVNB] making up 0.014% in the LEP1 case. Adding these in quadrature gives the error quoted in Table 1. Since all of the exact ${\cal O}(\alpha^2)$ photonic radiative corrections are available, adding them to BHLUMI would remove almost all of the error quoted for these effects.
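The quadrature combination of the three exact ${\cal O}(\alpha^2)$ pieces is straightforward; the sketch below reproduces the LEP1 photonic entry of Table 1 (0.027%) from the three contributions quoted above:

```python
import math

def quadrature(*terms):
    """Combine independent uncertainty contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms))
```

Here the inputs are the double-real (0.012%), real-plus-virtual (0.02%), and two-loop virtual (0.014%) bounds; quadrature addition is appropriate because the three error estimates are treated as independent.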
The only remaining exact two photon contribution would then be “up-down interference.” The entire ${\cal O}(\alpha)$ up-down interference effect was 0.011% at $3^\circ$ and 0.099% at $9^\circ$[@updown], and the ${\cal O}(\alpha^2)$ contribution to up-down interference would be suppressed by an additional factor on the order of ${\alpha\over\pi}L \approx 0.04.$ This contribution may be neglected for small-angle Bhabha scattering. However, a complete calculation of ${\cal O}(\alpha^2)$ effects, including the full up-down interference contribution, could prove useful if the ILC requires a wider angular acceptance for the luminosity monitor. Some new ${\cal O}(\alpha^2)$ computational tools and results on Bhabha scattering have appeared recently[@dixon; @penin; @italian; @riemann; @lorca]. Comparisons to these results will be useful in gauging the precision of different approaches to the higher order radiative corrections in Bhabha scattering.
**Source of Uncertainty** **LEP1** **LEP2**
------------------------------------------ ---------- ----------
Missing Photonic ${\cal O}(\alpha^2 L)$ 0.027% 0.04%
Missing Photonic ${\cal O}(\alpha^3L^3)$ 0.015% 0.03%
Vacuum Polarization 0.04% 0.10%
Light Pairs 0.03% 0.05%
$Z$ Exchange 0.015% 0.0%
[**Total**]{} 0.061% 0.122%
PAIR PRODUCTION AND RADIATIVE RETURN
====================================
Another important process at electron-positron colliders is fermion pair production. This process is calculated, for example, by the ${\cal KK}$ MC[@kkmc], and again, photonic radiative corrections are essential. In particular, we have presented explicit results for real plus virtual photon emission from the initial or final state fermion line.
In the case of initial state radiation, emitting a single hard photon permits the final fermion pair creation process to be investigated over a wide range of effective CM energies $s' = s(1-v)$, where $v$ is the energy fraction carried away by the hard photon. This is known as the “radiative return” method, and it can be used at a high energy collider to probe the $Z$ resonance over a range of energies, or at a lower energy collider to measure the pion form factor.
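As a sketch of this kinematics (using the LEP2-style CMS energy quoted earlier; the numbers are illustrative): to return from $\sqrt{s} = 176$ GeV to the $Z$ peak, the hard photon must carry away most of the collision energy.

```python
sqrt_s  = 176.0   # CMS energy in GeV (the LEP2 value used above)
sqrt_sp = 91.19   # target effective CMS energy: the Z mass in GeV

# s' = s (1 - v)  =>  v = 1 - s'/s
v = 1.0 - (sqrt_sp / sqrt_s) ** 2
print(round(v, 3))  # prints 0.732: the photon carries ~73% of s
```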
Virtual photon emission is the most important radiative correction to radiative return. We have compared our results for this process in the context of the ${\cal KK}$ MC with several other results, including Ref. [@IN] (IN), which is fully differential, but lacking mass corrections, Ref. [@BVNB] (BVNB), which is differential only in $v$, but includes mass corrections, and Ref. [@KR] (KR), which is fully differential and includes mass corrections. The KR result was developed for calculating radiative return in the PHOKHARA MC, and is the newest available comparison.
We have compared these results by using them to calculate the YFS residual $\overline\beta^{(2)}_1$, which includes the IR-finite part of single hard bremsstrahlung including virtual photon corrections. We have shown earlier that our result (JMWY) agrees with the IN and BVNB results analytically to NLL order ($({\alpha\over\pi})^2L$). We have recently shown similar analytical agreement for the KR result at NLL order[@ICHEP04; @compare-long].
The NLL result is in fact very compact, and represents the exact results to high accuracy over most of the range of hard photon energy fraction $v$. Without mass corrections, $$\overline\beta^{(2)}_{1\ \rm NLL} = L - 1 + 3\ln(1-r_1) + 2\ln r_2 \ln(1-r_1)
- \ln^2(1-r_1) + 2\;{\rm Li}_2(r_1) + {r_1(1-r_1)\over 1 + (1-r_1)^2}
+ (r_1 \leftrightarrow r_2)$$ where $L = \ln(s/m_e^2)$, $r_i = 2p_i\cdot k/s$ with $p_i$ the incoming $e^{\pm}$ momenta and $k$ the hard photon momentum. This expression is taken as a baseline in comparing all of the exact expressions in a run of the ${\cal KK}$ MC shown in Fig. 1.
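As a hedged numerical sketch of the expression above (the kinematic point is invented for illustration, and the $(r_1 \leftrightarrow r_2)$ symmetrization is read as acting on the $r$-dependent terms only):

```python
import math

def li2(x, terms=400):
    """Dilogarithm Li2(x) = sum_{k>=1} x^k / k^2 (series valid for |x| <= 1;
    convergence is slow only near |x| = 1)."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def beta1_nll(s, r1, r2, m_e=0.000511):
    """NLL YFS residual without mass corrections; r_i = 2 p_i . k / s,
    s in GeV^2, electron mass in GeV."""
    L = math.log(s / m_e**2)
    def part(ra, rb):
        return (3 * math.log(1 - ra) + 2 * math.log(rb) * math.log(1 - ra)
                - math.log(1 - ra)**2 + 2 * li2(ra)
                + ra * (1 - ra) / (1 + (1 - ra)**2))
    return L - 1 + part(r1, r2) + part(r2, r1)

val = beta1_nll(91.2**2, 0.3, 0.2)  # illustrative point at sqrt(s) = 91.2 GeV
```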
Fig. 1 compares
**Matts Roos and S. M. Harun-or-Rashid**
Department of Physics, Division of High Energy Physics,
University of Helsinki, Finland
**ABSTRACT**
The results of different analyses of the dynamical parameters of the Universe are converging towards agreement. Remaining disagreements reflect systematic errors coming either from the observations or from differences in the methods of analysis. Compiling the most precise parameter values with our estimates of such systematic errors added, we find the following best values: the baryonic density parameter $\Omega_b h^2 = 0.019\pm 0.02$, the density parameter of the matter component $\Omega_m = 0.29\pm 0.06$, the density parameter of the cosmological constant $\Omega_{\lambda} = 0.71\pm 0.07$, the spectral index of scalar fluctuations $n_s = 1.02 \pm 0.08$, the equation of state of the cosmological constant $w_{\lambda} < -0.86$, and the deceleration parameter $q_0 = -0.56 \pm 0.04$. We do not modify the published best values of the Hubble parameter $H_0 = 0.73\pm 0.07$ and the total density parameter $\Omega_0\thinspace ^{+0.03}_{-0.02}$.\
INTRODUCTION
============
Our information on the dynamical parameters of the Universe describing the cosmic expansion comes from three different epochs. The earliest is the Big Bang nucleosynthesis which occurred a little over 2 minutes after the Big Bang, and which left its imprint in the abundances of the light elements affecting the baryonic density parameter $\Omega_b$. The discovery of anisotropic temperature fluctuations in the cosmic microwave background radiation at large angular scales (CMBR) by COBE-DMR [@smot], followed by small scale anisotropies measured in the balloon flights BOOMERANG [@dber] and MAXIMA [@ba-ha], by the radio telescopes Cosmic Background Imager (CBI) [@pe-ma], Very Small Array (VSA) [@scot] and Degree Angular Scale Interferometer (DASI) [@halv], testifies to the conditions in the Universe at the time of last scattering, about 350000 years after the Big Bang. The analyses of the CMBR power spectrum give information about every dynamical parameter, in particular $\Omega_0$ and its components $\Omega_b,\ \Omega_m$ and $\Omega_{\lambda}$, and the spectral index $n_s$. For an extensive review of CMBR detectors and results, see Bersanelli et al. [@bersa]. Very recently, the expected fluctuations in the CMBR polarization anisotropies have also been observed by DASI [@kova].
The third epoch is the time of matter structures: galaxy clusters, galaxies and stars. Our view is limited to the redshifts we can observe, which correspond to times of a few Gyr after the Big Bang. This epoch determines the Hubble constant, measured successfully by the Hubble Space Telescope (HST) [@free], and the difference $\Omega_{\lambda}-\Omega_m$, obtained from the dramatic supernova Ia observations by the High-z Supernova Search Team [@ries] and the Supernova Cosmology Project [@perl]. The large scale structure (LSS) and its power spectrum have been studied in the SSRS2 and CfA2 galaxy surveys [@daco], in the Las Campanas Redshift Survey [@shec], in the Abell-ACO cluster survey [@retz], in the IRAS PSCz Survey [@saun] and in the 2dF Galaxy Redshift Survey [@peac],[@coll]. Various sets of CMBR data, supernova data and LSS data have been analyzed jointly. We shall only refer to global analyses of the most recent CMBR power spectra and large scale distributions of galaxies.
The list of other types of observations is really very long. To mention some, there have been observations on the gas fraction in X-ray clusters [@evrd], on X-ray cluster evolution [@ba-ek], on the cluster mass function and the Ly$\alpha$ forest [@wein], on gravitational lensing [@c-h-i], on the Sunyaev-Zel’dovich effect [@bi-ca], on classical double radio sources [@guer], on galaxy peculiar velocities [@zeha], on the evolution of galaxies and star creation versus the evolution of galaxy luminosity densities [@tota].
In this review we shall cover briefly recent observations and results for the dynamical parameters $H_0,\ \Omega_b,\ \Omega_m,\
\Omega_{\lambda},\ \Omega_0,\ n_s,\ w_\lambda$ and $q_0$. In Section 2 these parameters are defined in their theoretical context, in Section 3 we turn to the Hubble parameter, and in Section 4 to the baryonic density. The other parameters are discussed in Sections 5 and 6, which are organized according to observational method: supernovæ in Section 5, CMBR and LSS in Section 6. Section 7 summarizes our results.\
THEORY
======
The currently accepted paradigm describing our homogeneous and isotropic Universe is based on the Robertson–Walker metric
$$\hbox{d}s^2=c^2\hbox{d}t^2-\hbox{d}l^2=c^2\hbox{d}t^2-R(t)^2\left({{\hbox{d}\sigma ^2}\over {1-k\sigma ^2}}
+\sigma ^2\hbox{d}\theta^2+\sigma ^2\hbox{sin}^2\theta\ \hbox{d}
\phi^2\right)\ \eqno(1)$$
and Einstein’s covariant formula for the law of gravitation,
$$G_{\mu\nu}={{8\pi G}\over {c^4}}T_{\mu\nu}\ .\eqno(2)$$
In Eq. (1) d$s$ is the line element in four-dimensional spacetime, $t$ is the time, $R(t)$ is the cosmic scale, $\sigma$ is the comoving distance as measured by an observer who follows the expansion, $k$ is the curvature parameter, $c$ is the velocity of light, and $\theta,\ \phi$ are comoving angular coordinates. In Eq. (2) $G_{\mu\nu}$ is the Einstein tensor describing the curved geometry of spacetime, $T_{\mu\nu}$ is the energy-momentum tensor, and $G$ is Newton’s constant.
From these equations one derives Friedmann’s equations which can be put into the form
$${{\dot{R}^2 + kc^2}\over {R^2}}=
{{8\pi G}\over {3}}(\rho_m+\rho_{\lambda})\ ,\eqno(3)$$
$${{2\ddot{R}}\over {R}}+{{\dot{R}^2 + kc^2}\over {R^2}}=
-{{8\pi G}\over {c^2}}(p_m+p_\lambda)\ .\eqno(4)$$
Here $\rho$ are energy densities, the subscripts $m$ and $\lambda$ refer to matter and cosmological constant (or dark energy), respectively; $p_m$ and $p_{\lambda}$ are the corresponding pressures of matter and dark energy, respectively. Using the expression for the critical density today,
$$\rho_c={3\over{8\pi G}}H_0^2\ ,\eqno(5)$$
where $H_0$ is the Hubble parameter at the present time, one can define density parameters for each energy component by
$$\Omega=\rho/\rho_c\ .\eqno(6)$$
The total density parameter is
$$\Omega_0=\Omega_m+\Omega_r+\Omega_{\lambda}\ .\eqno(7)$$
In what follows we shall ignore the very small radiation density parameter $\Omega_r$. The matter density parameter $\Omega_m$ can further be divided into a cold dark matter (CDM) component $\Omega_{CDM}$, a baryonic component $\Omega_b$ and a neutrino component $\Omega_{\nu}$.
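Eqs. (5) and (6) can be checked numerically; the sketch below uses standard SI constants (not from this paper) together with the $H_0$ and $\Omega_m$ values quoted in the abstract.

```python
import math

G   = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22        # metres per megaparsec
H0  = 73.0e3 / Mpc     # s^-1, taking H0 = 73 km/s/Mpc (h = 0.73)

rho_c = 3 * H0**2 / (8 * math.pi * G)   # Eq. (5): critical density today
rho_m = 0.29 * rho_c                    # Eq. (6) with Omega_m = 0.29
print(f"rho_c ~ {rho_c:.2e} kg/m^3")    # about 1e-26 kg/m^3
```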
The pressure of matter is certainly very small, otherwise one would observe the galaxies having random motion similar to that of molecules in a gas under pressure. Thus one can set $p_m=0$ in Eq. (4) to a good approximation. If the expansion is adiabatic so that the pressure of dark energy can be written in the form
$$p_\lambda=w_\lambda \rho_\lambda c^2\ ,\eqno(8)$$
and if dark energy and matter do not transform into one another, conservation of dark energy can be written
$$\dot{\rho_\lambda}+3H\rho_\lambda(1 + w_\lambda)=0\ .\eqno(9)$$
One further parameter is the deceleration parameter $q_0$, defined by
$$q=-{{R\ddot R}\over {\dot{R}^2}}=-{{\ddot R}\over {RH^2}}\ .\eqno(10)$$
Eliminating $\ddot R$ between Eqs. (4) and (10) one can see that $q_0$ is not an independent parameter.
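Carrying out this elimination for pressureless matter with $w_\lambda = -1$ gives the familiar relation $q_0 = \Omega_m/2 - \Omega_\lambda$, which reproduces the value quoted in the abstract:

```python
Omega_m, Omega_lambda = 0.29, 0.71   # best values from the abstract

# q0 = Omega_m / 2 - Omega_lambda, for p_m = 0 and w_lambda = -1
q0 = Omega_m / 2 - Omega_lambda
print(round(q0, 3))  # prints -0.565, consistent with q0 = -0.56 +/- 0.04
```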
The curvature parameter $k$ in Eqs. (1), (3) and (4) describes the geometry of space: a spatially open universe is defined by $k=-1$, a closed universe by $k=+1$ and a
---
abstract: 'We present preliminary results of follow-up optical observations, both photometric and spectroscopic, of stellar X-ray sources, selected from the cross-correlation of ROSAT All-Sky Survey (RASS) and TYCHO catalogues. Spectra were acquired with the E[lodie]{} spectrograph at the 193-cm telescope of the Haute Provence Observatory (OHP) and with the REOSC echelle spectrograph at the 91-cm telescope of the Catania Astrophysical Observatory (OAC), while $UBV$ photometry was made at OAC with the same telescope. In this work, we report on the discovery of six late-type binaries, for which we have obtained good radial velocity curves and solved for their orbits. Thanks to the OHP and OAC spectra, we have also made a spectral classification of the single-lined binaries and could give first estimates of the spectral types of the double-lined binaries. Filled-in or pure emission H$\alpha$ profiles, indicative of moderate or high levels of chromospheric activity, have been observed. We have also detected, in nearly all the systems, a photometric modulation, ascribable to photospheric surface inhomogeneities, which is correlated with the orbital period, suggesting a synchronization between rotational and orbital periods. For some systems, a variation of the H$\alpha$ line intensity, with a possible phase-dependent behavior, has also been detected.'
author:
- 'A. Frasca, P. Guillout, E. Marilli, R. Freire Ferrero, K. Biazzo'
title: 'Newly discovered active binaries in the RasTyc sample of stellar X-ray sources'
---
Introduction {#sec:Intro}
============
The cross-correlation between the ROSAT All-Sky Survey ($\simeq$ 150000 sources) and the TYCHO mission ($\simeq$ 1000000 stars) catalogues has selected about 14000 stellar X-ray sources (RasTyc sample, [@Guillout99]). Although most of these soft X-ray sources are expected to be the youngest stars in the solar neighborhood, neither the contamination by older RS CVn systems nor the fraction of BY Dra binaries is actually known. This information is, however, of fundamental importance for studying the recent local star formation history and, for instance, for putting constraints on the scale height of the spatial distribution of nearby young stars around the galactic plane. We thus started a spectroscopic observation campaign aimed at a deep characterisation of a representative sub-sample of the RasTyc population. In addition to yielding chromospheric activity levels (from H$\alpha$ emission) and rotational velocities (from Doppler broadening), high-resolution spectroscopic observations allow us to infer ages (by means of lithium abundance) and to single out spectroscopic and active binaries. In this work we present some preliminary results of follow-up observations, both photometric and spectroscopic, of some RasTyc stars performed with the 193-cm telescope of OHP and the 91-cm telescope of the Catania Astrophysical Observatory (OAC).
In particular, we analyse six new late-type binaries, for which we have obtained good radial velocity curves and orbital solutions. An accurate spectral classification for the single-lined binaries has been also performed and the projected rotational velocity $v\sin i$ has been measured for all stars. The chromospheric activity level and the lithium content have been also investigated using as diagnostics the H$\alpha$ emission and the Li[i]{}$\lambda\,6708$ line, respectively.
  RasTyc   Name         P$_{\rm orb}$   $\gamma$        $k$ (P/S)       $M\sin^3i$      $v\sin i$ (P/S)   Sp. Type      $B-V$   W$_{\rm LiI}$
                        (days)          (km s$^{-1}$)   (km s$^{-1}$)   ($M_{\odot}$)   (km s$^{-1}$)                           (mÅ)
  -------- ------------ --------------- --------------- --------------- --------------- ----------------- ------------- ------- --------------
  193137   HD 183957    7.954           $-$29.0         57.5/63.1       0.758/0.691     4.0/4.4           K0-1V/K1-2V   0.84    $< 10$
  215940   OT Peg       1.748           $-$27.0         16.6/23.2       0.007/0.005     9.2/9.4           K0V/K3-5V     0.79    50
  221428   BD+334462    10.12           $-$20.9         59.2/60.4       0.905/0.887     16.1/32.6         G2 + K        0.70    15:
  040542   DF Cam       12.60           $-$19.5         22.8            SB1             35                K2III         1.14    —
  072133   V340 Gem     36.20           +37.0           42.1            SB1             40                G8III         0.83    70
  102623   BD+382140    15.47           +47.4           31.3            SB1             11.5              K1IV          1.03    40
Observations and reduction {#sec:Obs}
==========================
Spectroscopy
------------
Spectroscopic observations have been obtained at the [*Observatoire de Haute Provence*]{} (OHP) and at the [*M.G. Fracastoro*]{} station (Mt. Etna, 1750 m a.s.l.) of Catania Astrophysical Observatory (OAC).
At OHP we observed in 2000 and 2001 with the E[lodie]{} echelle spectrograph connected to the 193-cm telescope. The 67 orders recorded by the CCD detector cover the 3906-6818 Å wavelength range with a resolving power of about 42000 ([@Bar96]). The E[lodie]{} spectra were automatically reduced on-line during the observations and the cross-correlation with a reference mask was produced as well.
The observations carried out at Catania Observatory have been performed in 2001 and 2002 with the REOSC echelle spectrograph at the 91-cm telescope. The spectrograph is fed by the telescope through an optical fiber (UV - NIR, $200\,\mu m$ core diameter) and is placed in a stable position in the room below the dome level. Spectra were recorded on a CCD camera equipped with a thinned back-illuminated SITe CCD of 1024$\times$1024 pixels (size 24$\times$24 $\mu$m). The échelle crossed configuration yields a resolution of about 14000, as deduced from the FWHM of the lines of the Th-Ar calibration lamp. The observations have been made in the red region. The detector allows us to record five orders in each frame, spanning from about 5860 to 6700 Å.
The OAC data reduction was performed by using the [echelle]{} task of IRAF[^1] package following the standard steps: background subtraction, division by a flat field spectrum given by a halogen lamp, wavelength calibration using the emission lines of a Th-Ar lamp, and normalization to the continuum through a polynomial fit.
Photometry
----------
The photometric observations have been carried out in 2001 and 2002 in the standard $UBV$ system also with the 91-cm telescope of OAC and a photon-counting refrigerated photometer equipped with an EMI 9789QA photomultiplier, cooled to $-15\degr$C. The dark noise of the detector, operated at this temperature, is about $1$ photon/sec.
For each field of the RasTyc sources, we have chosen two or three stars with known $UBV$ magnitudes to be used as local standards for the determination of the photometric instrumental “zero points”. Additionally, several standard stars, selected from the list of Landolt ([@Lan92]), were also observed during the run in order to determine the transformation coefficients to the Johnson standard system.
A typical observation consisted of several integration cycles (from 1 to 3, depending on the star brightness) of 10, 5, 5 seconds, in the $U$, $B$ and $V$ filter, respectively. A 21$\arcsec$ diaphragm was used. The data were reduced by means of the photometric data reduction package PHOT designed for photoelectric photometry of Catania Observatory ([@LoPr93]). Seasonal mean extinction coefficient for Serra La Nave Observatory were adopted for the atmospheric extinction correction.
Results
=======
Radial velocity and photometry {#sec:RV}
------------------------------
The radial velocity (RV) measurements for the E[lodie]{} data have been performed onto the cross-correlation functions (CCFs) produced on-line during the data acquisition.
Radial velocities for OAC spectra were obtained by cross-correlation of each echelle spectral order of the RasTyc spectra with that of bright radial velocity standard stars. For this purpose the IRAF task [fxcor]{}, that computes RVs by means of the cross-correlation technique, was used.
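The principle behind the cross-correlation technique can be sketched on a toy spectrum (an illustration, not the IRAF [fxcor]{} implementation): on a grid uniform in $\ln\lambda$, a Doppler shift becomes a constant pixel offset, so the lag of the CCF peak yields the radial velocity.

```python
import numpy as np

c = 299792.458  # speed of light, km/s

# grid uniform in ln(lambda): a Doppler shift is then a constant pixel offset
loglam = np.linspace(np.log(5900.0), np.log(6600.0), 4000)
step = loglam[1] - loglam[0]

def spectrum(v):
    """Toy spectrum: one Gaussian absorption line, Doppler-shifted by v (km/s)."""
    lam0 = 6200.0 * (1.0 + v / c)
    lam = np.exp(loglam)
    return 1.0 - 0.5 * np.exp(-0.5 * ((lam - lam0) / 2.0) ** 2)

template = spectrum(0.0)    # radial-velocity standard star
observed = spectrum(57.5)   # target star (illustrative velocity)

# the peak of the cross-correlation function gives the pixel lag
a, b = observed - observed.mean(), template - template.mean()
ccf = np.correlate(a, b, mode="full")
lag = int(np.argmax(ccf)) - (len(a) - 1)
v_meas = (np.exp(lag * step) - 1.0) * c   # recovered to within a pixel (~8 km/s here)
```

In practice the CCF peak is fitted (e.g. with a Gaussian) to reach sub-pixel velocity precision.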
The wavelength ranges for the cross-correlation were selected to exclude the H$\alpha$ and Na[I]{} D$_2$ lines, which are contaminated by chromospheric emission and have very broad wings. The spectral regions heavily affected by telluric lines (e.g. the O$_2$ lines in the $\lambda~6276-\lambda~6315$ region)
---
address: |
Theoretical Division, Los Alamos National Laboratory\
Los Alamos, New Mexico 87545, USA\
E-mail: nix@t2nix.lanl.gov [and]{} moller@moller.lanl.gov
author:
- and PETER MÖLLER
title: 'MASSES AND DEFORMATIONS OF NEUTRON-RICH NUCLEI'
---
Introduction {#intro}
============
The accurate calculation of the ground-state mass and deformation of a nucleus far from stability, such as one of the neutron-rich nuclei considered in this conference, remains one of the most fundamental challenges of nuclear theory. Toward this goal, two major approaches—which also allow the simultaneous calculation of a wide variety of other nuclear properties—have been developed (along with numerous semi-empirical formulas for masses alone).
At the most fundamental level, fully selfconsistent microscopic theories, starting with an underlying nucleon-nucleon interaction, have seen progress in both the nonrelativistic Hartree-Fock approximation and more recently the relativistic mean-field approximation. Although microscopic theories offer great promise for the future, their current accuracies are typically a few MeV, which is insufficient for most practical applications. At the next level of fundamentality, the macroscopic-microscopic method—where the smooth trends are obtained from a macroscopic model and the local fluctuations from a microscopic model—has been used in several recent global calculations that are useful for a broad range of applications.
We will concentrate here on the 1992 version of the finite-range droplet model [@MNMS; @MNK], with particular emphasis on its reliability for extrapolations to new regions of nuclei, but will also briefly discuss two other models of this type [@APDT; @MS].
Finite-Range Droplet Model {#frdms}
==========================
In the finite-range droplet model, which takes its name from the macroscopic model that is used, the microscopic shell and pairing corrections are calculated from a realistic, diffuse-surface, folded-Yukawa single-particle potential by use of Strutinsky’s method [@S]. In 1992 we made a new adjustment of the constants of an improved version of this model to 28 fission-barrier heights and to 1654 nuclei with $N,Z \ge 8$ ranging from $^{16}$O to $^{263}$106 whose masses were known experimentally in 1989 [@A]. The resulting microscopic enhancement to binding for even-even nuclei throughout the periodic system is shown in Fig. \[enhab\].
Fig. 1 \[enhab\]: Calculated additional binding energy of even-even nuclei relative to the macroscopic energy of spherical nuclei, illustrating the crucial role of microscopic corrections.
This model has been used to calculate the ground-state mass, deformation, microscopic correction, odd-proton and odd-neutron spins and parities, proton and neutron pairing gaps, binding energy, one- and two-neutron separation energies, quantities related to $\beta$-delayed one- and two-neutron emission probabilities, $\beta$-decay energy release and half-life with respect to Gamow-Teller decay, one- and two-proton separation energies, and $\alpha$-decay energy release and half-life for 8979 nuclei with $N,Z \ge 8$ ranging from $^{16}$O to $^{339}$136 and extending from the proton drip line to the neutron drip line [@MNMS; @MNK]. These tabulated quantities are available electronically on the World Wide Web at the Uniform Resource Locator [http://t2.lanl.gov/publications/publications.html]{}.
Fig. 2 \[quad\]: Calculated quadrupole deformations of even-even nuclei, illustrating the transitions from spherical to deformed nuclei as one moves away from magic numbers.
Ground-State Deformations {#def}
=========================
In our calculations, we specify a general nuclear shape in terms of deviations from a spheroidal shape by use of Nilsson’s $\epsilon$ parameterization [@Ni]. The ground-state shape is determined by initially minimizing the nuclear potential energy of deformation with respect to the two symmetric shape coordinates $\epsilon_2$ and $\epsilon_4$. During this minimization, we include a prescribed smooth dependence of the higher symmetric deformation $\epsilon_6$ on the two independent coordinates $\epsilon_2$ and $\epsilon_4$. This dependence is determined by minimizing the macroscopic potential energy of $^{240}$Pu with respect to $\epsilon_6$ for fixed values of $\epsilon_2$ and $\epsilon_4$. We then vary separately $\epsilon_6$ and the mass-asymmetric, or octupole, deformation $\epsilon_3$, with $\epsilon_2$ and $\epsilon_4$ held fixed at their previously determined values, to calculate any additional lowering in energy from these two degrees of freedom.
For presentation purposes, it is sometimes more convenient to express the nuclear ground-state shape in terms of the $\beta$ parameterization, where the shape coordinates represent the coefficients in an expansion of the radius vector to the nuclear surface in a series of spherical harmonics. Figures \[quad\] and \[hex\] show our calculated quadrupole and hexadecapole deformations, respectively, in terms of $\beta_2$ and $\beta_4$, which are determined by transforming our calculated shapes from the $\epsilon$ parameterization.
Fig. 3 \[hex\]: Calculated hexadecapole deformations of even-even nuclei, illustrating the transitions from bulging to indented equatorial regions as one moves from smaller to larger magic numbers.
The inclusion of the $\epsilon_6$ and $\epsilon_3$ shape degrees of freedom is crucial for the isolation of such physical effects as the Coulomb redistribution energy, which arises from a central density depression [@MNMS2]. As illustrated in Fig. \[eps6\], an independent variation of the symmetric deformation $\epsilon_6$ is important for several regions of nuclei. For even-even nuclei, the maximum reduction in energy relative to that for a prescribed smooth $\epsilon_6$ dependence is 1.28 MeV and occurs for $^{252}$Fm. As illustrated in Fig. \[eps3\], the mass-asymmetric deformation $\epsilon_3$ is important for nuclei in a few isolated regions. For even-even nuclei, the maximum reduction in energy relative to that for a symmetric shape is 1.29 MeV and occurs for the neutron-rich nucleus $^{194}$Gd. For even-even nuclei close to the valley of $\beta$-stability, the maximum reduction in energy relative to that for a symmetric shape is 1.20 MeV and occurs for $^{222}$Ra.
Fig. 4 \[eps6\]: Calculated reduction in energy of even-even nuclei arising from an independent variation in $\epsilon_6$, relative to that for shapes with a prescribed smooth $\epsilon_6$ dependence. Note that the sign of the $\epsilon_6$ correction is reversed in this plot for clarity of display.
Fig. 5 \[eps3\]: Calculated reduction in energy of even-even nuclei arising from the inclusion of $\epsilon_3$ deformations, relative to that for symmetric shapes. Note that the sign of the $\epsilon_3$ correction is reversed in this plot for clarity of display.
Reliability for Extrapolations to New Regions of Nuclei {#extraps}
=======================================================
For the original 1654 nuclei included in the adjustment, the theoretical error, determined by use of the maximum-likelihood method with no contributions from experimental errors [@MNMS; @MNK], is 0.669 MeV. Although some large systematic errors exist for light nuclei, they decrease significantly for heavier nuclei.
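As a simplified illustration of such an error estimate (a plain rms over toy numbers; the actual analysis uses a maximum-likelihood estimator and removes experimental-error contributions):

```python
import math

def rms_error(m_exp, m_calc):
    """Root-mean-square deviation between measured and calculated masses (MeV)."""
    n = len(m_exp)
    return math.sqrt(sum((e - c) ** 2 for e, c in zip(m_exp, m_calc)) / n)

# toy mass values in MeV -- not actual FRDM data
measured   = [100.1, 250.4, 399.2, 512.8]
calculated = [100.0, 250.9, 399.9, 512.3]
err = rms_error(measured, calculated)
```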
Between 1989 and 1996, the masses of 371 additional nuclei heavier than $^{16}$O have been measured [@AW; @H], which provides an ideal opportunity to test the ability of mass models to extrapolate to new regions of nuclei whose masses were not included in the original adjustment. Figure \[frdm\] shows as a function of the number of neutrons from $\beta$-stability the individual deviations between these newly measured masses and those predicted by the 1992 finite-range droplet model. The new nuclei fall into three categories, with the first category corresponding to 273 nuclei lying on both sides of the valley of $\beta$-stability [@AW]. The second category corresponds to 91 proton-rich nuclei produced by fragmentation of $^{209}$Bi projectiles incident on a thick Be target in the experimental storage ring (ESR) at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany [@K]. The third category corresponds to seven proton-rich superheavy nuclei discovered in the separator for heavy-ion reaction products (SHIP) at GSI whose masses are estimated by adding the highest $\alpha$-decay energy release at each step in the decay chain to known masses [@H]. This procedure could seriously overestimate the experimental masses of some of the heavier nuclei because different energy releases have been observed in some cases [@H]. To account for this uncertainty, we have assigned a mass error of 0.5 MeV for each of these seven nuclei. Also, to account for errors of unknown origin, we have included an additional 0.076 MeV contribution
---
address:
- 'The Alan Turing Institute, 96 Euston Road, London NW1 2DB'
- 'London School of Economics and Political Science, Houghton Street, London, WC2A 2AE'
author:
- Nikki Sonenberg
- Edward Wheatcroft
- Henry Wynn
bibliography:
- 'References.bib'
title: Majorisation as a theory for uncertainty
---
Introduction {#sec:intro}
============
Majorisation, also called rearrangement inequalities, yields a type of stochastic ordering in which two or more distributions are rearranged in decreasing order of their probability mass (discrete case) or probability density (continuous case) and then compared. When methods of majorisation are applied to probabilities and probability distributions, they provide a concept of the peakedness. This is independent of the ‘location’ of the probabilities, i.e., of the support of the distribution. This geometry-free property makes majorisation a good candidate as a foundation for the idea of uncertainty which is the focus of this paper. Majorisation is a partial, not total, ordering and implies that one or more of the class of order-preserving functions with respect to the ordering might be used as an entropy or ‘uncertainty metric’. Many well-known metrics fall into this category, one of which is Shannon entropy, widely used in information theory.
Consider the question, ‘is uncertainty geometric?’. If we think our friend is in London, Birmingham or Edinburgh with probabilities $p_1,p_2, p_3$ respectively (where $p_1+p_2+p_3=1$), does it make any difference to our uncertainty, however we measure it, if the locations are changed to Reading, Manchester and Glasgow with the same probabilities, respectively? In fact, could we just permute the order of the first three cities? If our answer is no, i.e., that there is no difference, then we are in the realm of entropy and information. In the above cases, the Shannon entropy is $-\{p_1 \log(p_1) + p_2 \log(p_2) + p_3 \log(p_3)\}$ and we see that the subscript simply serves as a way to collect the probabilities, not to locate them in the geography of the UK.
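This location-independence is immediate to verify numerically: permuting the probabilities leaves the Shannon entropy unchanged.

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i; depends only on the multiset of probabilities."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

h_cities = shannon_entropy([0.5, 0.3, 0.2])   # London, Birmingham, Edinburgh
h_moved  = shannon_entropy([0.2, 0.5, 0.3])   # same probabilities, relabelled
assert abs(h_cities - h_moved) < 1e-12        # the geography plays no role
```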
Another element of the majorisation approach is that it is, in a well-defined sense, dimension-free. In this paper, we show how this approach enables us to create, for a multivariate distribution, a univariate decreasing rearrangement (DR) by considering a decreasing threshold and ‘squashing’ all of the multivariate mass for which the density is above the threshold to a univariate mass adjacent to the origin. This creates the possibility of comparing multivariate distributions with different numbers of dimensions.
We introduce a set of operations that can be applied to study uncertainty in a range of settings and illustrate these with examples. We see this work as a merging of methods used in applied mathematics and statistics with general methodology for the study of uncertainty. The methods discussed provide a foundation for the extension to Bayesian probabilities, a topic for further work.
There is a large literature on majorisation. The classical results of Hardy, Littlewood and Polya [@Hardy1988] led to developments in a wide variety of fields. Marshall and Olkin’s [@Marshall2011] key volume on majorisation (later extended in [@Marshall2011]) built on these results. Applications in mathematical economics can be found in portfolio theory and income distributions built on classical work by Lorenz [@Lorenz1905] and Gini [@Gini1914] (see recent work by Arnold and Sarabia [@Arnold2018]). Majorisation has also been used in chemistry for mixing liquids and powders [@Klein1997] and in quantum information [@Partovi2011]. Statistical applications include experimental design [@Giovagnoli1987; @Pukelsheim1987], and in application to testing [@eaton1974monotonicity; @tong1988some]. Majorisation has been employed in the area of proper scoring rules by considering the partial and total ordering in the class of well-calibrated experts [@Degroot1986; @Degroot1985; @Degroot1988]. The common feature of these studies is the need to compare and quantify the degree of variation between distributions. We note that the theory of the decreasing rearrangement of functions, which we have used to a limited extent for probability densities, can be considered an area of functional analysis particularly in the area of inequalities of the kind which say that a rearrangement of a function increases or decreases some special functional [@lieb2001graduate].\
This paper is organised as follows. In the remainder of this section we introduce the concept of majorisation of probabilities and present related concepts and previous work. In Section \[sec:cont\_major\], we present results for the continuous case. In Section \[sec:multivariate\], we present the key idea of reducing multivariate distributions into a one dimensional decreasing rearrangement and illustrate this with examples. In Section \[sec:operations\], we collect together operations for the study of uncertainty and, in Section \[sec:algebra\], a lattice and an algebra for uncertainty. In Section \[sec:empirical\], we discuss empirical applications. Concluding remarks are given in Section \[sec:conclusion\].
Discrete majorisation and related work {#sec:discrete}
--------------------------------------
We introduce majorisation for discrete distributions following Marshall *et al.* [@Marshall2011].
Consider two discrete distributions with $n$-vectors of probabilities $$p_1=(p^{(1)}_1,\ldots, p_n^{(1)}) \quad \text{ and } \quad p_2=(p^{(2)}_1,\ldots, p_n^{(2)}),$$ where $\sum_i p_i^{(1)}=\sum_i p_i^{(2)}=1$. Placing the probabilities in decreasing order: $$\tilde{p}^{(1)}_1 \geq \ldots \geq \tilde{p}_n^{(1)}\quad \text{ and } \quad \tilde{p}^{(2)}_1 \geq \ldots \geq \tilde{p}_n^{(2)},$$ it is then said that $p_2$ majorises $p_1$, written $p_1 \preceq p_2$, when, for all $k=1,\ldots,n$, $$\sum_{i=1}^k \tilde{p}_i^{(1)} \leq \sum_{i=1}^k \tilde{p}_i^{(2)}$$ (with equality at $k=n$, since both vectors sum to one).
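The partial-sum test above translates directly into a few lines of NumPy. The following is an illustrative sketch (the function name, tolerance, and example vectors are our own):

```python
import numpy as np

def majorises(p2, p1):
    """Return True if p1 <= p2 in the majorisation order (p1 is more 'spread out')."""
    s1 = np.sort(p1)[::-1].cumsum()   # partial sums of the decreasing rearrangement
    s2 = np.sort(p2)[::-1].cumsum()
    return bool(np.all(s1 <= s2 + 1e-12))

uniform = np.ones(4) / 4                   # maximally disordered
point   = np.array([1.0, 0.0, 0.0, 0.0])   # maximally concentrated
p       = np.array([0.4, 0.3, 0.2, 0.1])

assert majorises(point, p) and majorises(p, uniform)
assert not majorises(uniform, p)           # the ordering is only partial
```

The point mass majorises every vector and the uniform vector is majorised by every vector, so the two extremes of the ordering behave as expected.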
This definition of majorisation is a partial ordering; that is, not all pairs of vectors are comparable. As argued by Partovi [@Partovi2011], this is not a shortcoming of majorisation but rather a consequence of its rigorous protocol for ordering uncertainty. Marshall *et al.* [@Marshall2011] provide several equivalent conditions to $p_1\preceq p_2$. We consider three (A1-A3) of the best known in detail below.\
(A1) There is a doubly stochastic $n\times n$ matrix $P$, such that $$\begin{aligned}
\label{equiv:doubly}
p_1 = P p_2.\end{aligned}$$ This is a well-known result of Hardy, Littlewood and Pólya [@Hardy1988]. The intuition behind this result is that a probability vector which is a mixture of the permutations of another is more disordered. The relationship between a stochastic matrix $P$ and the stochastic transformation function in the refinement concept was presented by DeGroot [@Degroot1988].\
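Condition (A1) is easy to illustrate numerically. By Birkhoff's theorem a convex combination of permutation matrices is doubly stochastic; the sketch below (toy vectors of our own) verifies that $p_1 = P p_2$ is then majorised by $p_2$:

```python
import numpy as np

p2 = np.array([0.5, 0.3, 0.2])
I = np.eye(3)
Pi = I[[1, 2, 0]]                 # a 3x3 cyclic permutation matrix
P = 0.6 * I + 0.4 * Pi            # doubly stochastic (Birkhoff's theorem)
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

p1 = P @ p2                       # mixing permutations can only increase disorder
s1 = np.sort(p1)[::-1].cumsum()   # partial sums of decreasing rearrangements
s2 = np.sort(p2)[::-1].cumsum()
assert np.all(s1 <= s2 + 1e-12)   # hence p1 is majorised by p2
```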
(A2) Schur [@Schur1923] demonstrated that (A1), with $P$ doubly stochastic, is equivalent to the following condition: for all continuous convex functions $h( \cdot )$, $$\begin{aligned}
\label{condition3}
\sum_{i=1}^n h(\tilde{p}_i^{(1)}) \leq \sum_{i=1}^n h(\tilde{p}_i^{(2)}).\end{aligned}$$
The sums in (\[condition3\]) are special cases of the more general Schur-convex functions on probability vectors. Details on the characteristics and properties of Schur-convex functions are provided by Marshall *et al.* [@Marshall2011]. In particular, entropy functions such as Shannon information, for which $h(y)=y\log(y)$, are Schur-convex. We also highlight a special case of the Tsallis information, for which $$h(y)=\frac{y^{\gamma+1}-y}{\gamma}, \quad\gamma>0,$$ where, in the limit $\gamma\rightarrow 0$, Shannon information is obtained. The condition (A2) is equivalent to the majorisation ordering for distributions, and we consider a continuous extension of Equation (\[condition3\]) in Section \[sec:cont\_major\]. The condition (A2) indicates that the ordering imposed by majorisation is stronger than the ordering by any single entropic measure and, in a sense, is equivalent to all such (entropic) measures taken collectively [@Partovi2011].\
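A quick numerical check of (A2): for a majorised pair $p_1 \preceq p_2$, any convex $h$ should give $\sum_i h(\tilde p_i^{(1)}) \le \sum_i h(\tilde p_i^{(2)})$. The vectors and the particular convex functions below (including Shannon's $h(y)=y\log y$) are illustrative choices of ours:

```python
import numpy as np

p2 = np.array([0.6, 0.3, 0.1])    # more concentrated
p1 = np.array([0.4, 0.35, 0.25])  # more spread out; p1 is majorised by p2

shannon = lambda p: float(np.sum(p[p > 0] * np.log(p[p > 0])))  # h(y) = y log y
square  = lambda p: float(np.sum(p ** 2))                        # h(y) = y^2
negroot = lambda p: float(np.sum(-np.sqrt(p)))                   # h(y) = -sqrt(y)

# all three h are convex, so the induced sums are Schur-convex
for h in (shannon, square, negroot):
    assert h(p1) <= h(p2)
```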
(A3) Let $\pi(p) = (p_{\pi(1)}, \ldots, p_{\pi(n)})$ be the vector whose entries are a permutation $\pi$ of the entries of a probability vector $p$, with symmetric group $S$, then $$\begin{aligned}
p_1 \in \mbox{conv}_{\pi \in S} (\{\pi(p_2)\}).\end{aligned}$$ That is to say, $p_1$ is in the convex hull of all permutations of entries of $p_2$. Majorisation is a special case of group-majorisation (G-majorisation) for the symmetric (per
---
abstract: 'Given a directed graph $D=(V,A)$ we define its intersection graph $I(D)=(A,E)$ to be the graph having $A$ as a node-set and two nodes of $I(D)$ are adjacent if their corresponding arcs share a common node that is the tail of at least one of these arcs. We call these graphs facility location graphs since they arise from the classical uncapacitated facility location problem. In this paper we show that facility location graphs are hard to recognize in general, but easy to recognize when the underlying graph is triangle-free. We also determine the complexity of the vertex coloring, the stable set and the facility location problems on that class.'
author:
- Mourad Baïou
- Laurent Beaudou
- Zhentao Li
- Vincent Limouzy
bibliography:
- 'paper-FLG.bib'
title: On a class of intersection graphs
---
Introduction
============
In this paper we study the following class of intersection graphs. Given a directed graph $D=(V,A)$, we denote by $I(D)=(A,E)$ the [*intersection graph of $D$*]{} defined as follows:
- the node-set of $I(D)$ is the arc-set of $D$,
- two nodes $a=(u,v)$ and $b=(w,t)$ of $I(D)$ are adjacent if one of the following holds: $u=w$ or $v=w$ or $t=u$ or $(u,v)=(t,w)$ (see Figure \[adjac\]).
*(Figure \[adjac\]: the four adjacency cases for arcs $a$ and $b$ of $I(D)$: a common tail; the head of $a$ is the tail of $b$; the head of $b$ is the tail of $a$; and a pair of antiparallel arcs.)*
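The adjacency rule defining $I(D)$ can be stated as a small predicate. The following sketch (function names are ours) encodes the four cases of Figure \[adjac\] and builds $I(D)$ for an arbitrary arc list:

```python
def adjacent(a, b):
    # Arcs a=(u,v) and b=(w,t) of D are adjacent in I(D) iff they share a node
    # that is the tail of at least one of them (the four cases of Figure [adjac]).
    (u, v), (w, t) = a, b
    return a != b and (u == w or v == w or t == u or (u, v) == (t, w))

def intersection_graph(arcs):
    return {a: {b for b in arcs if adjacent(a, b)} for a in arcs}

assert adjacent((0, 1), (0, 2))       # common tail
assert adjacent((0, 1), (1, 2))       # head of a is tail of b
assert adjacent((1, 2), (0, 1))       # head of b is tail of a
assert adjacent((0, 1), (1, 0))       # antiparallel pair
assert not adjacent((1, 0), (2, 0))   # a common head alone does not create an edge

G = intersection_graph([(0, 1), (1, 2), (2, 0)])
assert all(a in G[b] for a in G for b in G[a])  # the relation is symmetric
```

Note that two arcs sharing only their heads are not adjacent, since the shared node is the tail of neither arc.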
We focus on two aspects: the recognition of these intersection graphs and some combinatorial optimization problems in this class. De Simone and Mannino [@Simone] considered the recognition problem and provided a characterization of these graphs based on the structure of the (directed) neighborhood of a vertex. Unfortunately this characterization does not yield a polynomial time recognition algorithm. The intersection graphs we consider arise from the [*uncapacitated facility location problem*]{} (UFLP), defined as follows. We are given a directed graph $D=(V,A)$, costs $f(v)$ of opening a facility at node $v$ and costs $c(u,v)$ of assigning $u$ to $v$ (for each $(u,v) \in A$). We wish to select a subset of facilities to open and an assignment of each remaining node to a selected facility so as to minimize the cost of opening the selected facilities plus the cost of arcs used for assignment.
This problem can be formulated as a linear integer program as follows.
$$\mbox{min } \sum_{(u,v)\in A}c(u,v)x(u,v)+\sum_{v\in V}f(v)y(v)$$
$$\left\{ \begin{aligned}
\sum_{(u,v)\in A}x(u,v) + y(u) = 1 \quad & \forall u\in V,\\
x(u,v)\leq y(v) \quad & \forall (u,v)\in A,\\
x(u,v)\geq 0 \quad & \forall (u,v)\in A, \\
y(v)\geq 0 \quad & \forall v\in V,\\
x(u,v)\in \{0,1\} \quad & \forall (u,v)\in A,\\
y(v)\in \{0,1\} \quad & \forall v\in V.
\end{aligned} \right.$$
If we remove the variables $y(v)$ for all $v$ from the formulation above (substituting $y(u)=1-\sum_{(u,v)\in A}x(u,v)$, as dictated by the equality constraints), we get
$$\mbox{min } \sum_{(u,v)\in A}(c(u,v)-f(u))x(u,v)+\sum_{v\in V}f(v)$$
$$\left\{ \begin{aligned}
\sum_{(u,v)\in A}x(u,v) \leq 1 \quad & \forall u\in V,\\
x(u,v)+\sum_{(v,w)\in A}x(v,w) \leq 1 \quad & \forall (u,v)\in A,\\
x(u,v)\geq 0 \quad & \forall (u,v)\in A,\\
x(u,v)\in \{0,1\} \quad & \forall (u,v)\in A.
\end{aligned} \right.$$
This is exactly the maximal clique formulation of the [*maximum stable set problem*]{} associated with $I(D)$, where the weight of each node $(u,v)$ of $I(D)$ is $f(u)-c(u,v)$. This correspondence is well known in the literature (see [@AVS; @CorTh; @Simone]). We may consider several combinatorial optimization problems on directed graphs that may be reduced to the maximum stable set problem on an undirected graph. For example in [@Chv-Eben], Chvátal and Ebenegger reduce the max cut problem in a directed graph $D=(V,A)$ to the maximum stable set problem in the following intersection graph, called the [*line graph of a directed graph*]{}: we assign a node to each arc $a\in A$ and two nodes are adjacent if the head of one (corresponding) arc is the tail of the other. They prove that recognizing such graphs is <span style="font-variant:small-caps;">np</span>-complete. Balas [@Balas] considered the asymmetric assignment problem. He defined an intersection graph of a directed graph $D$ where nodes are arcs of $D$ and two nodes are adjacent if the two corresponding arcs have the same tail, the same head or the same extremities without being parallel. Balas uses this correspondence to develop new facets for the asymmetric assignment polytope.
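The correspondence between the UFLP and the maximum stable set problem on $I(D)$ can be checked by brute force on a toy instance (the digraph and the cost values below are our own): the UFLP optimum equals $\sum_v f(v)$ minus the maximum weight of a stable set of $I(D)$ with node weights $f(u)-c(u,v)$.

```python
from itertools import combinations

V = [0, 1, 2]
A = [(0, 1), (1, 2), (2, 0)]
f = {0: 3, 1: 2, 2: 4}                 # facility opening costs (toy values)
c = {(0, 1): 1, (1, 2): 1, (2, 0): 2}  # assignment costs (toy values)

def uflp_opt():
    best = float("inf")
    for mask in range(1, 1 << len(V)):             # enumerate open facility sets
        open_set = {v for v in V if mask >> v & 1}
        cost = sum(f[v] for v in open_set)
        for u in V:
            if u in open_set:
                continue
            options = [c[(u, v)] for (uu, v) in A if uu == u and v in open_set]
            if not options:
                cost = float("inf")                # u cannot be assigned: infeasible
                break
            cost += min(options)
        best = min(best, cost)
    return best

def adjacent(a, b):
    (u, v), (w, t) = a, b                          # adjacency in I(D), as defined above
    return a != b and (u == w or v == w or t == u or (u, v) == (t, w))

def max_stable_weight():
    best = 0.0                                     # the empty stable set has weight 0
    for r in range(len(A) + 1):
        for S in combinations(A, r):
            if all(not adjacent(a, b) for a, b in combinations(S, 2)):
                best = max(best, sum(f[u] - c[(u, v)] for (u, v) in S))
    return best

# UFLP optimum = total opening cost minus the max-weight stable set in I(D)
assert uflp_opt() == sum(f.values()) - max_stable_weight() == 7
```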
We may generalize the notion of line graphs to directed graphs in many ways. The simplest involves deciding
1. if arcs that share a head are adjacent,
2. if arcs that share a tail are adjacent, and
3. if two arcs are adjacent if the head of one arc is the tail of the other.
It is not too difficult to show that the recognition problem is easy if we choose non-adjacency for (3).
Choosing non-adjacency for (3) means that we could separate all vertices $v$ of
---
abstract: 'We prove several results on the lifespan, regularity, and uniqueness of solutions of the Cauchy problem for the homogeneous complex and real equations (HCMA/HRMA) under various a priori regularity conditions. We use methods of characteristics in both the real and complex settings to bound the lifespan of solutions with prescribed regularity. In the complex domain, we characterize the $C^3$ lifespan of the HCMA in terms of analytic continuation of Hamiltonian mechanics and intersection of complex time characteristics. We use a conservation law type argument to prove uniqueness of solutions of the Cauchy problem for the HCMA. We then prove that the Cauchy problem is ill-posed in $C^3$, in the sense that there exists a dense set of $C^3$ Cauchy data for which there exists no $C^3$ solution even for a short time. In the real domain we show that the HRMA is equivalent to a Hamilton–Jacobi equation, and use the equivalence to prove that any differentiable weak solution is smooth, so that the differentiable lifespan equals the convex lifespan determined in our previous articles. We further show that the only obstruction to $C^1$ solvability is the invertibility of the associated Moser maps. Thus, a smooth solution of the Cauchy problem for HRMA exists for a positive but generally finite time and cannot be continued even as a weak $C^1$ solution afterwards. Finally, we introduce the notion of a “leafwise subsolution" for the HCMA that generalizes that of a solution, and many of our aforementioned results are proved for this more general object.'
address:
- 'Department of Mathematics, Stanford University, Stanford, CA 94305, USA'
- 'Department of Mathematics, Northwestern University, Evanston, IL 60208, USA'
author:
- 'Yanir A. Rubinstein'
- Steve Zelditch
title: 'The Cauchy problem for the homogeneous Monge–Ampère equation, III. Lifespan'
---
Introduction
============
This article is the third in a series [@RZ1; @RZ2] whose aim is to study existence, uniqueness, and regularity of solutions of the initial value problem (IVP) for geodesics in the space $$\label{HoEq}
\textstyle\calH_\o
=
\{\vp\in C^{\infty}(M) \,:\, \omega_\vp:= \omega+\i{\partial{\bar\partial}}\vp>0\}$$ of Kähler metrics on a compact Kähler manifold $(M, \omega)$ in the class of $\omega$, where $\calH_\o$ is equipped with the metric [@M; @S; @D1] $$\gM(\zeta,\eta)_{\vp}:= \int_M
\zeta\eta\, {\omega_{\vp}^m},\quad \vp \in {\mathcal{H}}_{\o},\quad
\zeta,\eta \in T_{\vp} {\mathcal{H}}_{\o}\isom C^\infty(M).$$ This initial value problem is a special case of the Cauchy problem for the homogeneous complex/real Monge–Ampère equation (HCMA/HRMA). The IVP is long believed to be ill-posed, and a motivating problem is to prove that this is indeed the case, to determine which initial data give rise to solutions, especially those of relevance in geometry (‘geodesic rays’), to construct the solutions, and to determine the lifespan $T_\span$ of solutions for general initial data.
In this article, we prove a number of results on the lifespan, regularity, and uniqueness of solutions of the Cauchy problem for the HCMA and the HRMA equations under various a priori regularity conditions. The results are based on a study of the ‘characteristics’ of the HCMA/HRMA equations, or more precisely on the relations between solutions of these equations and Hamiltonian mechanics, and to solutions of related Hamilton–Jacobi equations.
First, we characterize the $C^3$ lifespan of the HCMA and prove uniqueness of classical solutions. We then introduce the notion of a leafwise subsolution of the HCMA that generalizes the notion of a solution, and derive obstructions to its existence. This can be considered as a method of ‘complex characteristics’. Combining these results we establish that the IVP for the HCMA is locally ill-posed in $C^3$. This puts a restriction on Cauchy data, and addresses questions about the Cauchy problem raised by the work of Mabuchi, Semmes, and Donaldson [@M p. 238],[@S],[@D1 p. 27]. We then study the notion of a leafwise subsolution for the HRMA, and prove its uniqueness. This allows us to characterize the Legendre transform subsolution of the prequels [@RZ1; @RZ2], and determine the $C^1$ lifespan of the HRMA. A key ingredient here is an apparently new connection between HRMA and Hamilton–Jacobi equations.
Obstructions to solvability, uniqueness, and the smooth lifespan of the HCMA
----------------------------------------------------------------------------
We begin in the complex domain, where Semmes and Donaldson [@S; @D1] gave a formal solution of the IVP in terms of holomorphic characteristics. Namely, the Cauchy data $(\omega_{\vp_0}, \dot{\vp}_0)$ of the IVP determines a Hamiltonian flow $\exp t X_{\dot{\vp}_0}^{\omega_{\vp_0}}$. If the orbits $\exp t X_{\dot{\vp}_0}^{\omega_{\vp_0}} z$ of the flow admit analytic continuations in time up to imaginary time $T$, one obtains a family of maps $$\label
{MAPS}
f_\tau(z) = \exp -\sqrt{-1} \tau X_{\dot{\vp}_0}^{\omega_{\vp_0}} z: S_T \times M \to M,$$ where $$S_T=[0,T]\times{\mathbb{R}},$$ with $\tau=s+\i t\in S_T$, $s\in[0,T]$ and $t\in{\mathbb{R}}$. The formal solution $\vp_s$ is then given by the formula, $$\label
{FORMAL}
(f_s^{-1})^\star \omega_{\vp_0} - \omega_{\vp_0} = \i {\partial{\bar\partial}}\vp_s, \quad s\in[0,T].$$
There are several obstructions to solving the IVP in this manner, which must vanish if there exists a $C^3$ solution. The most obvious one is that the Hamilton orbits need to possess analytic continuations to a strip $S_T$. This analytic extension problem for orbits should already be an ill-posed problem, and we say that the Cauchy data is “$T$-good" if the extension exists and $f_s$ is smooth (see Definitions \[TGoodDef\]–\[TGoodDef\]). This is a Cauchy problem for a holomorphic map into a nonlinear space, and we do not study it directly here; but in §\[1.2\] we describe some results on obstructions to closely related linear Cauchy problems.
In several settings, such as torus-invariant Cauchy data on toric varieties, the Hamilton orbits for smooth Cauchy data do possess analytic continuations (see Proposition \[ToricLifespanProp\] below). As the following theorem shows, the only additional obstruction to solving the HCMA smoothly is that the space-time complex Hamilton orbits may intersect. To state the result precisely, let $(M,J,\o)$ be a compact closed connected Kähler manifold of complex dimension $n$. The IVP for geodesics is equivalent to the following Cauchy problem for the HCMA $$\label{HCMARayEq}
\begin{aligned}
(\pi_2^\star\omega + \i{\partial{\bar\partial}}\vp)^{n+1}
=
0,
\quad
(\pi_2^\star\omega + \i{\partial{\bar\partial}}\vp)^{n}
\ne
0,
\;\; &\mbox{on} \; S_{T} \times M,
\cr
\vp(0,t,\,\cdot\,)
=
\vp_0(\,\cdot\,), \quad
\partial_s\vp(0,t,\,\cdot\,)
=
\dot\vp_0(\,\cdot\,), \;\; &\mbox{on} \; \{0\}\times{\mathbb{R}}\times M.
\end{aligned}$$ where $\pi_2: S_{T} \times M \to M$ is the projection, and where $\vp$ is required to be $\pi_2^\star\o$-plurisubharmonic (psh) on $S_T\times M$. The rest of the notions in the following theorem are defined in §\[HamFlowsHCMASection\].
\[HCMACauchyCthreeThm\] [(Smooth lifespan and uniqueness)]{} Let $(M,\omega_{\vp_0})$ be a compact Kähler manifold. The Cauchy problem (\[HCMARayEq\]) with $\omega_{\vp_0}\in C^1$ and $\dot\vp_0\in C^3(M)$ has a solution in $C^3(S_T \times M)\cap PSH(S_T \times M,\pi_2^\star\o)$ if and only if the Cauchy data is $T$-good and the maps $f_s$ defined by are $C^1$ and admit a $C^1$ inverse for each $s\in[
---
author:
- |
Mangesh Gupte[^1]\
Google Inc\
<mangesh@cs.rutgers.edu>
- |
Darja Krushevskaja\
Rutgers University\
<darja@cs.rutgers.edu>
- |
S. Muthukrishnan\
Rutgers University\
<muthu@cs.rutgers.edu>
bibliography:
- 'algorithmica.bib'
title: Analyses of Cardinal Auctions
---
[^1]: This work was done while the author was graduate student at Rutgers University.
---
abstract: 'We establish exact relations between the winding of “energy” (eigenvalue of Hamiltonian) on the complex plane as momentum traverses the Brillouin zone with periodic boundary condition, and the presence of “skin modes” with open boundary condition in non-hermitian systems. We show that the nonzero winding with respect to any complex reference energy leads to the presence of skin modes, and vice versa. We also show that both the nonzero winding and the presence of skin modes share the common physical origin that is the non-vanishing current through the system.'
author:
- Kai Zhang
- Zhesen Yang
- Chen Fang
bibliography:
- 'NonHermitianBib.bib'
title: 'Correspondence between winding numbers and skin modes in non-hermitian systems'
---
[^1]
[^2]
\[s:intro\]Introduction
=======================
Some systems that are coupled to energy or particle sources or drains, or driven by external fields, can be effectively modeled by Hamiltonians having non-hermitian terms[@Persson2000; @Volya2003; @Rotter2009; @Choi2010; @Diehl2011; @Reiter2012]. For example, one may add a diagonal imaginary part in a band Hamiltonian for electrons to represent the effect of finite quasiparticle lifetime[@Shen2018; @Zhou2018; @Papaj2019; @Kozii2017]. One may also introduce an imaginary part to the dielectric constant in Maxwell equations to represent metallic conductivity in a photonic crystal[@Longhi2009; @Zloshchastiev2016; @Sounas2017; @El-Ganainy2018; @Bliokh2019]. As non-hermitian operators in general have complex eigenvalues, the eigenfunctions of Schrödinger equations are no longer static, but decay or increase exponentially in amplitude with time[@Brody2013; @Gong2018].
A topic in recent condensed-matter research is the study of topological properties in band structures, which are generally given by the wave functions, *not* the energy, of all occupied bands (or more generally, a group of bands capped from above and below by finite energy gaps)[@Hasan2010; @Qi2011; @Bernevig2013; @Chiu2016; @Armitage2018]. In non-hermitian systems, however, one immediately identifies a different type of topological number in bands, given by the phase winding of the “energy” (eigenvalue of Hamiltonian), *not* the wave functions, in the Brillouin zone (BZ)[@Shen2018a]. This *winding number*, together with several closely related winding numbers when other symmetries are present, gives a topological classification that is richer than that of the hermitian counterparts[@Lee2016; @Leykam2017; @Gong2018; @Yin2018; @Xiong2018; @Ghatak2019]. Besides the winding of energy in the complex plane, another unique phenomenon recently proposed in non-hermitian systems is the presence of “skin modes” in open systems[@Yao2018; @Yao2018a; @Kunst2018; @Kunst2019; @Lee2019]. A typical spectrum of an open hermitian system consists of a large number of bulk states and, if at all, a small number of edge states; as the system increases in size $L$, the numbers of the bulk and of the edge states increase as $L^d$ and $L^{d-n}$ respectively, where $d$ is the dimension and $0<n\le{d}$. However, in certain non-hermitian systems, a finite fraction, if not all, of the eigenstates are concentrated on one of the edges. These skin modes decay exponentially away from the edges just like edge states, but their number scales as the volume ($L^d$), rather than the area, of the system.
In this Letter, we show an exact relation between the new quantum number, *i. e.*, the winding number of energy with periodic boundary, and the existence of skin modes with open boundary, for any one-band model in one dimension. To do this, we first extend the one-band Hamiltonian with finite-range hopping $H(k)$ to a holomorphic function $H(z)=P_{n+m}(z)/z^m$ ($n,m>0$), where $P_{n+m}(z)$ is an $(n+m)$-polynomial, and the Brillouin zone maps to the unit circle $|z|=1$ (or $z=e^{ik}$). The image of the unit circle under $H(z)$ is the spectrum of the system with periodic boundary, and generally, it forms a loop on the complex plane, $\mathcal{L}_{\mathrm{BZ}}\in\mathbb{C}$. Then we show that as long as $\mathcal{L}_{\mathrm{BZ}}$ has finite interior, or roughly speaking encloses a finite area, skin modes appear as eigenstates with open boundary condition; but when $\mathcal{L}_{\mathrm{BZ}}$ collapses into a curve having no interior on the complex plane, the skin modes disappear. In other words, skin modes with open boundary appear if and only if there exists $E_b\in\mathbb{C}$ with respect to which $\mathcal{L}_{\mathrm{BZ}}$ has nonzero winding. Finally, we show that the winding of the periodic boundary spectrum, and hence the presence of skin modes with open boundary, are related to the total direct current of the system. We prove that if the current vanishes for all possible state distribution functions $n(H,H^\ast)$, the winding and the skin modes also vanish, and vice versa. The relations we establish among nonzero winding, presence of skin modes and non-vanishing current are summarized in Fig. \[fig:1\].
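As a concrete numerical illustration (our choice of model, not one treated explicitly above), consider the simplest asymmetric one-band chain with nearest-neighbour hoppings $t_R\ne t_L$ (a Hatano–Nelson-type model, contained in the general class $H(k)=\sum_r t_r e^{ikr}$). Diagonalizing it with open boundary shows that *all* eigenstates pile up on one edge:

```python
import numpy as np

N, tR, tL = 20, 1.0, 0.5
# H[j+1, j] = tR (hop to the right), H[j, j+1] = tL (hop to the left), open ends
H_obc = np.diag(np.full(N - 1, tR), -1) + np.diag(np.full(N - 1, tL), 1)
vals, vecs = np.linalg.eig(H_obc)

# With open boundary the spectrum is real: H_obc is similar to a hermitian chain
# via the gauge transformation psi_j -> (tR/tL)^(j/2) psi_j.
assert np.allclose(vals.imag, 0.0, atol=1e-8)

# Every eigenstate is exponentially concentrated at the right edge ("skin modes"),
# so their number scales with the volume of the system, not with its boundary.
x = np.arange(N)
prob = np.abs(vecs) ** 2                    # columns are eigenvectors
centres = (x[:, None] * prob).sum(axis=0) / prob.sum(axis=0)
assert np.all(centres > N / 2)              # all centres of mass in the right half
```

With periodic boundary the same model has the complex spectrum $t_Re^{ik}+t_Le^{-ik}$, an ellipse with nonzero winding, consistent with the correspondence stated above.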
![\[fig:1\]The reciprocal relations among the three phenomena unique to non-hermitian systems: the non-vanishing direct current, nonzero winding number of energy and the presence of skin modes. The validity of any one is the sufficient and necessary condition for the validity of the other two.](Fig1){width="1\linewidth"}
Hamiltonian as holomorphic function
===================================
We start with an arbitrary one-band tight-binding Hamiltonian in one dimension, only requiring that hoppings between $i$ and $j$-sites only exist within a finite range $-m\le{}i-j\le{}n$. $$H=\sum_{i,j}t_{i-j}|i\rangle\langle{j}|=\sum_{k\in\mathrm{BZ}}H(k)|k\rangle\langle{k}|,$$ where $H(k)=\sum_{r=-m,\dots,n}t_r(e^{ik})^r$ is the Fourier transformed $t_{r}$ ($t_0$ being understood as the onsite potential). For periodic boundary condition, we have $0\le{k}<2\pi$, and $e^{ik}$ moves along the unit circle on the complex plane. For future purposes, we define $z:=e^{ik}$, and consider $z$ as a general point on the complex plane. Therefore for each Hamiltonian $H(k)$, we now have a holomorphic function $$H(z)=t_{-m}z^{-m}+\dots+t_nz^n=\frac{P_{m+n}(z)}{z^m},$$ where $P_{m+n}(z)$ is a polynomial of order $m+n$. $H(z)$ has one composite pole at $z=0$, the order of which is $m$, and has $m+n$ zeros, *i. e.*, the zeros of the $(m+n)$-polynomial. Along any oriented loop $\mathcal{C}$ and any given reference point $E_b\in\mathbb{C}$, one can define the winding number of $H(z)$ $$\label{eq:winding}
w_{\mathcal{C},E_b}:=\frac{1}{2\pi}\oint_{\mathcal{C}}\frac{d}{dz}\arg[H(z)-E_b]dz.$$ Specially, for $\mathcal{C}=\mathrm{BZ}$, $w_{\mathcal{C},E_b}$ is the winding of the phase of $H(z)-E_b$ along BZ, considered as a new topological number unique to non-hermitian systems[@Lee2016; @Leykam2017; @Shen2018a; @Gong2018; @Yin2018; @Xiong2018; @Ghatak2019]. Complex analysis relates the winding number of any complex function $f(z)$ to the total number of zeros and poles enclosed in $\mathcal{C}$, that is, $$\label{eq:4}
w_{\mathcal{C},E_b}=N_{zeros}-N_{poles},$$ where $N_{zeros,poles}$ is the counting of zeros (poles) weighted by respective orders. See Fig. \[fig:2\](a,b) for the pole, the zeros and the winding of $\mathcal{L}_{\mathrm{BZ}}$ for a specific Hamiltonian. In fact, we always have $N_{poles}=m$, so that the winding number is determined by the number of zeros of $P_{m+n}(z)-z^mE_b$ that lie within the unit circle. As we will see later, the advantage of extending the Hamiltonian into a holomorphic function lies in exactly this relation between the winding numbers and the zeros.
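The winding number and the zero/pole count can be compared numerically for a toy Hamiltonian $H(z)=(t_Rz^2+t_L)/z$ (our illustrative choice, with $m=n=1$). The sketch below evaluates Eq. (\[eq:winding\]) by discretizing the BZ and Eq. (\[eq:4\]) via the roots of $P_{m+n}(z)-z^mE_b$:

```python
import numpy as np

tR, tL = 1.0, 0.5   # H(k) = tR e^{ik} + tL e^{-ik}; one zero pair, a simple pole at z=0

def winding(Eb, nk=4001):
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    Hk = tR * np.exp(1j * k) + tL * np.exp(-1j * k)
    phase = np.unwrap(np.angle(Hk - Eb))            # accumulated argument along the BZ
    return round((phase[-1] - phase[0]) / (2.0 * np.pi))

def zeros_minus_poles(Eb):
    # Argument principle: zeros of tR z^2 - Eb z + tL inside |z| = 1,
    # minus the order-one pole at z = 0.
    zeros = np.roots([tR, -Eb, tL])
    return int(np.sum(np.abs(zeros) < 1.0)) - 1

for Eb in (0.0, 0.25j, 2.0):
    assert winding(Eb) == zeros_minus_poles(Eb)
assert winding(0.0) == 1    # reference energy inside the BZ loop: skin modes expected
assert winding(2.0) == 0    # reference energy outside the loop
```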
Generalized Brill
---
abstract: 'We study nonequilibrium properties of an electronic Mach-Zehnder interferometer built from integer quantum Hall edge states at filling fraction $\nu{=}1$. For a model in which electrons interact only when they are inside the interferometer, we calculate exactly the visibility and phase of Aharonov-Bohm fringes at finite source-drain bias. When interactions are strong, we show that a lobe structure develops in visibility as a function of bias, while the phase of fringes is independent of bias, except near zeros of visibility. Both features match the results of recent experiments \[Neder *et al.* Phys. Rev. Lett. **96**, 016804 (2006)\].'
author:
- 'D. L. Kovrizhin$^{1,2}$ and J. T. Chalker$^{1}$'
title: 'Exactly Solved Model for an Electronic Mach-Zehnder Interferometer'
---
Questions about phase coherence in interacting quantum systems out of equilibrium are of fundamental and wide-ranging importance. Despite great progress over the past decade, many aspects of nonequilibrium problems remain poorly understood. One recent example of this situation is the unexpected behaviour observed in state-of-the-art experiments on electronic Mach-Zehnder interferometers (MZIs) [@heiblum2; @roulleau07; @bieri08] driven out of equilibrium by an applied bias voltage. In these experiments the visibility of Aharonov-Bohm (AB) fringes in the conductance shows a lobe–like structure as a function of bias, while the phase of oscillations is independent of bias even with different interferometer arm lengths, except at zeros of the visibility where it jumps by $\pi $.
These observations have attracted a lot of attention. It was immediately appreciated [@heiblum2] that they lie outside a single-particle description. Moreover, since integer quantum Hall edge states scale to non-interacting chiral Fermi gases at low energy, the [*finite-range*]{} of electron-electron interactions seems to be crucial. The effort to understand interaction effects in MZIs at integer filling is therefore linked with work on non-linear effects in non-chiral Luttinger liquids [@glazman], as well as to interferometry of fractional quantum Hall quasiparticles [@fractional]. The most obvious consequence anticipated from interactions is dephasing. This may arise from external noise [@marquardt04] or internally [@buttiker01; @chalker07], but in both cases is expected to suppress AB fringe visibility smoothly with increasing bias, in contrast to observations. It has been found, however, that zeros in visibility can arise if the edge channels that form the interferometer arms are coupled to another channel: such an extra channel may be a feature of sample design [@sukhorukov07], and is present intrinsically at $\nu{=}2$ [@sukhorukov08]. Although those results are encouraging, they do not seem sufficiently universal to explain all current experiments. In this context, two recent papers [@neder08; @sim08] that obtain visibility oscillations from calculations of interaction effects at $\nu{=}1$ represent an interesting advance. These papers contain illuminating physical insights, and similar phenomena have been shown to exist in another context [@marquardt08], but approximations used in [@neder08; @sim08] are not standard ones and their reliability is hard to judge.
In this Letter we present an exact calculation for a simplified model of an interferometer. It reproduces the main signatures observed experimentally [@heiblum2; @roulleau07; @bieri08] and shows that the lobe pattern is a many-body effect, which would not appear in any approximation that treats single particles moving in a static mean-field potential. The model is illustrated in the inset to Fig. \[fig2\]. As in previous studies, two quantum Hall edge channels, both with the same propagation direction, are coupled at two quantum point contacts (QPCs). The simplifying feature of the model is that electrons interact only when they are *inside* the interferometer. This allows us to combine a description of the contacts using fermion operators with a treatment of interactions using bosonization. Within the MZI we take interactions only between two electrons on the same arm and with fixed strength independent of distance, although it would be feasible to relax these restrictions. We consider an initial state in which Fermi seas in the two channels are filled to different chemical potentials, to represent the bias voltage, and evolve this state forward in time using the Schrödinger equation. At long times the system reaches a stationary regime. In this regime we calculate current and differential conductance as a function of chemical potential difference and enclosed AB flux. Our main results are presented in Figs. \[fig2\] and \[fig3\], and discussed following an outline of their derivation; details will be presented elsewhere [@kovrizhin09prbmz].
The solution we describe is of broader significance as a rare example of a solved non-equilibrium scattering problem. One earlier instance is that of tunneling between fractional quantum Hall edge states [@fendley], while another is the interacting resonant level model, treated recently by a form of Bethe Ansatz [@andrei], and using boundary field theory [@boulat]. The remarkable structure observed experimentally [@heiblum2; @roulleau07; @bieri08] makes the MZI particularly interesting in this context.
The Hamiltonian $\hat{H} = \hat{H}_{kin} + \hat{H}_{int} +
\hat{H}_{tun}$ for the model has three contributions, representing respectively: kinetic energy, interactions, and tunneling at contacts. We formulate $\hat{H}$ initially for edges of length $L$ with periodic boundary conditions, then take the limit $L\to\infty$. Then $$\hat{H}_{kin}=-i\hbar v_{F}\sum_{\eta =1,2}\int_{-L/2}^{L/2}\hat{\psi}_{\eta
}^{+}(x)\partial _{x}\hat{\psi}_{\eta }(x)dx, \label{H_kin}$$where $v_{F}$ is the Fermi-velocity and $\eta =1,\,2$ is the channel index. The Fermi field operators can be written as $\hat{\psi}_{\eta }(x)=L^{-1/2}\sum_{k}\hat{c}_{k\eta }e^{ikx}$, with $k=2\pi n_{k}/L$ and $n_{k}$ integer, and $\{\hat{c}_{k\eta },\hat{c}_{q\eta ^{\prime }}^{+}\}=\delta _{kq}\delta
_{\eta \eta ^{\prime }}$. Interactions are described by $$\hat{H}_{int}=\frac{1}{2}\sum_{\eta =1,2}\int_{-L/2}^{L/2}U_{\eta
}(x,x^{\prime })\hat{\rho}_{\eta }(x)\hat{\rho}_{\eta }(x^{\prime
})dxdx^{\prime }\,, \label{H_int}$$where $\hat{\rho}_{\eta }\left( x\right) =\hat{\psi}_{\eta }^{+}(x)\hat{\psi}
_{\eta }(x)$ is the electron density operator. In our model $U_{\eta
}(x,x^{\prime })=0$ for $x,x^{\prime }\notin (0,d_{\eta })$. Finally, the QPCs are represented by $$\hat{H}_{tun}=v_{a}e^{i\alpha }\hat{\psi}_{1}^{+}(0)\hat{\psi}_{2}(0)+v_{b}e^{i\beta }\hat{\psi}_{1}^{+}(d_{1})\hat{\psi}_{2}(d_{2})+\mathrm{h.c.} \label{H_tun}$$The AB-phase appears here as $\varphi _{AB}\equiv \beta - \alpha $.
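For orientation, in the static single-particle picture that the exact solution goes beyond, the two QPCs act as beam splitters and the transmission probability from channel 1 to 2 is simple two-path interference, $T = t_a^2 r_b^2 + r_a^2 t_b^2 + 2\,t_a r_a t_b r_b \cos\varphi_{AB}$. A minimal numerical sketch of this baseline (our own illustration, not part of the paper; dynamical phases along the arms are absorbed into $\varphi_{AB}$):

```python
import cmath
import math

def mzi_transmission(theta_a, theta_b, phi_ab):
    """Single-particle transmission probability from channel 1 to 2.

    t = sin(theta), r = cos(theta) as in the text; phi_ab is the
    enclosed Aharonov-Bohm phase.
    """
    ta, ra = math.sin(theta_a), math.cos(theta_a)
    tb, rb = math.sin(theta_b), math.cos(theta_b)
    # Two interfering paths: tunnel at QPC a then stay on arm 2,
    # or stay on arm 1 and tunnel at QPC b (picking up the AB phase).
    amp = ta * rb + ra * tb * cmath.exp(1j * phi_ab)
    return abs(amp) ** 2
```

The interference term carries the factor $t_b r_b$, the same combination that multiplies the coherent contribution $I_b^{(2)}$ to the current; the bias-dependent lobe pattern is exactly what this static formula cannot produce.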
The total current $I$ from channel 1 to 2 has contributions $I_a$ and $I_b$ arising from each QPC, which can be written in terms of expectation values of operators acting at points infinitesimally before the QPC. Each contribution can be separated into a term that is not sensitive to coherence between the edges, and another that is sensitive. We define $t_{a,b}=\sin \theta _{a,b}$ and $r_{a,b}=\cos \theta _{a,b}$ with $\theta
_{a,b}=v_{a,b}/\hbar v_{F}$, and denote expectation values by $\langle \ldots \rangle$. A straightforward calculation yields for QPC $b$ the expressions $I_b = I_b^{(1)} +
I_b^{(2)}$, with $$\begin{aligned}
I_{b}^{(1)}&=&ev_{F}t_{b}^{2}\langle \hat{\rho}_{1}(d_1) -\hat{\rho}_{2}(d_2)\rangle \nonumber\\
I_{b}^{(2)}&=&ev_{F}t_{b}r_{b}[ie^{i\beta }\langle \hat{G}_{12} \rangle +\mathrm{h.c.}]\;, \nonumber\end{aligned}$$ where $\hat{G}_{12}=\hat{\psi}_{1}^{+}(d_{1})\hat{\psi}_{2}(d_{2}) $. Terms in $I_a$ are obtained from these for $I_b$ by replacing
---
abstract: 'We develop a new thermodynamic formalism to investigate the transient behaviour of maps on the real line which are skew-periodic ${\mathbb{Z}}$-extensions of expanding interval maps. Our main focus lies in the dimensional analysis of the recurrent and transient sets as well as in determining the whole dimension spectrum with respect to $\alpha$-escaping sets. Our results provide a one-dimensional model for the phenomenon of a dimension gap occurring for limit sets of Kleinian groups. In particular, we show that a dimension gap occurs if and only if we have non-zero drift and we are able to precisely quantify its width as an application of our new formalism.'
address:
- 'Faculty of Mathematics, University of Vienna, Oskar Morgensternplatz 1, 1090 Vienna, Austria'
- 'Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, 464-8602 Japan'
- 'FB03 – Mathematik und Informatik, Universität Bremen, 28359 Bremen, Germany'
author:
- Maik Gröger
- Johannes Jaerisch
- Marc Kesseböhmer
title: Thermodynamic formalism for transient dynamics on the real line
---
[^1]
Introduction
============
The main motivation of this article is the connection between transient phenomena of dynamical systems and their manifestation in dimensional quantities. Since transience can impose major obstructions to an ergodic-theoretic description of (fractal-)geometric features, its further understanding is vital and has attracted a lot of attention. For instance, it has a strong tradition in complex dynamics, with landmark results like the ones obtained for complex quadratic polynomials in [@MR1626737] or [@MR2373353]. A closely related and paralleling line of research established corresponding results for families of transcendental functions, see for example [@MR2302520; @MR2465667; @MR2197375; @MR871679]. In both cases, a particularly striking effect revealing the preponderance of transience is the occurrence of a so-called dimension gap. In fact, the origin of this phenomenon goes back to the rich field of geometric group theory, which we explain in more detail further below.
In the framework of thermodynamic formalism, transient effects in topological Markov chains have been seminally studied by Sarig [@MR1738951; @MR1818392]. Directly related to this are fractal-geometric applications of thermodynamic formalism for infinite conformal graph directed Markov systems which have been systematically worked out by Mauldin and Urbanski in [@MR2003772]. In there, strong mixing conditions were introduced to guarantee that the recurrent behaviour governs the system. One main goal of this paper is to set up a new thermodynamic formalism in the absence of such strong mixing conditions to provide a systematic approach to the geometric phenomenon of a dimension gap. More precisely, we introduce the concept of fibre-induced pressure which allows us to express the occurrence and the width of a dimension gap for skew-periodic ${\mathbb{Z}}$-extensions of expanding interval maps exclusively in terms of this newly developed pressure. Furthermore, we obtain effective analytic relations between the fibre-induced pressure and the classical pressure of the base transformation allowing us to determine the crucial dimensional quantities in a number of examples explicitly.
Let us now illustrate the phenomenon of a dimension gap in the setting of actions of non-elementary Kleinian groups $G$ on the hyperbolic space $\mathbb{H}^{n}$. By a general result of Bishop and Jones [@MR1484767] we know that the Hausdorff dimension of both the radial limit set $\Lambda_{r}\left(G\right)$ and the uniformly radial limit set $\Lambda_{ur}\left(G\right)$ of $G$ are equal to the Poincaré exponent of $G$ given by $$\delta_{G}\coloneqq\inf\left\{ s\geq0:\sum_{g\in G}{\mathrm{e}}^{-s\cdot d_{H}\left(0,g0\right)}<\infty\right\} ,\label{eq:definition Poincare exponent group}$$ where $d_{H}$ denotes the hyperbolic distance on $\mathbb{H}^{n}$. Recall that $\Lambda_{r}\left(G\right)$ and $\Lambda_{ur}\left(G\right)$ represent recurrent dynamics of the geodesic flow on $\mathbb{H}^{n}/G$. Clearly, for a normal subgroup $N<G$ we have that $\delta_{G}\geq\delta_{N}$ and moreover, $$\dim_{H}\left(\Lambda_{r}\left(G\right)\right)=\delta_{G}>\delta_{N}=\dim_{H}\left(\Lambda_{r}\left(N\right)\right)\iff G/N\;\text{is non-amenable,}$$ with $\dim_{H}(\,\cdot\,)$ the Hausdorff dimension of the corresponding set. This was first proved by Brooks for certain Kleinian groups fulfilling $\delta_{G}>(n-1)/2$ in [@MR783536] and later generalised to a wider class of groups without this restriction by Stadlbauer [@Stadlbauer11]. Note that by a result of Falk and Stratmann $\delta_{N}\ge\delta_{G}/2$, see [@MR2097162]. If $G$ is additionally geometrically finite, then the strict inequality $\delta_{N}>\delta_{G}/2$ holds by a result of Roblin [@MR2166367] (see also [@MR3299281]). 
Furthermore, if $\Lambda\left(G\right)$ denotes the limit set of the Kleinian group $G$, then $\delta_{G}=\dim_{H}\left(\Lambda\left(G\right)\right)$ and, since $\Lambda\left(N\right)=\Lambda\left(G\right)$, this implies the following criterion for the occurrence of a *dimension gap*: $$\dim_{H}\left(\Lambda_{r}\left(N\right)\right)=\dim_{H}\left(\Lambda_{ur}\left(N\right)\right)<\dim_{H}\left(\Lambda\left(N\right)\right)\iff G/N\;\text{is non-amenable.}$$ In other words, a certain amount of transient behaviour causes a gap between the dimension of the full limit set and the dimension of its recurrent parts. It is remarkable that for Kleinian groups the presence of a dimension gap depends only on the group-theoretic property of amenability. Accordingly, a natural example for the occurrence of a dimension gap is given by a Schottky group $G=N\rtimes\mathbb{F}_{2}$, where $\mathbb{F}_{2}$ denotes the free group with two generators. Nevertheless, little is known in the literature concerning the concrete size of this dimension gap.
The occurrence of a dimension gap is closely related to the decay of certain return probabilities. In fact, Kesten [@MR0112053; @MR0109367] has shown for symmetric random walks on countable groups that exponential decay of return probabilities is equivalent to non-amenability. However, for amenable groups exponential decay can also be caused by non-symmetric random walks. To be more precise, for groups admitting a recurrent random walk (e.g. ${\mathbb{Z}}$) it is shown in [@MR3436756] that exponential decay of return probabilities is equivalent to the lack of a certain symmetry condition on the thermodynamic potential related to the random walk (see also Remark \[rem:characterisation of recurrence\] for further details).
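To make the drift dichotomy concrete: for the simple random walk on ${\mathbb{Z}}$ with right-step probability $p$, the return probability is $P(S_{2n}=0)=\binom{2n}{n}(p(1-p))^{n}$, which by Stirling's formula decays exponentially at rate $-\tfrac{1}{2}\log(4p(1-p))$ — non-zero precisely when the drift $2p-1$ is non-zero. A short numerical check of our own (working in logs to avoid overflow):

```python
from math import lgamma, log

def log_return_prob(p, n):
    """log P(S_{2n} = 0) for the walk on Z stepping +1 w.p. p, -1 w.p. 1-p:
    log C(2n, n) + n log(p(1-p)), computed via log-gamma to avoid overflow."""
    log_binom = lgamma(2 * n + 1) - 2 * lgamma(n + 1)
    return log_binom + n * log(p * (1 - p))

def decay_rate(p, n=5000):
    """Empirical decay rate -log P(S_{2n}=0) / (2n); as n grows it tends to
    -(1/2) log(4 p (1-p)), which vanishes exactly at the driftless p = 1/2."""
    return -log_return_prob(p, n) / (2 * n)
```

At $p=1/2$ the decay is merely polynomial and the computed rate tends to zero, while for any $p\neq 1/2$ it converges to the strictly positive limit above.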
We are aiming at investigating these closely linked phenomena for a class of maps on ${\mathbb{R}}$ which can be considered as models of ${\mathbb{Z}}$-extensions of Kleinian groups. In fact, if the Kleinian group $G=N\rtimes{\mathbb{Z}}$ is a Schottky group, then the elements in $\Lambda_{r}\left(N\right)$ can be characterised as the limits of $G$-orbits for which the ${\mathbb{Z}}$-coordinate returns infinitely often to some point in $\mathbb{Z}$ (compare this with our definition of a recurrent set, see Section \[subsec:Recurrent-and-transient\]). Since ${\mathbb{Z}}$ is amenable, these limit points have full Hausdorff dimension by Brooks’ amenability criterion. We will see later (end of Section \[sec:Examples\]) that this property also follows from the fact that the ${\mathbb{Z}}$-coordinate has zero drift with respect to a canonical invariant measure obtained from the Patterson-Sullivan construction.
Our models witness drift behaviour and we show that indeed non-zero drift is equivalent to the occurrence of a dimension gap, see Theorem \[thm:-dimension gap\]. It is therefore also very natural to consider subsets of the transient dynamics with fixed drift in more detail. This motivates the definitions of various escaping sets in our one-dimensional models, see Section \[subsec:Escaping-sets\]. The related dimension spectra will allow us to determine the size of the dimension gap explicitly. Similar results will be shown in the forthcoming paper [@JKG20] on ${\mathbb{Z}}$-extensions with reflective boundaries, allowing us to illuminate earlier results in [@MR1438267; @MR2959300; @MR3610938] which studied a family suggested by van Strien modelling induced maps of Fibonacci unimodal maps. Let us point out that drift arguments were also prominent in the proofs of [@MR2959300].
Our leading motivating example for which we obtain dimensional results on the transient behaviour stems from the family of (a-)symmetric random walks, see Example \[exa:Classical Random Walk\]. More precisely, let ${F}$ be an expanding interval map with finitely many $C^{1+\
---
abstract: |
In this work we study arrangements of $k$-dimensional subspaces $V_1,\ldots,V_n \subset {\mathbb{C}}^\ell$. Our main result shows that, if every pair $V_{a},V_b$ of subspaces is contained in a dependent triple (a triple $V_{a},V_b,V_c$ contained in a $2k$-dimensional space), then the entire arrangement must be contained in a subspace whose dimension depends only on $k$ (and not on $n$). The theorem holds under the assumption that $V_a \cap V_b = \{0\}$ for every pair (otherwise it is false). This generalizes the Sylvester-Gallai theorem (or Kelly’s theorem for complex numbers), which proves the $k=1$ case. Our proof also handles arrangements in which we have many pairs (instead of all) appearing in dependent triples, generalizing the quantitative results of Barak et. al. [@BDWY-pnas].
One of the main ingredients in the proof is a strengthening of a Theorem of Barthe [@Bar98] (from the $k=1$ to $k>1$ case) proving the existence of a linear map that makes the angles between pairs of subspaces large on average. Such a mapping can be found, unless there is an obstruction in the form of a low dimensional subspace intersecting many of the spaces in the arrangement (in which case one can use a different argument to prove the main theorem).
author:
- 'Zeev Dvir[^1]'
- 'Guangda Hu[^2]'
bibliography:
- 'hidimSG.bib'
title: '**Sylvester-Gallai for Arrangements of Subspaces**'
---
Introduction
============
The Sylvester-Gallai (SG) theorem states that for $n$ points ${\boldsymbol{v}}_1,{\boldsymbol{v}}_2,\ldots,{\boldsymbol{v}}_n \in {\mathbb{R}}^\ell$, if for every pair ${\boldsymbol{v}}_i,{\boldsymbol{v}}_j$ there is a third point ${\boldsymbol{v}}_k$ on the line passing through ${\boldsymbol{v}}_i,{\boldsymbol{v}}_j$, then all points must lie on a single line. This was first posed by Sylvester [@Syl93], and was solved by Melchior [@Mel40]. It was also conjectured independently by Erdős [@Erd43] and proved shortly after by Gallai. We refer the reader to the survey [@BM90] for more information about the history and various generalizations of this theorem. The complex version of this theorem was proved by Kelly [@Kel86] (see also [@EPS06; @DSW12] for alternative proofs) and states that if ${\boldsymbol{v}}_1,{\boldsymbol{v}}_2,\ldots,{\boldsymbol{v}}_n \in {\mathbb{C}}^\ell$ and for every pair ${\boldsymbol{v}}_i,{\boldsymbol{v}}_j$ there is a third ${\boldsymbol{v}}_k$ on the same complex line, then all points are contained in some complex plane (over the complex numbers, there are planar examples and so this theorem is tight).
In [@DSW12] (based on earlier work in [@BDWY-pnas]), the following quantitative variant of the SG theorem was proved. For a set $S \subset {\mathbb{C}}^\ell$ we denote by $\dim(S)$ the smallest $d$ such that $S$ is contained in a $d$-dimensional subspace of ${\mathbb{C}}^\ell$.
\[thm:osg\] Given $n$ points ${\boldsymbol{v}}_1,{\boldsymbol{v}}_2,\ldots,{\boldsymbol{v}}_n \in {\mathbb{C}}^\ell$, if for every $i\in[n]$ there exists at least $\delta n$ values of $j\in[n]\setminus\{i\}$ such that the line through ${\boldsymbol{v}}_i$ and ${\boldsymbol{v}}_j$ contains a third point ${\boldsymbol{v}}_k$, then $\dim\{{\boldsymbol{v}}_1,{\boldsymbol{v}}_2,\ldots,{\boldsymbol{v}}_n\}\leq 10/\delta$.
(The dependence on $\delta$ is asymptotically tight). From here on, we will work with homogeneous subspaces (passing through zero) instead of affine subspaces (lines/planes etc). The difference is not crucial to our results and the affine version can always be derived by intersecting with a generic hyperplane. In this setting, the above theorem will be stated for a set of one-dimensional subspaces, each spanned by some ${\boldsymbol{v}}_i$ (and no two ${\boldsymbol{v}}_i$’s being a multiple of each other) and collinearity of ${\boldsymbol{v}}_i,{\boldsymbol{v}}_j,{\boldsymbol{v}}_k$ is replaced with the three vectors being linearly dependent (i.e., contained in a 2-dimensional subspace).
One natural high-dimensional variant of the SG theorem, studied in [@Han65; @BDWY-pnas], replaces 3-wise dependencies with $t$-wise dependencies (e.g., every triple is in some coplanar four-tuple). In this work, we raise another natural high-dimensional variant in which the [*points*]{} themselves are replaced with $k$-dimensional subspaces. We consider such arrangements with many 3-wise dependencies (defined appropriately) and attempt to prove that the entire arrangement lies in some low dimensional space. We will consider arrangements $V_1,\ldots,V_n \subset {\mathbb{C}}^\ell$ in which each $V_i$ is $k$-dimensional and with each pair satisfying $V_{i_1} \cap V_{i_2} = \{{\boldsymbol{0}}\}$. A dependency can then be defined as a triple $V_{i_1},V_{i_2},V_{i_3}$ of $k$-dimensional subspaces that are contained in a single $2k$-dimensional subspace. The pair-wise zero intersections guarantee that every pair of subspaces defines a unique $2k$-dimensional space (their span) and so, this definition of dependency behaves in a similar way to collinearity. For example, we have that if $V_{i_1},V_{i_2},V_{i_3}$ are dependent and $V_{i_2},V_{i_3},V_{i_4}$ are dependent then also $V_{i_1},V_{i_2},V_{i_4}$ are dependent. This would not hold if we allowed some pairs to have non-zero intersections. In fact, if we allow non-zero intersection then we can construct an arrangement of two dimensional spaces with many dependent triples and with dimension as large as $\sqrt{n}$ (see below). We now state our main theorem, generalizing Theorem \[thm:osg\] (with slightly worse parameters) to the case $k>1$. We use the standard $V + U$ notation to denote the subspace spanned by all vectors in $V \cup U$. We use big ‘O’ notation to hide absolute constants.
\[thm:sg\] Let $V_1,V_2,\ldots,V_n\subset {\mathbb{C}}^\ell$ be $k$-dimensional subspaces such that $V_{i}\cap V_{i'}=\{{\boldsymbol{0}}\}$ for all $i\neq i'\in[n]$. Suppose that, for every $i_1\in[n]$ there exists at least $\delta n$ values of $i_2\in[n]\setminus\{i_1\}$ such that $V_{i_1}+V_{i_2}$ contains some $V_{i_3}$ with $i_3 \not\in \{i_1,i_2\}$. Then $$\dim(V_1+V_2+\cdots+V_n)= O(k^4/\delta^2).$$
The condition $V_i \cap V_{i'} = \{{\boldsymbol{0}}\}$ is needed due to the following example. Set $k=2$ and $n=\ell(\ell-1)/2$ and let $\{{\boldsymbol{e}}_1,{\boldsymbol{e}}_2,\ldots,{\boldsymbol{e}}_\ell\}$ be the standard basis of ${\mathbb{R}}^\ell$. Define the $n$ spaces to be $V_{ij} = \operatorname{span}\{{\boldsymbol{e}}_i,{\boldsymbol{e}}_j\}$ with $1\leq i<j\leq\ell$. Now, for each $(i,j) \neq (i',j')$ the sum $V_{ij} + V_{i'j'}$ will contain a third space (since the size of $\{i,j,i',j'\}$ is at least three). However, this arrangement has dimension $\ell > \sqrt{n}$.
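This counterexample is easy to verify mechanically: since each $V_{ij}$ is spanned by standard basis vectors, $V_{ij}+V_{i'j'}$ is spanned by $\{{\boldsymbol{e}}_s : s\in\{i,j\}\cup\{i',j'\}\}$, so it contains $V_{kl}$ if and only if $\{k,l\}\subseteq\{i,j\}\cup\{i',j'\}$, and all dependency checks reduce to set arithmetic. A sanity-check script of our own (not from the paper):

```python
from itertools import combinations

def counterexample_checks(ell):
    """V_{ij} = span{e_i, e_j} for 1 <= i < j <= ell.  Returns
    (number of spaces, every pair in a dependent triple?, dim of the sum)."""
    spaces = list(combinations(range(1, ell + 1), 2))
    n = len(spaces)

    def pair_has_third(a, b):
        support = set(a) | set(b)  # V_a + V_b is spanned by e_s, s in support
        return any(set(c) <= support and c not in (a, b) for c in spaces)

    all_dependent = all(pair_has_third(a, b)
                        for a, b in combinations(spaces, 2))
    dim_of_sum = ell  # every e_s appears in some V_{ij}, so the sum is R^ell
    return n, all_dependent, dim_of_sum
```

For $\ell = 6$ this reports $n = 15$ spaces, every pair in a dependent triple, and a sum of dimension $6 > \sqrt{15}$, confirming that the zero-intersection hypothesis cannot be dropped.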
The bound $O(k^4/\delta^2)$ is probably not tight and we conjecture that it could be improved to $O(k/\delta)$, possibly with a modification of our proof. One can always construct an arrangement with dimension $2k/\delta$ by partitioning the subspaces into $1/\delta$ groups, each contained in a single $2k$ dimensional space.
##### Overview of the proof:
A preliminary observation is that it suffices to prove the theorem over ${\mathbb{R}}$. This is because an arrangement of $k$-dimensional complex subspaces can be translated into an arrangement of $2k$-dimensional real subspaces
---
abstract: 'We investigate the dynamics of two tunnel-coupled two-dimensional degenerate Bose gases. The reduced dimensionality of the clouds enables us to excite specific angular momentum modes by tuning the coupling strength, thereby creating striking patterns in the atom density profile. The extreme sensitivity of the system to the coupling and initial phase difference results in a rich variety of subsequent dynamics, including vortex production, complex oscillations in relative atom number and chiral symmetry breaking due to counter-rotation of the two clouds.'
author:
- 'T.W.A. Montgomery, R.G. Scott, I. Lesanovsky, and T.M. Fromhold'
bibliography:
- 'biblio.bib'
date: '07/10/09'
title: 'Spontaneous creation of non-zero angular momentum modes in tunnel-coupled two-dimensional degenerate Bose gases'
---
Introduction
============
A major focus of cold-atom research is coupling multiple degenerate Bose gases (DBGs) using atom chips and optical lattices [@krugerRev; @reichelrev; @fortaghrev; @kasevich; @cornell; @kettnew; @kettnew2; @schmiedreview; @Hofferberth; @andreasdiff2; @mott; @JJExper; @kruger; @OberPRL]. The results provide stepping stones to future applications, such as interferometers or processors of quantum information. Since atom chips and optical lattices typically generate very high (kHz) trapping frequencies, there is growing interest in the dynamics of coupled one- and two-dimensional (1D and 2D) clouds [@savin; @annular; @bouchoule; @brandfluxons; @malomed]. This interest has also been fueled by the radically different physics that has been observed in lower-dimensional systems, such as the suppression of equilibration [@newtons], quasicondensation [@phasedefects], and the Kosterlitz-Thouless transition [@hello; @kruger07PRL; @kruger08NJP; @CornellPRL; @PhillipsPRL; @TapioPRL; @DalibardRev]. Since this body of work has uncovered such rich dynamics, it is natural to wonder how coupled lower-dimensional systems will behave, and whether they can reveal a crossover to 3D phenomena. Some recent work on 1D coupled rings has shown that the reduced dimensionality leads to unexpected effects, such as spontaneous population of rotating excitations and chiral symmetry breaking [@annular]. Further work is now needed to explore symmetry breaking in Josephson junctions, the boundaries of lower-dimensional physics, and to establish how double-well interferometers will perform in reduced dimensions.
In this paper, we investigate the dynamics of coupled 2D disk-shaped DBGs. We find that the reduced dimensionality of the clouds has profound implications for the dynamics, because the instabilities of excited states observed in three dimensions are suppressed [@RScottInter; @RScottInter2]. Starting from an irrotational stationary state, and without introducing any stirring, we observe spontaneous occupation of low-lying excitations with non-zero angular momentum, mediated by the interatomic interactions [@annular; @spinchiral; @saito]. We use a linear stability analysis to predict which rotating Bogoliubov modes will be excited, and hence identify regimes where we can excite specific modes *alone* by tuning the coupling strength. This targeted selection of a single rotating mode creates striking oscillatory patterns in the atomic density profile. As we excite different rotating modes, we uncover a rich parameter space. Modes with a high angular momentum periodically grow and decay exponentially. The growth rates of these modes are extremely sensitive to changes in the coupling and interaction strengths, and in the relative phase of the two DBGs. The growth of modes with lower angular momentum becomes unstable due to collisions between rotating and non-rotating atoms, disrupting the internal structure of the clouds. This leads to a variety of subsequent dynamics such as vortex production, oscillations in relative atom number, and chiral symmetry breaking due to counter-rotation of the two clouds.
System and Methodology
======================
![(Color online) Schematic constant density surfaces of the upper (blue/light grey) and lower (red/dark grey) DBGs. Arrows show co-ordinate axes.[]{data-label="f0"}](fig1.eps){width="0.91\columnwidth"}
The system consists of two 2D DBGs, referred to as the upper and lower DBGs, in the $z = + z_{0}$ and $- z_{0}$ planes respectively, as shown in Fig \[f0\]. The two DBGs are coupled by a tunnel junction created by the symmetric double well potential $V_{\text{dw}}(z)$. In the $x-y$ plane, the DBGs are contained by the harmonic trapping potential $V(r) = \frac{1}{2} m\omega^{2} r^{2}$, where $m$ is the mass of a single atom, $\omega$ is the trap frequency and $r=\sqrt{x^2+y^2}$. For $\left|z\right|\approx z_{0}$, $V_{\text{dw}}(z)$ can be approximated by the tight harmonic potential $V_{\text{sw}}^{\alpha/\beta}(z) = \frac{1}{2} m \lambda^2 \omega^2(z \mp z_{0})^2$, where $\lambda \gg 1$ to ensure that $\lambda\hbar\omega > \mu$. Consequently, atomic motion in the $z$ direction is frozen into the single-particle groundstate, and the DBG wavefunction becomes 2D [@2dness]. Hence, in the weak coupling limit [@JJTheo], we may represent the order parameter for the two 2D DBGs by the scalar complex field $$\label{eq_ansatz1}
\psi = \zeta(z-z_{0})\chi^{\alpha}(\rho,\phi,\tau) + \zeta(z+z_{0})\chi^{\beta}(\rho,\phi,\tau)$$ where the superscripts $\alpha$ and $\beta$ refer to the upper and lower DBGs, $\zeta$ is the normalized single-particle harmonic groundstate of $V_{\text{sw}}^{\alpha/\beta}(z)$, $\phi$ is the azimuthal angle, and we have introduced the dimensionless time $\tau = \omega t$ and the dimensionless length $\rho = r/a_{\text{ho}}$, in which $a_{\text{ho}} = \sqrt{\hbar/m\omega}$. Substituting Eq. into the Gross-Pitaevskii equation results in two coupled equations for $\chi^{\alpha/\beta}(\rho,\phi,\tau)$ [@foot2] $$\label{eq_gp2d}
\begin{array}{c c}
i\partial_{\tau}\chi^{\alpha/\beta} & = -\left[\partial^2_{\rho} + \frac{1}{\rho}\partial_{\rho} + \frac{1}{\rho^2}\partial^2_{\phi} - \rho^2 +\mu\right]\chi^{\alpha/\beta}\\ & \ \ \ + \ \gamma|\chi^{\alpha/\beta}|^2\chi^{\alpha/\beta} - |\kappa|\chi^{\beta/\alpha}
\end{array}$$ where the dimensionless quantities $\gamma = \left(8\pi\lambda\right)^{1/2}a_{0}/a_{\text{ho}}$ and $\kappa = \frac{1}{2} a^2_{\text{ho}}\int{\zeta(z+z_{0}) \left[\partial^2_{z}-2m V_{\text{dw}}(z)/\hbar\right] \zeta(z-z_{0})dz}$ represent the interaction and coupling energy respectively, and $a_{0}$ is the s-wave scattering length. At $\tau=0$, there are an equal number of atoms, $N_{0}$, in each well, and $\chi^{\alpha/\beta}$ is the non-rotating groundstate of $V(r)$, with chemical potential $\mu_{0}$. For this initial configuration, and finite $\kappa$, there are two possible stationary states of Eq. . These are the ground state, defined by $\chi^{\alpha}(r,\phi,0) = \chi^{\beta}(r,\phi,0)$, with chemical potential $\mu_{0} - |\kappa|$, and the excited asymmetric stationary state, henceforth referred to as the $\pi$-state, defined by $\chi^{\alpha}(r,\phi,0) = -\chi^{\beta}(r,\phi,0)$, with chemical potential $\mu_{0} + |\kappa|$ [@brandcomment]. Unsurprisingly, the ground state is stable for all coupling strengths. However, the $\pi$-state shows much richer behavior. In three dimensions, the phase discontinuity may bend, creating vortices via the well known snake instability [@RScottInter; @RScottInter2; @dutton; @anderson]. This process cannot occur in our system, because the reduced dimensionality of the disks precludes the movement of the phase discontinuity. In the following section, we perform a stability analysis to identify what excitations may occur.
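The splitting $\mu_{0}\mp|\kappa|$ between the ground and $\pi$ states can already be seen in a zero-dimensional caricature of the coupled equations: dropping the spatial terms and the interactions ($\gamma=0$) leaves $i\dot{c}^{\,\alpha/\beta}=-|\kappa|\,c^{\,\beta/\alpha}$, whose symmetric and antisymmetric combinations are stationary with energies $\mp|\kappa|$, while an initial population imbalance undergoes full Rabi-like oscillation between the wells with period $\pi/|\kappa|$. A toy sketch of our own (ignoring all spatial structure, so not the 2D model studied here):

```python
import cmath
import math

def two_mode_evolution(kappa, tau):
    """Amplitudes (c_alpha, c_beta) of the linear two-mode model
    i dc_alpha/dtau = -kappa c_beta (and alpha <-> beta), starting with
    all atoms in the alpha well; the exact solution is a Rabi rotation."""
    return cmath.cos(kappa * tau), 1j * cmath.sin(kappa * tau)

def stationary_energies(kappa):
    """Energies (relative to mu_0) of the symmetric and antisymmetric
    combinations: -|kappa| (ground state) and +|kappa| (pi-state)."""
    return -abs(kappa), abs(kappa)
```

The full problem differs from this caricature precisely through the interaction term $\gamma|\chi|^{2}\chi$ and the spatial degrees of freedom, which is where the rotating excitations analysed below originate.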
Stability analysis of the $\pi$-state
=====================================
We perform a stability analysis of the stationary states by calculating the excitations of the system using the Bogoliubov ansatz [@stability] $$\label{eq_BogAns}
\begin{array}{c c}
\chi^{\alpha/\beta}(\rho,\phi,\tau) =
---
abstract: 'In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantages of other agents’ experiences through learning, or make decisions earlier to avoid competitions from huge crowds. Both these two effects, social learning and negative network externality, play important roles in the decision process of an agent. While there are existing works on either social learning or negative network externality, a general study on considering both these two contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing the strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese restaurant game, we derive the optimal strategy of each agent and provide a recursive method to achieve the optimal strategy. How social learning and negative network externality influence each other under various settings is also studied through simulations.'
author:
- '\'
bibliography:
- 'crg.bib'
title: 'Chinese Restaurant Game - Part I: Theory of Learning with Negative Network Externality'
---
Introduction
============
How agents in a network learn and make decisions is an important issue in numerous research fields, such as social learning in social networks, machine learning with communications among devices, and cognitive adaptation in cognitive radio networks. Agents make decisions in a network in order to achieve certain objectives. For example, a customer goes to the supermarket for orange juice and may need to choose among dozens of brands. However, the agent's knowledge of the market may be very limited, due to his limited ability to observe or to external uncertainty in the market, which means that the customer may not know the quality of the orange juice of every brand. This limitation reduces the accuracy of the agent's decision with respect to his objective, e.g., to get the orange juice that best suits his taste.
The limited knowledge of one agent can be expanded through learning. One agent may learn from some information sources, such as the decisions of other agents, the advertisements from some brands, or his experience in previous purchases. All the information can help the agent to construct a belief, which is mostly probabilistic, on the unknown state. In most cases, the accuracy of the agent’s decision can be greatly enhanced by taking into account the belief. A general learning and decision making process in a network can be described as follows. First, an agent collects information through available communication or observation methods and updates his belief on the uncertain states based on the collected information. Then, the agent estimates the expected rewards of certain actions according to the belief he constructed. Finally, the agent chooses the action that maximizes his reward.
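The loop just described — collect signals, update the belief by Bayes' rule, then choose the action with the highest expected reward under that belief — can be captured in a few lines. A minimal sketch with two states and binary signals (the state names, signal names and payoff numbers are invented for illustration):

```python
def update_belief(prior, likelihood, signal):
    """Bayes rule: prior and the returned posterior are dicts state -> prob;
    likelihood[state][signal] = P(signal | state)."""
    post = {s: prior[s] * likelihood[s][signal] for s in prior}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def best_action(belief, reward):
    """Pick the action maximising sum_s belief[s] * reward[action][s]."""
    return max(reward,
               key=lambda a: sum(belief[s] * reward[a][s] for s in belief))

# Two states; signals are biased toward the true state.
likelihood = {"good": {"hi": 0.8, "lo": 0.2}, "bad": {"hi": 0.3, "lo": 0.7}}
reward = {"buy": {"good": 1.0, "bad": -1.0}, "skip": {"good": 0.0, "bad": 0.0}}

belief = {"good": 0.5, "bad": 0.5}
for sig in ["hi", "hi", "lo"]:
    belief = update_belief(belief, likelihood, sig)
```

After two optimistic signals and one pessimistic one, the posterior still favors the good state and the agent buys; tilt the belief far enough toward the bad state and he skips.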
Let us consider a social network in an uncertain system state. The state has an impact on the agents' rewards. When the impact is differential, i.e., one action results in a higher reward than other actions in one state but not in all states, the state information becomes critical for one agent to make the correct decision. In most of the social learning literature, the state information is unknown to agents. Nevertheless, some signals related to the system state are revealed to the agents. These signals may be kept private or revealed to others. Then, the agents make their decisions sequentially, while their actions/signals may be fully or partially observed by other agents. Most existing works [@bala1998learning; @golub2007naive; @acemoglu2011bayesian; @acemoglu2010opinion] study how the beliefs of agents are formed through learning in the sequential decision process, and how accurate the beliefs will be when more information is revealed. One popular assumption in the traditional social learning literature is that there is no network externality, i.e., the actions of subsequent agents do not influence the rewards of earlier agents. In such a case, agents will make their decisions purely based on their own beliefs without considering the actions of subsequent agents. This assumption greatly limits the potential applications of these existing works.
The network externality, i.e., the influence of other agents' behaviors on one agent's reward, is a classic topic in economics. How the relations of agents influence an agent's behavior is one of the major problems in coordination game theory [@cooper1999coordination]. When the network externality is positive, the problem can be modeled as a coordination game, where agents seek the best common decisions to cooperate with others. When the externality is negative, it becomes an anti-coordination game, where agents try to avoid making the same decisions as others [@katz1986technology; @sandholm2005negative; @fagiolo2005endogenous].
In the literature, there are some works on combining the positive network externality with social learning, such as the voting game [@wit1999social; @battaglini2005sequential; @ali2010observ] and the investment game [@gale1995dynamic; @dasgupta2000social; @dasgupta2007coordination; @choi2011network]. In the voting game, an election with several candidates is held, where voters have their own preferences over the candidates. The preference of a voter over the candidates is constructed by the voter's belief on how the candidates can benefit him if they win the election. However, since a candidate can make efforts only when he wins the election, a voter's vote depends not only on his own preference but also on the probability that the candidate wins the election. In such a case, the estimation and prediction of the other voters' decisions become critical in the voting game. A learning process is involved when the voting game is sequential, i.e., voters vote for the candidates sequentially and each voter's vote is known by the others. In the sequential voting game, voters learn from the previous votes to update their beliefs on the candidates and the probability that the candidates win the election.
In the investment game, there are multiple projects and investors, where each project has a different probability of success and a different payoff. An investor may invest in one or several projects if his budget allows. If a project succeeds, he receives a payoff from it. When more investors invest in the same project, the success probability of the project increases, which benefits all investors in that project. Note that in both the voting and investment games, an agent’s decision has a positive effect on subsequent agents’ decisions. When one agent makes a decision, the subsequent agents are encouraged to make the same decision for two reasons: the probability that this action has a positive outcome increases due to this agent’s decision, and the potential reward of this action may be significantly large according to the belief revealed by this agent.
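The self-reinforcing pressure described above can be made concrete with a toy deterministic sketch; the function name, the two-project parameters, and the linear success-probability rule are our own illustrative assumptions, not a model from the cited works. Each arriving investor backs the project with the highest expected payoff, where earlier backers raise the success probability:

```python
def sequential_investment(n_investors=30, base_p=(0.2, 0.3),
                          payoff=(10.0, 8.0), boost=0.02):
    """Toy sequential investment game with a positive network externality.

    Each arriving investor backs the project with the highest expected
    payoff, where a project's success probability grows (capped at 1)
    with the number of earlier backers. All numbers are illustrative.
    """
    backers = [0] * len(base_p)
    choices = []
    for _ in range(n_investors):
        def expected(j):
            # Positive externality: more backers -> higher success probability.
            p = min(1.0, base_p[j] + boost * backers[j])
            return p * payoff[j]
        best = max(range(len(base_p)), key=expected)
        backers[best] += 1
        choices.append(best)
    return backers, choices
```

With these numbers the first investor's choice tips every later investor toward the same project, illustrating how the positive externality reinforces early decisions into a herd.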
The combination of negative network externality with social learning, on the other hand, is difficult to analyze. When the network externality is negative, the game becomes an anti-coordination game, where an agent seeks a strategy that differs from others’ in order to maximize his own reward. Nevertheless, in such a scenario, the agent’s decision also reveals some information about his belief on the uncertain system state, which can be learned by subsequent agents through social learning algorithms. Subsequent agents may then realize that his choice is better than the others, and make the same decision as the agent. Since the network externality is negative, the information leaked by the agent’s decision may impair the reward the agent can obtain in the game. Therefore, rational agents should take into account the possible reactions of subsequent players in order to maximize their own rewards.
The negative network externality plays an important role in many applications across different research fields, such as spectrum access in cognitive radio, storage service selection in cloud computing, and deal selection on Groupon in online social networking. In the spectrum access problem, for instance, secondary users accessing the same spectrum need to share it with each other. The more secondary users access the same channel, the less access time is available for each of them. In the storage service selection problem, reliability and availability are affected by the number of subscribers. The more subscribers use the same service, the lower the service quality of the cloud storage platform. In deal selection on the Groupon website, some businesses may receive an overwhelming number of customers under a discounted deal. The overwhelming number of customers has a negative network externality on the quality of the products. In these examples, the negative network externality degrades the utility of the agents making the same decision. Therefore, agents should take into account the possibility of degraded utility, e.g., less access time, lower reliability, or lower service quality, when making their decisions.
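For the spectrum-access example, a minimal sketch (the function name, the random channel qualities, and the quality-divided-by-load reward rule are all our illustrative assumptions) shows how myopic agents spread out under a negative externality:

```python
import random

def simulate_sequential_access(n_agents=20, n_channels=3, seed=0):
    """Toy sequential spectrum access with a negative network externality.

    Channel qualities are drawn at random (a stand-in for the unknown
    state); an agent's reward is quality divided by the number of users
    sharing the channel.  Agents arrive in order and myopically join the
    channel with the best per-user reward given the choices seen so far.
    """
    rng = random.Random(seed)
    quality = [rng.uniform(1.0, 2.0) for _ in range(n_channels)]
    counts = [0] * n_channels
    choices = []
    for _ in range(n_agents):
        # Negative externality: joining channel c dilutes its per-user reward.
        best = max(range(n_channels), key=lambda c: quality[c] / (counts[c] + 1))
        counts[best] += 1
        choices.append(best)
    rewards = [quality[c] / counts[c] for c in choices]  # final per-user rewards
    return counts, rewards
```

Here the externality pushes agents apart rather than together: channel loads grow until per-user rewards are roughly equalized. Fully rational agents that anticipate later arrivals, as discussed above, would generally deviate from this myopic baseline.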
The aforementioned social learning approaches are mostly strategic, where agents are considered as players with bounded or unbounded rationality in maximizing their own rewards. Machine learning, which is another class of approaches to the learning problem, focuses on designing algorithms that make use of past experience to improve the performance of similar tasks in the future [@mitchell1997machine]. Generally, there exist some training data, and the devices follow a learning method designed by the system designer to learn and improve the performance of some specific tasks. Most learning approaches studied in machine learning are non-strategic, lacking the rationality of considering one’s own benefit. Such non-strategic learning approaches may not be applicable to scenarios where devices are rational and intelligent enough to choose actions that maximize their own benefits instead of following the rule designed by the system designer.
The Chinese restaurant process, which is used in non-parametric learning methods in machine learning [@aldous1985exchangeable], provides an interesting non-strategic learning method for an unbounded number of objects. In the Chinese restaurant process, there is an infinite number of tables, where each table has an infinite number of seats. An infinite number of customers enter the restaurant sequentially. When one customer enters the restaurant, he
---
author:
- 'M. H. D. van der Wiel'
- 'D. A. Naylor'
- 'G. Makiwa'
- 'M. Satta'
- 'A. Abergel'
bibliography:
- '../../literature/allreferences.bib'
title: 'Three-dimensional distribution of hydrogen fluoride gas toward NGC6334I and I(N)[^1]'
---
[ The HF molecule has been proposed as a sensitive tracer of diffuse interstellar gas, while at higher densities its abundance could be influenced heavily by freeze-out onto dust grains. ]{} [ We investigate the spatial distribution of a collection of absorbing gas clouds, some associated with the dense, massive star-forming core NGC6334I, and others with diffuse foreground clouds elsewhere along the line of sight. For the former category, we aim to study the dynamical properties of the clouds in order to assess their potential to feed the accreting protostellar cores. ]{} [ We use far-infrared spectral imaging from the SPIRE iFTS to construct a map of HF absorption at 243 $\mu$m in a 6$\times$35 region surrounding NGC6334 I and I(N). ]{} [ The combination of new, spatially fully sampled, but spectrally unresolved mapping with a previous, single-pointing, spectrally resolved HF signature yields a three-dimensional picture of absorbing gas clouds in the direction of NGC6334. Toward core I, the HF equivalent width matches that of the spectrally resolved observation. At angular separations $\gtrsim$20 from core I, the HF absorption becomes weaker, consistent with three of the seven components being associated with this dense star-forming envelope. Of the remaining four components, two disappear beyond $\sim$1 distance from the NGC6334 filament, suggesting that these clouds are spatially associated with the star-forming complex. Our data also imply a lack of gas phase HF in the envelope of core I(N). Using a simple description of adsorption onto and desorption from dust grain surfaces, we show that the overall lower temperature of the envelope of source I(N) is consistent with freeze-out of HF, while it remains in the gas phase in source I.
]{} [ We use the HF molecule as a tracer of column density in diffuse gas ($n_\mathrm{H}$$\approx$$10^2$–$10^3$ cm$^{-3}$), and find that it may uniquely trace a relatively low density portion of the gas reservoir available for star formation that otherwise escapes detection. At higher densities prevailing in protostellar envelopes ($\gtrsim$$10^4$ cm$^{-3}$), we find evidence of HF depletion from the gas phase under sufficiently cold conditions. ]{}
Introduction {#sec:intro}
============
The hydrogen fluoride molecule, HF, was first observed in the interstellar medium by @neufeld1997b with the [*Infrared Space Observatory*]{} [[*ISO*]{}, @kessler1996]. While [*ISO*]{} had a wavelength range that encompassed only the $J$=2–1 rotational transition of HF, the next observatory able to observe HF – the [*Herschel*]{} Space Observatory [@pilbratt2010] – covered longer far-infrared wavelengths, and it thus opened up access to the ground-state rotational transition, $J$=1–0, at 1232.48 GHz (243.24 $\mu$m). [*Herschel*]{} has observed HF in absorption along many lines of sight, both inside the Galaxy [@neufeld2010b; @sonnentrucker2010; @sonnentrucker2015; @philips2010; @kirk2010; @monje2011a; @emprechtinger2012; @lopez-sepulcre2013a; @goicoechea2013] and in nearby extragalactic objects [@rangwala2011; @kamenetzky2012; @rosenberg2014a; @monje2014]. HF absorption has even been detected with ground-based observatories: @monje2011c have made use of the substantial redshift of the Cloverleaf quasar at $z$=2.56, which shifts the HF 1–0 line into the submillimeter window attainable with the CSO on Mauna Kea, and @kawaguchi2016 detect it in the $z$=0.89 absorber toward PKS1830$-$211, using ALMA in the Chilean Atacama desert. Because of its large dipole moment and high Einstein $A$ coefficient for radiative decay, rotational states $J$$\neq$0 of HF only become significantly populated in highly energetic conditions. It is for this reason that HF has been clearly detected in emission in a mere handful of cases: in the inner region of an AGB star’s envelope [IRC+10216, @agundez2011], in the Orion Bar photodissociation region [@vandertak2012a], and in an external galaxy harboring an actively accreting black hole [Mrk231, @vanderwerf2010]. Atomic fluorine, F, has a unique place in the interstellar chemistry of simple molecules.
It is the only element which, simultaneously, (1) is mainly neutral because of its ionization potential $>$13.6 eV, (2) reacts exothermically with $\mathrm{H_2}$ – unlike *any* other neutral atom – to form its neutral diatomic hydride HF, and (3) lacks an efficient chemical pathway to produce its hydride cation HF$^+$ due to the strongly endothermic nature of the reaction with H${_3}^+$. We refer to @neufeld2009b, references therein, and the comprehensive review by @gerin2016 for more details on the chemistry of HF and a comparison with other hydride molecules. For the reasons listed above, chemical models predict that essentially all interstellar F is locked in HF molecules [@zhu2002; @neufeld2005], which has been confirmed by observations across a wide range of atomic and molecular ISM conditions [e.g., @sonnentrucker2010; @sonnentrucker2015]. With recent experimental results by @tizniti2014 showing that, especially at low temperatures approaching 10 K, the reaction F + $\mathrm{H_2}$ $\rightarrow$ HF + H proceeds somewhat more slowly than earlier assumptions suggested, chemical models are now able to reproduce HF/$\mathrm{H_2}$ ratios of $\sim$, measured most directly by @indriolo2013a, and observed to be rather stable across different sightlines. Interferometric observations show that CF$^+$, the next most abundant F-bearing species after HF, has an abundance roughly two orders of magnitude lower than HF, both inside our Galaxy [@liszt2015b] and in an extragalactic absorber [@muller2016]. As for destruction of HF, the most efficient processes are UV photodissociation and reactions with C$^+$, but both of these are unable to drive the majority of fluorine out of HF, due to shielding, already at modest depths of $A_V>0.1$ [@neufeld2005]. Because of the constant HF/$\mathrm{H_2}$ abundance ratio and the high probability that HF molecules are in the rotational ground state, measurements of HF $J$=0$\rightarrow$1 absorption provide a straightforward proxy of column density.
This has led to the suggestion that, at least in diffuse gas, HF absorption is a more reliable tracer of total gas column density than the widely used carbon monoxide (CO) rotational *emission* lines, and is more sensitive than CH or absorption [e.g., @gerin2016]. Apart from the uncertain and variable CO abundance, local excitation conditions have a profound effect on the level populations of CO, complicating the conversion from observed line strength of a particular CO transition to column density [@bolatto2013]. The greatest gas-phase CO abundance variations occur in dense, cold regions where CO freezes out onto surfaces of dust grains, proven by observed CO abundances decreasing in the gas phase and increasing in the ice phase as conditions get colder [e.g., @jorgensen2005a; @pontoppidan2005a]. In addition, the particular fraction of the neutral ISM that is in the diffuse/translucent phase is inconspicuous in CO [@bolatto2013], but is detectable using hydride absorption lines. Of course, for absorption line studies, one relies on lines of sight with sufficiently strong continuum background, for example those toward dense star-forming clouds. Such restrictions do not apply for emission line tracers. Besides CO rotational lines, fine structure line emission due to atomic C and the C$^+$ and N$^+$ ions has been used as a tracer of (diffuse) gas throughout the Galaxy [e.g., @langer2014a; @velusamy2014b; @gerin2015a; @goicoechea2015b; @goldsmith2015]. For all these tracers, however, the conversion to column density depends strongly on physical properties such as ionization fraction and excitation conditions.
Based on the above arguments, HF absorption measurements are a good tracer of overall gas column density. However, as addressed for example by @philips2010 and @emprechtinger2012, HF itself may suffer from freeze-out effects as occurs with other interstellar molecules. While studies have been done on the interaction of with HF as a polluting agent in the Earth’s atmosphere [@girardet2001], the density and temperature conditions needed for HF adsorption onto dust grains have not been studied in astrophysical contexts so far. Any freeze-out of interstellar HF will obfuscate the direct connection between HF absorption depth and column density described above. The well-known progression of pre- and protostellar stages for stars with masses similar to the Sun [@shu1977] is not
---
abstract: 'A versatile and efficient variational approach is developed to solve in- and out-of-equilibrium problems of generic quantum spin-impurity systems. Employing the discrete symmetry hidden in spin-impurity models, we present a new canonical transformation that completely decouples the impurity and bath degrees of freedom. Combining it with Gaussian states, we present a family of many-body states to efficiently encode nontrivial impurity-bath correlations. We demonstrate its successful application to the anisotropic and two-lead Kondo models by studying their spatiotemporal dynamics and universal behavior in the correlations, relaxation times and the differential conductance. We compare them to previous analytical and numerical results. In particular, we apply our method to study new types of nonequilibrium phenomena that have not been studied by other methods, such as long-time crossover in the ferromagnetic easy-plane Kondo model. The present approach will be applicable to a variety of unsolved problems in solid-state and ultracold-atomic systems.'
author:
- Yuto Ashida
- Tao Shi
- Mari Carmen Bañuls
- 'J. Ignacio Cirac'
- Eugene Demler
bibliography:
- 'reference.bib'
title: Solving quantum impurity problems in and out of equilibrium with variational approach
---
Understanding out-of-equilibrium dynamics of quantum many-body systems has become one of the central problems in physics. Recent experimental developments in diverse fields such as ultracold atoms [@EM11; @CM12; @FuT15; @KAM16; @RL17], mesoscopic physics [@DFS02; @THE11; @LC11; @IZ15; @MMD17], molecular electronics [@BDN11], and carbon nanotubes [@MA07; @CSJ12] have posed new theoretical questions for studying many-body dynamics driven by external fields or fast changes in the Hamiltonian. Quantum spin-impurity models (SIM), such as the famous Kondo model [@KJ64], constitute a paradigmatic class of many-body systems which lie at the heart of many strongly correlated systems. Their nonequilibrium dynamics underlie transport phenomena in mesoscopic systems [@GL88; @NTK88; @LW02; @SF99; @WWG00; @RMP07; @KAV11] and non-Fermi liquid behavior in heavy fermion materials [@HAC97; @LH07; @SQ10], and provide the theoretical foundation for the real-time formulation of dynamical mean-field theory (DMFT) [@GA96].
The ground-state properties of SIM are now well established by perturbative renormalization group (RG) [@APW70], numerical renormalization group (NRG) [@WKG75] and the Bethe ansatz [@PBW80; @AND80; @ANL81; @NK81; @AN83; @SP89]. The challenging and fascinating question of out-of-equilibrium dynamics has recently come under active investigations in theory [@AFB05; @AFB06; @AFB07; @AFB08; @RD08; @JE10; @LB14; @WSR04; @SP04; @AHKA06; @DSLG08; @SH08; @WA09; @HMF09; @HMF10; @NHTM17; @NM15; @DB17; @STL08; @WP09; @SM09; @WP10; @CG13; @NP99; @KA00; @AR03; @HA08; @KM01; @PM10; @HA09L; @HA09B; @CT11; @BS14; @FS15; @BCZ17; @LF96; @LF98; @SA98; @LD05; @VR13; @SG14; @MM13; @BCJ16] and experiments [@RL17; @DFS02; @THE11; @LC11; @IZ15; @MMD17]. Examples include time-dependent NRG [@AFB05; @AFB06; @AFB07; @AFB08; @RD08; @JE10; @LB14], density-matrix renormalization group (DMRG) [@WSR04; @SP04; @AHKA06; @DSLG08; @SH08; @WA09; @HMF09; @HMF10; @NHTM17], time evolving block decimation (TEBD) [@NM15; @DB17], real-time Monte Carlo [@STL08; @WP09; @SM09; @WP10; @CG13], perturbative RG [@NP99; @KA00; @AR03; @HA08; @KM01; @PM10], flow equation method [@HA09L; @HA09B; @CT11], coherent-state expansion [@BS14; @FS15; @BCZ17], and exact analyses [@LF96; @LF98; @SA98; @LD05; @VR13; @SG14; @MM13; @BCJ16]. Despite the rich variety of methods, they often become increasingly costly at long times due to, e.g., artifacts of the logarithmic discretization [@RA12] or large entanglement in the time-evolved state [@SU11]. Some of them can only determine the dynamics of the impurity but not that of the bath, or are restricted to particular parameter regimes. Moreover, it remains a major challenge to apply them to generic SIM beyond the simplest Kondo models. These challenges motivate the search for new approaches to quantum impurity systems.
![\[fig\_aniso\] (a) Ground-state impurity-bath spin correlation $\chi^{z}_{l}$ of the anisotropic Kondo model. (a,inset) The RG phase diagram and the parameters $(j_\parallel,j_\perp)$ corresponding to I $(-0.5,0.2)$ (blue square) in the ferromagnetic phase (FM), II $(0.5,0.2)$ (red triangle) and III $(-1.85,2)$ (brown circle) in the antiferromagnetic phase (AFM). (b) Quench dynamics of the impurity magnetization $\langle\hat{\sigma}_{\rm imp}^{z}(t)\rangle$. (c) The corresponding spatiotemporal dynamics of correlations $\chi^{z}_{l}(t)$ in I FM phase, II AFM phase, III easy-plane FM regime and IV the same as in III but on a different scale. System size is $L=400$. ](fig_aniso_arxiv.pdf){width="86mm"}
In this Letter, introducing a new canonical transformation, we present a widely applicable variational approach to study in- and out-of-equilibrium properties of generic SIM. Besides the ability to efficiently capture the correct impurity-bath correlations and the conductance behavior, it reveals previously unexplored nonequilibrium dynamics such as ferromagnetic (FM) to antiferromagnetic (AFM) crossover (see the panels III and IV in Fig. \[fig\_aniso\]c) in the FM easy-plane Kondo model. Such long-time spatiotemporal dynamics is difficult (if not impossible) to obtain in other approaches. Our versatile variational approach will pave the way towards solving interesting novel problems in both solid-state and ultracold-atomic systems. [*Canonical transformation.—*]{} We first formulate our approach in the most general way as it is applicable to a wide class of SIM. The difficulty in SIM stems from the need to treat the strong entanglement between the impurity and bath. Here we introduce a new canonical transformation that completely decouples the impurity spin and bath degrees of freedom. We consider the Hamiltonian [ $$\begin{aligned}
\label{totalH}
\hat{H}=\hat{H}_{\rm bath}+\hat{H}_{\rm int}+\hat{H}_{\rm imp},\end{aligned}$$ ]{} where $\hat{H}_{\rm bath}=\sum_{lm\alpha}\hat{\Psi}^{\dagger}_{l\alpha}h_{lm}\hat{\Psi}_{m\alpha}$ describes an arbitrary single-particle Hamiltonian, with fermionic or bosonic creation (annihilation) operator $\hat{\Psi}^{\dagger}_{l\alpha}$ ($\hat{\Psi}_{l\alpha}$) for the $l$-th bath mode with spin $\alpha$. For simplicity, we consider a noninteracting spin-1/2 bath with $\alpha=\uparrow,\downarrow$ [^1]. The Hamiltonian $\hat{H}_{\rm int}=\hat{\bf s}_{\rm imp}\cdot\hat{\bf \Sigma}$ represents a generic interaction between the impurity and the bath with $\hat{\bf s}_{\rm imp}=\hat{\boldsymbol \sigma}_{\rm imp}/2$ being the impurity spin-1/2 operator. We define the bath-spin operator including couplings as $\hat{\Sigma}^{\gamma}=\sum_{l}g_{l}^{\gamma}\hat{\sigma}_{l}^{\gamma}/2$ with $\hat{\sigma}_{l}^{\gamma}=\sum_{\alpha\beta}\hat{\Psi}_{l\alpha}^{\dagger}\sigma^\gamma_{\alpha\beta}\hat{\Psi}_{l\beta}$. The interaction strengths $g_{l}^{\gamma}$ are arbitrary and can be anisotropic and long-range. We also include the impurity Hamiltonian as $\hat{H}_{\rm imp}=-h_z\hat{s}_{\rm imp}^{z}$. Paradigmatic examples having the interaction form $\hat{H}_{\rm int}$ include the Kondo-type Hamiltonians [@KJ64] where the coupling $g_{l}^\gamma$ is local, and the central spin model [@JS03] where an interaction is long-range while $\hat{H}_{\rm bath}$ is frozen.
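As a sanity check of the interaction form $\hat{H}_{\rm int}=\hat{\bf s}_{\rm imp}\cdot\hat{\bf \Sigma}$, the sketch below builds the central-spin special case for a handful of bath spins by exact construction. This is only a toy NumPy illustration under our own naming, not the variational method developed here; it can be used to verify, e.g., hermiticity and conservation of total $S^z$:

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def site_op(op, site, n_sites):
    """Embed a single-site operator at `site` in an n_sites tensor product."""
    out = np.array([[1.0 + 0j]])
    for k in range(n_sites):
        out = np.kron(out, op if k == site else s0)
    return out

def central_spin_hamiltonian(g, hz=0.0):
    """H = sum_l g[l] s_imp . s_l - hz s_imp^z, with the impurity at site 0.

    A toy central-spin instance of H_int; the couplings g may be arbitrary
    (e.g., long-range), mirroring the generality described in the text.
    """
    n = len(g) + 1
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for l, gl in enumerate(g, start=1):
        for s in (sx, sy, sz):
            # s_imp = sigma/2, so each pair term carries a factor 1/4.
            H += gl * 0.25 * site_op(s, 0, n) @ site_op(s, l, n)
    H += -0.5 * hz * site_op(sz, 0, n)
    return H
```

Exact constructions like this are limited to a few bath modes, since the dimension grows as $2^n$; this is precisely why efficient variational families such as the one presented here are needed.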
To construct the canonical
---
abstract: 'An initial-boundary value problem for the time-fractional diffusion equation is discretized in space using continuous piecewise-linear finite elements on a polygonal domain with a re-entrant corner. Known error bounds for the case of a convex polygon break down because the associated Poisson equation is no longer $H^2$-regular. In particular, the method is no longer second-order accurate if quasi-uniform triangulations are used. We prove that a suitable local mesh refinement about the re-entrant corner restores second-order convergence. In this way, we generalize known results for the classical heat equation due to Chatzipantelidis, Lazarov, Thomée and Wahlbin.'
author:
- Kim Ngan Le
- William McLean
- Bishnu Lamichhane
bibliography:
- 'nonconvexrefs.bib'
title: 'Finite element approximation of a time-fractional diffusion problem in a non-convex polygonal domain[^1]'
---
Introduction
============
In a standard model of subdiffusion [@KlafterSokolov2011], each particle undergoes a continuous-time random walk with a common waiting-time distribution that obeys a power law. Consequently, the mean-square displacement of a particle is proportional to $t^\alpha$ with $0<\alpha<1$, and the macroscopic concentration $u(x,t)$ of the particles satisfies the time-fractional diffusion equation $$\label{eq: fpde}
\partial_t u-\partial_t^{1-\alpha}K\nabla^2u=f(x,t).$$ Here, $\partial_t=\partial/\partial t$ and $\nabla^2$ denotes the spatial Laplacian. The fractional time derivative is of Riemann–Liouville type: $$\partial_t^{1-\alpha}v(x,t)=\frac{\partial}{\partial t}\int_0^t
\omega_\alpha(t-s)v(x,s)\,ds,\qquad
\omega_\alpha(t)=\frac{t^{\alpha-1}}{\Gamma(\alpha)}
\quad\text{for~$t>0$.}$$ If no sources or sinks are present, then the inhomogeneous term $f$ is identically zero. We assume for simplicity that the generalized diffusivity $K$ is a positive constant, and that the fractional PDE holds for $x$ in a polygonal domain $\Omega\subseteq{\mathbb{R}}^2$ subject to homogeneous Dirichlet boundary conditions, with the initial condition $$\label{eq: ic}
u(x,0)=u_0(x)\quad\text{for $x\in\Omega$.}$$ In the limiting case when $\alpha\to1$, the fractional PDE reduces to the classical heat equation that arises when the diffusing particles instead undergo Brownian motion.
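The action of the Riemann–Liouville operator $\partial_t^{1-\alpha}$ can be illustrated numerically with a Grünwald–Letnikov discretization, a standard first-order approximation for functions vanishing at $t=0$. This sketch, with our own function names, only illustrates the fractional operator; it is not the spatial finite element discretization studied in this paper:

```python
import math

def gl_weights(mu, n):
    """Grünwald–Letnikov weights g_j = (-1)^j * binom(mu, j), by recurrence."""
    g = [1.0]
    for j in range(1, n + 1):
        g.append(g[-1] * (j - 1 - mu) / j)
    return g

def rl_derivative(v, mu, big_t, n):
    """First-order approximation of the Riemann-Liouville derivative of
    order mu of v at t = big_t, for v with v(0) = 0, on n uniform steps."""
    h = big_t / n
    g = gl_weights(mu, n)
    return sum(g[j] * v(big_t - j * h) for j in range(n + 1)) / h ** mu
```

For instance, with $\mu=1-\alpha$ one has $\partial_t^{\mu}t^2=\Gamma(3)\,t^{2-\mu}/\Gamma(3-\mu)$, which the quadrature reproduces to within a fraction of a percent for a few thousand steps.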
Consider a spatial discretization of the preceding initial-boundary value problem using continuous piecewise-linear finite elements to obtain a semidiscrete solution $u_h$. The behaviour of $u_h$ is well understood if $\Omega$ is convex [@JinLazarovZhou2013; @McLeanThomee2010]: in this case, for general initial data $u_0\in L_2(\Omega)$ and an appropriate choice of $u_h(0)$, $$\|u_h(t)-u(t)\|\le Ct^{-\alpha}h^2\|u_0\|,\qquad0<t\le T,$$ whereas for smoother initial data $u_0\in H^2(\Omega)$, $$\|u_h(t)-u(t)\|\le Ch^2\|u_0\|_{H^2(\Omega)},\qquad0\le t\le T,$$ where $\|\cdot\|=\|\cdot\|_{L_2(\Omega)}$. The error analysis establishing these bounds relies on the $H^2$-regularity property of the associated elliptic equation in $\Omega$, namely, that if $$\label{eq: elliptic bvp}
-K\nabla^2u=f\quad\text{in~$\Omega$,}\quad
\text{with $u=0$ on~$\partial\Omega$,}$$ then $u\in H^2(\Omega)$ with $\|u\|_{H^2(\Omega)}\le C\|f\|$.
In the present work, our aim is to study $u_h$ in the case when $\Omega$ is not convex. Since the above $H^2$-regularity breaks down, we can no longer expect $O(h^2)$ convergence if the finite element mesh is quasi-uniform. Our results generalize those of Chatzipantelidis, Lazarov, Thomée and Wahlbin [@ChatzipantelidisEtAl2006] for the heat equation (the limiting case $\alpha=1$) to the fractional-order case ($0<\alpha<1$). Our method of analysis relies on Laplace transformation, extending the approach of McLean and Thomée [@McLeanThomee2010] for the fractional-order problem on a convex domain.
[Figure \[fig: Omega\]: the polygonal domain $\Omega$ with its re-entrant corner of angle $\pi/\beta$ at the origin $p_0=p_6$, the remaining vertices $p_1,\dots,p_5$, the sides $\Gamma_0,\dots,\Gamma_5$, and the arc marking the radius $r_0$ about the corner.]
To focus on the essential difficulty, we assume that $\Omega$ has only a single re-entrant corner with angle $\pi/\beta$ for $1/2<\beta<1$. Without loss of generality, we assume that this corner is located at the origin and that, for some $r_0>0$, the intersection of $\Omega$ with the open disk $|x|<r_0$ is described in polar coordinates by $$\label{eq: 0 nbhd}
0<r<r_0\quad\text{and}\quad 0<\theta<\pi/\beta,$$ as illustrated in Figure \[fig: Omega\]. We denote the vertices of $\Omega$ by $p_0=(0,0)$, $p_1$, $p_2$, …, $p_J=p_0$, and the $j$th side by $$\Gamma_j=(p_j,p_{j+1})
=\{\,(1-\sigma)p_j+\sigma p_{j+1}: 0<\sigma<1\,\}
\quad\text{for $0\le j\le J-1$.}$$
Section \[sec: elliptic\] summarizes some key facts about the singular behaviour of the solution to the elliptic problem . In Section \[sec: FEM\], we describe a family of shape-regular triangulations ${\mathcal{T}}_h$ (indexed by the mesh parameter $h$) that depend on a local refinement parameter $\gamma\ge1$. The elements near the origin have sizes of order $h^\gamma$, so the ${\mathcal{T}}_h$ are quasi-uniform if $\gamma=1$ but become more highly refined with increasing $\gamma$. Our error bounds will be stated in terms of the quantity $$\label{eq: epsilon}
\epsilon(h,\gamma)=\begin{cases}
h^{\gamma\beta}/\sqrt{\gamma^{-1}-\beta},&1\le\gamma<1/\beta,\\
h\sqrt{\log(1+h^{-1})},&\gamma=1/\beta,\\
h/\sqrt{\beta-\gamma^{-1}},&\gamma>1/\beta,
\end{cases}$$ which ranges in size from $O(h^\beta)$ when $\gamma=1$ (the quasiuniform case) down to $O(h)$ when $\gamma>1/\beta$. We briefly review results for the finite element approximation of the elliptic problem, needed for our subsequent analysis: the error in $H^1(\Omega)$ is of order $\epsilon(h,\gamma)$, and the error in $L
---
abstract: 'Recent empirical studies have confirmed the key roles of complex contagion mechanisms such as memory, social reinforcement, and decay effects in information diffusion and behavior spreading. Inspired by this fact, we propose a new agent-based model to capture the whole picture of the joint action of the three mechanisms in information spreading, by quantifying the complex contagion mechanisms as stickiness and persistence, and we carry out extensive simulations of the model on various networks. By numerical simulations as well as theoretical analysis, we find that the stickiness of the message determines the critical dynamics of message diffusion on tree-like networks, whereas the persistence plays a decisive role on dense regular lattices. In either type of network, greater persistence can effectively make the message more invasive. Of particular interest, our results revise the previous understanding that messages spread more broadly in networks with large clustering, which turns out to be true only when they can inform a non-zero fraction of the population in the limit of large system size.'
author:
- Pengbi Cui
- Ming Tang
- Zhixi Wu
title: 'Message spreading in networks with stickiness and persistence: Large clustering does not always facilitate large-scale diffusion'
---
Over the last few years, many empirical works [@contagion; @decay3; @science; @attention; @contagion2; @origin1; @origin2] and practical models [@zhou; @model2] have identified the strong relevance of complex contagion mechanisms, such as memory, social reinforcement and decay effects, to information diffusion and behavior spreading. Owing to the memory effect, previous contact activities can affect the current spreading process [@memory; @attention]. Specifically, an individual’s selection of message items can be naturally expedited by the increasing frequency of the same choices by other people, if they find the items interesting or crucial enough [@science; @zhou; @decay]. This is usually interpreted as the result of social reinforcement [@rein1; @model2; @theory2]. On the other hand, there is an increasing amount of new messages an individual faces every day in modern life, whereas the attention and processing abilities of people are finite and easily saturated [@attention; @decay; @attention1]. The novelty of a message usually tends to fade with time, and with it the attention people pay to it, which is normally described as a decay effect [@burst2; @attention; @contagion; @decay]. It has been shown that the social reinforcement effect can be weakened or even counterbalanced by decay effects [@attention; @decay; @contagion2].
Although the competition between social reinforcement and decay effects has been emphasized and used as a guideline to measure the natural time scale on which attention fades away [@decay], to the best of our knowledge few works have attempted to model the competition and the memory effect explicitly, and to study in depth how they shape the spreading of information on complex networks. Here we want to point out that the three effects mentioned above are quite different from those considered in studies of the Naming Game (NG) and the Category Game (CG), since either NG or CG is a two-step multi-state negotiation process [@ng1; @ng2; @cg1; @cg2], whereas information spreading is not. First, here the memory effect amounts to storing the number of contacts of people with recipients of information [@science; @zhou], rather than the possible words (or names) for an object (or a category) in the NG (or CG) [@ng1; @ng2; @cg1; @cg2]. Second, the decay effect in information spreading reflects the decay of people’s interest or attention in a message owing to the competition with other news or stories [@attention; @decay], contrary to the NG (CG), in which it means the decrease of the number of different words used in the system (or of the average number of words per category) [@ng1; @ng2; @cg1; @cg2]. Third, unlike the phenomenon that a hearer has more opportunities to add (or retain) a word only if more selected speakers try to transmit the same one to it [@ng1; @ng2; @cg1; @cg2], the reinforcement effect in information diffusion reflects the simpler situation that the more neighbors adopt the message, the higher the likelihood that an individual follows them [@science; @contagion2].
Next, the main challenge we are confronted with is to model and study message spreading with both social reinforcement and decay effects built on top of the memory effect. Recent research [@origin1] has shown that the variation in the ways different pieces of information spread is attributed not only to stickiness – the probability of information adoption being dominated mainly by the first few exposures [@origin2; @origin1] – but also to persistence – the relative extent to which further repeated exposures to the message continue to have a lasting effect. Similar results, especially the exposure–response behaviors, were also confirmed by many empirical studies [@contagion2; @origin2; @attention]. The two mechanisms, stickiness and persistence, thus enable us to quantitatively study the joint action of the three effects.
The structures of complex social systems can be characterized by complex networks, on which many spreading processes take place, ranging from the spreading of epidemics [@tang1; @tang2; @tang3; @tang4] and the diffusion of behaviors and news [@science; @zhou] to the promotion of technical innovations [@innovation]. Motivated by the empirical studies mentioned above [@science; @zhou; @attention; @contagion2; @origin1; @origin2; @decay], we propose a new agent-based model to explore how social reinforcement and decay effects, quantified by stickiness and persistence, shape message (information) diffusion on various networks. In the presence of strong decay effects, we find that a message is more likely to break out (i.e., to reach a non-zero fraction of the population in the thermodynamic limit) on tree-like networks, such as scale-free (SF) networks and Erd[ő]{}s-R[é]{}nyi (ER) random networks, than on regular lattices (RLs). However, once a message does break out, it spreads more broadly on the RLs than on the tree-like networks. The critical behavior of the diffusion process can be reasonably estimated by bond-percolation theory that takes into account the spatial correlations of the underlying networks through which the message diffuses. In addition, we develop a verification approximation whose solutions confirm the non-negligible role of dynamical correlations between transmission events in the RLs.
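On locally tree-like networks, the bond-percolation mapping invoked above reduces the outbreak condition to the standard configuration-model criterion $T_c=\langle k\rangle/(\langle k^2\rangle-\langle k\rangle)$ for the critical transmission probability. The sketch below evaluates only this generic criterion; it does not include the spatial or dynamical correlations accounted for in the main text, and the function and variable names are ours:

```python
import numpy as np

def outbreak_threshold(degrees):
    """Critical bond-occupation (transmission) probability for a
    configuration-model network: T_c = <k> / (<k^2> - <k>)."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (np.mean(k ** 2) - k.mean())

# Erdos-Renyi graphs have Poisson degrees; with mean <k> one gets
# <k^2> = <k>^2 + <k>, hence T_c = 1/<k>.
rng = np.random.default_rng(0)
er_degrees = rng.poisson(8.0, size=200_000)
print(outbreak_threshold(er_degrees))   # close to 1/8 = 0.125
```

Applied naively to a degree-regular graph ($k=4$ for the square lattice) the same formula gives $T_c=1/3$, but, as stated above, this tree-like estimate is unreliable there, which is exactly why the spatially correlated treatment is needed.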
Results {#results .unnumbered}
=======
Here, we first carry out extensive simulations of the agent-based model of message diffusion on a square lattice. We then compare the simulation results with the predictions of the analytical bond-percolation theory and of a verification method involving time correlations of the spreading events. Finally, we extend our model and analytical methods to other networks, such as RLs, SF networks, and ER networks, to validate the robustness of our findings.
**Message diffusion on a square lattice.** \[square\] We first consider message diffusion on a square lattice of size $N=L\times L$ with periodic boundary conditions. The message starts spreading from the central node (selected as the seed), while all other nodes are initially in the susceptible state (i.e., they have heard nothing about the message).
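As a toy illustration of this setup (not the exact update rule, which is specified in the Methods), the sketch below seeds the centre of a periodic square lattice and lets each susceptible node, upon a new exposure, adopt the message with an assumed exposure-response probability $\min(1, a\,m^{b})$, where $m$ is its cumulative number of exposures; here $a$ stands in for the stickiness and $b$ for the persistence:

```python
import numpy as np

def spread(L=101, a=0.45, b=0.0, steps=200, seed=0):
    """Toy message diffusion on an LxL periodic lattice.
    States: 0 susceptible, 1 infected (spreading), 2 recovered.
    Each susceptible node remembers m, its cumulative number of
    exposures; on each new exposure it adopts with probability
    min(1, a * m**b) -- an assumed illustrative exposure-response
    curve, not the exact rule of the paper's Methods."""
    rng = np.random.default_rng(seed)
    state = np.zeros((L, L), np.int8)
    memory = np.zeros((L, L), np.int32)
    state[L // 2, L // 2] = 1
    for _ in range(steps):
        infected = state == 1
        if not infected.any():
            break
        # count infected neighbours (von Neumann, periodic boundaries)
        exposures = sum(np.roll(infected, s, axis=ax)
                        for ax in (0, 1) for s in (1, -1)).astype(np.int32)
        sus = state == 0
        new_m = memory + exposures
        p = np.clip(a * np.where(new_m > 0, new_m, 1) ** b, 0.0, 1.0)
        adopt = sus & (exposures > 0) & (rng.random((L, L)) < p)
        state[infected] = 2          # spreaders become recovered
        state[adopt] = 1
        memory[sus] = new_m[sus]
    return (state > 0).mean()        # final informed fraction

print(spread())
```

With $a=1$ the wave deterministically covers the whole lattice, while $a=0$ leaves only the seed informed; intermediate values produce irregular patterns like those in Fig. \[spatial\].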
![**The time evolution of spatial patterns,** for two values of the stickiness, (a1) $a=0.40$ and (a2) $a=0.45$, with $b=0$ (top panels); and for two values of the persistence, (b1) $b=-1.00$ and (b2) $b=1.00$, with $a=0.45$ (bottom panels). Red sites represent recovered or alerted nodes, bright green sites represent infected ones, and blue sites denote susceptible nodes. Other parameters are chosen as $n_{s}=2$ and $N=101\times 101$.[]{data-label="spatial"}](spatial.jpeg){width="\textwidth"}
To grasp intuitively the roles of stickiness and persistence, we begin by presenting the time evolution of spatial patterns of message spreading in Fig. \[spatial\]. A message with stickiness $a=0.40$ (see the Methods for the precise definitions of $a$, $b$ and other parameters) spreads in an irregular manner (see Fig. \[spatial\](a1)). By comparison, a message with a slightly stronger stickiness $a=0.45$ diffuses outward into susceptible areas in a quasi-circular manner with a broader rim of informed (infected) individuals (see Fig. \[spatial\](a2)). This indicates that messages with different strengths of stickiness can give rise to quite different spreading patterns and behaviors. Figs. \[spatial\](b1), (b2) and Supplementary Fig. S1 show that the persistence $b$ also considerably affects the total spreading size by governing the number of isolated susceptible islands (blue domains surrounded by red areas; these islands emerge because, owing to small persistence, even a continually increasing number of infected neighbors fails to infect those individuals). The above observations suggest that stickiness and persistence have strong but distinct influences on the spreading of a message.
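The isolated susceptible islands can be counted with a standard cluster-labelling pass; below is a small sketch using `scipy.ndimage.label` on a hand-built pattern. It treats clusters touching the border as the outer background, a simplification relative to the periodic boundaries used in the simulations:

```python
import numpy as np
from scipy import ndimage

def susceptible_islands(state):
    """Count isolated susceptible domains (state == 0) that are NOT
    connected to the outer susceptible background, i.e. blue islands
    enclosed by the informed (red/green) region."""
    sus = (state == 0)
    labels, n = ndimage.label(sus)          # 4-connectivity by default
    # clusters touching the lattice border count as outer background
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    outer = set(border[border > 0])
    return n - len(outer)

# toy pattern: informed region (1) enclosing one susceptible island (0)
state = np.ones((7, 7), np.int8)
state[3, 3] = 0          # enclosed island
state[0, :] = 0          # part of the outer susceptible background
print(susceptible_islands(state))   # -> 1
```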
![**The evolution of proportions of the transmission events.** The parameters are chosen at (a) subcritical point $a=0.20$, $b=0.20$; (b) critical point $a=0.35$, $b=0.20$; and (c) supercritical
|
{
"pile_set_name": "ArXiv"
}
| null |
---
abstract: 'We consider a two-dimensional colloidal dispersion of soft-core particles driven by a one-dimensional stochastic flashing ratchet that induces a time-averaged directed particle current through the system. The system undergoes a non-equilibrium melting transition as the directed current approaches a maximum associated with a resonance of the ratcheting frequency with the relaxation frequency of the system. We use extensive molecular dynamics simulations to present a detailed phase diagram in the plane of ratcheting rate and mean density. With the help of the numerically calculated structure factor, solid and hexatic order parameters, and pair correlation functions, we show that the non-equilibrium melting is a continuous transition from a quasi-long-ranged ordered solid to a hexatic phase. The transition is mediated by the unbinding of dislocations and the formation of compact and string-like defect clusters.'
author:
- Shubhendu Shekhar Khali
- Dipanjan Chakraborty
- Debasish Chaudhuri
title: 'Structure-dynamics relationship in ratcheted colloids: Resonance melting, dislocations, and defect clusters'
---
Introduction {#sec:introduction}
============
A class of non-equilibrium driven systems called pump models is particularly intriguing due to the following property: they involve periodic forces, in time and space, that vanish under spatio-temporal averaging but still drive an overall directed current [@Julicher1997a; @Astumian2002; @Hanggi2009; @Reimann2002; @Brouwer1998; @Citro2003; @Jain2007; @Chaudhuri2011; @Chaudhuri2015; @Chaudhuri2015f]. This is achieved via the breaking of time-reversal symmetry through, e.g., a phase lag between spatially non-local drives [@Brouwer1998; @Jain2007; @Chaudhuri2011], or the breaking of the space-inversion symmetry of the external potential profile [@Julicher1997a; @Reimann2002; @Astumian2002; @Hanggi2009]. Most biological processes generating directed motion involve reaction cycles and utilize some variant of this principle. Natural examples include ion pumps, e.g., the Na$^+$, K$^+$-ATPase pumps, and molecular motors [@Gadsby2009], e.g., kinesin or myosin moving on polymeric tracks of microtubules or F-actins, respectively [@Reimann2002]. The flashing ratchet model has been used to describe molecular motor locomotion [@Julicher1997a]. In experiments on colloids, ratcheting can be generated using optical [@Faucheux1995; @Lopez2008], magnetic [@Tierno2010; @Tierno2012] or electrical fields [@Rousselet1994; @Leibler1994; @Marquet2002]. Most studies of pump models have focused on systems of non-interacting particles restricted to one dimension, with a few exceptions that analyzed the impact of interactions on molecular motors [@Derenyi1995; @Derenyi1996], collective properties of particle pumps [@Jain2007; @Marathe2008; @Chaudhuri2011; @Chaudhuri2015; @Chaudhuri2015f], and ratchet models [@Savelev2004; @Pototsky2010; @Savelev2003; @Hanggi2009].
In a recent study, we used an asymmetric periodic potential that switches between an [*on*]{} and an [*off*]{} state in a stochastic manner to drive a directed current of particles in a two-dimensional (2d) dispersion of sterically stabilized colloids [@Chakraborty2014], focusing on the frequency and density dependence of the ratcheted current. As the rate of ratcheting is varied, the time-averaged directed current carried by the colloids shows a resonance with the system’s relaxation frequency [@Chakraborty2014]. The current shows a non-monotonic dependence on density as well. This change in the dynamical properties, as we show in this paper, is closely related to associated structural changes: e.g., the solid melts near the resonance frequency.
In the limit of extremely high switching frequency, much higher than the inherent relaxation rate of the colloids, the system can respond only to an essentially time-averaged potential profile. If, in addition, one considers the limit of vanishing asymmetry in the potential profile, the scenario becomes equivalent to that of the re-entrant laser-induced melting transition (RLIM) [@Chowdhury1985; @Wei1998; @Frey1999; @Chaudhuri2006], in which a high-density colloidal liquid undergoes solidification followed by melting as the strength of a commensurate external periodic potential is increased. This is an equilibrium phase transition of the Kosterlitz-Thouless type [@Frey1999; @Chaudhuri2006], and is described in terms of the unbinding of a specific type of dislocations allowed by the potential profile.
In this paper we consider asymmetric ratcheting of soft-core particles, and investigate the structural transitions associated with the change in the dynamical behavior of the system, observed in terms of its current-carrying capacity. Using large-scale molecular dynamics simulations, we obtain the phase diagram in the density and ratcheting-rate plane, showing melting from a solid to a hexatic phase. We find a re-entrant solid-hexatic-solid transition with changing ratcheting frequency. The transitions are associated with a non-monotonic variation of the mean directed current. As we demonstrate in detail, the non-equilibrium melting is a continuous transition from a quasi-long-ranged ordered (QLRO) solid to a hexatic phase, and is mediated by the formation of topological defects. The dominant defect types generated at the solid melting are dislocations, and compact or string-like defect clusters.
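The hexatic order invoked here is conventionally quantified by the bond-orientational order parameter $\psi_6(\mathbf{r}_j)=\frac{1}{N_b}\sum_k e^{6 i \theta_{jk}}$, with $\theta_{jk}$ the bond angle to neighbour $k$. A minimal sketch, using the six nearest neighbours of each particle in place of the Voronoi/Delaunay construction usually employed in practice:

```python
import numpy as np

def psi6(points):
    """Local hexatic order parameter for each particle:
    psi6_j = (1/N_b) sum_k exp(6 i theta_jk).  For simplicity the
    neighbours are taken as the 6 nearest particles (a Voronoi
    construction is the standard choice in production analyses)."""
    pts = np.asarray(points, dtype=float)
    d = pts[:, None, :] - pts[None, :, :]
    r = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(r, np.inf)
    nb = np.argsort(r, axis=1)[:, :6]            # 6 nearest neighbours
    rows = np.arange(len(pts))[:, None]
    theta = np.arctan2(d[rows, nb, 1], d[rows, nb, 0])
    return np.exp(6j * theta).mean(axis=1)

# perfect triangular lattice -> |psi6| = 1 for bulk particles
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pts = np.array([i * a1 + j * a2 for i in range(12) for j in range(12)])
order = np.abs(psi6(pts))
print(order.mean())
```

In a QLRO solid the orientational correlations of $\psi_6$ decay algebraically at most; in the hexatic phase positional order is lost while $\psi_6$ correlations survive quasi-long-ranged.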
{width="\linewidth"}
In \[sec:model\_simulation\] we present the model and the details of the numerical simulations. In \[sec:results\_discussion\] we discuss the detailed phase diagram, explaining the properties of the different non-equilibrium phases. The associated variation of the driven directed current with driving frequency and density is shown in \[ssec:current\]; in that section we establish the relation between the changing particle current and the non-equilibrium phase transitions. This is followed by a detailed analysis of the melting transitions in terms of the order parameters, presented in \[ssec:reentrant\_transition\]. In the following three subsections, the phase transitions are further characterized in terms of the distribution functions of the order parameters, correlation functions, and the formation of topological defects. We conclude with a discussion and outlook in \[sec:outlook\].
Model and Simulation Details {#sec:model_simulation}
============================
We consider a two-dimensional system of a repulsively interacting colloidal suspension of $N$ particles in a volume $A=L_x L_y$. The mean inter-particle separation $a$ is set by the particle density $\rho = N/A$ through $a^2=2/(\sqrt{3} \rho)$. We assume that the colloids repel each other via a shifted soft-core potential $U(r)=\e \,[(\sigma/r)^{12}-2^{-12}]$ when the inter-particle separation $r< r_c$, with $r_c=2 \sigma$, and $U(r)=0$ otherwise. The units of energy and length are set by $\e$ and $\s$, respectively. The system evolves under an asymmetric ratchet potential $U_{\rm ext}(x,y,t)=V(t) \left[\sin \left(2 \pi y/\lambda \right)+\alpha \sin \left(4 \pi y/\lambda \right) \right]$, where the time-dependent strength $V(t)$ switches between $\e$ and $0$ stochastically with a rate $f$. The two sinusoidal terms in the above expression of $U_{\rm ext}$, with $\a=0.2$, maintain the asymmetric shape of the potential profile. When the system assumes a triangular lattice structure, the separation between consecutive lattice planes is $a_y=\sqrt{3} a/2$. We have chosen the periodicity of the external potential $\lambda = a_y$, commensurate with the mean lattice spacing. In the absence of the external potential, the soft-core solid is expected to undergo a two-stage solid-hexatic-liquid transition [@Kosterlitz1973; @Halperin1978; @Young1979; @Kapfer:2015ca], with the solid melting point at $\r \s^2 \approx 1.01$. In the presence of a time-independent potential profile with $V(t)=U_0$ and $\a=0$, the system undergoes RLIM with increasing $U_0$ [@Wei1998; @Frey1999; @Chaudhuri2006]. At $U_0=\e$, the laser-induced melting point of the soft-core solid is $\r \s^2 = 0.95$ [@Chaudhuri2006].
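In units $\e=\s=1$ and with $\lambda=1$, the two potentials and the stochastic flashing defined above can be sketched as follows; the telegraph-process generator and the symmetry check are illustrations, not the production simulation code:

```python
import numpy as np

EPS, ALPHA, LAM = 1.0, 0.2, 1.0      # units eps = sigma = 1; lambda = a_y

def u_pair(r):
    """Shifted soft-core repulsion; the shift 2**-12 makes U vanish
    continuously at the cutoff r_c = 2 sigma."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 2.0, EPS * ((1.0 / r) ** 12 - 2.0 ** -12), 0.0)

def u_ext(y, on=True):
    """Asymmetric ratchet potential; V(t) flashes between eps and 0."""
    v = EPS if on else 0.0
    return v * (np.sin(2 * np.pi * y / LAM)
                + ALPHA * np.sin(4 * np.pi * y / LAM))

def switching_times(rate, t_max, rng):
    """Event times of the stochastic flashing: a telegraph process with
    exponentially distributed waiting times of mean 1/rate."""
    t, times = 0.0, []
    while t < t_max:
        t += rng.exponential(1.0 / rate)
        times.append(t)
    return np.array(times[:-1])      # drop the event beyond t_max

# the alpha-term breaks the reflection symmetry about y = lambda/4 that
# the pure sine would have, which is what makes the potential a ratchet
dy = np.linspace(0.0, 0.25, 101)
asym = np.max(np.abs(u_ext(0.25 + dy) - u_ext(0.25 - dy)))
print(asym)                          # ~ 0.4 for alpha = 0.2

rng = np.random.default_rng(0)
flips = switching_times(rate=2.0, t_max=50.0, rng=rng)
print(len(flips))
```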
We perform molecular dynamics simulations of the system in the presence of an external ratcheting potential using the standard leap-frog algorithm [@Frenkel2002] with a time-step $\d t = 0.001\,\t$ where $\t= \s \sqrt{m/\e}$ is the characteristic time scale. We use $m=1$. The temperature of the system is kept constant at $T =1.0 \e/\kb$ using a Langevin thermostat characterized by an
|
{
"pile_set_name": "ArXiv"
}
| null |
---
abstract: 'The two-dimensional $\mathcal{N}=(2,2)$ Wess–Zumino (WZ) model with a cubic superpotential is numerically studied with a momentum-cutoff regularization that preserves supersymmetry. A numerical algorithm based on the Nicolai map is employed and the resulting configurations have no autocorrelation. This system is believed to flow to an $\mathcal{N}=(2,2)$ superconformal field theory (SCFT) in the infrared (IR), the $A_2$ model. From a finite-size scaling analysis of the susceptibility of the scalar field in the WZ model, we determine $1-h-\Bar{h}=0.616(25)(13)$ for the conformal dimensions $h$ and $\Bar{h}$, while $1-h-\Bar{h}=0.666\dots$ for the $A_2$ model. We also measure the central charge in the IR region from a correlation function between conserved supercurrents and obtain $c=1.09(14)(31)$ ($c=1$ for the $A_2$ model). These results are consistent with the conjectured emergence of the $A_2$ model, and at the same time demonstrate that numerical studies can be complementary to analytical investigations for this two-dimensional supersymmetric field theory.'
address:
- 'Graduate School of Science, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima-ku, Tokyo 171-8501, Japan'
- 'Theoretical Research Division, RIKEN Nishina Center, Wako 2-1, Saitama 351-0198, Japan'
author:
- Syo Kamata
- Hiroshi Suzuki
bibliography:
- '<your-bib-database>.bib'
title: 'Numerical simulation of the $\mathcal{N}=(2,2)$ Landau–Ginzburg model'
---
Supersymmetry, Non-perturbative study, Landau–Ginzburg model, Nicolai map
Introduction
============
It is believed that the infrared (IR) limit of the two-dimensional $\mathcal{N}=(2,2)$ Wess–Zumino model[^1] (2D $\mathcal{N}=(2,2)$ WZ model) with a quasi-homogeneous superpotential[^2] is a non-trivial $\mathcal{N}=(2,2)$ superconformal field theory (SCFT) [@Kastor:1988ef; @Vafa:1988uu; @Lerche:1989uy; @Howe:1989qr; @Cecotti:1989jc; @Howe:1989az; @Cecotti:1989gv; @Cecotti:1990kz; @Witten:1993jg]. See Section 19.4 of Ref. [@Polchinski:1998rr] and Section 14.4 of Ref. [@Hori:2003ic] for reviews. This Landau–Ginzburg (LG) description [@Zamolodchikov:1986db] of $\mathcal{N}=(2,2)$ SCFT is a remarkable non-perturbative phenomenon in field theory and, for example, provides a physical basis for the application of the gauged linear sigma model [@Witten:1993yc] to Calabi–Yau compactification. Although the emergence of the SCFT has been tested in various ways, it is very difficult to confirm this phenomenon directly in correlation functions, because the 2D WZ model is strongly coupled at low energies. Application of conventional numerical techniques (such as the lattice) is not straightforward either, because supersymmetry (SUSY) must be essential in the above non-perturbative dynamics.
In a recent interesting paper [@Kawai:2010yj], Kawai and Kikukawa revisited this problem and they computed non-perturbatively some correlation functions in the 2D WZ model by employing a lattice formulation of Ref. [@Kikukawa:2002as]. They considered the 2D $\mathcal{N}=(2,2)$ WZ model with a massless cubic superpotential $$W(\Phi)=\frac{\lambda}{3}\Phi^3,
\label{eq:(1.1)}$$ which, according to the conjectured correspondence, should provide a LG description of a pair of $\mathcal{N}=2$ $c=1$ minimal models, one left-moving and the other right-moving (the so-called $A_2$ model). In the IR limit, the scalar field $A$ of the WZ model is identified with a chiral primary field in the $A_2$ model with conformal dimensions $(h,\Bar{h})=(1/6,1/6)$ and $U(1)$ charges $(q,\Bar{q})=(1/3,1/3)$. (The complex conjugate $A^*$ is identified with an anti-chiral primary field with $(h,\Bar{h})=(1/6,1/6)$ and $(q,\Bar{q})=(-1/3,-1/3)$.) The authors of Ref. [@Kawai:2010yj] obtained finite-size scalings of scalar two-point functions that are remarkably consistent with the above SCFT correspondence, thus demonstrating the power of a lattice formulation of this supersymmetric field theory.[^3]
In this paper, motivated by the success of Ref. [@Kawai:2010yj], we study the 2D $\mathcal{N}=(2,2)$ WZ model with the massless cubic superpotential numerically. We employ a non-perturbative formulation advocated in Ref. [@Kadoh:2009sp] that uses a simple momentum-cutoff regularization. Although there is an issue concerning locality in this formulation, the restoration of the expected locality property can be shown at least within perturbation theory [@Kadoh:2009sp]. This formulation possesses very nice symmetry properties: it exactly preserves full SUSY, translational invariance, and linear internal symmetries such as the $R$-symmetry. We believe that these symmetry properties are especially useful for defining Noether currents in the regularized framework. In fact, by defining conserved supercurrents and identifying the components of the superconformal currents in the IR limit, we numerically measure the central charge of the system in the IR region. Together with a measurement of the conformal dimension, this forms the main result of the present paper.
Throughout this paper, Greek indices from the middle of the alphabet, $\mu$, $\nu$, …, run over $0$ and $1$. Greek indices from the beginning of the alphabet, $\alpha$, $\beta$, …, are spinor indices and run over $1$ and $2$. Repeated indices are *not* summed over unless an explicit summation symbol is indicated. We extensively use the complex coordinates defined by $$z\equiv x_0+ix_1,\qquad\Bar{z}\equiv x_0-ix_1,$$ and $$\partial_z\equiv\frac{1}{2}(\partial_0-i\partial_1),\qquad
\partial_{\Bar{z}}\equiv\frac{1}{2}(\partial_0+i\partial_1).$$ Conjugate momenta are defined by $$p_z\equiv\frac{1}{2}(p_0-ip_1),\qquad
p_{\Bar{z}}\equiv\frac{1}{2}(p_0+ip_1).$$ Two-dimensional gamma matrices are defined by $$\gamma_0\equiv\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
\gamma_1\equiv\begin{pmatrix}0&i\\-i&0\end{pmatrix},$$ and $$\gamma_z\equiv\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad
\gamma_{\Bar{z}}\equiv\begin{pmatrix}0&0\\1&0\end{pmatrix}.$$
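These conventions can be cross-checked numerically: with the matrices above one finds $\gamma_z=(\gamma_0-i\gamma_1)/2$ and $\gamma_{\Bar{z}}=(\gamma_0+i\gamma_1)/2$, mirroring the definitions of $\partial_z$ and $p_z$ (this identity is our consistency check, not a statement taken from the text):

```python
import numpy as np

g0 = np.array([[0, 1], [1, 0]], dtype=complex)
g1 = np.array([[0, 1j], [-1j, 0]], dtype=complex)
gz = np.array([[0, 1], [0, 0]], dtype=complex)
gzb = np.array([[0, 0], [1, 0]], dtype=complex)
I2 = np.eye(2)

# Euclidean Clifford algebra: {gamma_mu, gamma_nu} = 2 delta_{mu nu}
assert np.allclose(g0 @ g0, I2) and np.allclose(g1 @ g1, I2)
assert np.allclose(g0 @ g1 + g1 @ g0, 0)

# complex-coordinate combinations, mirroring d/dz and p_z
assert np.allclose(gz, (g0 - 1j * g1) / 2)
assert np.allclose(gzb, (g0 + 1j * g1) / 2)

# gamma_z and gamma_zbar are nilpotent (chiral raising/lowering)
assert np.allclose(gz @ gz, 0) and np.allclose(gzb @ gzb, 0)
print("conventions consistent")
```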
Supersymmetric formulation of the 2D $\mathcal{N}=(2,2)$ WZ model
=================================================================
We start by recapitulating the formulation of Ref. [@Kadoh:2009sp]. We suppose that the system is defined in a finite box with a physical size $L_0\times L_1$. The Fourier modes $\Tilde{f}(p)$ of a periodic function in the box are defined by $$f(x)=\frac{1}{L_0L_1}\sum_pe^{ipx}\Tilde{f}(p),\qquad
\Tilde{f}(p)=\int d^2x\,e^{-ipx}f(x),$$ where the momentum $p$ takes discrete values $$p_\mu=\frac{2\pi}{L_\mu}\,n_\mu,\qquad n_\mu=0,\pm1,\pm2,\dots,$$ and by the definition, $$\Tilde{f^*}(p)=\Tilde{f}(-p)^*,$$ where the left-hand side denotes the Fourier transformation of the complex conjugate of $f(x)$, $f(x)^*$. In the present formulation [@Kadoh:2009sp], we restrict the momentum $p$ by an ultraviolet (UV) cutoff $\Lambda$, $$-\Lambda\leq p_\mu\leq\Lambda,\qquad\text{for $\mu=0$, $1$}.
\label{eq:(2.4)}$$ We parametrize this UV cutoff by a “lattice spacing” $a
|
{
"pile_set_name": "ArXiv"
}
| null |