---
abstract: 'In this paper a geometric solution to the dark energy problem is proposed, assuming that space can be divided into regions of size $\sim L_{p}$ and energy $\sim E_{p}$. Significantly, this assumption generates an energy density similar to the observed vacuum energy density, the correct solution to the coincidence problem, and the equation of state characteristic of quintessence in comoving coordinates. The ultraviolet and infrared limits and the amount of dark energy in the Universe are also studied.'
author:
- 'Miguel A. García-Aspeitia'
title: 'About the Geometric Solution to the Problems of Dark Energy.'
---
Introduction.
=============
One of the most intriguing problems of modern cosmology is the accelerated expansion of the Universe. The best explanation for this problem is the existence of an unknown kind of energy predicted neither by the standard model of particles nor by the general theory of relativity. This dark component is called dark energy (DE), with the property of accelerating the expansion rate of the Universe. Currently there exist different kinds of models trying to explain the nature and the behavior of DE. For example, the best models for DE are quintessence, phantom energy, the cosmological constant and higher-dimensional theories, each of which tries to mimic the behavior of DE. However, the discordance between the theoretical predictions and the observations puts into question the validity of these models at large and Planck scales.
To understand DE in detail, we enumerate its main problems as follows
1. *The fine tuning problem.* Observational evidence shows that the energy density today must be $\vert\rho^{obs}\vert\leqslant2\times10^{-10}erg/cm^{3}$ [@Carroll]; this implies that the theoretical predictions must relate two quantities that appear to be unrelated [@Bousso1]
$$t_{DE}\sim t_{obs} \label{time}$$
where $t_{DE}\sim\rho_{DE}^{-1/2}$ is the domination time of the DE and $t_{obs}$ is the time at which observers exist.
2. *The coincidence problem.* Another problem caused by DE is the coincidence problem, which can be summarized in the following question: why does the Universe start accelerating today ($\sim13.7\times10^{9} yrs$)? Any good model should address this question.
3. *The “central” problems.* At this point we refer to the main features of DE: the amount of DE in the Universe, the equation of state, and the convergence of $\langle\rho\rangle$ in the infrared and ultraviolet limits.
In the following sections we will focus on addressing the above points one by one, with the aim of finding a physical explanation for the problem of DE.
In the following, CGS units are used unless explicitly written otherwise.
The three main problems of dark energy.
=======================================
The fine tuning problem.
------------------------
Before presenting the proposal, it is important to stress that we work in *physical coordinates* [@Liddle].
*The proposal.* The idea is to assume that the Universe is full of minimal regions of equal size [@Miguel]. It is assumed that a region smaller than the one given by the equation would collapse into a Planckian black hole, where all the physical information is lost. Then it is possible to write these minimal regions as
$$L_{p}=\frac{1}{\sqrt{2}}\sqrt{\frac{\hbar G}{c^3}}\approx\sqrt{\frac{\hbar G}{c^3}}, \label{Lp}$$
where $G$ is the gravitational Newton constant, $c$ is the speed of light and $\hbar$ is the reduced Planck constant. It is important to remark that the last expression in the equation is the well-known Planck length.
Then, intuitively, we assume that inside a compact region of the Universe of size $L_{0}$ there exist $n_{0}$ minimal regions of size $L_{p}$, immersed in the following way $$n_{0}=\left(\frac{L_{0}}{L_{p}}\right).$$ Note that we count the number of Planck regions along a preferred direction and not along the three spatial directions of space (see Figure \[fig:1\]). The answer to why this is done can have deeper implications, which will be discussed in Section IV.
![Sketch of the grid hypothesis. The figure shows the minimal regions or “bricks” of size $L_{p}$ and minimal energy $E_{p}$, as well as the size of the Universe $L_{0}$ and the preferential direction of the counting $n_{0}$.[]{data-label="fig:1"}](Grid.pdf)
Returning to the idea, it is possible to assume that each region has an energy [@Miguel] written as
$$E_{p}=\frac{1}{2\sqrt{2}}\sqrt{\frac{\hbar c^{5}}{G}}\approx\sqrt{\frac{\hbar c^{5}}{G}}, \label{Ep}$$
where the last expression of the equation is the Planck energy. Then it is possible to obtain the total energy provided by all the regions in the Universe as $$\langle E_{T_{0}}\rangle\approx\sum_{i=1}^{n_{0}}E_{p_{i}}=n_{0}E_{p}.$$ On the other hand, observational evidence shows that at large scales the Universe behaves as a homogeneous and isotropic space expanding in time with a flat geometry. Flatness implies that the geometry of the hypersurface is Euclidean $\mathbb{R}^3$; then, it is possible to define the volume of the Universe as [@Bousso] $$V_{0}\approx\frac{4}{3}\pi L^{3}_{0}.$$ Defining the energy density as $\langle\rho_{Y}\rangle=\langle E_{T_{0}}\rangle/V_{0}$, it is possible to write it in the following way
$$\langle\rho_{Y}\rangle_{L_{0}}\approx\frac{3}{4\pi L_{0}^{3}}\sum_{i=1}^{L_{0}/L_{p}}E_{pi}\approx\left(\frac{3c^4}{8\pi G}\right)L_{0}^{-2}. \label{enrgdens}$$
Now, using the evidence that the Universe is finite, it is possible to compactify it on a hypersurface and assign an approximate present size of the Universe as $L_{0}\approx ct_{0}$, at a fixed moment of time, where $t_{0}$ is the present age of the Universe [@Holo]. Then the previous expression can be written as
$$\langle\rho_{Y}\rangle_{t_{0}}\approx\left(\frac{3c^2}{8\pi G}\right)t_{0}^{-2}, \label{t}$$
which shows the relation between the present cosmological time and the energy density of the DE, $t_{0}\sim\rho_{Y}^{-1/2}$, helping to solve the fine tuning problem, with $\langle\rho_{Y}\rangle_{t_{0}}\approx8.62\times10^{-9}erg/cm^{3}$.
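The estimate above can be checked numerically; the following is a minimal sketch in CGS units, using standard approximate values of $c$ and $G$ and the present age of the Universe quoted in the next subsection:

```python
import math

# Approximate CGS constants
c = 2.998e10   # speed of light [cm/s]
G = 6.674e-8   # Newton constant [cm^3 g^-1 s^-2]
t0 = 4.32e17   # present age of the Universe [s]

# Energy density from Eq. (t): <rho_Y> ~ (3 c^2 / 8 pi G) t0^{-2}
rho_Y = 3.0 * c**2 / (8.0 * math.pi * G) / t0**2
print(f"<rho_Y> ~ {rho_Y:.2e} erg/cm^3")  # ~ 8.6e-9 erg/cm^3
```

The result reproduces the value $\approx8.6\times10^{-9}\,erg/cm^{3}$ up to rounding of the constants.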
The Recent Acceleration.
------------------------
Another problem for dark energy is the coincidence of the recent onset of acceleration ($\sim4.32\times10^{17} s$). In this model, we explore a possible solution to the problem in the following way:
In fact, the Universe has a specific quantity of baryons, radiation, neutrinos and dark matter, which could collapse the Universe through their gravitational interactions. Then, it is possible to obtain from observations the total mass of the Universe as $M_{u}\sim10^{56}g$. With this amount of matter, a minimum energy is necessary to accelerate the Universe, which can be written as $$E_{min}=n_{acc}E_{p}\gtrsim G\frac{M_{u}^2}{L_{acc}},$$ where $n_{acc}=(L_{acc}/L_{p})$. Assuming that we know the Newtonian potential, it is straightforward to demonstrate the following equation
$$L_{acc}\gtrsim\sqrt{\frac{GL_{p}}{E_{p}}}M_{u}=\frac{\sqrt{2}GM_{u}}{c^{2}}\approx R_{s}, \label{accel}$$
where $L_{acc}$ is the size of the Universe at the moment of acceleration. It is possible to observe that this minimal length must be approximately the Schwarzschild radius $R_{s}$.
Inserting numbers into the last equation one obtains $L_{acc}\gtrsim1.0482\times10^{28}$ $cm$ $\approx L_{0}$; this implies that $t_{acc}\approx3.49\times10^{17} s$, which coincides well with the moment of acceleration.
The Universe starts the acceleration at this age, and never before, while the mass is $M_{u}\sim10^{56}g$.
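A numerical sketch of Eq. \[accel\] with approximate CGS constants; small deviations from the figures quoted above come only from rounding of $G$ and $c$:

```python
import math

# Approximate CGS constants; M_u is the total mass quoted in the text.
c = 2.998e10    # speed of light [cm/s]
G = 6.674e-8    # Newton constant [cm^3 g^-1 s^-2]
M_u = 1.0e56    # total mass of the Universe [g]

# Eq. (accel): L_acc ~ sqrt(2) G M_u / c^2 (approximately the
# Schwarzschild radius R_s), and t_acc = L_acc / c.
L_acc = math.sqrt(2.0) * G * M_u / c**2
t_acc = L_acc / c
print(f"L_acc ~ {L_acc:.3e} cm, t_acc ~ {t_acc:.2e} s")
```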
The Amount of Dark Energy in the Universe.
------------------------------------------
In cosmology, the critical energy density relates the content of the Universe to its geometry and is defined as
$$\rho_{crit}(t_{0})=\frac{3H^2_{0}c^{2}}{8\pi G}, \label{crit}$$
where $H_{0}$ is the Hubble rate today. If $\rho>\rho_{crit}$ the geometry is $\mathbb{S}^{3}$, $\rho\sim\rho_{crit}$ the geometry is $\
---
abstract: 'The concepts of Feynman integrals in white noise analysis are used to realize the Feynman integrand for a charged particle in a constant magnetic field as a Hida distribution. For this purpose we identify the velocity dependent potential as a so called generalized Gauss kernel.'
address:
- |
Functional Analysis and Stochastic Analysis Group,\
Department of Mathematics,\
University of Kaiserslautern, 67653 Kaiserslautern, Germany
- |
Functional Analysis and Stochastic Analysis Group,\
Department of Mathematics,\
University of Kaiserslautern, 67653 Kaiserslautern, Germany
- |
Functional Analysis and Stochastic Analysis Group,\
Department of Mathematics,\
University of Kaiserslautern, 67653 Kaiserslautern, Germany
author:
- Wolfgang Bock
- Martin Grothaus
- Sebastian Jung
title: The Feynman integrand for the Charged Particle in a Constant Magnetic field as White Noise Distribution
---
Introduction
============
As an alternative approach to quantum mechanics Feynman introduced the concept of path integrals ([@F48; @Fe51; @FeHi65]), which was developed into an extremely useful tool in many branches of theoretical physics. In this article we use concepts for realizing Feynman integrals in the framework of white noise analysis. The Feynman integral for a particle moving from $0$ at time $0$ to $\mathbf{y} \in {\mathbb{R}}^d$ at time $t$ under the potential $V$ is given by $$\label{eqnfey}
{\rm N} \int_{\mathbf{x}(0)=0, \mathbf{x}(t)=\mathbf{y}} \int \exp\left(\frac{i}{\hbar} \int_0^t \frac{1}{2}m\dot{\mathbf{x}}^2 -V(\mathbf{x},\dot{\mathbf{x}}) \, d\tau \right) \prod_{0<\tau<t} d\mathbf{x}(\tau),\quad \hbar = \frac{h}{2\pi}.$$ Here $h$ is Planck’s constant, and the integral is thought of being over all paths with $\mathbf{x}(0)=0$ and $\mathbf{x}(t)=\mathbf{y}$.\
In the last fifty years there have been many approaches for giving a mathematically rigorous meaning to the Feynman integral by using e.g. analytic continuation, limits of finite-dimensional approximations, or Fresnel integrals. Instead of giving a complete list of publications concerning Feynman integrals we refer to [@AHKM08] and the references therein. Here we choose a white noise approach. White noise analysis is a mathematical framework which offers generalizations of concepts from finite-dimensional analysis, such as differential operators and the Fourier transform, to an infinite-dimensional setting. We give a brief introduction to white noise analysis in Section 2, for more details see [@Hid80; @HKPS93; @Ob94; @BK95; @Kuo96]. Of special importance in white noise analysis are spaces of generalized functions and their characterizations. In this article we choose the space of Hida distributions, see Section 2.\
The idea of realizing Feynman integrals within the white noise framework goes back to [@HS83]. There the authors used exponentials of quadratic (generalized) functions in order to give meaning to the Feynman integral in configuration space representation $${\rm N}\int_{\mathbf{x}(0) =0, \mathbf{x}(t)=y} \exp\left(\frac{i}{\hbar} S(\mathbf{x}) \right) \, \prod_{0<\tau<t} \, d\mathbf{x}(\tau) ,\quad \hbar = \frac{h}{2\pi},$$ with the classical action $S(\mathbf{x})= \int_0^t \frac{1}{2} m \dot{\mathbf{x}}^2 -V(\mathbf{x})\, d\tau$. We use these concepts of quadratic actions in white noise analysis, which were further developed in [@GS98a] and [@BG10] to give a rigorous meaning to the Feynman integrand $$\begin{gathered}
\label{integrandpot}
I_V = {\rm Nexp}\left( \frac{i}{\hbar}\int_0^t \frac{m}{2} \dot{\mathbf{x}}(\tau)^2 d\tau +\frac{1}{2}\int_0^t \dot{\mathbf{x}}(\tau)^2 d\tau\right)\\
\times \exp\left(-\frac{i}{\hbar} \int_0^t V(\mathbf{x}(\tau),\dot{\mathbf{x}}(\tau),\tau) \, d\tau\right) \cdot \delta_0(\mathbf{x}(t)-y)\end{gathered}$$ as a Hida distribution. In this expression the sum of the first and the third integral in the exponential is the action $S(\mathbf{x},\mathbf{\dot{x}})$, and the delta function (Donsker’s delta) serves to pin trajectories to $\mathbf{y}$ at time $t$. The second integral is introduced to simulate the Lebesgue integral locally, by compensating the fall-off of the Gaussian reference measure $\mu$. Furthermore we use a two-dimensional Brownian motion starting in $0$ as the path, i.e. $$\label{varchoice}
\mathbf{x}(\tau)=\sqrt{\frac{\hbar}{m}}\mathbf{B}(\tau).$$ The construction is done in terms of the $T$-transform (an infinite-dimensional version of the Fourier transform w.r.t. a Gaussian measure), which characterizes Hida distributions, see Theorem \[charthm\]. At the same time, the $T$-transform of the constructed Feynman integrands provides us with their generating functional. Finally, using the generating functional, we can show that the generalized expectation (generating functional at zero) gives the Green’s function to the corresponding Schrödinger equation.\
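The path parametrization of Eq. \[varchoice\] can be sketched with a simple Euler discretization of a two-dimensional Brownian motion; the units ($\hbar=m=1$), the time horizon, and the step count below are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

hbar, m = 1.0, 1.0        # illustrative units, not physical values
t, n_steps = 1.0, 1000
dt = t / n_steps

# Two-dimensional Brownian motion B(tau) starting in 0: independent
# N(0, dt)-distributed increments in each component.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 2))
B = np.vstack([np.zeros((1, 2)), np.cumsum(dB, axis=0)])

# Path x(tau) = sqrt(hbar/m) * B(tau), as in Eq. (varchoice)
x = np.sqrt(hbar / m) * B
print(x.shape, x[0])
```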
In this article we consider the potential given by the action of a constant magnetic field on a moving particle. From classical physics it is well known that a magnetic field exerts the so-called Lorentz force on a charged particle moving through it. The corresponding potential term for a charged particle moving in the $(1,2)$-plane is given by $$({\mathbf{x}},\dot{{\mathbf{x}}}) \mapsto V_{\rm{mag}}({\mathbf{x}},\dot{{\mathbf{x}}})= -\frac{q H_3}{c} \left(x_1\dot{x_2}-\dot{x_1}x_2\right),$$ where $q$ is the charge, $H_3$ the strength of the magnetic field vector orthogonal to the $(1,2)$-plane and $c$ the speed of light.
These are the core results of this article:
- The concepts of generalized Gauss kernels from [@GS98a] and [@BG10] are used to construct the Feynman integrand for a charged particle in a constant magnetic field as a Hida distribution, see Theorem \[magnetictheorem\].
- The results in Theorem \[magnetictheorem\] provide us with the generating functional for a charged particle in a constant magnetic field.
- The generalized expectation (generating functional at zero) yields the Green’s function to the corresponding Schrödinger equation.
White Noise Analysis
====================
Gel’fand Triples
----------------
The starting point is the Gel’fand triple $S_d({\mathbb{R}}) \subset L^2_d({\mathbb{R}},dx) \subset S'_d({\mathbb{R}})$ of the ${\mathbb{R}}^d$-valued, $d \in {\mathbb{N}}$, Schwartz test functions and tempered distributions with the Hilbert space of (equivalence classes of) ${\mathbb{R}}^d$-valued square integrable functions w.r.t. the Lebesgue measure as central space (equipped with its canonical inner product $(\cdot, \cdot)$ and norm $\|\cdot\|$), see e.g. [@W95 Exam. 11]. Since $S_d({\mathbb{R}})$ is a nuclear space, represented as the projective limit of a decreasing chain of Hilbert spaces $(H_p)_{p\in {\mathbb{N}}}$, see e.g. [@RS75a Chap. 2] and [@GV68], i.e. $$S_d({\mathbb{R}}) = \bigcap_{p \in {\mathbb{N}}} H_p,$$ we have that $S_d({\mathbb{R}})$ is a countably Hilbert space in the sense of Gel’fand and Vilenkin [@GV68]. We denote the inner product and the corresponding norm on $H_p$ by $(\cdot,\cdot)_p$ and $\|\cdot\|_p$, respectively, with the convention $H_0 = L^2_d({\mathbb{R}}, dx)$. Let $H_{-p}$ be the dual space of $H_p$ and let $\langle \cdot , \cdot \rangle$ denote the dual pairing on $H_{p} \times H_{-p}$. $H_{p}$ is continuously embedded into $L^2_d({\mathbb{R}},dx)$. By identifying $L_d^2({\mathbb{R}},dx)$ with its dual $L_d^2({\mathbb{R}},dx)'$, via the Riesz isomorphism, we obtain the chain $H_p \subset L_d^2({\mathbb{R}}, dx) \subset H_{-p}$. Note
---
abstract: 'A detailed analytical inspection of light scattering by a particle with high refractive index $m+i\kappa$ and small dissipative constant $\kappa$ is presented. We have shown that there is a dramatic difference in the behavior of the electromagnetic field within the particle (inner problem) and the scattered field outside it (outer problem). With an increase in $m$ at fixed values of the other parameters, the field within the particle asymptotically converges to a periodic function of $m$. The electric and magnetic type Mie resonances of different orders overlap substantially. This may lead to a giant concentration of the electromagnetic energy within the particle. At the same time, we demonstrate that identical transformations of the solution for the outer problem allow us to present each partial scattered wave as a sum of two partitions. One of them corresponds to the $m$-independent wave scattered by a perfectly reflecting particle and plays the role of a background, while the other is associated with the excitation of a sharply $m$-dependent resonant Mie mode. The interference of the partitions brings about a typical asymmetric Fano profile. The explicit expressions for the parameters of the Fano profile have been obtained “from the first principles” without any additional assumptions and/or fitting. In contrast to the inner problem, with an increase in $m$ the resonant modes of the outer problem die out, and the scattered field converges to the universal, $m$-independent profile of the perfectly reflecting sphere. Numerical estimates of the discussed effects for a gallium phosphide particle are presented.'
author:
-
- 'Andrey E. Miroshnichenko'
bibliography:
- 'Mie\_Fano.bib'
title: 'Giant In-Particle Field Concentration and Fano Resonances at Light Scattering by High-Refractive Index Particles'
---
Introduction
============
Presently the resonant light scattering by particles related to excitation of different eigenmodes attracts a great deal of attention of researchers all around the world [@Zhao:MT:2009; @Rybin:PRL:2009; @Evlyukhin:PRB:2011; @Staude:ACSN:2013; @Hancu:NL:2014; @Kuznetsov:NC:2014]. In addition to purely academic interest there is a broad spectrum of applications of the phenomenon in physics, chemistry, biology, medicine, data storage and processing, telecommunications, micro- and nanotechnologies, etc., see, e.g., [@Novotny:Book:2006; @Rybin:PRB:2013]. In particular, plenty of hopes were pinned on the resonant excitation of localized and/or bulk plasmons in metal nanoparticles [@Klimov:Book:2014]. Unfortunately, plasmonic resonances in such nanoparticles are usually accompanied with rather large dissipative losses, which in many cases diminish the advantages of the resonances. For this reason recently the frontier of the corresponding study has been shifted to light scattering by dielectric particles with low losses and high refractive index (HRI) $m$. In contrast to the plasmonic resonances, they exhibit the high $Q$-factor Mie resonances of both electric and magnetic types which bring more opportunities for wider applications in sensing, spontaneous emission enhancement, and unidirectional scattering.
Despite the fact that the exact Mie solution, describing light scattering by a sphere with an arbitrary size and material properties, is known for more than a hundred years, and the case of a sphere with HRI has been repeatedly discussed in textbooks and monographs, see, e.g. [@Landau:T8:1984; @Hulst::1981], some important peculiarities of this problem have not been disclosed yet. Meanwhile, the quantitative feature of HRI brings about qualitatively new effects and paradoxes, which merely do not exist at moderate values of the refractive index, see, e.g. [@Evlyukhin:PRB:2010; @Tribelsky:EPL:2012; @Geffrin:NC:2012]. In the present paper we produce a detailed, systematic study of light scattering by a HRI particle. Specifically, we show that the scattering may be accompanied with a *giant concentration* of the electromagnetic field within the particle and reveal the nature of the Fano resonances exhibited by partial scattered waves.
It is well known that at the limit $m \rightarrow \infty$ a small (relative to the incident light wavelength) dielectric sphere scatters light as a perfectly reflecting one (PRS) “into which neither the electric, nor the magnetic field penetrates" [@Landau:T8:1984]. Then, it may be concluded that the electromagnetic field within the scattering particle should vanish at $m \rightarrow \infty$. Seemingly, the conclusion is supported by the argument that a HRI implies a high polarizability of the sphere. Then, at $m \rightarrow \infty$ from the point of view of the polarizability by an external electric field a dielectric sphere becomes equivalent to a perfectly conducting one [@Landau:T8:1984], and the electric field induced within the particle owing to its polarization by the incident light should compensate the field inducing the polarization. That is to say, the field within the particle should vanish.
In fact, the question is much more subtle, and the actual situation is far from this simple picture. The point is that the wavelength inside the particle vanishes at $m \rightarrow \infty$. Then, no matter how small the particle is, at large enough $m$ the wavelength within the particle becomes smaller than the particle size. In this case the incident wave may resonantly excite the Mie electromagnetic eigenmodes in the particle. Moreover, an unlimited growth in $m$ results in infinite cascades of these resonances. The interference of the resonant eigenmodes with the incident wave and, what is the most important, with each other gives rise to dramatic changes in the aforementioned simple scattering process. To reveal these changes is the goal of our study. To this end the full Mie theory is employed.
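The threshold implied by this argument can be made explicit: the wavelength inside the particle is $\lambda/m$, and it drops below the particle diameter $2R$ once $m > \lambda/(2R) = \pi/x$, with $x=2\pi R/\lambda$ the size parameter. A small sketch with purely illustrative values:

```python
import math

# Illustrative values only: a subwavelength particle (x < 1).
lam = 500e-9   # vacuum wavelength [m]
R = 50e-9      # particle radius [m]

x = 2.0 * math.pi * R / lam          # size parameter
m_threshold = math.pi / x            # lambda/m < 2R  <=>  m > pi/x

print(f"x = {x:.3f}; lambda_in < 2R once m > {m_threshold:.1f}")
```

For this choice the internal wavelength fits inside the particle already for $m>5$, so a small particle with large enough $m$ can host Mie eigenmodes.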
We show that while at $m \rightarrow \infty$ the scattered field for the outer problem does converge to the one for the PRS (no matter, whether the sphere is small, or large), the field within the particle, though it sounds paradoxical, does not have any limit at all. Such a difference between the outer and inner problems is related to the different lineshapes of the Mie resonances in the two problems. For the former an increase in $m$ makes the resonances less pronounced. For the latter in the non-dissipative limit the amplitude of the resonances increases with an increase in $m$. In the case of a finite dissipation rate (regardless how small it is), the growth of the amplitudes eventually saturates and the resonance lines become periodic functions of $m$. In both the cases the field within the particle does not tend to any fixed limit at $m \rightarrow \infty$.
It is important to stress, that the resonance lines of different orders and different origin (i.e., electric and magnetic) may overlap substantially. The mentioned peculiarities of the inner field in the vicinity of the resonances may result in a *giant concentration* of the electromagnetic energy inside the particle. At realistic values of the refractive index and the proper selection of the particle radius the field inside the particle may exceed the one in the incident wave in several orders of magnitude. Such a huge field may give rise to numerous nonlinear effects. For this reason the discussed results may appear extremely important in the design and fabrication of highly-nonlinear nanostructures.
Regarding the outer problem, it is known that the scattering coefficients in this problem have well-pronounced asymmetric Fano resonance lines. Recent publications by Rybin et al. [@Rybin:OE:2013; @Rybin:FTT:2014; @Rybin:SciRep:2014] should be mentioned in this context. Based on the analysis of the exact Lorenz-Mie solution for a cylinder, the authors of these publications have revealed that the resonant Mie scattering can be presented through infinite cascades of the Fano resonances between the narrow-line resonant Mie scattering and the non-resonant (background) scattering from the object. The analytical expressions for both partitions have been obtained through the Maxwell boundary conditions. The numerical fit of the lineshape resulting from the exact solution in the vicinity of the resonances to the conventional Fano profile [@Fano:PR:1961] has allowed the authors to obtain the dependence of the Fano asymmetry parameter $q$ [@Fano:PR:1961] on the ratio of the radius of the cylinder $R$ to the wavelength of the incident light, $x =2\pi R/\lambda$ (size parameter), in rather a broad range of its variations. They also have shown that in the inspected cases $q(x) \sim −\cot x$. This dependence agrees with their previous results for disordered photonic crystals [@Rybin:NatCom:2012], as well as with the general expression for $q$ in terms of the phase shift of the background partition [@Connerade:RPP:1988].
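For orientation, the conventional Fano lineshape [@Fano:PR:1961] used in the fits discussed above is $F(\epsilon)=(q+\epsilon)^{2}/(1+\epsilon^{2})$, where $\epsilon$ is the dimensionless detuning; a minimal numerical sketch (the value $q=2$ is an arbitrary illustration):

```python
def fano(eps, q):
    """Conventional Fano profile F(eps) = (q + eps)^2 / (1 + eps^2)."""
    return (q + eps) ** 2 / (1.0 + eps ** 2)

q = 2.0
# The profile vanishes at eps = -q and peaks at eps = 1/q with F = 1 + q^2,
# which produces the characteristic asymmetric lineshape.
print(fano(-q, q), fano(1.0 / q, q))  # -> 0.0 5.0
```

The sign and magnitude of $q$ control the asymmetry; $|q|\to\infty$ recovers a symmetric Lorentzian, $q=0$ a symmetric dip.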
Although the study by these authors is a big step forward toward understanding the essence of the Fano resonances at light scattering by a particle, they have not disclosed the physical nature of the background partition. Regarding the results obtained by the numerical fit, the great advantage of this procedure is the possibility to fit any curve with any set of basis functions. However, precisely because of that, based on the fitting alone one can never answer the question whether the studied profile is indeed a Fano profile, or whether it is *just fitted* to that profile. It also remains unclear how far beyond the inspected numerical domain the obtained results could be extended, e.g., what happens with the modes with the multip
---
abstract: |
We generalize the unique decoding algorithm for one-point AG codes over the Miura-Kamiya $C_{ab}$ curves proposed by @lee11 to general one-point AG codes, without any assumption. We also extend their unique decoding algorithm to list decoding, modify it so that it can be used with the Feng-Rao improved code construction, prove the equality between its error-correcting capability and half the minimum distance lower bound by @andersen08 (which had not been done in the original proposal except for one-point Hermitian codes), remove unnecessary computational steps so that it can run faster, and analyze its computational complexity in terms of multiplications and divisions in the finite field. As a unique decoding algorithm, the proposed one is as fast as the BMS algorithm for one-point Hermitian codes, and as a list decoding algorithm it is much faster than the algorithm by @beelen10.\
**Keywords:** algebraic geometry code, Gröbner basis, list decoding\
**MSC 2010:** Primary: 94B35; Secondary: 13P10, 94B27, 14G50
author:
- 'Ryutaroh Matsumoto[^1], Diego Ruano[^2], and Olav Geil'
date: 'April 22, 2013'
title: 'List Decoding Algorithm based on Voting in Gröbner Bases for General One-Point AG Codes'
---
Introduction
============
We consider the list decoding of one-point algebraic geometry (AG) codes. @guruswami99 proposed the well-known list decoding algorithm for one-point AG codes, which consists of the interpolation step and the factorization step. The interpolation step has large computational complexity and many researchers have proposed faster interpolation steps, see [@beelen10 Figure 1].
By modifying the unique decoding algorithm [@lee11] for primal one-point AG codes, we propose another list decoding algorithm based on voting in Gröbner bases whose error correcting capability is higher than [@guruswami99] and whose computational complexity is smaller than [@beelen10; @guruswami99] in many cases. A decoding algorithm for primal one-point AG codes was proposed in [@ldecodepaper], which was a straightforward adaptation of the original Feng-Rao majority voting for the dual AG codes [@fengrao93] to the primal ones. The Feng-Rao majority voting in [@ldecodepaper] for one-point primal codes was generalized to multi-point primal codes in [@beelen08 Section 2.5]. The one-point primal codes can also be decoded as multi-point dual codes with majority voting [@beelen07; @duursma11; @duursma10], whose faster version was proposed in [@sakata11] for the multi-point Hermitian codes. @lee11 proposed another unique decoding (not list decoding) algorithm for primal codes based on the majority voting inside Gröbner bases. The module used by them [@lee11] is a curve theoretic generalization of one used for Reed-Solomon codes in [@kuijper11] that is a special case of the module used in [@lee08]. An interesting feature in [@lee11] is that it did not use differentials and residues on curves for its majority voting, while they were used in [@beelen08; @ldecodepaper]. The above studies [@beelen08; @lee11; @ldecodepaper] dealt with the primal codes. We recently proved in [@gmr13] that the error-correcting capabilities of [@lee11; @ldecodepaper] are the same. The earlier papers [@duursma94; @pellikaan93] suggest that central observations in [@andersen08; @gmr13; @ldecodepaper] were known to the Dutch group, which is actually the case [@duursma12pcomm]. 
@chen99, @elbrondjensen99 and @amoros06 studied the error-correcting capability of the Feng-Rao [@fengrao93] or the BMS algorithm [@sakata95b; @sakata95a] with majority voting beyond half the designed distance that are applicable to the dual one-point codes.
There was room for improvements in the original result [@lee11], namely, (a) they have not clarified the relation between its error-correcting capability and existing minimum distance lower bounds except for the one-point Hermitian codes, (b) they have not analyzed the computational complexity, (c) they assumed that the maximum pole order used for code construction is less than the code length, and (d) they have not shown how to use the method with the Feng-Rao improved code construction [@feng95]. We shall (1) prove that the error-correcting capability of the original proposal is always equal to half of the bound in [@andersen08] for the minimum distance of one-point primal codes (Proposition \[prop:AG\]), (2) generalize their algorithm to work with any one-point AG codes, (3) modify their algorithm to a list decoding algorithm, (4) remove the assumptions (c) and (d) above, (5) remove unnecessary computational steps from the original proposal, (6) analyze the computational complexity in terms of the number of multiplications and divisions in the finite field. The proposed algorithm is implemented on the Singular computer algebra system [@singular313], and we verified that the proposed algorithm can correct more errors than [@beelen10; @guruswami99] with manageable computational complexity.
This paper is organized as follows: Section \[sec2\] introduces notations and relevant facts. Section \[sec3\] improves [@lee11] in various ways, and the differences to the original [@lee11] are summarized in Section \[sec:diff\]. Section \[sec4\] shows that the proposed modification to [@lee11] works as claimed. Section \[sec:experiment\] compares its computational complexity with the conventional methods. Section \[sec6\] concludes the paper. Part of this paper was presented at 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA, July 2012 [@gmr12isit].
Notation and Preliminary {#sec2}
========================
Our study heavily relies on the standard form of algebraic curves introduced independently by @geilpellikaan00 and @miura98, which is an enhancement of earlier results [@miura92; @saints95]. Let $F/\mathbf{F}_q$ be an algebraic function field of one variable over a finite field $\mathbf{F}_q$ with $q$ elements. Let $g$ be the genus of $F$. Fix $n+1$ distinct places $Q$, $P_1$, …, $P_n$ of degree one in $F$ and a nonnegative integer $u$. We consider the following one-point algebraic geometry (AG) code $$C_u = \{ {\mathrm{ev}}(f) \mid f \in \mathcal{L}(uQ)\}
\label{eq:cu}$$ where ${\mathrm{ev}}(f) = (f(P_1)$, …, $f(P_n))$. Suppose that the Weierstrass semigroup $H(Q)$ at $Q$ is generated by $a_1$, …, $a_t$, and choose $t$ elements $x_1$, …, $x_t$ in $F$ whose pole divisors are $(x_i)_\infty = a_iQ$ for $i=1$, …, $t$. We do *not* assume that $a_1$ is the smallest among $a_1$, …, $a_t$. Without loss of generality we may assume the availability of such $x_1$, …, $x_t$, because otherwise we cannot find a basis of $C_u$ for every $u$. Then we have that $\mathcal{L}(\infty Q) = \cup_{i=1}^\infty\mathcal{L}(iQ)$ is equal to $\mathbf{F}_q[x_1$, …, $x_t]$ [@saints95]. We express $\mathcal{L}(\infty Q)$ as a residue class ring $\mathbf{F}_q[X_1$, …, $X_t]/I$ of the polynomial ring $\mathbf{F}_q[X_1$, …, $X_t]$, where $X_1$, …, $X_t$ are transcendental over $\mathbf{F}_q$, and $I$ is the kernel of the canonical homomorphism sending $X_i$ to $x_i$. @geilpellikaan00 [@miura98] identified the following convenient representation of $\mathcal{L}(\infty Q)$ by using Gröbner basis theory [@adams94]. The following review is borrowed from [@miuraform]. Hereafter, we assume that the reader is familiar with the Gröbner basis theory in [@adams94].
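As a toy illustration of the evaluation-map construction (not the general curve setting of this paper): when $F$ is the rational function field and $Q$ the pole of $x$ at infinity, $\mathcal{L}(uQ)$ consists of the polynomials of degree at most $u$, and $C_u$ is a Reed-Solomon code. A brute-force sketch over $\mathbf{F}_7$ with illustrative parameters:

```python
from itertools import product

# C_u = {ev(f) : f in L(uQ)} with L(uQ) = polynomials of degree <= u
# over F_q, evaluated at n distinct rational places P_1, ..., P_n.
q, u = 7, 2
pts = [1, 2, 3, 4, 5, 6]   # n = 6 distinct evaluation points in F_7
n = len(pts)

def ev(coeffs):
    """Evaluation map ev(f) = (f(P_1), ..., f(P_n)) over F_q."""
    return tuple(sum(c * pow(p, i, q) for i, c in enumerate(coeffs)) % q
                 for p in pts)

codewords = {ev(f) for f in product(range(q), repeat=u + 1)}
dim = u + 1                # here genus 0 and u < n, so dim C_u = u + 1
d_min = min(sum(c != 0 for c in w) for w in codewords if any(w))
print(len(codewords), d_min)  # -> 343 4  (an MDS [6, 3, 4] code)
```

A nonzero polynomial of degree at most $u$ has at most $u$ roots, so the minimum weight is $n-u=4$, matching the brute-force count.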
Let $\mathbf{N}_0$ be the set of nonnegative integers. For $(m_1$, …, $m_t)$, $(n_1$, …, $n_t) \in
\mathbf{N}_0^t$, we define the weighted reverse lexicographic monomial order $\succ$ such that $(m_1$, …, $m_t)$ $\succ$ $(n_1$, …, $n_t)$ if $a_1 m_1 + \cdots + a_t m_t > a_1 n_1 + \cdots + a_t n_t$, or $a_1 m_1 + \cdots + a_t m_t = a_1 n_1 + \cdots + a_t n_t$ and there exists an index $i$ such that $m_i < n_i$ and $m_j = n_j$ for all $j > i$.
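For concreteness, the comparison of exponent tuples under such a weighted order can be sketched in a few lines of Python. The tie-breaking direction below follows the usual graded reverse-lexicographic convention (the tuple whose last differing exponent is smaller is the larger monomial); whether this matches the exact convention of [@miura98] is an assumption of this sketch.

```python
from functools import cmp_to_key

def weighted_revlex_cmp(a):
    """Comparator for a weighted reverse lexicographic order with weight
    vector a = (a_1, ..., a_t): compare total weights first, then break
    ties reverse-lexicographically."""
    def cmp(m, n):
        wm = sum(ai * mi for ai, mi in zip(a, m))
        wn = sum(ai * ni for ai, ni in zip(a, n))
        if wm != wn:
            return 1 if wm > wn else -1
        # reverse lex tie-break: scan exponents from the last coordinate;
        # the tuple whose last differing entry is smaller is the larger one
        for mi, ni in zip(reversed(m), reversed(n)):
            if mi != ni:
                return 1 if mi < ni else -1
        return 0
    return cmp

# with weights (2, 3), the exponents (3, 0) and (0, 2) both have weight 6,
# and the tie-break makes (3, 0) the larger monomial
cmp = weighted_revlex_cmp((2, 3))
print(sorted([(0, 2), (1, 0), (3, 0), (0, 0)], key=cmp_to_key(cmp)))
# [(0, 0), (1, 0), (0, 2), (3, 0)]
```

Sorting the monomials of $\mathcal{L}(\infty Q)$ with such a comparator is exactly what is needed to enumerate a basis of $C_u$ pole order by pole order.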
---
abstract: 'In this paper we study the Cauchy problem for doubly dissipative elastic waves in two space dimensions, where the damping terms consist of two different friction or structural damping. We derive energy estimates and diffusion phenomena with different assumptions on initial data. Particularly, we find the dominant influence on diffusion phenomena by introducing a new threshold of diffusion structure.'
address: |
Institute of Applied Analysis, Faculty for Mathematics and Computer Science\
Technical University Bergakademie Freiberg\
Prüferstra[ß]{}e 9\
09596 Freiberg\
Germany
author:
- Wenhui Chen
date: 'January 1, 2004'
title: Dissipative structure and diffusion phenomena for doubly dissipative elastic waves in two space dimensions
---
Introduction {#Introduction}
============
In this paper we consider the following Cauchy problem for doubly dissipative elastic waves in two space dimensions: $$\label{Eq.DoublyDissElasticWaves}
\left\{
\begin{aligned}
&u_{tt}-a^2\Delta u-\left(b^2-a^2\right)\nabla\operatorname{div}u+(-\Delta)^{\rho}u_t+(-\Delta)^{\theta}u_t=0,&&x\in{\mathbb}{R}^2,\,\,t>0,\\
&(u,u_t)(0,x)=(u_0,u_1)(x),&&x\in{\mathbb}{R}^2,
\end{aligned}
\right.$$ where the unknown $u=u(t,x)\in{\mathbb}{R}^2$ denotes the elastic displacement. The positive constants $a$ and $b$ in \[Eq.DoublyDissElasticWaves\] are related to the Lamé constants and fulfill $b>a>0$. Moreover, the parameters $\rho$ and $\theta$ in \[Eq.DoublyDissElasticWaves\] satisfy $0\leq\rho<1/2<\theta\leq1$.
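The roles of $a$ and $b$ as wave speeds can be read off from the Fourier symbol of the elastic operator $-a^2\Delta-(b^2-a^2)\nabla\operatorname{div}$, namely $A(\xi)=a^2|\xi|^2 I+(b^2-a^2)\,\xi\xi^{\mathrm{T}}$, whose eigenvalues are $a^2|\xi|^2$ (transverse mode) and $b^2|\xi|^2$ (longitudinal mode). A quick numerical sanity check of this standard fact (a sketch, not part of the paper's analysis):

```python
import numpy as np

def elastic_symbol(xi, a, b):
    """Fourier symbol A(xi) = a^2 |xi|^2 I + (b^2 - a^2) xi xi^T of the
    operator -a^2 Delta - (b^2 - a^2) grad div in two space dimensions."""
    xi = np.asarray(xi, dtype=float)
    return a**2 * (xi @ xi) * np.eye(2) + (b**2 - a**2) * np.outer(xi, xi)

# for |xi| = 1 the eigenvalues should be a^2 (transverse) and b^2 (longitudinal)
xi = np.array([0.6, 0.8])
eigs = np.linalg.eigvalsh(elastic_symbol(xi, a=1.0, b=2.0))
print(eigs)  # approximately [1., 4.]
```

The condition $b>a>0$ therefore makes the longitudinal waves the faster ones, which is the physically relevant regime for elastic media.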
Let us recall some works related to our problem \[Eq.DoublyDissElasticWaves\]. Taking $a=b=1$, $\rho=0$ and $\theta=1$ in \[Eq.DoublyDissElasticWaves\], we immediately arrive at the doubly dissipative wave equation, where the damping terms consist of *friction* $u_t$ as well as *viscoelastic damping* $-\Delta u_t$: $$\label{Eq.DoublyDissWave}
\left\{
\begin{aligned}
&u_{tt}-\Delta u+u_t-\Delta u_t=0,&&x\in{\mathbb}{R}^n,\,\,t>0,\\
&(u,u_t)(0,x)=(u_0,u_1)(x),&&x\in{\mathbb}{R}^n,
\end{aligned}
\right.$$ with $n\geq1$. The recent paper [@IkehataSawada2016] derived asymptotic profiles of solutions to \[Eq.DoublyDissWave\] in a framework of weighted $L^1$ data. Precisely, the authors found that, from the point of view of asymptotic profiles of solutions, the friction $u_t$ is more dominant than the viscoelastic damping $-\Delta u_t$ as $t\rightarrow\infty$. Later in [@IkehataMichihisa2018], the authors obtained higher-order asymptotic expansions of solutions to \[Eq.DoublyDissWave\] and gave some lower bound estimates to show the optimality of these expansions. For other related works on \[Eq.DoublyDissWave\], we refer the reader to the recent papers [@IkehataTakeda2017; @DAbbicco2017; @IkehataTakeda2019]. However, asymptotic profiles of solutions to the general doubly dissipative wave equation, where the damping terms consist of friction or structural damping (i.e., taking $a=b=1$ in \[Eq.DoublyDissElasticWaves\]), are still open. This open problem was proposed in [@IkehataSawada2016]. The main difficulty is to determine the dominant profile of solutions, due to the fact that the asymptotic profiles for the wave equation with damping term $(-\Delta)^{\rho}u_t$ for $0\leq \rho<1/2$, or with damping term $(-\Delta)^{\theta}u_t$ for $1/2<\theta\leq 1$, are quite different. One may see, for example, [@Matsumura1976; @Karch2000; @MarcatiNishihara2003; @HosonoOgawa2004; @Narazaki2004; @Nishihara2003; @Takeda2015; @Ikehata2014; @DabbiccoEbert2014; @IkehataOnodera2017; @Michihisa2017; @Michihisa2018; @IkehataTakeda2019NEW; @Shibata2000; @Ponce1985; @DabbiccoReissig2014].
Let us come back to dissipative elastic waves. In recent years the Cauchy problem for dissipative elastic waves has attracted wide attention; it can be modeled by $$\label{Eq.DissElasticWaves}
\left\{
\begin{aligned}
&u_{tt}-a^2\Delta u-\left(b^2-a^2\right)\nabla\operatorname{div}u+{\mathcal}{A}u_t=0,&&x\in{\mathbb}{R}^n,\,\,t>0,\\
&(u,u_t)(0,x)=(u_0,u_1)(x),&&x\in{\mathbb}{R}^n,
\end{aligned}
\right.$$ where $b>a>0$ and the term ${\mathcal}{A}u_t$ describes several kinds of damping mechanisms.\
In the case when $$\begin{aligned}
{\mathcal}{A}u_t=u_t,\,\,\,\,\text{i.e., \emph{friction} or \emph{external damping}},\end{aligned}$$ the authors of [@IkehataCharaodaLuz2014] proved almost sharp energy estimates for $n\geq2$ by using energy methods in the Fourier space and the Haraux-Komornik inequality, and then the recent paper [@ChenReissig2019SD] investigated propagation of singularities, sharp energy estimates and diffusion phenomenon for $n=3$.\
Furthermore, in the case when $$\begin{aligned}
{\mathcal}{A}u_t=(-\Delta)^{\theta}u_t\,\,\,\,\text{with}\,\,\,\,\theta\in(0,1],\,\,\,\,\text{i.e., \emph{structural damping}},
\end{aligned}$$ energy estimates are derived with different data spaces in [@IkehataCharaodaLuz2014] for $n\geq2$, and in [@Reissig2016] for $n=2$. Moreover, some qualitative properties of solutions, including smoothing effect, sharp energy estimate and diffusion phenomena (especially, *double diffusion phenomena* when $\theta\in(0,1/2)$) are obtained for $n=3$.\
Finally, in the case when $$\begin{aligned}
{\mathcal}{A}u_t=(-a^2\Delta-(b^2-a^2)\nabla\operatorname{div})u_t,\,\,\,\,\text{i.e., \emph{Kelvin-Voigt damping}},
\end{aligned}$$ by applying energy methods in the Fourier space, almost sharp energy estimates for $n\geq2$ have been obtained in [@WuChaiLi2017]. Then, sharp energy estimates, $L^p-L^q$ estimates as well as asymptotic profiles of solutions were derived for $n=2$ in [@Chen2019KV]. Other studies on dissipative elastic waves can be found in [@CharaoIkehata2007; @CharaoIkehata2011]. Nevertheless, concerning decay properties and diffusion phenomena for the Cauchy problem for doubly dissipative elastic waves, it seems that no previous results are available. Moreover, this problem is strongly related to the open problem proposed in [@IkehataSawada2016]. In this paper we give an answer in the two-dimensional case.
Let us point out that the study of the Cauchy problem \[Eq.DoublyDissElasticWaves\] is not simply a generalization of elastic waves with friction or structural damping in [@Reissig2016; @ChenReissig2019SD]. On the one hand, because there exist two different damping terms $(-\Delta)^{\rho}u_t$ and $(-\Delta)^{\theta}u_t$ with $0\leq\rho<1/2<\theta\leq1$ in our problem, it is not clear which damping term has the dominant influence on the dissipative structure. On the other hand, in [@ChenReissig2019SD] the authors derived diffusion phenomena for elastic waves with the damping term $(-\Delta)^{\theta}u_t$, where $\theta\in[0,1/2)\cup(1/2,1]$, which are described by the following so-called *reference systems*.
- In the case when $\theta=0$, the reference system consists of a heat-type system with a mass term as follows: $$\begin{aligned}
\widetilde{V}_t-{\mathcal}{D}_1\Delta\widetilde{V}+{\mathcal}{D}_2\widetilde{V}=0,
\end{aligned}$$ with real diagonal matrices ${\mathcal}{D}_1$ and ${\mathcal}{D}_2$.
- In the case when $\theta\in(0,1/2)$, the reference system consists of two different parabolic systems as follows: $$\begin{aligned}
\widetilde{V}_t+{\mathcal
---
abstract: 'We use direct numerical simulations to investigate the interaction between the temperature field of a fluid and the temperature of small particles suspended in the flow, employing both one and two-way thermal coupling, in a statistically stationary, isotropic turbulent flow. Using statistical analysis, we investigate this variegated interaction at the different scales of the flow. We find that the variance of the fluid temperature gradients decreases as the thermal response time of the suspended particles is increased. The probability density function (PDF) of the fluid temperature gradients scales with its variance, while the PDF of the rate of change of the particle temperature, whose variance is associated with the thermal dissipation due to the particles, does not scale in such a self-similar way. The modification of the fluid temperature field due to the particles is examined by computing the particle concentration and particle heat fluxes conditioned on the magnitude of the local fluid temperature gradient. These statistics highlight that the particles cluster on the fluid temperature fronts, and the important role played by the alignments of the particle velocity and the local fluid temperature gradient. The temperature structure functions, which characterize the temperature fluctuations across the scales of the flow, clearly show that the fluctuations of the fluid temperature increments are monotonically suppressed in the two-way coupled regime as the particle thermal response time is increased. Thermal caustics dominate the particle temperature increments at small scales, that is, particles that come into contact are likely to have very large differences in their temperature. This is caused by the nonlocal thermal dynamics of the particles, and the scaling exponents of the inertial particle temperature structure functions in the dissipation range reveal very strong multifractal behavior. 
Further insight is provided by the PDFs of the two-point temperature increments and by the flux of temperature increments across the scales. Altogether, these results reveal a number of non-trivial effects with important practical consequences.'
author:
- 'M. Carbone, A. D. Bragg'
- 'M. Iovieno'
bibliography:
- 'JFM2018.bib'
title: 'Multiscale fluid–particle thermal interaction in isotropic turbulence'
---
Introduction
============
The interaction between inertial particles and scalar fields in turbulent flows plays a central role in many natural problems, ranging from cloud microphysics [@Pruppacher2010; @Grabowski2013] to the interactions between plankton and nutrients [@DeLillo2014], and dust particle flows in accretion disks [@Takeuchi2002]. In engineered systems, applications involve chemical reactors and combustion chambers, and more recently, microdispersed colloidal fluids where the enhanced thermal conductivity due to particle aggregations can give rise to non-trivial thermal behavior [@Prasher2006; @Momenifar2015], and which can be used in cooling devices for electronic equipment exposed to large heat fluxes [@Das2006].
In this work, we focus on the heat exchange between advected inertial particles and the fluid phase in a turbulent flow, with a parametric emphasis relevant to understanding particle-scalar interactions in cloud microphysics. Understanding droplet growth in clouds requires characterizing the interaction between water droplets and the humidity and temperature fields. A major problem is to understand how the interaction between turbulence, heat exchange, condensational processes, and collisions can produce the rapid growth of water droplets that leads to rain initiation [@Pruppacher2010; @Grabowski2013]. While the studies of the transport of scalar fields and of particles in turbulent flows are well-established research areas in both theoretical and applied fluid dynamics [@Kraichnan1994; @Taylor1922], the characterization of the interaction between scalars and particles in turbulent flows is a relatively new topic [@Bec2014], since the problem is hard to handle analytically, requires sophisticated experimental techniques, and is computationally demanding.
When temperature differences inside the fluid are sufficiently small, the temperature field behaves almost like a passive scalar, that is, the fluid temperature is advected and diffused by the fluid motion but has negligible dynamical effect on the flow. Even in this regime, the statistical properties of the passive scalar field are significantly different from those of the underlying velocity field that advects it. Different regimes take place according to the Reynolds number and the ratio between momentum and scalar diffusivities [@Shraiman2000; @Warhaft2000; @Watanabe2004].
Experiments, numerical simulations and analytical models show that a passive scalar field is always more intermittent than the velocity field, and passive scalars in turbulence are characterized by strong anomalous scaling [@Holzer1994]. This is due to the formation of ramp–cliff structures in the scalar field [@Celani2000; @Watanabe2004]: large regions in which the scalar field is almost constant are separated by thin regions in which the scalar abruptly changes. The regions in which the scalar mildly changes are referred to as Lagrangian coherent structures. The thin regions with large scalar gradient, where the diffusion of the scalar takes place, are referred to as fronts. It has been shown that the large scale forcing influences the passive scalar statistics at small scales [@Gotoh2015]. In particular, a mean scalar gradient forcing preserves universality of the statistics while a large scale Gaussian forcing does not. However, the ramp-cliff structure was observed with different types of forcing, implying that this structure is universal to scalar fields in turbulence [@Watanabe2004; @Bec2014]. Moreover, recent measurements of atmospheric turbulence have shown that external boundary conditions, such as the magnitude and sign of the sensible heat flux, have a significant impact on the fluid temperature dynamics within the inertial range, while for the same scales the fluid velocity increments are essentially independent of these large-scale conditions [@zorzetto18].
When a turbulent flow is seeded with inertial particles, the particles can sample the surrounding flow in a non-uniform and correlated manner [@Toschi2009]. Particle inertia in a turbulent flow is measured through the Stokes number ${\text{\textit{St}}}\equiv\tau_p/\tau_\eta$, which compares the particle response time to the Kolmogorov time scale. A striking feature of inertial particle motion in turbulent flows is that the particles spontaneously cluster even in incompressible flows [@maxey87; @wang93; @Bec2007; @Ireland2016]. This clustering can take place across a wide range of scales [@Bec2007; @Bragg2015b; @Ireland2016], and the small-scale clustering is maximum when ${\text{\textit{St}}}={\textit{O}\left( 1 \right)}$. A variety of mechanisms has been proposed to explain this phenomenon: when ${\text{\textit{St}}}\ll1$ the clustering is caused by particles being centrifuged out of regions of strong rotation [@maxey87; @chun05], while for ${\text{\textit{St}}}\geq {\textit{O}\left( 1 \right)}$, a non-local mechanism generates the clustering, whose effect is related to the particles' memory of their interaction with the flow along their path-history [@gustavsson11b; @gustavsson16; @bragg14b; @bragg2015a; @Bragg2015b]. Note that recent results on the clustering of settling inertial particles in turbulence have corroborated this picture, showing that strong clustering can occur even in a parameter regime where the centrifuge effect cannot be invoked as the explanation for the clustering, but is instead caused by a non-local mechanism [@ireland16b].
When particles have finite thermal inertia, they will not be in thermal equilibrium with the fluid temperature field, and this can give rise to non-trivial thermal coupling between the fluid and particles in a turbulent flow. A thermal response time $\tau_\theta$ can be defined so that the particle thermal inertia is parameterized by the thermal Stokes number ${\text{\textit{St}}}_\theta \equiv \tau_\theta/\tau_\eta$ [@Zaichik2009]. Since both the fluid temperature and particle phase-space dynamics depend upon the fluid velocity field, there can exist non-trivial correlations between the fluid and particle temperatures even in the absence of thermal coupling. Indeed, it was shown by [@Bec2014] that inertial particles preferentially cluster on the fronts of the scalar field. Associated with this is that the particles preferentially sample the fluid temperature field, which, combined with the strong intermittency of temperature fields in turbulent flows, can cause particles to experience very large temperature fluctuations along their trajectories.
Several works have considered aspects of the fluid-particle temperature coupling using numerical simulations. For example, [@Zonta2008] investigated a particle-laden channel flow, with a view to modeling the modification of heat transfer in micro–dispersed fluids. They considered both momentum and temperature two–way coupling and observed that, depending on the particle inertia, the heat flow at the wall can increase or decrease. [@Kuerten2011] considered a similar set-up with larger dispersed particles, and they observed a stronger modification of the fluid temperature statistics due to the particles. [@Zamansky2014; @Zamansky2016] considered turbulence induced by buoyancy, where the buoyancy was generated by heated particles. They observed that the resulting flow is driven by thermal plumes produced by the particles. As the particle inertia was increased, the inhomogeneity and the effect of the coupling were enhanced in agreement with the fact that inertial particles tend to cluster on the scalar fronts. [@Kumar2014] examined how the spatial distribution of droplets is affected by large scale inhomogeneities in the fluid temperature and supersaturation fields, considering the transition between homogeneous and inhomogeneous mixing. A similar flow configuration was also investigated by [@Gotzfried2017].
Each of these studies was primarily focused on the effect of the inertial particles on the large-scale statistics of the fluid temperature field. However, the results of [@Bec2014] imply that the effects of fluid-particle thermal coupling could be strong at the small scales, owing to the fact that they cluster on the fronts of the
---
abstract: 'We introduce pretty clean modules, extending the notion of clean modules by Dress, and show that pretty clean modules are sequentially Cohen-Macaulay. We also extend a theorem of Dress on shellable simplicial complexes to multicomplexes.'
address:
- 'Jürgen Herzog, Fachbereich Mathematik und Informatik, Universität Duisburg-Essen, Campus Essen, 45117 Essen, Germany'
- 'Dorin Popescu, Institute of Mathematics “Simion Stoilow”, University of Bucharest, P.O.Box 1-764, Bucharest 014700, Romania'
author:
- Jürgen Herzog and Dorin Popescu
title: Finite filtrations of modules and shellable multicomplexes
---
[^1]
Introduction {#introduction .unnumbered}
============
Let $R$ be a Noetherian ring, and $M$ a finitely generated $R$-module. A basic fact in commutative algebra (see [@M Theorem 6.4]) says that there exists a finite filtration $$\mathcal{F}\: 0=M_0\subset M_1\subset \cdots \subset M_{r-1}
\subset M_r=M$$ with cyclic quotients $M_i/M_{i-1}\iso R/P_i$ and $P_i\in \Supp(M)$. We call any such filtration of $M$ a [*prime filtration*]{}. The set of prime ideals $\{P_1,\ldots, P_r\}$ which define the cyclic quotients of $\mathcal{F}$ will be denoted by $\Supp(\mathcal{F})$. Another basic fact [@M Theorem 6.5] says that $$\Ass(M)\subset \Supp(\mathcal{F})\subset \Supp(M).$$ Let $\Min(M)$ denote the set of minimal prime ideals. Dress [@D] calls a prime filtration ${\mathcal F}$ of $M$ [*clean*]{}, if $\Supp(\mathcal{F})\subset\Min(M)$. The module $M$ is called [*clean*]{}, if $M$ admits a clean filtration. It is clear that for a clean filtration $\mathcal
F$ of $M$ one has $$\Min(M)=\Ass(M)=\Supp({\mathcal F}).$$
Cleanness is the algebraic counterpart of shellability for simplicial complexes. Indeed, let $\Delta$ be a simplicial complex and $K$ a field. Dress [@D] showed that $\Delta$ is (non-pure) shellable in the sense of Björner and Wachs [@BW], if and only if the Stanley-Reisner ring $K[\Delta]$ is clean.
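Since (non-pure) shellability in the sense of Björner and Wachs is a purely combinatorial condition on the facets, it can be tested by brute force for small complexes. The sketch below uses the standard pairwise reformulation of the shelling condition (for every $i<k$ there is $j<k$ with $F_i\cap F_k\subseteq F_j\cap F_k$ and $|F_k\setminus F_j|=1$); it checks shellability directly on the facets, not the equivalent cleanness of $K[\Delta]$, and the function names are ours.

```python
from itertools import permutations

def is_shelling(order):
    """Pairwise Bjorner--Wachs condition on an ordering F_1, ..., F_t:
    for every i < k there must be some j < k with
    F_i & F_k a subset of F_j & F_k and |F_k minus F_j| = 1."""
    for k in range(1, len(order)):
        Fk = order[k]
        for i in range(k):
            if not any(order[i] & Fk <= order[j] & Fk
                       and len(Fk - order[j]) == 1
                       for j in range(k)):
                return False
    return True

def is_shellable(facets):
    """Brute-force search over all facet orderings (small complexes only)."""
    fs = [frozenset(F) for F in facets]
    return any(is_shelling(p) for p in permutations(fs))

print(is_shellable([{1, 2}, {2, 3}, {1, 3}]))  # boundary of a triangle: True
print(is_shellable([{1, 2}, {3, 4}]))          # two disjoint edges: False
```

The second example illustrates the classical fact that a disconnected one-dimensional complex is not shellable, while by Dress's theorem the first corresponds to a clean Stanley-Reisner ring.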
On the other hand Stanley [@St] showed that if $\Delta $ is shellable, then $K[\Delta]$ is sequentially Cohen-Macaulay. In this paper we show more generally that any clean module over a Cohen-Macaulay ring which admits a canonical module is sequentially Cohen-Macaulay if all factors in the clean filtration are Cohen-Macaulay. In fact, we prove this result (Theorem \[sequentially\]) for an even larger class of modules which we call pretty clean. These modules are defined by the property that they have a prime filtration as above, and such that for all $i<j$ for which $P_i \subset P_j$ it follows that $P_i=P_j$.
We now describe the content of this paper in more detail. In Section 1 we recall the concept of dimension filtrations introduced by Schenzel [@Sc], and note (Proposition \[characterization\]) that the dimension filtration of a module is characterized by the associated prime ideals of its factors. In the next section we discuss some basic properties of sequentially Cohen-Macaulay modules. Such modules were introduced by Schenzel [@Sc] and Stanley [@St]. It was Schenzel who observed that a module is sequentially Cohen-Macaulay if and only the non-zero factors of the dimension filtration are Cohen-Macaulay.
The following section is devoted to introducing clean and pretty clean modules. We show that a pretty clean filtration $\mathcal F$ of a module $M$ satisfies $\Supp({\mathcal F})=\Ass(M)$, and we give an example of a module $M$ which admits a prime filtration ${\mathcal F}$ with $\Supp({\mathcal F})=\Ass(M)$ but which is not pretty clean. We also observe that all pretty clean filtrations of a module have the same length.
In Section 4 we show (Theorem \[sequentially\]) that, under the mild assumptions mentioned above, pretty clean modules are sequentially Cohen-Macaulay, and we show in Corollary \[interesting\] that under the same assumptions a module is pretty clean if and only if the factors in its dimension filtration are all clean.
In Section 5 we give an interesting class of pretty clean rings, namely rings whose defining ideal is of Borel type. This generalizes a result in [@HPV] where it is shown that such rings are sequentially Cohen-Macaulay.
In the following section we consider graded and multigraded pretty clean rings and modules. Of particular interest is the case $R=S/I$, where $S$ is a polynomial ring and $I\subset S$ a monomial ideal. Using a result of Nagel and Römer [@NR Theorem 3.1] we show that in this case the length of each multigraded pretty clean filtration of $S/I$ equals the arithmetic degree of $S/I$.
In [@St1] Stanley conjectured that the depth of $S/I$ is a lower bound for the ‘size’ of the summands in any Stanley decomposition of $S/I$. We show in Theorem \[stanley1\] that Stanley’s conjecture holds if $R$ is a multigraded pretty clean ring.
In Section 7 we show that for a given prime filtration $\mathcal{F}\: 0=M_0\subset M_1\subset \cdots \subset M_{r-1} \subset M_r=M$ of $M$ with factors $M_i/M_{i-1}=R/P_i$ there exist irreducible $P_j$-primary submodules $N_j$ of $M$ such that $M_i=\Sect_{j>i}^rN_j$ for $i=0,\ldots, r$. It turns out, as demonstrated in the next and the following sections, that this presentation of the modules $M_i$ is the algebraic interpretation of shellability for clean and pretty clean filtrations. This becomes obvious in the next section, where we recall the theorem of Dress and show that the shelling numbers of a simplicial complex can be recovered from the graded clean filtration, see Proposition \[shelling numbers\].
In Section 9 we introduce multicomplexes. These are subsets $\Gamma\subset \NN^n_\infty$ which are closed under limits of sequences $a_i\in \Gamma$ with $a_i\leq a_{i+1}$ (componentwise), and have the property that whenever $a\in \Gamma$ and $b\leq a$ (componentwise), then $b\in \Gamma$. Here $\NN_\infty=\NN\union \{\infty\}$. We show that if $\Gamma$ is a multicomplex and $a\in \Gamma$, then there exists a maximal element $m\in \Gamma$ with $a\leq m$. Here we need that $\Gamma$ is closed with respect to limits of non-decreasing sequences. Then we define the facets of $\Gamma$ to be those elements $a\in\Gamma$ with the property that if $a\leq m$ and $m$ is maximal in $\Gamma$, then the infinite part of $a$ coincides with the infinite part of $m$, which means that the $i$th component of $a$ is infinite if and only if the $i$th component of $m$ is infinite. We show that each multicomplex has only a finite number of facets.
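As a finite illustration (restricted to exponent vectors with finite entries, so that closure under limits is vacuous), the downward-closure property and the maximal elements can be computed directly. This is only a sketch of the combinatorics with hypothetical helper names, not machinery from the paper.

```python
from itertools import product

def leq(a, b):
    """Componentwise partial order on exponent vectors."""
    return all(x <= y for x, y in zip(a, b))

def is_multicomplex(gamma):
    """Check the downward-closure property of a finite Gamma in N^n:
    a in Gamma and b <= a (componentwise) imply b in Gamma."""
    s = set(gamma)
    return all(b in s
               for a in s
               for b in product(*(range(x + 1) for x in a)))

def maximal_elements(gamma):
    """Elements of Gamma that are not strictly below any other element."""
    s = set(gamma)
    return sorted(a for a in s if not any(a != m and leq(a, m) for m in s))

gamma = [(0, 0), (1, 0), (2, 0), (0, 1)]   # downward closure of (2,0), (0,1)
print(is_multicomplex(gamma))              # True
print(maximal_elements(gamma))             # [(0, 1), (2, 0)]
print(is_multicomplex([(2, 0)]))           # False: (1, 0) is missing
```

In the infinite setting of the paper, entries equal to $\infty$ would have to be handled separately, since the maximal elements are then reached only as limits of non-decreasing sequences.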
Multicomplexes in $\NN^n_\infty$ correspond to monomial ideals in $S=K[x_1,\ldots,x_n]$. The monomial ideal $I$ defined by a multicomplex $\Gamma$ is the ideal spanned by all monomials whose exponents belong to $\NN^n\setminus \Gamma$. Our definition of the facets of $\Gamma$ is partly justified by the fact, shown in Lemma \[pairs\], that there is a bijection between the set of facets of $\Gamma$ and the standard pairs of $I$ as defined by Sturmfels, Trung and Vogel in [@STV]. However the main justification of the definition is given by Proposition \[multiprimary\] where we show that a pretty clean filtration of $S/I$ determines uniquely the facets of $\Gamma$. This result finally leads us to the definition of shellable multicomplexes. In Proposition \[extend\] we show that our definition of shellable multicomplexes extends the corresponding notion known for simplicial complexes. However the main result of the final section is Theorem \[multi2\] which asserts that for a monomial ideal $I$ the ring $S/I$ is multigraded pretty clean if and only if the corresponding multicomplex is shellable.
The dimension filtration
========================
Let $M$ be an $R$-module of dimension $d$. In [@Sc] Schenzel introduced the [*dimension filtration*]{} $${\mathcal F}\: 0\subset D_0(M)\subset D_1(M)\subset \cdots \subset D_d(M)=M,$$ where $D_i(M)$ denotes the largest submodule of $M$ of dimension $\leq i$.
---
abstract: 'We evaluate the elastic scattering cross section of vector dark matter with nucleon based on the method of effective field theory. The dark matter is assumed to behave as a vector particle under the Lorentz transformation and to interact with colored particles including quarks in the Standard Model. After formulating general formulae for the scattering cross sections, we apply them to the case of the first Kaluza-Klein photon dark matter in the minimal universal extra dimension model. The resultant cross sections are found to be larger than those calculated in previous literature.'
address:
- '$^1$ Department of Physics, Nagoya University, Nagoya 464-8602, Japan'
- '$^2$ Department of Physics, University of Tokyo, Tokyo 113-0033, Japan'
author:
- 'Natsumi Nagata$^{1, 2}$'
title: 'A calculation for vector dark matter direct detection[^1]'
---
Introduction
============
The existence of dark matter (DM) has been established by cosmological observations [@Komatsu:2010fb]. One of the most attractive candidates is what we call Weakly Interacting Massive Particles (WIMPs), which are stable particles with masses at the electroweak scale that interact weakly with ordinary matter. These interactions enable us to search for WIMP DM by using the signal of DM scattering off nuclei on the Earth. Such experiments are called direct detection experiments of WIMP DM.
Over the past years, a lot of effort has been dedicated to the direct detection of WIMP DM, and the experimental sensitivities have been improving rapidly. The XENON100 Collaboration, for example, gives a severe constraint on the spin-independent (SI) elastic scattering cross section of WIMP DM with nucleon $\sigma^{\rm SI}_N$ ($\sigma^{\rm
SI}_N < 2.0\times 10^{-45}~{\rm cm}^2$ for WIMPs with a mass of 55 GeV$/c^2$) [@Aprile:2012nq]. Moreover, ton-scale detectors for the direct detection experiments are now planned and expected to have significantly improved sensitivities.
In order to study the nature of DM based on these experiments, we need to evaluate the WIMP-nucleon elastic scattering cross section precisely. In this work, we assume the WIMP DM to be a vector particle, and evaluate its cross section for scattering off a nucleon. Several candidates for vector DM have been proposed in various models, and there has been a lot of previous work computing the scattering cross sections [@Cheng:2002ej; @Servant:2002hb; @Birkedal:2006fz]. However, we found that in these calculations some of the leading contributions to the scattering cross section are not evaluated correctly, or in some cases are completely neglected. Taking this situation into account, we study a systematic way of evaluating the cross section by using the method of effective field theory.
Direct detection of vector dark matter
======================================
In this section we discuss the way of evaluating the elastic scattering cross section of vector DM with nucleon. First, we write down the effective interactions of vector DM ($B_\mu$) with light quarks and gluon [@Hisano:2010yh]: $$\mathcal{L}^{\mathrm{eff}}=\sum_{q=u,d,s}\mathcal{L}^{\mathrm{eff}}_q
+\mathcal{L}^{\mathrm{eff}}_G,$$ with $$\begin{aligned}
\mathcal{L}^{\mathrm{eff}}_q &=&
f_q m_q B^{\mu}B_{\mu}\bar{q}q+
\frac{d_q}{M}
\epsilon_{\mu\nu\rho\sigma}B^{\mu}i\partial^{\nu}B^{\rho}
\bar{q}\gamma^{\sigma}\gamma^{5}q+\frac{g_q}{M^2}
B^{\rho}i\partial^{\mu}i\partial^{\nu}B_{\rho}\mathcal{O}^q_{\mu\nu},
\label{eff_lagq}
\\
\mathcal{L}^{\mathrm{eff}}_G&=&f_G
B^{\rho}B_{\rho}G^{a\mu\nu}G^a_{\mu\nu},
\label{eff_lagG}\end{aligned}$$ where $m_q$ are the masses of light quarks, $M$ is the DM mass, and $\epsilon^{\mu\nu\rho\sigma}$ is the totally antisymmetric tensor defined as $\epsilon^{0123}=+1$. The covariant derivative is defined as $D_\mu\equiv\partial_\mu+i g_sA^a_\mu T_a$, with $g_s$, $T_a$ and $A^a_\mu$ being the strong coupling constant, the SU(3)$_C$ generators, and the gluon fields, respectively. The gluon field strength tensor is denoted by $G^a_{\mu\nu}$, and $\mathcal{O}^q_{\mu\nu}\equiv\frac12 \bar{q} i \left(D_{\mu}\gamma_{\nu}
+ D_{\nu}\gamma_{\mu} -\frac{1}{2}g_{\mu\nu}{{\ooalign{\hfil/\hfil\crcr$D$}}} \right) q $ are the twist-2 operators of light quarks. When we write down the effective Lagrangian, we take into account the fact that the scattering process is non-relativistic. The coefficients of the operators are to be determined by integrating out the heavy particles in the high-energy theory. The second term in Eq. (\[eff\_lagq\]) gives rise to the spin-dependent (SD) interaction, while the other terms yield the spin-independent (SI) interactions. We focus on the SI interactions hereafter, because the experimental constraint is much more severe for the SI interactions than for the SD interactions.
In order to obtain the effective coupling of the vector DM with nucleon induced by the effective Lagrangian, we need to evaluate the nucleon matrix elements of the quark and gluon operators in Eqs.(\[eff\_lagq\]) and (\[eff\_lagG\]). First, the nucleon matrix elements of the scalar-type quark operators are parametrized as $$f_{Tq}\equiv \langle N \vert m_q \bar{q} q \vert N\rangle/m_N~,$$ with $\vert N\rangle$ and $m_N$ the one-particle state and the mass of nucleon, respectively. The parameters are called the mass fractions and their values are obtained from the lattice simulations [@Young:2009zb; @:2012sa]. Second, for the quark twist-2 operators, we can use the parton distribution functions (PDFs): $$\begin{aligned}
\langle N(p)\vert
{\cal O}_{\mu\nu}^q
\vert N(p) \rangle
&=&\frac{1}{m_N}
(p_{\mu}p_{\nu}-\frac{1}{4}m^2_N g_{\mu\nu})\
(q(2)+\bar{q}(2)) \ ,\end{aligned}$$ where $q(2)$ and $\bar{q}(2)$ are the second moments of the PDFs of quark $q(x)$ and anti-quark $\bar{q}(x)$, respectively, which are defined as $q(2)+ \bar{q}(2) =\int^{1}_{0} dx ~x~ [q(x)+\bar{q}(x)]$. These values are obtained from Ref. [@Pumplin:2002vw]. Finally, the matrix element of the gluon field strength tensor can be evaluated by using the trace anomaly of the energy-momentum tensor in QCD [@Shifman:1978zn]. The resultant expression is given as $$\langle N\vert G^a_{\mu\nu}G^{a\mu\nu}\vert N\rangle
=-\frac{8\pi}{9\alpha_s} m_N f_{TG}$$ with $f_{TG}\equiv 1-\sum_{q=u,d,s}f_{Tq}$. Note that the right-hand side of the expression is divided by the strong coupling constant $\alpha_s$. For this reason, although the gluon contribution is induced by higher-loop diagrams, it can be comparable to the quark contributions [@Hisano:2010ct]. Briefly speaking, the enhancement comes from the large gluon contribution to the mass of the nucleon. As a result, the SI effective coupling of vector DM with nucleon, $f_N$, is given as $$\begin{aligned}
f_N/m_N&=&\sum_{q=u,d,s}
f_q f_{Tq}
+\sum_{q=u,d,s,c,b}
\frac{3}{4} \left(q(2)+\bar{q}(2)\right)g_q
-\frac{8\pi}{9\alpha_s}f_{TG} f_G ~.
\label{f}\end{aligned}$$
Using the effective coupling, we eventually obtain the SI scattering cross section of DM with nucleon: $$\sigma^{\rm (SI)}_{N}=
\frac{1}{\pi}\biggl(\frac{m_N}{M+m_N}\biggr)^2~\vert f_N\vert ^2~.$$
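As a quick numerical illustration, the assembly of $f_N$ in Eq. (\[f\]) and the cross-section formula above can be sketched as follows. This is only a hedged sketch: the function names and all coefficient values are illustrative placeholders, not values taken from the references.

```python
import math

def f_n_over_mn(f_q, f_tq, g_q, q2, f_g, f_tg, alpha_s):
    """Assemble the SI effective coupling f_N/m_N term by term, as in
    Eq. (f): quark scalar terms, quark twist-2 terms, and the gluon term.
    All Wilson coefficients (f_q, g_q, f_g) are illustrative inputs."""
    scalar = sum(f_q[q] * f_tq[q] for q in ("u", "d", "s"))
    twist2 = sum(0.75 * q2[q] * g_q[q] for q in ("u", "d", "s", "c", "b"))
    gluon = -(8.0 * math.pi / (9.0 * alpha_s)) * f_tg * f_g
    return scalar + twist2 + gluon

def sigma_si(m_n, m_dm, f_n):
    """SI DM-nucleon cross section (natural units), following the
    expression in the text: (1/pi) * (m_N/(M+m_N))^2 * |f_N|^2."""
    return (1.0 / math.pi) * (m_n / (m_dm + m_n)) ** 2 * abs(f_n) ** 2
```

Since the cross section is quadratic in $f_N$, doubling the effective coupling quadruples $\sigma^{\rm (SI)}_N$, which makes the interplay of signs among the quark and gluon terms (possible cancellations) phenomenologically important.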
Now, all that remains is to evaluate the coefficients of the effective operators by integrating out the heavy fields in the high-energy theories. As an example, we take the case where the interaction Lagrangian of the vector DM has the generic form $$\begin{aligned}
\mathcal{L}=
\bar{\psi}_2 ~(
---
abstract: 'We investigate the occurrence of anomalous diffusive transport associated with acoustic wave fields propagating through highly-scattering periodic media. Previous studies had correlated the occurrence of anomalous diffusion to either the random properties of the scattering medium or to the presence of localized disorder. In this study, we show that anomalous diffusive transport can occur also in perfectly periodic media and in the absence of disorder. The analysis of the fundamental physical mechanism leading to this unexpected behavior is performed via a combination of deterministic, stochastic, and fractional-order models in order to capture the different elements contributing to this phenomenon. Results indicate that this anomalous transport can indeed occur in perfectly periodic media when the dispersion behavior is characterized by anisotropic (partial) bandgaps. In selected frequency ranges, the propagation of acoustic waves not only becomes diffusive but its intensity distribution acquires a distinctive L[é]{}vy $\alpha$-stable profile having pronounced heavy-tails. In these ranges, the acoustic transport in the medium occurs according to a hybrid transport mechanism which is simultaneously propagating and anomalously diffusive. We show that such behavior is well captured by a fractional diffusive transport model whose order can be obtained by the analysis of the heavy tails.'
author:
- Salvatore Buonocore
- Mihir Sen
- Fabio Semperlotti
bibliography:
- 'ref.bib'
title: 'Occurrence of anomalous diffusion and non-local response in highly-scattering acoustic periodic media'
---
Introduction {#Introduction}
============
In recent years, several theoretical and experimental studies have shown that field transport processes in non-homogeneous and complex media can occur according to either hybrid or anomalous mechanisms. Some examples of these physical mechanisms include anomalous diffusive transport (such as non-Fourier [@povstenko2013fractional; @borino2011non; @ezzat2010thermoelectric], or non-Fickian diffusion [@benson2000application; @benson2001fractional; @cushman2000fractional; @fomin2005effect] with heavy-tailed distribution) or hybrid wave transport (characterized by simultaneous propagation and diffusion [@mainardi1996fractional; @mainardi1996fundamental; @mainardi1994special; @mainardi2010fractional; @chen2003modified; @chen2004fractional]). Simultaneous hybrid and anomalous transport has also been observed, particularly in wave propagation problems involving random scattering media. Electromagnetic waves traveling through a scattering material[@yamilov2014position] such as fog [@belin2008display] or murky water [@zevallos2005time] are relevant examples of practical problems where such transport process can arise.
A distinctive feature of anomalous transport is the occurrence of heavy-tailed distributions of the representative field quantities [@benson2001fractional]. In this case, the diffusion process does not follow a classical Gaussian distribution but instead is characterized by a high-probability of occurrence of the events associated with large variance (i.e. those described by the “heavy” tails).
This behavior is typically not accounted for in traditional field transport theories based on integer order differential or integral models. Purely numerical methods, such as Monte Carlo or finite element simulations[@huang1991optical; @ishimaru2012imaging; @mosk2012controlling; @sebbah2012waves; @gibson2005recent], can capture this response but are very computationally intensive and do not provide any additional insight in the physical mechanisms generating the macroscopic dynamic behavior. The ability to accurately predict the anomalous response and to retrieve information hidden in diffused fields remains a challenging and extremely important topic in many applications. Acoustical and optical imaging, non-intrusive monitoring of engineering and biomedical materials are just a few examples of practical problems in which the ability to carefully predict the field distribution is of paramount importance to achieve accurate and physically meaningful solutions. Nevertheless, in most classical approaches, information contained in the heavy tails is typically discarded because it cannot be properly captured and interpreted by integer-order transport models.
Hybrid and anomalous diffusive transport mechanisms are pervasive also in acoustics. This type of transport can arise when acoustic fields propagate in a highly scattering medium such as a urban environment [@albert2010effect; @remillieux2012experimental], a forest [@aylor1972noise; @tarrero2008sound], a stratified fluid (e.g. the ocean) [@baggeroer1993overview; @dowling2015acoustic; @casasanta2012fractional], or a porous medium [@benson2001fractional; @schumer2001eulerian; @fellah2003measuring; @fellah2000transient].
From a general perspective, classical diffusion of wave fields occurs within a range where the wavelength is comparable to the size of the scatterers, the so-called Mie scattering regime. Any deviation from classical diffusion, being either sub-diffusion [@metzler2000random; @goychuk2012fractional] (typically linked to Anderson localization) or super-diffusion (typically linked to L[é]{}vy-flights) [@barthelemy2008levy; @bertolotti2010multiple], still arises within the same regime. The two dominant factors are either the relation between the transport mean free path and the wavelength, or the statistical distributions of the scattering paths in presence of disorder. When a wave field interacts with scattering elements, it undergoes a variety of physical phenomena including reflection, refraction, diffraction, and absorption that significantly alter its initial characteristics. Depending on the quantity, distribution, and properties of the scatterers the momentum vector of an initially coherent wave can become quickly randomized. For most processes, the Central Limit Theorem (CLT) guarantees that the distribution of macroscopic observable quantities (e.g. the field intensity) converges to a Gaussian profile in full agreement with the predictions from classical Fourier diffusion. At the same time, the transition to a macroscopic diffusion behavior leads to an inevitable coexistence of diffusive and wave-like processes at the meso- and macro-scales.
There are numerous physical processes in nature whose *basin of attraction* is given by the normal (Gaussian) distribution. On the other hand, when the distribution of characteristic step-length has infinite variance, the diffusion process no longer follows the standard diffusion theory, but rather acquires an anomalous behavior with a basin of attraction given by the so-called $\alpha$-stable L[é]{}vy distribution. In the latter case, the unbounded value of the variance of the step-length distribution is due to the non-negligible probability of existence of steps whose lengths greatly differ from the mean value; these are usually denoted as L[é]{}vy flights. The distinctive feature of the $\alpha$-stable L[é]{}vy distributions is the occurrence of heavy tails having a power-law decay of the form $p(l) \sim l^{-(\alpha+1)}$. This characteristic suggests that transport phenomena evolving according to L[é]{}vy statistics are dominated by infrequent but very long steps, and therefore their dynamics are profoundly different from those predicted by the random (Brownian) motion. Many of the complex hybrid transport mechanisms mentioned above fall in this category, and therefore cannot be successfully described in the framework of classical diffusion theory.
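The power-law tail $p(l) \sim l^{-(\alpha+1)}$ can be probed directly from step-length data. The sketch below, under the assumption of purely Pareto-distributed steps (synthetic data, not results from this study), recovers the tail index with the standard Hill estimator.

```python
import math
import random

def hill_estimator(samples, x_min):
    """Hill estimator of the tail index alpha for a survival function
    P(l > x) ~ x^(-alpha), i.e. a step-length density p(l) ~ l^-(alpha+1)."""
    tail = [x for x in samples if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic Levy-flight-like step lengths: Pareto(alpha = 1.5) samples
# drawn by inverse-CDF sampling, X = x_min * U^(-1/alpha) with U in (0, 1].
rng = random.Random(0)
alpha_true = 1.5
steps = [(1.0 - rng.random()) ** (-1.0 / alpha_true) for _ in range(200_000)]
alpha_hat = hill_estimator(steps, x_min=1.0)  # close to 1.5 at this sample size
```

Because the variance of such steps is unbounded for $\alpha < 2$, sample averages converge slowly and the rare long flights dominate the transport; this is precisely the regime in which Gaussian (Fourier/Fick) diffusion fails and the $\alpha$-stable description takes over.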
In addition, these complex transport mechanisms are typically not amenable to closed-form analytical solutions therefore requiring either fully numerical or statistical approaches to predict the field quantities under various input conditions. Typical modeling approaches rely on random walk statistical models [@metzler2000random; @bouchaud1990anomalous] or on semi-empirical corrections to the fundamental diffusive transport equation via renormalization theory [@asatryan2003diffusion; @cobus2016anderson]. These approaches imply a considerable computational cost and do not provide physical insight in the operating mechanisms of the anomalous response. A few studies have also indicated that, for this type of processes, the macroscopic governing equation describing the evolution of the wave field intensity could be described by a generalization of the classical diffusion equation using fractional derivatives [@bertolotti2010multiple; @metzler2000random; @bertolotti2007light].
To date, the occurrence of anomalous diffusion of wave fields has been connected to and observed only in random and disordered media [@barthelemy2008levy; @burresi2012weak; @bouchaud1990anomalous; @asatryan2003diffusion; @cobus2016anderson]. In this study, we show theoretical and numerical evidence that anomalous behavior can occur even in perfectly periodic media and in the absence of disorder. We present this analysis in the context of diffusive transport of acoustic waves, although the results could be generalized to other wave fields. In particular, we investigate the specific case of propagation of acoustic waves in a medium with identical and periodically distributed hard scatterers. We develop a theoretical framework for multiple scattering in super-diffusive periodic media. We first show, by full-field numerical simulations, that under certain conditions acoustic waves propagating through a periodic medium are subject to anomalous diffusion. Then, we propose an approach based on a combination of deterministic and stochastic methodologies to explore the physical origin of this unexpected behavior. Ultimately, we show that fractional-order models can predict, more accurately and effectively, the resulting anomalous field quantities. More importantly, we will show that the analysis of the heavy tails provides a reliable means to extract the equivalent fractional order of the medium.
Anomalous diffusion in acoustic periodic media: overview of the method
======================================================================
We consider the generic problem of an acoustic bulk medium made of periodically-distributed cylindrical hard scatterers in air (Fig. \[Fig\_1\]). We assume a monopole-like acoustic source, located in the center of the lattice, which emits at a selected harmonic frequency chosen within the scattering regime.
The main objective is to characterize the propagation of acoustic waves in such medium based on different regimes of dispersion. As previously anticipated,
---
abstract: |
We identify complete fragments of the Simple Theory of Types with Infinity ($\mathrm{TSTI}$) and Quine’s $\mathrm{NF}$ set theory. We show that $\mathrm{TSTI}$ decides every sentence $\phi$ in the language of type theory that is in one of the following forms:
- $\phi= \forall x_1^{r_1} \cdots \forall x_k^{r_k} \exists y_1^{s_1} \cdots \exists y_l^{s_l} \theta$ where the superscripts denote the types of the variables, $s_1 > \ldots > s_l$ and $\theta$ is quantifier-free,
- $\phi= \forall x_1^{r_1} \cdots \forall x_k^{r_k} \exists y_1^{s} \cdots \exists y_l^{s} \theta$ where the superscripts denote the types of the variables and $\theta$ is quantifier-free.
This shows that $\mathrm{NF}$ decides every stratified sentence $\phi$ in the language of set theory that is in one of the following forms:
- $\phi= \forall x_1 \cdots \forall x_k \exists y_1 \cdots \exists y_l \theta$ where $\theta$ is quantifier-free and $\phi$ admits a stratification that assigns distinct values to all of the variables $y_1, \ldots, y_l$,
- $\phi= \forall x_1 \cdots \forall x_k \exists y_1 \cdots \exists y_l \theta$ where $\theta$ is quantifier-free and $\phi$ admits a stratification that assigns the same value to all of the variables $y_1, \ldots, y_l$.
author:
- Anuj Dawar
- Thomas Forster
- Zachiri McKenzie
bibliography:
- 'decidablefragementsoftst24.bib'
title: 'Decidable fragments of the Simple Theory of Types with Infinity and $\mathrm{NF}$ [^1]'
---
Introduction
============
Roland Hinnion showed in his thesis [@hin75] that [*Every consistent $\exists^*$ sentence in the language of set theory is a theorem of $\mathrm{NF}$*]{} or, equivalently: [*Every finite binary structure can be embedded in every model of $\mathrm{NF}$*]{}. Both these formulations invite generalisations. On the one hand we find results like [*every countable binary structure can be embedded in every model of $\mathrm{NF}$*]{} (this is theorem 4 of [@for87]) and on the other we can ask about the status of sentences with more quantifiers: $\forall^*\exists^*$ sentences in the first instance; it is the second that will be our concern here.\
\
It is elementary to check that $\mathrm{NF}$ does not decide all $\forall^*\exists^*$ sentences, since the existence of Quine atoms ($x = \{x\}$) is consistent with, and independent of, $\mathrm{NF}$. However ‘$(\forall x)(x \not= \{x\})$’ is not stratified, and this invites the conjecture that (i) $\mathrm{NF}$ decides all stratified $\forall^*\exists^*$ sentences and that (ii) all unstratified $\forall^*\exists^*$ sentences can be proved both relatively consistent and independent by means of Rieger-Bernays permutation methods. It’s with limb (i) of this conjecture that we are concerned here.\
\
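The stratification condition itself is mechanically checkable: assign an integer type to each variable so that every atom $x \in y$ forces $t(y) = t(x) + 1$ and every atom $x = y$ forces $t(x) = t(y)$. The sketch below (our own illustration of this standard difference-constraint propagation, not machinery from the results that follow) decides stratifiability of a quantifier-free matrix.

```python
from collections import deque

def stratify(membership, equalities, variables):
    """Return a type assignment witnessing stratification, or None.
    membership: list of (x, y) pairs for atoms 'x in y';
    equalities: list of (x, y) pairs for atoms 'x = y'."""
    # Edge (v, w, d) encodes the constraint t(w) - t(v) == d.
    edges = {v: [] for v in variables}
    for x, y in membership:          # x in y  =>  t(y) = t(x) + 1
        edges[x].append((y, 1))
        edges[y].append((x, -1))
    for x, y in equalities:          # x = y   =>  t(x) = t(y)
        edges[x].append((y, 0))
        edges[y].append((x, 0))
    types = {}
    for start in variables:          # propagate over each connected component
        if start in types:
            continue
        types[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w, d in edges[v]:
                if w not in types:
                    types[w] = types[v] + d
                    queue.append(w)
                elif types[w] != types[v] + d:
                    return None      # inconsistent constraints: unstratifiable
    return types
```

In particular, the atom $x \in x$ behind ‘$(\forall x)(x \not= \{x\})$’ yields the contradictory constraint $t(x) = t(x) + 1$, so the checker rejects it, matching the observation above that the sentence is unstratified.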
The foregoing is all about $\mathrm{NF}$; the connection with the Simple Theory of Types with Infinity ($\mathrm{TSTI}$) arises because of work of Ernst Specker [@spe62] and [@spe53]: $\mathrm{NF}$ decides all stratified $\forall^*\exists^*$ sentences of the language of set theory if and only if $\mathrm{TSTI} + \mathrm{Ambiguity}$ decides all $\forall^*\exists^*$ sentences of the language of type theory.
> [**Conjecture**]{}: All models of $\mathrm{TSTI}$ agree on all $\forall^*\exists^*$ sentences.
It is towards a proof of this conjecture that our efforts in this paper are directed.\
\
Observe that [*there is a total order of $V$*]{} is consistent with and independent of $\mathrm{TST}$ and it can be said with three blocks of quantifiers: $$(\exists O)[(\forall x y \in O)(x \subseteq y \lor y \subseteq x) \land (\forall u v)( u \not= v \to (\exists x \in O)( u \in x \iff v \not\in x))]$$ making it $\exists^1\forall^6\exists^1$.
Background and definitions {#Sec:Background}
==========================
The Simple Theory of Types is the simplification of the Ramified Theory of Types, the underlying system of [@rw08], that was independently discovered by Frank Ramsey and Leon Chwistek. Following [@mat01] we use $\mathrm{TSTI}$ and $\mathrm{TST}$ to abbreviate the Simple Theory of Types with and without an axiom of infinity respectively. These theories are naturally axiomatised in a many-sorted language with sorts for each $n \in \mathbb{N}$.
We use $\mathcal{L}_{\mathrm{TST}}$ to denote the $\mathbb{N}$-sorted language endowed with binary relation symbols $\in_n$ for each sort $n \in \mathbb{N}$. There are variables $x^n, y^n, z^n, \ldots$ for each sort $n \in \mathbb{N}$, and well-formed $\mathcal{L}_{\mathrm{TST}}$-formulae are built up inductively from atomic formulae of the form $x^n \in_n y^{n+1}$ and $x^n = y^n$ using the connectives and quantifiers of first-order logic.
We refer to the sorts of $\mathcal{L}_{\mathrm{TST}}$ as types. We will attempt to stick to the convention of denoting $\mathcal{L}_{\mathrm{TST}}$-structures using calligraphic letters ($\mathcal{M}, \mathcal{N}, \ldots$). An $\mathcal{L}_{\mathrm{TST}}$-structure $\mathcal{M}$ consists of domains $M_n$ for each type $n \in \mathbb{N}$ and interpretations of the relations $\in_n^{\mathcal{M}} \subseteq M_n \times M_{n+1}$ for each type $n \in \mathbb{N}$; we write $\mathcal{M}= \langle M_0, M_1, \ldots, \in_0^{\mathcal{M}}, \in_1^{\mathcal{M}}, \ldots \rangle$. If $\mathcal{M}$ is an $\mathcal{L}_{\mathrm{TST}}$-structure, then we call the elements of $M_0$ atoms.
We use $\mathrm{TST}$ to denote the $\mathcal{L}_{\mathrm{TST}}$-theory with axioms
- (Extensionality) for all $n \in \mathbb{N}$, $$\forall x^{n+1} \forall y^{n+1} (x^{n+1}= y^{n+1} \iff \forall z^n(z^{n} \in_n x^{n+1} \iff z^n \in_n y^{n+1})),$$
- (Comprehension) for all $n \in \mathbb{N}$ and for all well-formed $\mathcal{L}_{\mathrm{TST}}$-formulae $\phi(x^n, \vec{z})$, $$\forall \vec{z} \exists y^{n+1} \forall x^n (x^n \in_n y^{n+1} \iff \phi(x^n, \vec{z})).$$
Comprehension ensures that every successor type is closed under the set-theoretic operations: union ($\cup$), intersection ($\cap$), difference ($\backslash$) and symmetric difference ($\triangle$). For all $n \in \mathbb{N}$, we use $\emptyset^{n+1}$ to denote the point at type $n+1$ which contains no points from type $n$ and we use $V^{n+1}$ to denote the point at type $n+1$ that contains every point from type $n$. The Wiener-Kuratowski ordered pair allows us to code ordered pairs in the form $\langle x, y \rangle$ as objects in $\mathrm{TST}$ which have type two higher than the type of $x$ and $y$. Functions, as usual, are thought of as collections of ordered pairs. This means that a function $f: X \longrightarrow Y$ will be coded by an object in $\mathrm{TST}$ that has type two higher than the type of $X$ and $Y$. The theory $\mathrm{TSTI}$ is obtained from $\mathrm{TST}$ by asserting the existence of a Dedekind infinite collection at type $1$.
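The type-raising behaviour of the Wiener-Kuratowski pair is easy to see concretely. The following sketch (our own illustration, not part of the paper's formal development) models $\langle x, y\rangle = \{\{x\}, \{x, y\}\}$ with frozensets; the pair sits two membership levels above its components, matching the type shift described above.

```python
def wk_pair(x, y):
    """Wiener-Kuratowski ordered pair <x, y> = {{x}, {x, y}},
    modelled with nested frozensets (two levels above x and y)."""
    return frozenset({frozenset({x}), frozenset({x, y})})
```

The characteristic property $\langle a, b\rangle = \langle c, d\rangle \iff a = c \wedge b = d$ holds, including the degenerate case $\langle a, a\rangle = \{\{a\}\}$. A function $f: X \longrightarrow Y$ coded as a set of such pairs therefore lives two types above $X$ and $Y$, which is why the infinity axiom below asserts the existence of an $f^3$ at type $3$ acting on an $x^1$ at type $1$.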
We use $\mathrm{TSTI}$ to denote the $\mathcal{L}_{\mathrm{TST}}$-theory obtained from $\mathrm{TST}$ by adding the axiom $$\exists x^1 \exists f^3(f^3: x^1 \longrightarrow x^1 \textrm{ is injective but not surjective}).$$
Let $X$ be a set. If the $\mathcal{L}_{\mathrm{T
---
abstract: 'This paper proposes a novel scheme which can efficiently reduce the energy consumption of Optical Line Terminals (OLTs) in Time Division Multiplexing (TDM) Passive Optical Networks (PONs) such as EPON and GPON. Currently, OLTs consume a significant amount of energy in PON, which is one of the major FTTx technologies. To be environmentally friendly, it is desirable to reduce energy consumption of OLT as much as possible; such requirement becomes even more urgent as OLT keeps increasing its provisioning data rate, and higher data rate provisioning usually implies higher energy consumption. In this paper, we propose a novel energy-efficient OLT structure which guarantees services of end users with the smallest number of power-on OLT line cards. More specifically, we adapt the number of power-on OLT line cards to the real-time incoming traffic. Also, in order to avoid service disruption resulted by powering off OLT line cards, proper optical switches are equipped in OLT to dynamically configure the communications between OLT line cards and ONUs.'
---
--------------------------- --

--------------------------- --
<span style="font-variant:small-caps;">Design and Analysis of Green Optical Line Terminal for TDM Passive Optical Networks</span>\
[<span style="font-variant:small-caps;">mina taheri</span>]{}\
[<span style="font-variant:small-caps;">nirwan ansari</span>]{}\
[<span style="font-variant:small-caps;">TR-ANL-2015-004</span>\
]{}\
[<span style="font-variant:small-caps;">Advanced Networking Laboratory</span>]{}\
[<span style="font-variant:small-caps;">Department of Electrical and Computer Engineering</span>]{}\
[<span style="font-variant:small-caps;">New Jersey Institute of Technology</span>]{}\
Introduction
============
As energy consumption is becoming an environmental, and therefore social and economic, issue, green Information and Communication Technology (ICT) has attracted significant research attention recently. It was reported that the Internet consumes as much as $\sim 1\%-2.5\% $ of the total electricity in broadband-enabled countries [@BalEne09; @pickavet2009worldwide; @fettweisict], and currently and in the medium-term future, the majority of the Internet's energy is consumed by access networks owing to the large quantity of access nodes [@BalEne08].
The energy consumption of access networks depends on the access technology. Among various access technologies, including WiMAX, FTTN, and point-to-point optical access networks, passive optical networks (PONs) consume the least energy per transmitted bit, owing to the proximity of optical fibers to the end users and the passive nature of the remote node [@LanOnt08]. However, as PON is deployed worldwide, it still consumes a significant amount of energy. It is desirable to reduce the energy consumption of PONs, since every single watt saved per node adds up to terawatt-scale or even larger power savings overall. Reducing the energy consumption of PONs becomes even more important as current PON systems evolve into next-generation PONs with increased data rate provisioning [@ZhaNex09; @ansari2013media].
In PONs, energy is consumed by the optical line terminal (OLT) and the optical network units (ONUs). Owing to their large quantity, ONUs consume a large portion of the overall PON energy [@ZhaTow11]. Although the OLT consumes less power than all ONUs in aggregate, a single OLT line card consumes far more power than a single ONU. Reducing the energy consumption of the OLT is as important as reducing that of the ONUs, especially from the operators’ and home users’ perspectives. For the network operators, decreasing the energy consumption of the OLT can significantly reduce the energy consumption of the central office, while decreasing the energy consumption of ONUs has a small and likely negligible impact on that of home users, who have many other electrical appliances with much higher energy consumption.
Previously, sleep mode and adaptive line rate were proposed to efficiently reduce the power consumption of ONUs by taking advantage of the bursty nature of the traffic at the user side [@zhang2013standards; @KubStu10; @WonSle09; @ChoEne10; @taheri2014multi]. It is, however, challenging to introduce a “sleep” mode into the OLT to reduce its energy consumption, for the following reasons. In PONs, the OLT serves as the central access node that controls the network resource access of the ONUs. Putting the OLT to sleep can easily disrupt the service of ONUs communicating with it. Thus, a proper scheme is needed to reduce the energy consumption of the OLT without degrading the services of end users.
In this paper, we propose a novel energy-efficient OLT structure which can adapt its power-on OLT line cards according to the real-time arrival traffic. To avoid service degradation during the process of powering on/off OLT line cards, proper devices are added into the legacy OLT chassis to facilitate all ONUs communicate with power-on line cards. To the best of our knowledge, this is the first work focusing on reducing the energy consumption of OLT [^1].
Framework of the energy-efficient OLT design {#sec:I}
============================================

In the central office, one OLT chassis typically comprises of multiple OLT line cards that transmit downstream signals and receive upstream signals at different wavelengths. Each line card communicates with a number of ONUs. Two wavelengths for the uplink and the downlink are assigned to each ONU. In the currently deployed EPON and GPON systems, one OLT line card usually communicates with either $16$ or $32$ ONUs and such an arrangement is referred to as a PON segment. To avoid service disruptions of ONUs connected to the central office, all these OLT line cards in the OLT chassis are usually power-on all the time. To reduce the energy consumption of OLT, our main idea is to adapt the number of power-on OLT line cards in the OLT chassis to the real-time incoming traffic.
Each network serves two types of subscribers: business subscribers and residential subscribers. Business and residential areas are usually disjoint, so each PON segment is likely to serve either business customers or residential customers. These two types of customers have different traffic profiles: business users demand high bandwidth during the day and low bandwidth at night, while residential customers request high bandwidth in the evening and low bandwidth during the day.
During the daytime, residential segments are lightly loaded; therefore, one OLT line card can serve several residential segments. In a similar way, the traffic from the business segments can be combined to traverse a smaller number of line cards in the evening.
Business and residential segments usually have low bandwidth demands around midnight. In these situations, the whole network is lightly loaded, and the number of power-on line cards can be reduced according to the traffic volume to save energy.
Parameters of the proposed model are defined below:

- $C_u$: Data rate of one OLT line card in the upstream direction.
- $C_d$: Data rate of one OLT line card in the downstream direction.
- $L$: Total number of line cards (PON segments).
- $N_j$: Number of ONUs connected to PON segment $j$.
- $T$: Fixed traffic cycle in TDM PON.
- $u_{i,j}(t)$: Arrival upstream traffic rate from ONU $i$ of PON segment $j$ at time $t$.
- $d_{i,j}(t)$: Arrival downstream traffic rate to ONU $i$ of PON segment $j$ at time $t$.
- $l(t)$: Smallest number of required OLT line cards at time $t$.

By powering on all the OLT line cards, the overall upstream and downstream data rates accommodated by the OLT chassis equal $C_u \cdot L$ and $C_d \cdot L$, respectively. $C_u \cdot L$ (or $C_d \cdot L$) may be greater than the real-time upstream (or downstream) traffic.
The traffic rate of each segment cannot exceed the provisioned capacity of the dedicated fiber. Therefore, the following constraints have to be satisfied for any segment $j$: $$\sum_{i=1}^{N_j}{u_{i,j}(t)} \leq C_u$$ $$\sum_{i=1}^{N_j}{d_{i,j}(t)} \leq C_d$$ The real-time incoming upstream and downstream traffic rates are defined as $\mathop{\sum_{j=1}^L\sum_{i=1}^{N_j}}{u_{i,j}(t)}$ and $\mathop{\sum_{j=1}^L\sum_{i=1}^{N_j}}{d_{i,j}(t)}$, respectively. Then, $$l(t)= \max \left(\lceil\mathop{\sum_{j=1}^L\sum_{i=1}^{N_j}}{u_{i,j}(t)}/{C_u}\rceil , \lceil\mathop{\sum_{j=1}^L\sum_{i=1}^{N_j}}{d_{i,j}(t)}/C_d\rceil \right)$$
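The line-card sizing rule above is straightforward to implement. The following is a minimal sketch (the function and variable names are ours, not from the paper): sum the per-ONU rates over all segments in each direction, divide by the per-card capacity, round up, and keep the larger of the two ceilings.

```python
import math

def required_line_cards(u, d, c_u, c_d):
    """Smallest number of power-on OLT line cards l(t).
    u, d: lists of per-segment lists of per-ONU upstream/downstream rates;
    c_u, c_d: per-card upstream/downstream capacity."""
    up_total = sum(sum(segment) for segment in u)    # sum_j sum_i u_ij(t)
    down_total = sum(sum(segment) for segment in d)  # sum_j sum_i d_ij(t)
    return max(math.ceil(up_total / c_u), math.ceil(down_total / c_d))
```

For instance, with two lightly loaded segments whose aggregate upstream traffic fits in one card, the downstream ceiling alone may still require a second card; the max over the two directions guarantees neither direction is oversubscribed.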
Our ultimate objective is to **
---
author:
- 'A.S. Umar[^1], V.E. Oberacker,'
- 'J.A. Maruhn'
date: 'Received: / Revised version: date'
title: |
Neutron Transfer Dynamics and Doorway to Fusion\
in Time-Dependent Hartree-Fock Theory
---
Introduction
============
Heavy-ion fusion reactions are a sensitive probe of the size, shape, and structure of atomic nuclei as well as the collision dynamics. With the increasing availability of radioactive ion-beams the study of fusion reactions of neutron-rich nuclei are now possible [@Li03; @Li05; @Ji04]. Other experimental frontiers are the synthesis of superheavy nuclei in cold and hot fusion reactions [@Ho02; @Og04; @Gi03; @Mo04; @II05], and weakly bound light systems [@YZ06; @PF04; @KG98; @TS00]. Microscopic descriptions of nuclear fusion may provide us with a better understanding of the interplay between the strong, Coulomb, and the weak interactions as well as the enhanced static and dynamic correlations present in these many-body systems.
Recently, two aspects of the collision dynamics leading to fusion that involve pre-compound neutrons have been of interest. Over the last decade a number of fusion studies have reported that the average number of neutrons evaporated by the compound nucleus is considerably less than what is predicted by statistical fusion evaporation calculations [@WB06]. This phenomenon is quite possibly linked to the excitation of the pre-compound collective dipole mode, which is likely to occur when ions have significantly different $N/Z$ ratio, and is a reflection of dynamical charge equilibration. This was studied in the context of TDHF in Refs. [@SC01; @SC07; @US85] and we have recently observed this phenomenon in the $^{64}$Ni+$^{132}$Sn system [@UO07a]. Similarly, considerable attention has been given to the influence of neutron transfer on fusion cross-sections. Studies suggest that the transfer of neutrons with positive $Q$ value strongly enhances the fusion cross-section in comparison to systems having negative $Q$ value [@Li07; @ZSW07; @DLW83]. This may explain the fact that lowering of the potential barrier for neutron-rich systems does not always lead to higher fusion cross-sections. In Ref. [@ZSW07] near-barrier fusion of neutron-rich nuclei was studied within a channel coupling model for intermediate neutron rearrangement using a semi-empirical time-dependent three-body Schrödinger equation. Studies showed that for the $^{40}$Ca+$^{96}$Zr system neutrons were transferred in the early stages of the collision from the $2d_{5/2}$ state of $^{96}$Zr to the unoccupied levels of the $^{40}$Ca nucleus.
It is generally acknowledged that the TDHF method provides a useful foundation for a fully microscopic many-body theory of low-energy heavy-ion reactions [@Ne82]. Historically, fusion in TDHF has been viewed as a final product of two colliding heavy-ions, and the dynamical details influencing the formation of the compound system have not been carefully dissected in terms of the pre-compound properties. Due to the availability of much richer fusion data and considerable advances in TDHF codes that make no symmetry assumptions and use better effective interactions, it may now be possible to examine these effects more carefully. In TDHF complete fusion proceeds by converting the entire relative kinetic energy in the entrance channel into internal excitations of a single well-defined compound nucleus. The dissipation of the relative kinetic energy into internal excitations is due to the collisions of the nucleons with the “walls” of the self-consistent mean-field potential. TDHF studies demonstrate that the randomization of the single-particle motion occurs through repeated exchange of nucleons from one nucleus into the other. Consequently, the equilibration of excitations is very slow, and it is sensitive to the details of the evolution of the shape of the composite system. This is in contrast to most classical pictures of nuclear fusion, which generally assume near instantaneous, isotropic equilibration. The relaxation of the final compound system is a long-time process occurring on a time scale on the order of a few thousand [*fm/c*]{}. In contrast, the pre-compound stage corresponds to a time scale of a few hundred [*fm/c*]{}.
In this manuscript we focus on the analysis of transfer during the early stages of the collision. In particular, we confirm the findings of Ref. [@ZSW07]. We also show that in TDHF different single-particle states seem to see different potential barriers in comparison to the generic ion-ion barrier. This influences the overall dynamics leading to fusion and consequently the effective potential barrier.
Transfer in TDHF
================
The TDHF calculations have been carried out using our new three-dimensional unrestricted TDHF code [@UO06]. For the effective interaction we have used the Skyrme SLy4 force [@CB98], including all of the time-odd terms. Static Hartree-Fock calculations for all the nuclei studied here produce spherically symmetric systems. The chosen mesh spacing was $1$ fm in all three directions, which yields a binding-energy accuracy of about $50$ keV in comparison to a spherical Hartree-Fock code. For these calculations we have in addition required that the fluctuations in energy be as low as $10^{-4}$-$10^{-5}$; the corresponding accuracy in binding energy is about $10^{-12}$. This ensures that the tails of the wavefunctions are well converged in the numerical box. The box size used was $60$ fm in the direction of the collision axis and $30$ fm in the other two directions. The initial nuclear separations were $25$ fm.
![\[fig:vrOO\] Potential barrier, $V(R)$, for the $^{16}$O+$^{24}$O system obtained from density constrained TDHF calculations (black curve). Also shown is the point Coulomb potential (red curve).](fig1.eps)
$^{16}$O+$^{24}$O system
------------------------
As an example of a collision involving one neutron-rich nucleus we studied the $^{16}$O+$^{24}$O system. In order to determine the potential barrier for the system we have used the DC-TDHF method as described in Ref. [@UO06b]. In this approach the TDHF time-evolution takes place with no restrictions. At certain times during the evolution the instantaneous density is used to perform a static Hartree-Fock minimization while holding the neutron and proton densities constrained to be the corresponding instantaneous TDHF densities. Some of the effects naturally included in the DC-TDHF calculations are: neck formation, particle exchange, internal excitations, and deformation effects to all orders. The heavy-ion potential was obtained by initializing the system at $E_{\mathrm{c.m.}}=9.5$ MeV, which is slightly above the barrier shown in Fig. \[fig:vrOO\]. The peak of the barrier is about $8.4$ MeV at a nuclear separation of $9.9$ fm. This is lower than the barrier of the $^{16}$O+$^{16}$O system, which has a height of about $10$ MeV. Here and in the following, the heavy-ion interaction potential has been calculated with a constant mass parameter corresponding to the reduced mass of the ions. This is a good approximation as long as one is only interested in the value of the potential barrier height (as is the case here). For the calculation of sub-barrier fusion cross sections, however, it is essential that coordinate-dependent mass parameters be utilized [@UO07a] because the cross sections depend sensitively on the shape of the potential in the interior region.
![\[fig:rhoz\] Partially integrated neutron densities calculated from Eq.(\[eq:rhoz\]) for the $^{24}$O nucleus plotted on a logarithmic scale versus the collision axis coordinate $z$ for the $^{16}$O+$^{24}$O system at three energies, $E_{\mathrm{c.m.}}=7,8$, and $9$ MeV. The black-solid curves correspond to the initial partial density, the red-dashed curves are the same quantity at the distance of closest approach, and the blue-solid curves are partial densities long after the recoil. Filled spheres near the bottom axis approximately show the initial and final location of the two nuclei.](fig2.eps)
In order to examine the center-of-mass energy dependence of mass exchange below and above the barrier we have initiated TDHF collisions at energies $E_{\mathrm{c.m.}}=6$, 7, 8, 9, and 9.5 MeV. Interestingly, the head-on (zero impact parameter) TDHF collisions for the lowest four energies behave like typical sub-barrier collisions: the two ions approach a minimum distance with no visible overlap, then recoil and move away from each other. This is also true for $E_{\mathrm{c.m.}}=9$ MeV despite the fact that this energy lies above the ion-ion barrier. This suggests that while we can talk about an [*effective*]{} ion-ion barrier the individual single-particle states may see a barrier somewhat different from the effective one. This is in agreement with the findings of Ref. [@ZSW07], and we shall come back to this point again later in the manuscript. Even though we are dealing with sub-barrier energies, we observe mass exchange (mainly neutron) from $^{
---
abstract: 'When a highly charged globular macromolecule, such as a dendritic polyelectrolyte or charged nanogel, is immersed into a physiological electrolyte solution, monovalent and divalent counterions from the solution bind to the macromolecule in a certain ratio and thereby almost completely electroneutralize it. For charged macromolecules in biological media, the number ratio of bound mono- versus divalent ions is decisive for the desired function. A theoretical prediction of such a sorption ratio is challenging because of the competition of electrostatic (valency), ion-specific, and binding saturation effects. Here, we devise and discuss a few approximate models to predict such an equilibrium sorption ratio by extending and combining established electrostatic binding theories such as Donnan, Langmuir, Manning as well as Poisson–Boltzmann approaches, to systematically study the competitive uptake of mono- and divalent counterions by the macromolecule. We compare and fit our models to coarse-grained (implicit-solvent) computer simulation data of the globular polyelectrolyte dendritic polyglycerol sulfate (dPGS) in salt solutions of mixed valencies. The dPGS has high potential to serve in macromolecular carrier applications in biological systems and at the same time constitutes a good model system for a highly charged macromolecule. We finally use the simulation-informed models to extrapolate and predict electrostatic features such as the effective charge as a function of the divalent ion concentration for a wide range of dPGS generations (sizes).'
author:
- Rohit Nikam
- Xiao Xu
- Matej Kanduč
- Joachim Dzubiella
title: 'Competitive sorption of mono- versus divalent ions by highly charged globular macromolecules'
---
\[intro\] Introduction
======================
Polyelectrolytes in polar solvents such as water are important and ubiquitous in biological as well as in synthetic matter. [@Muthukumar2017; @Katchalsky1964; @Rubinstein2012; @Boroudjerdi2005; @Forster1995; @Dobrynin2005; @Liu2003] In these systems, electrostatic interactions, regulated by free ions and water, play a dominant role in shaping the structural and electrostatic characteristics of the polyelectrolyte, and the subsequent function of the system. [@Muthukumar2017; @Rubinstein2012; @Boroudjerdi2005] The electrostatic attraction between the isolated polyelectrolyte molecule and the oppositely charged counterions in the solution leads to strong counterion condensation on the molecule. This significantly modifies its interaction with other charged molecules (*e.g.*, proteins, DNA, etc.) and its electric properties such as the electrophoretic mobility in an external electric field. [@Boroudjerdi2005; @Forster1995; @Liu2003] Therefore, understanding counterion condensation is of utmost importance in order to understand the properties of polyelectrolytes and their implications in the biological and synthetic environments. [@Chremos2016; @Boroudjerdi2005] Condensation effectively leads to neutralizing an equivalent amount of the structural charge ${Z_\mathrm{d}}$ of the macromolecule. [@alexander1984charge; @belloni1998ionic] Hence, the charged substrate plus its confined counterions may be considered as a single entity with an effective (or renormalized) charge ${Z_\mathrm{eff}}$, which is significantly lower than the bare structural charge ${Z_\mathrm{d}}$. One can then identify the difference ${Z_\mathrm{d}}-{Z_\mathrm{eff}}$ as the amount of counterions condensed in the surface region. [@Bocquet2002]
The phenomenon of counterion condensation and the effect of ionic strength on the configurational properties of different types of polyelectrolyte molecules such as chains, [@Forster1992; @dobrynin1995; @Wenner2002; @Raspaud1998; @Dobrynin2005; @Liu2002; @Liu2003; @Muthukumar2004; @Chremos2016] brushes, [@ruhe2004polyelectrolyte; @pincus1991colloid; @borisov1991collapse; @Zhulina1995; @Zhulina2000] or polyelectrolyte nanogels [@nanogel1; @nanogel2; @nanogel3; @arturo2017; @arturo2018] have been studied extensively in the past. Through the knowledge of the distribution of the salt ions around the polyelectrolyte, *e.g.*, measured in terms of the radial distribution function in simulations and experiments, it is possible to derive important properties such as charge–charge correlation, osmotic compressibility and shear viscosity of the system. [@forster1995polyelectrolytes] Muthukumar, in his extensive and comprehensive review of the experimental, theoretical and simulation based research done on polyelectrolyte chains, described the effect of salt concentration, valency of counterions, chain length and polyelectrolyte concentration on counterion condensation. [@Muthukumar2017; @manning2012poisson] Besides the properties of a single isolated polyelectrolyte molecule, the ionic strength of the solution also influences the interaction of polyelectrolytes with other entities, such as adsorption on substrates, [@VandeSteeg1992; @Dahlgren1993; @Netz1999; @Hariharan1998; @Gittins2001; @caruso2000hollow] formation of ultra-thin polyelectrolyte multilayer membranes, [@Decher1992; @Decher1997; @Ladam2000; @McAloney2001; @Dubas2001] the structure and solubility of polyelectrolyte complexes [@hugerth1997effect; @rusu2003formation; @winkler2002complex; @Kudlay2004a; @Mende2002] or coacervates. [@spruijt2010binodal; @Gucht2011; @biesheuvel2004electrostatic; @Perry2014]
As an emerging class of functional polyelectrolytes, polyelectrolyte nanogels [@nanogel1; @nanogel2; @nanogel3; @arturo2017; @arturo2018] and dendritic or hyperbranched polyelectrolytes [@JensDernedde2010; @Khandare2012; @Groeger2013; @Maysinger2015; @Reimann2015] have attracted considerable interest in the scientific community in recent years due to their multifaceted bioapplications, such as biological imaging, drug delivery and tissue engineering. [@Leereview; @Ballauff2004; @Tian2013] In particular, the hyperbranched or dendritic polyglycerol sulfate molecules (hPGS or dPGS, respectively) are found to possess strong anti-inflammatory properties,[@Maysinger2015; @Reimann2015] act as a transport vehicle for drugs towards tumor cells,[@Sousa-Herves2015; @Groeger2013; @Vonnemann2014] and can be used as imaging agents for the diagnosis of rheumatoid arthritis. [@Vonnemann2014] This wide variety of applications has thus proven them to be high-potential candidates for use in medical treatments. [@Khandare2012] Hence, the understanding of dPGS interaction with the *in vivo* environment becomes important. The highly symmetric dendritic topology, terminated with monovalent negatively charged sulfate groups, makes dPGS also an excellent representative model in the class of highly charged globular polyelectrolytes. [@xu2017charged; @nikam2018charge] Because of the charged terminal groups, dPGS mainly interacts through electrostatics, so that counterion condensation and the subsequent charge renormalization effects become substantial for function.
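The competitive uptake described in the abstract can be illustrated with a minimal two-species Langmuir isotherm, in the spirit of (but much simpler than) the models developed in this work; the binding constants and concentrations below are purely hypothetical:

```python
def competitive_langmuir(c1, c2, K1, K2):
    # Fractional occupancy of identical binding sites competed for by
    # monovalent (1) and divalent (2) counterions; K1, K2 are binding
    # constants (illustrative, not fitted to dPGS simulations).
    denom = 1.0 + K1 * c1 + K2 * c2
    return K1 * c1 / denom, K2 * c2 / denom

# hypothetical numbers: divalent ions bind more strongly (K2 > K1),
# but monovalent ions are far more abundant at physiological conditions
K1, K2 = 1.0, 50.0        # 1/M, illustrative only
c1, c2 = 0.15, 0.002      # M, roughly Na+ and Mg2+/Ca2+ levels
theta1, theta2 = competitive_langmuir(c1, c2, K1, K2)
print(theta1, theta2)
```

Even with a fifty-fold stronger divalent binding constant, the monovalent occupancy dominates here, illustrating why the sorption ratio is a genuine competition between valency and abundance.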
There have been past efforts to investigate the counterion condensation and to define the effective charge as a result of the charge renormalization on charged hard-sphere colloids. [@Ohshima1982; @Zimm1983; @alexander1984charge; @Belloni1984; @belloni1998ionic; @Ramanath1988; @Manning2007; @Bocquet2002; @Gillespie2014] However, the characterization of open-structure nanogel particles or dendrites such as dPGS, which are partly penetrable to ions and whose surface is not well defined, remains challenging. [@Ohshima2008] Recently, Xu *et al.* implemented a simple but accurate scheme to define and determine the effective surface potential and its location for dPGS, by mapping potentials obtained from simulations to the Debye–Hückel potential in the far-field regime. [@xu2017charged] This scheme is widely known as the Alexander prescription. [@alexander1984charge; @Trizac2002; @bocquet2002effective; @Levin2004] Based on this criterion, a systematic electrostatic characterization of dPGS has been performed via coarse-grained [@xu2017charged] and all-atom [@nikam2018charge] simulations by defining the number of condensed (bound) ions. It was then established that the strong binding of dPGS to lysozyme – an abundant protein in the human biological environment – and a sequential formation of a protein corona around dPGS in the presence of NaCl salt solution, is dominantly governed by the entropic gain due to the release of a few Na$^+$ counterions during binding. [@xu:biomacro] Proteins typically bind strongly to the macromolecular surface, thereby forming a protein ‘corona’, a dense shell of proteins that can entirely coat the macromolecule. [@Owens2006; @Cedervall2007; @Lindman2007; @Monopoli2012;
**Quantum Kaluza-Klein Cosmologies (V)**
Zhong Chao Wu
Dept. of Physics
Beijing Normal University
Beijing 100875, P.R. China
**Abstract**
In the No-boundary Universe with $d=11$ supergravity, under the $S_n \times S_{11-n}$ Kaluza-Klein ansatz, the only seed instanton for the universe creation is a $S_7 \times S_4$ space. It is proven that for the Freund-Rubin, Englert and Awada-Duff-Pope models the macroscopic universe in which we are living must be 4- instead of 7-dimensional without appealing to the anthropic principle.
PACS number(s): 98.80.Hw, 11.30.Pb, 04.60.+n, 04.70.Dy
Key words: quantum cosmology, Kaluza-Klein theory, supergravity, gravitational instanton
In a series of papers \[1\] the origin of the dimension of the universe was investigated for the first time in quantum cosmology. As far as I am aware, in the No-Boundary Universe \[2\], the only way to tackle the dimensionality of the universe is through Kaluza-Klein cosmologies. In the Kaluza-Klein model with $d=11$ supergravity, under the $S_n \times S_{11-n}$ ansatz, it has been shown that the macroscopic universe must be 4- or 7-dimensional. The motivation of this paper is to prove that the universe must be 4-dimensional.
In $d=11$ simple supergravity, in addition to fermion fields, a 3-index antisymmetric tensor $A_{MNP}$ is introduced into the theory by supersymmetry \[3\]. In the classical background of the $WKB$ approximation, one sets the fermion fields to vanish. Then the action of the bosonic fields can be written $$\bar{I}= \int \sqrt{-g_{11}}\left ( \frac{1}{2} R - \frac{1}{48}
F_{MNPQ}F^{MNPQ} + \frac{\sqrt{2}}{6\cdot (4!)^2}
\eta^{M_1M_2\cdots
M_{11}}F_{M_1M_2M_3M_4}F_{M_5M_6M_7M_8}A_{M_9M_{10}M_{11}} \right
)d^{11}x,$$ where $$F_{MNPQ} \equiv 4! \partial_{[M}A_{NPQ]},$$ $$\eta^{A\cdots N} = \frac{1}{\sqrt{-g_{11}}} \epsilon^{A\cdots N}$$ and $R$ is the scalar curvature of the spacetime with metric signature $(-, +, +, \cdots +)$. The theory is invariant under the Abelian gauge transformation $$\delta A_{MNP} = \partial_{[M}\zeta_{NP]}.$$ It is also noticed that the action is invariant under the combined symmetry of time reversal with $A_{MNP} \rightarrow -A_{MNP}$.
The field equations are $$R_{MN} - \frac{1}{2}Rg_{MN} = \frac{1}{48}
(8F_{MPQR}F_N^{\;\;\;PQR} -g_{MN}F_{SPQR}F^{SPQR}),$$ and $$F^{MNPQ}_{\;\;\;\;\;\;\;\;\;;M}= \left
[\frac{-\sqrt{2}}{2\cdot(4!)^2 }\right ]\cdot \eta^{M_1 \cdots
M_8NPQ}F_{M_1\cdots M_4}F_{M_5\cdots M_8}.$$
At the $WKB$ level, it is believed that the Lorentzian evolution of the universe originates from a compact instanton solution, i.e. a stationary action solution of the Euclidean Einstein and other field equations. In order to investigate the origin of the dimension of the universe, we are trying to find the following minisuperspace instantons: the $d=11$ spacetime takes a product form $S_n\times S_{11-n}$ with an arbitrary metric signature and all components of the $F$ field with mixed indices in the two factor spaces set to zero. In the factor space $S_n \;(n =1,2,3)$ the $F$ components must vanish due to the antisymmetry of the indices. Then $F$ must be a harmonic in $S_{11-n}$ since the right-hand side of the field equation (6) vanishes. It is known in de Rham cohomology that $H^4(S_4) =1$ and $H^4(S_m) =0 \;\;(m\neq 4)$. So there is no nontrivial instanton for $n = 1,2,3$. For $n=5,6$, both $F$ components in $S_5$ and $S_6$ must be harmonics and so vanish. By the dimensional duality, there does not exist a nontrivial instanton for $n= 10, 9, 8$ either. The case $S_4 \times S_7$ is the only possibility for the existence of a nontrivial instanton: the $F$ components must be a harmonic in $S_4$, but need not be in $S_7$. The no-boundary proposal and the ansatz are very strong; otherwise, the nonzero $F$ components could live in open or closed $n$-dimensional factor spaces $(4\leq n\leq 10)$ \[1\].
Four compact instantons are known; their Lorentzian versions are the Freund-Rubin, Englert, Awada-Duff-Pope and Englert-Rooman-Spindel spaces \[4\]\[5\]\[6\]\[7\]. They are products of a 4-dimensional anti-de Sitter space and a round or squashed 7-sphere. These spaces are distinguished by their symmetries from the infinitely many other solutions with the same $F$ field. From now on, Greek letters run from 0 to 3 for the indices in $S_4$ and small Latin letters from 4 to 10 for the indices in $S_7$.
One can analytically continue the $S_7$ or $S_4$ space at the equator to form a 7- or 4-dimensional de Sitter or anti-de Sitter space, which is identified as our macroscopic spacetime, with the $S_4$ or $S_7$ space as the internal space. One may naively think, since in either case the seed instanton is the same, that the creation of a macroscopic 7- or 4-dimensional universe should be equally likely. However, a closer investigation shows that this is not the case: it turns out that the macroscopic universe must be 4-dimensional, regardless of whether the universe is habitable.
The Freund-Rubin model possesses the $N=8$ supersymmetry \[4\]. Here the only nonzero $F$ components are in the $S_4$ factor space of the instanton $$F_{\mu \nu \sigma \delta} = i\kappa \sqrt{g_4}\epsilon_{\mu \nu \sigma \delta },$$ where $g_4$ is the determinant of the $S_4$ metric. The $F$ components are set imaginary in $S_4$ so that their values become real in the anti-de Sitter space, which is an analytic continuation of the $S_4$ space, as shown below. The $F$ field plays the role of an anisotropic effective cosmological constant, which is $\Lambda_7 = \kappa^2/3$ for $S_7$ and $\Lambda_4 = -2\kappa^2/3$ for $S_4$, in the sense that $R_{mn} = \Lambda_7 \; g_{mn}$ and $R_{\mu \nu} = \Lambda_4 \; g_{\mu \nu}$, respectively. The $S_4$ space must have radius $r_4 = (3/\Lambda_4)^{1/2}$ and metric signature $(-,-,-,-)$, while the $S_7$ space is of radius $r_7 =(6/\Lambda_7)^{1/2}$ and metric signature $(+,+, \cdots +)$.
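These values of $\Lambda_4$ and $\Lambda_7$ can be checked by inserting the ansatz (7) into the Einstein equations (5); a brief sketch (in the Lorentzian continuation, where $F_{MNPQ}F^{MNPQ} = -24\kappa^2$ and $F_{\mu PQR}F_{\nu}^{\;\;\;PQR} = -6\kappa^2 g_{\mu\nu}$):

```latex
% Eq. (5) with F restricted to the S_4 factor space reduces to
R_{mn}-\tfrac{1}{2}R\,g_{mn}=\tfrac{\kappa^2}{2}\,g_{mn},\qquad
R_{\mu\nu}-\tfrac{1}{2}R\,g_{\mu\nu}=-\tfrac{\kappa^2}{2}\,g_{\mu\nu}.
% Writing R_{mn}=\Lambda_7 g_{mn} and R_{\mu\nu}=\Lambda_4 g_{\mu\nu},
% so that R=7\Lambda_7+4\Lambda_4, the two equations become
-\tfrac{5}{2}\Lambda_7-2\Lambda_4=\tfrac{\kappa^2}{2},\qquad
-\tfrac{7}{2}\Lambda_7-\Lambda_4=-\tfrac{\kappa^2}{2},
% whose unique solution is \Lambda_7=\kappa^2/3 and \Lambda_4=-2\kappa^2/3.
```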
Since the metric signature of the factor space $S_4$ is not appropriate, one has to analytically continue the $S_4$ manifold into an anti-de Sitter space with the right metric signature $(-,+,+,+)$. The $S_4$ metric can be written $$ds_4^2= -dt^2 - \frac{3}{\Lambda_4} \sin^2\left
(\sqrt{\frac{\Lambda_4}{3}}t \right )(d\chi^2 + \sin^2 \chi
(d\theta^2 + \sin^2 \theta d\phi^2)).$$ One can obtain the 4-dimensional anti-de Sitter space by setting $
\rho = i\chi$. However, if one looks closely in the quantum creation scenario, this continuation takes two steps. First, one has to continue on a three surface where the metric is stationary. One can choose $\chi = \frac{\pi}{2}$ as the surface, set $\omega
= i(\chi -
---
address: |
Laboratoire d’Astrophysique, UMR 5572, Observatoire Midi-Pyrénées\
14 avenue E.-Belin, F-31400 Toulouse, France
author:
- 'G. GOLSE, J.-P. KNEIB and G. SOUCAIL'
title: CONSTRAINING THE COSMOLOGICAL PARAMETERS FROM GRAVITATIONAL LENSES WITH SEVERAL FAMILIES OF IMAGES
---
Introduction
============
Recent works on constraining the cosmological parameters using the CMB and the high-redshift supernovae seem to converge to a new “standard cosmological model” favouring a flat universe with $\Omega_m\sim 0.3$ and $\Omega_\lambda\sim 0.7$: White [@White] and references therein. However, these results are still uncertain and depend on some physical assumptions, so the flat $\Omega_m=1$ model is still possible (Le Dour [*et al.*]{} [@LeDour]). It is therefore important to explore other independent techniques to constrain these cosmological parameters.
In cluster gravitational lensing, the existence of multiple images – with known redshifts – given by the same source makes it possible to calibrate, in an absolute way, the total cluster mass deduced from the lens model. The great improvement in the mass modeling of cluster-lenses that includes the cluster galaxy halos (Kneib [*et al.*]{} [@Kneib96], Natarajan & Kneib [@Natarajan]) leads to the hope that clusters can also be used to constrain the geometry of the Universe, through the ratio of angular size distances, which only depends on the redshifts of the lens and the sources, and on the cosmological parameters. The observations of cluster-lenses containing a large number of multiple images led Link & Pierce [@Link] (hereafter LP98) to investigate this expectation. They considered a simple cluster potential and on-axis sources, so that images appear as Einstein rings. The ratio of such rings is then independent of the cluster potential and depends only on $\Omega_m$ and $\Omega_\lambda$, assuming known redshifts for the sources. According to them, this would allow marginal discrimination between extreme cosmological cases. But real gravitational lens systems are more complex concerning not only the potential but also off-axis positions of sources. They conclude that this method is ill-suited for application to real systems.
We have re-analyzed this problem building up on the modeling technique developed by us. As demonstrated below, we reach a rather different conclusion showing that it is possible to constrain $\Omega_m$ and $\Omega_\lambda$ using the positions of multiple images at different redshifts and some physically motivated lens models.
Throughout this paper we have assumed $H_0=65$ km s$^{-1}$ Mpc$^{-1}$; however, the proposed method is independent of the value of $H_0$.
Influence of $\Omega_m$ and $\Omega_\lambda$ on the images formation
====================================================================
Angular size distances ratio term
---------------------------------
In the lens equation: $\mathbf{\theta_{S}}= \mathbf{\theta_{I}} -
\displaystyle{\frac{2}{c^2}\frac{D_{OL}D_{LS}}{D_{OS}}} \mathbf\nabla
\phi_\theta(\mathbf{\theta_{I}}) $, the dependence on $\Omega_m$ and $\Omega_\lambda$ is solely contained in the term $F=\displaystyle{{D_{OL}}{D_{LS}}/{D_{OS}}}$. For a given lens plane, $F(z_s)$ increases rapidly up to a certain redshift and then stalls, with significant differences for various values of the cosmological parameters (see Fig. \[F\_zs\]). Thus in order to constrain the actual shape of $F(z_s)$ several families of multiple images are needed, ideally with their redshifts regularly distributed in $F(z_s)$ to maximize the range in the $F$ variation.
If we consider fixed redshifts for both the lens and the sources, at least 2 multiple images are needed to derive cosmological constraints. In that case $F$ has only an influence on the modulus of $\mathbf{\theta_{I}}-\mathbf{\theta_{S}}$. So taking the ratio of two different $F$ terms provides the intrinsic dependence on cosmological scenarios, independently of $H_0$. A typical configuration leads to the Fig. \[F\_zs\] plot. The discrepancy between the different cosmological models is not very large: less than 3% between an EdS model and a flat low-matter-density one. The figure also illustrates the expected degeneracy of the method, also confirmed by weak lensing analyses with a continuous distribution of background sources ([*e.g.*]{} Lombardi & Bertin [@Lombardi]).
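The size of this effect can be checked with a short numerical sketch. The code below assumes flat cosmologies (so the angular size distance between two redshifts follows from the difference of comoving distances) and works in units of $c/H_0$, which cancel in the ratio:

```python
import numpy as np

def E(z, om, ol):
    # dimensionless Hubble rate; a flat cosmology (om + ol = 1) is assumed
    return np.sqrt(om * (1.0 + z)**3 + ol)

def chi(z1, z2, om, ol, n=4001):
    # comoving distance between z1 and z2 in units of c/H0 (trapezoid rule)
    z = np.linspace(z1, z2, n)
    f = 1.0 / E(z, om, ol)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def F(zl, zs, om, ol):
    # the term D_OL * D_LS / D_OS of the lens equation; in a flat
    # universe D_A(z1, z2) = (chi(z2) - chi(z1)) / (1 + z2)
    d_ol = chi(0.0, zl, om, ol) / (1.0 + zl)
    d_ls = chi(zl, zs, om, ol) / (1.0 + zs)
    d_os = chi(0.0, zs, om, ol) / (1.0 + zs)
    return d_ol * d_ls / d_os

zl, zs1, zs2 = 0.3, 0.7, 2.0          # the configuration used in the text
r_lcdm = F(zl, zs1, 0.3, 0.7) / F(zl, zs2, 0.3, 0.7)
r_eds = F(zl, zs1, 1.0, 0.0) / F(zl, zs2, 1.0, 0.0)
print(r_lcdm, r_eds)
```

For this configuration the two ratios differ by only a couple of percent, consistent with the few-percent discrepancy quoted in the text.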
Relative influence of the different parameters
----------------------------------------------
We now look at the relative influence of the different parameters, including the lens parameters, to derive expected error bars on $\Omega_m$ and $\Omega_\lambda$. To model the potential we choose the mass density distribution proposed by Hjorth & Kneib [@Hjorth], characterized by a core radius, $a$, and a cut-off radius $s\gg a$. We can then get the expression of the deviation angle modulus $D_{\theta_{I}}=\parallel\mathbf{\theta_{I}}-\mathbf{\theta_{S}}\parallel$.
For 2 families of multiple images, the relevant quantity becomes the ratio of 2 deviation angles for 2 images $\theta_{I1}$ and $\theta_{I2}$ belonging to 2 different families at redshifts $z_{s1}$ and $z_{s2}$. Let’s define $R_{\theta_{I1},\theta_{I2}}=\displaystyle{\frac{D_{\theta_{I1}}}{D_{\theta_{I2}}}}$. With several families, the problem is highly constrained because a single potential must reproduce the whole set of images. In practice we calculate $\displaystyle{\frac{dR_{\theta_{I1},\theta_{I2}}}{R_{\theta_{I1},\theta_{I2}}}}$ versus the different parameters it depends on. We chose a typical configuration to get a numerical evaluation of the errors on the cosmological parameters: $z_l=0.3$, $z_{s1}=0.7$, $z_{s2}=2$, $\displaystyle{\frac{\theta_{I2}}{\theta_{I1}}}=2$, $\displaystyle{\frac{\theta_{s}}{\theta_{a}}}=10$ ($\theta_a=a/D_{OL}$,$\theta_s=s/D_{OL}$) and we assume $\Omega_m=0.3$ and $\Omega_\lambda=0.7$. We then obtain the following orders of magnitudes for the different contributions :
$$\frac{dR_{\theta_{I1},\theta_{I2}}}{R_{\theta_{I1},\theta_{I2}}} = 0.57\,dz_l + 0.74\,dz_{s1} + 0.17\,dz_{s2} + 0.4\,(d\Omega_m - d\Omega_\lambda) - 0.1\,\frac{d\theta_{I1}}{\theta_{I1}} - 0.06\,\frac{d\theta_{I2}}{\theta_{I2}} - 0.015\,\frac{d\theta_a}{\theta_a} + 0.02\,\frac{d\theta_s}{\theta_s}$$
As expected, even with 2 families of multiple images the influence of the cosmological parameters is of the second order. The precise value of the redshifts is quite fundamental; therefore, a spectroscopic determination ($dz=0.001$) is essential. The positions of the (flux-weighted) centers of the images are also important. With HST observations we assume $d\theta_I=0.1$”.
So even if the problem is less dependent on the core and cut-off radii (in other words, the mass profile), they will represent the main sources of error. Taking $d\theta_a/\theta_a= d\theta_s/\theta_s= 20$ %, we then derive the errors $d\Omega_m$ and $d\Omega_{\lambda}$ from the above relation in the flat low-matter-density model we chose. We did this computation for different sets of cosmological models. Indeed the errors we will obtain with this method change significantly with respect to $\Omega_m$ and $\Omega_\lambda$. All other things being equal apart from the cosmological parameters, we plot $d\Omega_m$ and $d\Omega_\lambda$ for a continuous set of universe models (Fig. \[erreurs\]). For instance in the 2 popular cosmological scenarios, we have :
$\Omega_m=0.3\pm0.24$, $\Omega_\lambda=0.7\pm0.5$ or $\Omega_m=1\pm0.33$, $\Omega_\lambda=0\pm1.2$
As this can be easily understood from the Fig. \[F\_zs\] degeneracy plot, the method is in general far more sensitive to the matter density than to the cosmological constant, for which the error bars are larger.
However, the results we could obtain this way are as precise as the ones given by other constraints. But these errors are just typical: given spectroscopic and HST observations, they depend mostly on the particular cluster and the potential model chosen to describe it. They could be considerably tightened with a precise model, and by increasing the number of clusters with multiple images.
\[simul\]Constraint on $(\Omega_m,\Omega_\lambda)$ from strong lensing
======================================================================
Method and algorithm for numerical simulations
-----------------------------------------------
We consider basically the potential introduced in section 2.2. After considering the lens equation, fixing arbitrary values $(\Omega_m^0$,$\Omega_\lambda^0)$ and a cluster lens redshift $z_l$, our code can determine the images of a source galaxy at a redshift $z_s$. Then taking as single observables these sets of images as well as the different redshifts, we can recover some parameters (the more important ones being $\sigma_0$, $\theta_a$
---
abstract: |
The main difficulty in solving the Helmholtz equation within polygons is due to non-analytic vertices. By using a method nearly identical to that used by Fox, Henrici, and Moler in their 1967 paper, it is demonstrated that such eigenvalue calculations can be extended to unprecedented precision, very often to well over a hundred digits, and sometimes to over a thousand digits.
A curious observation is that as one increases the number of terms in the eigenfunction expansion, the approximate eigenvalue may be made to alternate above and below the exact eigenvalue. This alternation provides a new method to bound eigenvalues, by inspection.
Symmetry must be exploited to simplify the geometry, reduce the number of non-analytic vertices and disentangle degeneracies. The symmetry-reduced polygons considered here have at most one non-analytic vertex from which all edges can be seen. Dirichlet, Neumann, and periodic-type edge conditions, are independently imposed on each polygon edge.
The full shapes include the regular polygons and some with re-entrant angles (cut-square, L-shape, 5-point star). Thousand-digit results are obtained for the lowest Dirichlet eigenvalue of the L-shape, and regular pentagon and hexagon.
author:
- Stephen
bibliography:
- 'references.bib'
title: 'Computing ultra-precise eigenvalues of the Laplacian within polygons'
---
\[sec:intro\]Introduction {#secintrointroduction .unnumbered}
=========================
The task is to calculate very precise eigenvalues of the Laplacian within the shapes shown in Fig. \[fig:allshapes\], on which may be imposed either Neumann or Dirichlet boundary conditions.
![Four geometries (with abbreviated names) in which a variety of Dirichlet and Neumann eigenvalues are calculated to within at least 100 digits. Of the regular polygons, only the regular pentagon is shown.[]{data-label="fig:allshapes"}](allshapes)
The technique is substantially identical to the method used by Fox, Henrici, and Moler [@fhm1967] (hereafter referred to as “FHM”) who used a method of particular solutions (MPS) called the “point-matching” or “collocation” method to calculate Dirichlet eigenvalues within the now-famous L-shape.
To see what is possible using this method, an assorted set of eigenvalues, all truncated to 100 digits of precision, for the chosen shapes (Fig. \[fig:allshapes\]) is presented in Table \[tab:ultraprecise\]. In addition, three “thousand-digit” results are submitted to the “On-line Encyclopedia of Integer Sequences” [@oeisorg] (OEIS.org). These results far exceed all previous published results.[^1]
It is generally a good idea to plot some eigenfunctions if only to inspect the contours and nodal patterns to ensure that one is actually calculating eigenvalues. Several eigenfunction contour plots are shown in Figs. \[fig:stareigenfunctions\], \[fig:pentagonplots\], and \[fig:cutsquareeigenfunctions\].
Of course, published concerns regarding the numerically ill-conditioned nature of this method must be addressed. Some such concerns were actually identified by FHM, but more recently clarified by Betcke and Trefethen [@bt2004] in 2004, when they demonstrated the numerical advantages of the so-called “generalized singular-value decomposition” (GSVD) method, a relative to the point-matching method.
Fortunately, the answer to making the point-matching method work is short and simple: One must both (a) select adequate matching points (both number and distribution) and (b) keep enough precision in the intermediate calculations. My empirical observations are that Chebyshev nodes chosen as matching points (as suggested by Betcke and Trefethen [@bt2004]) often work well, even where equally-spaced points do not, and that the precision of the intermediate calculations must be significantly higher than that of the eigenvalue, often by a factor of several.
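For concreteness, a small sketch of the Chebyshev distribution of matching points on a boundary segment (the segment endpoints and the number of points are arbitrary here):

```python
import numpy as np

def chebyshev_nodes(a, b, n):
    # n Chebyshev nodes on the segment (a, b); they cluster toward the
    # endpoints, where the error of the point-matching expansion tends
    # to concentrate, unlike equally spaced points
    k = np.arange(1, n + 1)
    t = np.cos((2.0 * k - 1.0) * np.pi / (2.0 * n))  # roots of T_n on (-1, 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * t

pts = chebyshev_nodes(0.0, 1.0, 12)
```

In a production run these points would be laid along each polygon edge where the boundary condition is enforced.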
The ability to calculate many digits depends on the convergence rate, and a good way to improve it is to exploit symmetries. Fortunately, this exploitation has the important side-effects of (a) reducing the complexity of the region in which one must work, (b) classifying eigenmodes, and (c) disentangling geometrically degenerate eigenfunctions. Do not underestimate the importance of exploiting symmetry.
At virtually every step in this project, only free software[^2] running on modern commodity computer hardware has been used, and with great success. The software of choice is the [GP/PARI]{} calculator [@PARI2] using the GNU Multiple Precision Arithmetic Library [gmp]{} [@gmp6] running on a laptop computer with a [GNU/linux]{} operating system and its countless ancillary programs. (Some use was also made of [maxima]{} [@maxima] for the occasional symbolic calculation.) This programming environment permits efficient numerical computations while “natively” retaining up to several thousand digits of precision.
Several personal computers were used, but the best was a modern laptop computer with a processor with eight threads.[^3] CPU times are reported using that laptop, and are unavoidably approximate due to multitasking.
Before beginning, it should be made clear that this is a classical and very well-known problem, worked on by many people over the last two-hundred years—with many applications and results. As such, I shall limit the discussion to only those facts that are required to reproduce and possibly extend the present calculations. A thirty-year-old, but still popular and relevant survey of the problem was given by Kuttler and Sigillito [@ks1984]. More practically, active investigators, Barnett and Betcke have created [ MPSpack]{} [@bb2010; @bh2014] that helps bring sophisticated eigenvalue calculations closer to the rest of us.
Despite the long history, except for the recent work of P. Amore, et al. [@abfr2015], who incidentally make this same observation, I am unaware of any published, non-closed-form eigenvalues accurate to just beyond a dozen or so digits for [*any*]{} shape not related to the closed-form solutions within the equilateral triangle, rectangle, or circle (ellipse). All eigenvalue results in this report are likely unprecedented.
\[sec:eigen\]The eigenvalue problem {#seceigenthe-eigenvalue-problem .unnumbered}
===================================
Let $\mathbf{r}$ be a point in the plane described by either Cartesian coordinates $(x,y)$ or polar coordinates $(r,\theta)$, where $x=r\cos\theta$ and $y=r\sin\theta$, and where notation ambiguity is removed by context. The two-dimensional Helmholtz equation is $$\Delta\Psi(k;\mathbf{r})+\lambda\Psi(k;\mathbf{r}) = 0
\label{eq:helmholtz1}$$ where $\Delta$ is the Laplacian and $k=\sqrt{\lambda}$ is the usual “wavenumber”. Without a boundary, $\lambda$ (or $k$) is treated as a continuous eigen-parameter, and $\Psi(k;\mathbf{r})$ may describe a free wave with wavelength $\Lambda=2\pi/k$. The “interior” Helmholtz eigenvalue problem is obtained by restricting $\mathbf{r}$ to the interior of a region (one of Fig. \[fig:allshapes\]), and imposing relevant boundary conditions (Neumann or Dirichlet), which constrains $\lambda$ to a non-accumulating set of discrete eigenvalues, some of which may be degenerate. (The trivial, “closed-form”, Neumann solution, i.e., $\Psi=\mbox{constant}$ with $\lambda=0$, is completely ignored in this project.)
The point-matching method works if the eigenvalues are non-degenerate. Thus it is very important to deal with degeneracies either by (a) dismissing them or (b) disentangling them using symmetry.
First, closed-form solutions are not only known[^4], but also arbitrarily high in “accidental” degeneracy as one climbs the eigenvalue towers[^5]; so after identifying them, simply exclude them from the calculations.
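As a concrete instance of such closed-form towers (for the circle rather than the polygons studied here), the Dirichlet eigenvalues of the unit disk are the squared Bessel-function zeros, a standard fact useful as a sanity check. The sketch below uses SciPy, not the arbitrary-precision tools of this project.

```python
from scipy.special import jn_zeros

def disk_dirichlet_eigenvalues(m_max, n_max):
    """Dirichlet eigenvalues of the unit disk: lambda_{m,n} = j_{m,n}^2,
    where j_{m,n} is the n-th positive zero of the Bessel function J_m.
    Every level with m >= 1 is doubly degenerate (cos/sin angular
    factors), illustrating why degeneracies must be dealt with.
    """
    eigs = []
    for m in range(m_max + 1):
        for j in jn_zeros(m, n_max):
            eigs.append((float(j) ** 2, m))
    eigs.sort()
    return eigs

eigs = disk_dirichlet_eigenvalues(2, 3)
# the lowest eigenvalue is j_{0,1}^2 = (2.404825...)^2, non-degenerate
```

Listing these towers next to point-matching output is a quick way to confirm that a solver is converging to genuine eigenvalues.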
Second, if there is a geometric or reflection symmetry, all eigenfunctions can be sorted into separate symmetry classes—some of which may be closed-form. Doing so effectively splits the problem up into a set of sub-problems, each one with a corresponding symmetry-reduced polygon and edge conditions, and a resulting, non-accumulating, infinite tower of distinct eigenvalues $$0 < \lambda_1 < \lambda_2 < \lambda_3 < \cdots < \lambda_\alpha < \cdots,$$ where $\alpha$ labels the eigenvalue within that tower. Considering only the non-closed-form symmetry classes, it is indeed assumed that there are no degeneracies within such a tower.[^6] Thus, to each non-closed-form symmetry class, we associate a sub-problem, effectively consisting of non-degenerate eigenmodes. A tower of such eigen-pairs for a given symmetry class shall be written as the set $$\left\{ \lambda_\alpha, \Psi_\alpha(\mathbf{r})\,\middle|\, \alpha=1,2,3,... \right\}
\label{eq:exacteigenpair}$$ where $\Psi(k_\alpha;\mathbf{r})\equiv\Psi_\alpha(\mathbf{r})$. Degeneracies, of course, may exist between separate symmetry classes, so it may be possible to choose a subset of non-closed-form classes and calculate only those eigenvalues.
It is these symmetry-reduced, sub-problems
---
author:
- 'Thomas Reeves[^1]'
- 'Anil Damle[^2]'
- 'Austin R. Benson'
title: 'Network Interpolation[^3]'
---
[^1]: Center for Applied Mathematics, Cornell University, Ithaca, NY 14853 ().
[^2]: Department of Computer Science, Cornell University, Ithaca, NY 14853 (, ).
[^3]: Submitted to the editors on June 28, 2019.
---
abstract: 'We present a dimensional analysis of two characteristic time scales in the boundary layer where the disk adjusts to the rotating neutron star (NS). The boundary layer is treated as a transition region between the NS surface and the first Keplerian orbit. The radial transport of the angular momentum in this layer is controlled by a viscous force defined by the Reynolds number, which in turn is related to the mass accretion rate. We show that the observed low-frequency Lorentzian is associated with radial oscillations in the boundary layer, whereas the observed break frequency is determined by the characteristic diffusion time of the inward motion of the matter in the accretion flow. Predictions of our model regarding relations between those two frequencies and the frequencies of kHz QPO’s compare favorably with recent observations for the source 4U 1728-34. This Letter contains a theoretical classification of kHz QPO’s in NS binaries and the related low frequency features. Thus, results concerning the relationship between the low-frequency Lorentzian of viscous oscillations and the break frequency are presented in the framework of our model of kHz QPO’s viewed as Keplerian oscillations in a rotating frame of reference.'
author:
- Lev Titarchuk
- Vladimir Osherovich
title: Correlations between kHz QPO and Low Frequency Features Attributed to Radial Oscillations and Diffusive Propagation in the Viscous Boundary Layer Around a Neutron Star
---
Introduction
============
The discovery of kilohertz quasiperiodic oscillations (QPO’s) in the low mass X-ray neutron star (NS) binaries (Strohmayer 1996; Van der Klis 1996 and Zhang 1996) has stimulated both theoretical and observational studies of these sources. In the upper part of the spectrum (400- 1200 Hz) for most of these sources, two frequencies $\nu_k$ and $\nu_h$ have been seen. Initially, the fact that for some sources, the peak separation frequency $\Delta \nu=\nu_h-\nu_k$ does not change much led to the beat frequency interpretation (Strohmayer 1996; Van der Klis 1998) which was presented as a concept for the first time in the paper by Alpar & Shaham (1985). Beat-frequency models, where the peak separation is identified with the NS spin rate have been challenged by observations: for Sco X-1, $\Delta\nu$ varies by 40% (van der Klis 1997 hereafter VK97) and for source 4U 1608-52, $\Delta\nu$ varies by 26% (Mendez 1998). Mounting observational evidence that $\Delta\nu$ is not constant demands a new theoretical approach. For Sco X-1, in the lower part of the spectrum, VK97 identified two branches (presumably the first and second harmonics) with frequencies 45 and 90 Hz which slowly increase in frequency when $\nu_k$ and $\nu_h$ increase. Furthermore, in the spectra observed by Rossi X-ray Timing Explorer (RXTE) for 4U 1728-34, Ford and van der Klis (1998, herein FV98) found low frequency Lorentzian (LFL) oscillations with frequencies between 10 and 50 Hz. These frequencies as well as break frequency, $\nu_{break}$ of the power spectrum density (PSD) for the same source were shown to be correlated with $\nu_k$ and $\nu_h$. It is clear that the low and high parts of the PSD of the kHz QPO sources should be related within the framework of the same theory. Difficulties which the beat frequency model faces are amplified by the requirement of relating the observed low frequency features, described above, with $\nu_k$ and $\nu_h$.
Recently, a different approach to this problem has been suggested: kHz QPO’s in the NS binaries have been modeled by Osherovich & Titarchuk (1999) as Keplerian oscillations in a rotating frame of reference. In this new model the fundamental frequency is the Keplerian frequency $\nu_k$ (the lower frequency of two kHz QPO’s) $$\nu_k={{1}\over{2\pi}}\left({{GM}\over{R^3}}\right)^{1/2},$$ where G is the gravitational constant, M is the NS mass, and R is the radius of the corresponding Keplerian orbit. The high QPO frequency $\nu_h$ is interpreted as the upper hybrid frequency of the Keplerian oscillator under the influence of the Coriolis force $$\nu_h=[\nu_k^2+(\Omega/\pi)^2]^{1/2},$$ where $\Omega$ is the angular rotational frequency of the NS magnetosphere.
For three sources (Sco X-1, 4U 1608-52 and 4U 1702-429), we demonstrated that the solid body rotation ($\Omega=\Omega_0=const$) is a good first order approximation. Slow variation of $\Omega$ as a function of $\nu_k$ within the second order approximation is related to the differential rotation of the magnetosphere controlled by a frozen-in magnetic structure. This model allows us to address the relation between the high and low frequency features in the PSD of the neutron systems. We interpreted the $\sim 45$ and $90$ Hz oscillations as 1st and 2nd harmonics of the lower branch of the Keplerian oscillations in the rotating frame of reference: $$\nu_L=(\Omega/\pi)(\nu_k/\nu_h)\sin\delta,$$ where $\delta$ is the angle between ${\bf \Omega}$ and the vector normal to the plane of the Keplerian oscillations. For Sco X-1, we found that the angle $\delta=5.5^o$ fits the observations.
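As a numerical illustration of Eqs. (1)-(3), the sketch below evaluates the three frequencies. The mass, radius, rotation rate, and angle are illustrative choices made here, not values fitted in this Letter.

```python
import numpy as np

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2 (CGS)
M_SUN = 1.989e33   # solar mass in g

def nu_k(M, R):
    """Keplerian frequency, Eq. (1): nu_k = (1/2pi) sqrt(GM/R^3)."""
    return np.sqrt(G * M / R**3) / (2 * np.pi)

def nu_h(nuk, Omega):
    """Upper hybrid frequency, Eq. (2)."""
    return np.sqrt(nuk**2 + (Omega / np.pi)**2)

def nu_L(nuk, nuh, Omega, delta):
    """Lower-branch frequency, Eq. (3); delta in radians."""
    return (Omega / np.pi) * (nuk / nuh) * np.sin(delta)

# Hypothetical configuration: a 1.4 M_sun star, an orbit at R = 18 km,
# magnetospheric rotation Omega/2pi = 300 Hz, and delta = 5.5 degrees.
M, R = 1.4 * M_SUN, 18e5  # g, cm
Omega = 2 * np.pi * 300.0
nuk = nu_k(M, R)          # ~0.9 kHz, in the kHz QPO range
nuh = nu_h(nuk, Omega)    # > nuk, as Eq. (2) requires
nul = nu_L(nuk, nuh, Omega, np.radians(5.5))
```

With these illustrative numbers $\nu_L$ comes out at a few tens of Hz, the same order as the $\sim 45$ Hz branch quoted for Sco X-1.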
In this Letter we include the LFL oscillations and related break frequency phenomenon in our classification. We attribute LFL oscillations to radial oscillations in the viscous boundary layer surrounding a neutron star. According to the model of Shakura & Sunyaev (1973, hereafter SS73), the innermost part of the Keplerian disk adjusts itself to the rotating central object (i.e. neutron star). The recent modelling by Titarchuk, Lapidus & Muslimov (1998, hereafter TLM) led to the determination of the characteristic thickness of the viscous boundary layer $L$. In the following section, we present the extension of this work to relate the frequency of the viscous oscillations $\nu_v$ and $\nu_{break}$ with $\nu_k$. Comparison with the observations is carried out for 4U 1728-34. The last section of this Letter contains our theoretical classification of kHz QPO’s and related low frequency phenomena.
Radial Oscillations and Diffusion in the Viscous Boundary Layer
===============================================================
We define the boundary layer as a transition region confined between the NS surface and the first Keplerian orbit. The radial motion in the disk is controlled by the friction and the angular momentum exchange between adjacent layers resulting in the loss of the initial angular momentum by an accreting matter. The corresponding radial transport of the angular momentum in a disk is described by the equation (e.g. SS73): $$\dot M {d\over {dR}}(\omega R^2) =
2\pi {d\over {dR}} (W_{r\varphi}R^2),$$ where $\dot{M}$ is the accretion rate, and $ W_{r\varphi}$ is the component of a viscous stress tensor which is related to the gradient of the rotational frequency $\omega$, namely $$W_{r\varphi}=-2\eta HR{{d\omega}\over{dR}},$$ where $H$ is a half-thickness of a disk, and $\eta$ is the turbulent viscosity. The nondimensional parameter which is essential for equation (4) is the Reynolds number for the accretion flow $$\gamma={{\dot M}\over{4\pi\eta H}}={{3R v_r}\over {{\it v}_t{\it l}_t}},$$ which is the inverse $\alpha-$parameter in the SS73-model; $v_r$ is a characteristic velocity, $v_t$ and $l_t$ are a turbulent velocity and related turbulent scale respectively.
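A hedged numerical illustration of Eq. (6): with hypothetical flow parameters the turbulent form of $\gamma$ can be evaluated directly, and $\alpha = 1/\gamma$ recovers a Shakura-Sunyaev-like value. None of the numbers below come from the Letter.

```python
import math

def reynolds_number(Mdot, eta_t, H):
    """gamma = Mdot / (4 pi eta H), Eq. (6); gamma is the inverse of
    the Shakura-Sunyaev alpha parameter."""
    return Mdot / (4 * math.pi * eta_t * H)

def reynolds_from_turbulence(R, v_r, v_t, l_t):
    """Equivalent form gamma = 3 R v_r / (v_t l_t) from Eq. (6)."""
    return 3.0 * R * v_r / (v_t * l_t)

# Hypothetical values: R = 1e6 cm, v_r = 1e6 cm/s, v_t = 1e8 cm/s,
# l_t = 1e4 cm  ->  gamma = 3, i.e. alpha = 1/gamma ~ 0.33.
gamma = reynolds_from_turbulence(1e6, 1e6, 1e8, 1e4)
alpha = 1.0 / gamma
```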
The conditions $\omega=\omega_0~{\rm at}~R=R_0$ (the NS radius), $\omega=\omega_K~{\rm at}~R=R_{out}$ (the radius where the boundary layer adjusts to the Keplerian motion), and ${{d\omega}\over{dR}}={{d\omega_K}\over{dR}}~{\rm at}~R=R_{out}$ were assumed by TLM as boundary conditions. Thus the profile $\omega(R)$ and the outer radius of the viscous boundary layer $R_{out}$ are uniquely determined by these boundary conditions. Presenting $\omega(R)$ in terms of dimensionless variables, namely the angular velocity $\theta=\omega/\omega_0$, radius $r=R/R_0$ ($R_0=x_0R_s$, where $R_s=2GM/c^2$ is the Schwarzschild radius), and mass $m=M/M_{\odot}$, we express the Keplerian angular velocity as $$\theta_K=6/(a_K r^{3/2}),$$ where $a_K=m(x_0/3)^{3/2}(\nu_0/363~{\rm Hz})$ and the NS rotational frequency $\nu_0$ has a particular value for each star. The particular coefficient
---
abstract: 'We consider the $\pi^+\pi^-\pi^0\gamma$ final state in electron-positron annihilation at center-of-mass energies not far from the threshold. Both initial and final state radiation of the hard photon are considered, but without interference between them. The amplitude for the final state radiation is obtained by using the effective Wess-Zumino-Witten Lagrangian for pion-photon interactions, valid at low energies. In real experiments the energies are never so small that the $\rho$ and $\omega$ mesons would have a negligible effect, so a phenomenological Breit-Wigner factor is introduced in the final state radiation amplitude to account for the influence of the vector mesons. Using the radiative 3$\pi$ production amplitudes, a Monte Carlo event generator is developed which could be useful in experimental studies.'
author:
- |
A.Ahmedov$^a$, G.V.Fedotovich$^b$, E.A.Kuraev$^c$, Z.K.Silagadze$^b$\
$^a$[Laboratory of Particle Physics, JINR, 141980, Dubna, 141980 Russia ]{}\
$^b$[Budker Institute of Nuclear Physics, 630 090, Novosibirsk, Russia ]{}\
$^c$[Laboratory of Theoretical Physics, JINR, 141980, Dubna, Russia ]{}
title: 'Near threshold radiative 3$\pi$ production in $e^+e^-$ annihilation'
---
Introduction
============
The new Brookhaven experimental result for the anomalous magnetic moment of the muon [@1] aroused considerable interest in the physics community, because it was interpreted as indicating new physics beyond the Standard Model [@2]. However, such claims, too premature in our opinion, assume that the theoretical prediction for the muon anomaly is well understood at the necessary level of precision. Hadronic uncertainties are the main concern [@3]. Fortunately, the leading hadronic contribution is related to the hadronic corrections to the photon vacuum polarization function, which can be accurately calculated provided that precise experimental data on the low-energy hadronic cross sections in $e^+e^-$ annihilation are at our disposal.
In the last few years high-statistics experimental data were collected in the $\rho$-$\omega$ region in Novosibirsk experiments at the VEPP-2M collider [@4]. In this region the hadronic cross sections are dominated by the $e^+e^-\to 2\pi$ and $e^+e^-\to 3\pi$ channels. The former is of uppermost importance for reduction of errors in the evaluation of the hadronic vacuum polarization contribution to the muon g-2. Considerable progress was reported for this channel by the CMD-2 collaboration [@5]. The $e^+e^-\to 3\pi$ channel, which gives a less important but still significant contribution to the hadronic error, was also investigated in the same experiment in the $\omega$-meson region [@6]. Such high precision experiments require accurate knowledge of various backgrounds. Among them, the $e^+e^-\to 3\pi\gamma$ channel provides an important background that needs to be well understood. This experimental necessity motivated our investigation of the three pion radiative production presented here. Besides being an important background source, this process is of interest in itself, because a detailed experimental study of the final state radiation will allow one to get important information about pion-photon dynamics at low energies. However, such an experimental investigation will require much more statistics than available in VEPP-2M experiments and may be feasible only at $\phi$-factories, where the low energy region can be reached by the radiative return technique as was recently demonstrated in the KLOE experiment [@7].
Initial state radiation
=======================
Let $J_\mu$ be the matrix element of the electromagnetic current between the vacuum and the $\pi^+\pi^-\pi^0$ final state. Then the initial state radiation (ISR) contribution to the $e^+e^-\to\pi^+\pi^-\pi^0\gamma$ process cross section is given at $O(\alpha^3)$ by the standard expression [@8] $$\begin{aligned}
& & d\sigma_{ISR}(e^+e^-\to 3\pi\gamma)=\frac{e^6}{4(2\pi)^8
(Q^2)^2}\left \{ \frac{Q^2}{4E^2}~J\cdot J^* \left (\frac{p_+}{k\cdot p_+}-
\frac{p_-}{k\cdot p_-} \right )^2- \right . \nonumber \\ & & \left .
-\frac{Q^2}{2E^2}~\frac{p_+\cdot J~
p_+\cdot J^* + p_-\cdot J~ p_-\cdot J^*}{k\cdot p_+ k\cdot p_-}-
\frac{J\cdot J^*}{2E^2}\left (
\frac{k\cdot p_+}{k\cdot p_-}+\frac{k\cdot p_-}{k\cdot p_+}\right )
+ \right . \label{eq1} \\ & & \left . +\frac{m_e^2}{E^2}
\left ( \frac{p_+\cdot J} {k\cdot p_-}-\frac{p_-\cdot J}{k\cdot p_+}\right )
\left (\frac{p_+\cdot J^*}{k\cdot p_-}-
\frac{p_-\cdot J^*}{k\cdot p_+}\right ) \right \}d\Phi \equiv
\frac{e^6}{4(2\pi)^8}~|A_{ISR}|^2d\Phi, \nonumber\end{aligned}$$ where $d\Phi$ stands for the Lorentz invariant phase space $$d\Phi=\frac{d\vec{k}}{2\omega}~\frac{d\vec{q}_+}{2E_+}~
\frac{d\vec{q}_-}{2E_-}~
\frac{d\vec{q}_0}{2E_0}~\delta(p_++p_--k-q_+-q_--q_0)$$ and $Q^2=(q_++q_-+q_0)^2=4E(E-\omega)$ is the photon virtuality, $E$ being the beam energy and $\omega$ – the energy of the $\gamma$ quantum. Particle 4-momenta assignment can be read from the corresponding diagrams presented in Fig.\[Fig1\].
The current matrix element $J_\mu$ has a general form $$J_\mu=\epsilon_{\mu\nu\sigma\tau}q_+^\nu q_-^\sigma q_0^\tau
~F_{3\pi}(q_+,q_-,q_0).
\label{eq2}$$ For the $F_{3\pi}$ form-factor, which depends only on invariants constructed from the pions 4-momenta, we will take the expression from [@9] $$F_{3\pi}=\frac{\sqrt{3}}{(2\pi)^2f_\pi^3}\left [\sin{\theta}\cos{\eta}
~R_\omega (Q^2)-\cos{\theta}\sin{\eta}~R_\phi (Q^2)\right ]
\left ( 1-3\alpha_K-\alpha_K H \right ) .
\label{eq3}$$ Here $\alpha_K\approx 0.5$, $f_\pi\approx 93~\mathrm{MeV}$ is the pion decay constant, $\eta=\theta-\arcsin{\frac{1}{\sqrt{3}}}\approx 3.4^\circ$ characterizes the departure of the $\omega$-$\phi$ mixing from the ideal one, and $$H=R_\rho(Q_0^2)+R_\rho(Q_+^2)+R_\rho(Q_-^2),$$ where $$Q_0^2=(q_++q_-)^2,\;\;
Q_+^2=(q_0+q_+)^2,\;\;Q_-^2=(q_0+q_-)^2.$$ The dimensionless Breit-Wigner factors have the form $$R_V(Q^2)=\left [ \frac{Q^2}{M_V^2}-1+i\frac{\Gamma_V}{M_V}\right ]^{-1},
\;\; R_\rho(Q^2)=\left [ \frac{Q^2}{M_\rho^2}-1+
i\frac{\sqrt{Q^2}\Gamma_\rho(Q^2)}{M_\rho^2}\right ]^{-1},$$ where $V=\omega, \phi$ and for the $\rho$ meson the energy-dependent width is used $$\Gamma_\rho(Q^2)=\Gamma_\rho \frac{M_\rho^2}{Q^2}\left (\frac{Q^2-4m_\pi^2}
{M_\rho^2-4m_\pi^2}\right )^{3/2}.$$ The last term in (\[eq1\]) is completely irrelevant for VEPP-2M energies if the hard photon is emitted at a large angle. So we will neglect it in the following.
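To make the resonance structure concrete, the following sketch evaluates the dimensionless Breit-Wigner factor $R_\rho(Q^2)$ with the energy-dependent width above. The meson masses and widths are PDG-like placeholder values, not numbers taken from this paper.

```python
import math

# rho and pion parameters in GeV (illustrative, assumed values)
M_RHO, GAMMA_RHO, M_PI = 0.7755, 0.1494, 0.1396

def gamma_rho(Q2):
    """Energy-dependent rho width, valid for Q2 > 4 m_pi^2."""
    return (GAMMA_RHO * M_RHO**2 / Q2
            * ((Q2 - 4 * M_PI**2) / (M_RHO**2 - 4 * M_PI**2)) ** 1.5)

def R_rho(Q2):
    """Dimensionless Breit-Wigner factor R_rho(Q^2) from the text."""
    return 1.0 / complex(Q2 / M_RHO**2 - 1.0,
                         math.sqrt(Q2) * gamma_rho(Q2) / M_RHO**2)

# |R_rho(Q^2)| is maximal near Q^2 = M_rho^2, as expected of a resonance.
```

In a Monte Carlo generator such a factor would simply multiply the final-state-radiation amplitude before squaring, as done with the form factor $F_{3\pi}$ above.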
Final state radiation
=====================
To describe final state radiation (FSR), we use the effective low-energy Wess-Zumino-Witten Lagrangian [@10]. The relevant piece of this Lagrangian is reproduced below $$\begin{aligned}
& & \hspace*{30mm}\left . \left .
{\cal
KOBE–FHD–95–06\
October 1995
[**[$\Delta s$]{} density in a proton and unpolarized\
lepton–polarized proton scatterings**]{}\
T. Morii\
\
[*Sciences for Natural Environment*]{}\
[*and*]{}\
[*Graduate School of Science and Technology,*]{}\
[*Kobe University, Nada, Kobe 657, Japan*]{}\
Alexander I. Titov\
\
[*Joint Institute for Nuclear Research,*]{}\
[*141980, Dubna, Moscow region, Russia*]{}\
and\
T. Yamanishi\
\
[*Osaka University, Ibaraki, Osaka 567, Japan*]{}\
It is shown that the parity–violating deep–inelastic scatterings of unpolarized charged leptons on polarized protons, $\ell^{\mp} + \vec P\to \stackrel{\scriptscriptstyle(-)}{\nu_{\ell}} + X$, could provide a sensitive test for the behavior and magnitude of the polarized strange–quark density in a proton. Below charm threshold these processes are also helpful to uniquely determine the magnitude of individual polarized parton distributions.
The strange–quark (s–quark) content of the nucleon poses interesting problems in contemporary hadron physics. Deductions of the $\sigma$–term from pion-nucleon scatterings imply the existence of a significant s–quark content in a nucleon[@sigmaterm]. New analysis suggests that about one third of the rest mass of the proton comes from $s\bar s$ pairs. So far, interesting experimental proposals[@newexp] have been presented to measure the neutral weak form factors of the nucleon which might be sensitive to the s–quarks inside the nucleon. A different idea is also proposed to directly probe the s–quark content of the proton by using the lepto– and photo–production of the $\phi$–meson, which is essentially 100% $s\bar s$[@phi]. Other surprising results on the s–quark content of the nucleon have been drawn from the data of polarized deep inelastic scatterings[@PDIS]. To our surprise, the experimental data have suggested that, contrary to the prediction of the naive quark model, there is a large and negative contribution of s–quarks to the proton spin, $i.e.$ $\Delta s=-0.12$, and furthermore very little of the proton spin is carried by quarks. For low-energy properties of baryons, conventional phenomenological quark models treat nucleons as consisting of only u– and d–quarks, and thus it naturally comes as a big surprise when some recent measurements and theoretical analyses indicate the possible existence of a sizable s–quark content. In order to gain a deep understanding of hadron dynamics, it is very important to investigate the behavior of s–quarks in a nucleon. In this paper, we concentrate on the behavior of the polarized s–quark and study the processes sensitive to its polarized distributions in the nucleon.
So far, several people have suggested various processes sensitive to polarized s–quark distributions, such as Drell–Yan processes[@Leader], inclusive $W^{\pm}$– and $Z^0$–productions[@Soffer94] in polarized proton–polarized proton collisions, and also inclusive $\pi^{\pm}$– and $K^{\pm}$–productions in polarized lepton–polarized proton scatterings[@Close]. However, since the differential cross sections for Drell–Yan processes and inclusive $W^{\pm}$–/$Z^0$–hadroproductions are described by the product of two parton distributions participating in such processes, one cannot extract the $x$–dependence of polarized s–quark distributions without ambiguities from such cross sections. In addition, those of inclusive $\pi^{\pm}$– and $K^{\pm}$–leptoproductions include the fragmentation functions of $\pi^{\pm}$– and $K^{\pm}$–decays which possess some theoretical ambiguities, and hence it is also difficult to derive the exact behavior of polarized s–quark distributions from these processes. Recently, it has been pointed out that parity–violating polarized electron elastic scatterings on unpolarized protons can give information on the matrix elements, $\langle p|\bar s\Gamma_{\mu}s|p\rangle$ with $\Gamma_{\mu}$ $=\gamma_{\mu}$ and $\gamma_{\mu}\gamma_5$[@Fayyazuddin]. However, since its differential cross section includes not only the spin–dependent but also spin–independent proton form factors, one cannot extract the polarized s–quark content without ambiguities even from such processes. Here we consider a different process for examining the polarized s–quark density, which is the parity–violating polarized deep inelastic scattering at high energy. It is advantageous to study such a process because its differential cross section includes only the spin–dependent structure function of the proton and is explicitly described as a function of $x$.
In parity–violating deep inelastic scatterings of unpolarized charged lepton on longitudinally polarized proton, an interesting parameter is the single–spin asymmetry $A_L^{W^{\mp}}$ defined as $$\begin{aligned}
A_L^{W^{\mp}}&=&\frac{(d\sigma_{++}^{W^{\mp}}+d\sigma_{-+}^{W^{\mp}})-
(d\sigma_{+-}^{W^{\mp}}+d\sigma_{--}^{W^{\mp}})}
{(d\sigma_{++}^{W^{\mp}}+d\sigma_{-+}^{W^{\mp}})+
(d\sigma_{+-}^{W^{\mp}}+d\sigma_{--}^{W^{\mp}})}\nonumber\\
&=&\frac{d\sigma_{0+}^{W^{\mp}}-d\sigma_{0-}^{W^{\mp}}}
{d\sigma_{0+}^{W^{\mp}}+d\sigma_{0-}^{W^{\mp}}}
=\frac{d\Delta_L\sigma^{W^{\mp}}/dx}{d\sigma^{W^{\mp}}/dx}~,
\label{eqn:A_L}\end{aligned}$$ where $d\sigma_{0-}^{W^{\mp}}$, for instance, denotes that the lepton is unpolarized and the helicity of the proton is negative. Note that since a fast incoming negatively (positively) charged lepton, $\ell^-$ ($\ell^+$), couples to a $W$–boson only when it has a negative (positive) helicity, some of the spin–dependent cross sections in eq.(\[eqn:A\_L\]) vanish. For parity–violating weak–interacting reactions with $W^{\mp}$ exchanges, $\ell^{\mp} + \vec P\to \stackrel{\scriptscriptstyle(-)}{\nu_{\ell}} + X$, the spin–dependent and spin–independent differential cross sections as a function of momentum fraction $x$ are given by[@Anselmino] $$\begin{aligned}
&&\frac{d\Delta_L\sigma^{W^{\mp}}}{dx}
=16\pi M_NE\frac{\alpha^2}{Q^4}\eta\left\{\pm (\frac{2}{3}+\frac{xM_N}{6E})x~
g_1^{W^{\mp}}(x, Q^2)+(\frac{2}{3}-\frac{xM_N}{12E})~g_3^{W^{\mp}}(x, Q^2)
\right\}~,\nonumber\\
&&\label{eqn:dDs}\\
&&\frac{d\sigma^{W^{\mp}}}{dx}
=16\pi M_NE\frac{\alpha^2}{Q^4}\eta\left\{(\frac{2}{3}-\frac{xM_N}{4E})~
F_2^{W^{\mp}}(x, Q^2)\pm\frac{1}{3}x~F_3^{W^{\mp}}(x, Q^2)\right\}~,
\label{eqn:ds}\end{aligned}$$ where $E$ is the energy of the charged lepton beam and $M_N$ the mass of the proton. $\eta$ is written in terms of the $W$–boson mass $M_W$ as $$\eta=\frac{1}{2}\left(\frac{G_FM^2_W}{4\pi\alpha}\frac{Q^2}{Q^2+M_W^2}\right)~.
\label{eqn:eta}$$ $g_1^{W^{\mp}}$, $g_3^{W^{\mp}}$ in eq.(\[eqn:dDs\]) and $F_2^{W^{\mp}}$, $F_3^{W^{\mp}}$ in eq.(\[eqn:ds\]) represent spin–dependent and spin–independent proton structure functions, respectively. Below the charm threshold, a region that could be investigated by the SMC and/or E143 Collaborations, we can describe these structure functions for the W$^-$ exchange as $$\begin{aligned}
&&F_2^{W^-}(x, Q^2)=2x\left[c_1\{u_v(x, Q^2)+u_s(x, Q^2)\}+
c_2~\bar d_s(x, Q^2)+c_3~\bar s_s(x, Q^2)\right]~,
\nonumber\\
&&F_3^{W^-}(x, Q^2)=2\left[c_1\{u_v(x, Q^2)+u_s(x, Q^2)\}-
c_2~\bar d_s(x, Q^2)-c_3~\bar s_s(x, Q^2)\right]~,
---
author:
- Xiangyu Cao
- Alexandre Nicolas
- Denny Trimcev
- Alberto Rosso
bibliography:
- 'yield.bib'
title: ' Soft modes and strain redistribution in continuous models of amorphous plasticity: the Eshelby paradigm, and beyond?'
---
![(a) Sketch of the macroscopic shear stress response of a disordered solid subject to a quasistatic deformation, with depictions of common deformation protocols. Stress fluctuations are not represented in the sketch. (b) Representations of the new strain variables $e_1$, $e_2$, and $e_3$. In this work we consider pure shear along the $e_2$ direction and $\gamma$ identifies to the average of $e_2$.[]{data-label="fig:Macroscopic_response"}](figure1.png){width="0.8\columnwidth"}
Apply a dab of toothpaste onto a toothbrush and slightly tilt the brush. The paste will respond to the small shear stresses $\Sigma$ thus created in its bulk by deforming elastically. In contrast, when you squeeze a toothpaste tube, the stresses in the material exceed a critical yield value $\Sigma_y$, and the paste starts to flow. This “liquid”-like phase under shear is observed not only in pastes and (concentrated) suspensions, but also in other soft solids such as emulsions and foams [@coussot2014yield]. Other disordered materials such as metallic glasses also depart from an elastic behavior under large enough stresses, but then break instead of flowing. In the athermal limit, the change observed at $\Sigma_y$ is a dynamical phase transition known as yielding transition.
To study it, a standard experimental protocol consists in slowly[^1] deforming the material and monitoring its macroscopic stress. For small deformations, the response is linear and elastic. For larger ones, the deformation becomes macroscopically irreversible, due to the onset of plasticity. Three distinct plastic responses can be observed, as shown in Fig. \[fig:Macroscopic\_response\]: (i) the stress grows monotonically and saturates at a steady-state value $\Sigma_y$; (ii) the stress overshoots $\Sigma_y$ and, upon reaching $\Sigma_\text{max}>\Sigma_y$, drops rapidly to the stationary value $\Sigma_y$ [@divoux2011stress] (note that it is still unclear if the material fails globally at $\Sigma_\text{max}$ – as in a spinodal transition [@zapperi1997first; @procaccia2017mechanical] – or through a large sequence of finite-size avalanches) (iii) at $\Sigma_\text{max}$, the material breaks and the stress drops to zero.
Microscopically, the mechanism underlying the irreversible plastic response is the localised rearrangement of a few particles (droplets in emulsions, bubbles in foams), a process called shear transformation (ST) [@argon1979plastic]. Recently, more detailed investigation has revealed that these ST do not arise randomly in the material, but display spatial correlations [@chikkadi2012shear; @nicolas2014spatiotemporal]. It is now widely believed that these correlations stem from the elastic deformation induced by the ST, which has a peculiar quadrupolar shape, as predicted by Eshelby half a century ago [@eshelby1957determination; @schall2007structural]. This quadrupolar kernel has been observed in atomistic simulations of several model glasses [@maloney2006amorphous; @puosi2014time]. Moreover, the plastic ST instability is preceded by the emergence of localisation in the low-frequency modes of the vibrational spectrum [@tanguy10mode; @manning11soft; @charbonneau16universal]. The localised soft spots tend to coincide with the subsequent ST. Interestingly, the displacement field around a soft spot displays a long-range tail, decaying as $r^{1-d}$, with $d$ the spatial dimension, which is consistent with the quadrupolar shape observed after the plastic event.
On the basis of this picture of localised plastic rearrangements, elasto–plastic models (EPM) have proposed to coarse-grain disordered solids into a collection of blocks alternating between an elastic regime and plastic events interacting via a quadrupolar kernel. Following similar endeavours for the study of earthquakes [@chen1991self], these models have succeeded in capturing the presence of strongly correlated dynamics in these systems (avalanches, possible shear bands, etc.) [@baret2002extremal; @budrikis2013avalanche; @nicolas2017deformation; @lin2014scaling; @lin15prl; @gueudre17scaling]. However, a clear connection between the microscopic description and these coarse-grained models is missing. In particular, the universality of the quadrupolar propagator used in EPM may still be questioned and the discreteness of EPM precludes the study of vibrational modes.
An alternative approach is provided by continuum models that extend the free energy description of solids beyond the perfect elastic limit. In these models plasticity is introduced by means of a disordered potential which displays many local minima, as explained in Section \[sec:model\]. Such models, possibly pioneered by Kartha and co-workers [@kartha95tweed], were intensively studied by Onuki [@onuki2003plastic] and Jagla [@jagla07shear]. This paper intends to use the continuum approach to bridge the gap between atomistic simulations and discrete EPM [^2], with an emphasis on the initial soft modes and the actual response to ST.
Considering two-dimensional (2D) materials subjected to pure shear, we find that the low-frequency modes are always peaked in point-like “soft spots”, where the next ST will take place. This extreme localisation is at variance with short-range depinning models, where soft modes have a finite localisation length which can be tuned by playing with the disorder strength [@cao2017localisation; @Tanguy2004localisation]. A closer analysis shows a halo of finite displacements around the soft spots pointing in the radial direction, with a $1/r$ radial decay and a two-fold azimuthal symmetry (this corresponds to a $1/r^2$ decay with four-fold azimuthal symmetry in the strain field), due to the elastic embedding of the impurities, see Section \[sec:denny\]. Surprisingly, these halos do not always match Eshelby’s solution. Instead, we find a one-parameter continuous family of kernels depending on the distribution of plastic disorder in the system (see Fig. \[fig:displacement\_th\]). Their shapes are rationalised analytically in Section \[sec:single\_impurity\] by calculating the soft mode associated with a point-like plastic impurity at $r=0$ embedded in an incompressible elastic medium. In polar coordinates, the (non-affine) displacement field $u$ reads: $$u_r(r,\theta) \propto \frac{ \cos(2\theta)} {1 + \delta \cos(4\theta)} \,,\, u_\theta = 0\,, \label{eq:u}$$ where $\theta = 0$ is the principal axis of positive stretch, and $\delta=(\mu_3-\mu_2)/(\mu_3+\mu_2)$ quantifies the plasticity-induced anisotropy in the shear moduli $\mu_2$ and $\mu_3$ (associated with the strains $e_2$ and $e_3$, respectively; see Fig \[fig:Macroscopic\_response\]). For $\delta=0$ we recover the standard quadrupolar (Eshelby-like) propagator. When $\delta \to 1$ we find a fracture-like kernel concentrating the deformation along the diagonal directions. 
This limit is obtained when the plastic potential softens the material to such an extent that the modulus along these directions vanishes (namely $\mu_2 \to 0$ while $\mu_3$ remains finite). To the best of our knowledge, the fracture-like propagator has not been observed yet, but we speculate that it might be seen in carefully aged glasses, in the marginal state that precedes global failure, when extended regions are on the brink of plastic failure. In the case of a single impurity, the soft mode has exactly the same shape as the final strain field induced by the ST, but also closely (or exactly if $\mu_2=\mu_3$) matches the transient strain field during the plastic event, up to renormalisation (Section \[sec:SPE3\]).
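The kernel family in Eq. \[eq:u\] can be tabulated in a few lines — a sketch for illustration, not the paper's code — to contrast the Eshelby-like limit $\delta = 0$ with the near-fracture limit $\delta \to 1$:

```python
import numpy as np

def soft_mode_u_r(theta, delta, r=1.0):
    """Radial displacement of the soft-mode halo around a point-like impurity:
    u_r ∝ cos(2θ) / (1 + δ cos(4θ)), with the 1/r radial decay quoted above.
    Normalisation is arbitrary."""
    return np.cos(2.0 * theta) / (1.0 + delta * np.cos(4.0 * theta)) / r

theta = np.linspace(0.0, 2.0 * np.pi, 721)
u_eshelby = soft_mode_u_r(theta, delta=0.0)    # standard quadrupolar kernel
u_fracture = soft_mode_u_r(theta, delta=0.99)  # nearly fracture-like kernel

# Two-fold azimuthal symmetry: u_r(θ) = u_r(θ + π).
assert np.allclose(u_eshelby, soft_mode_u_r(theta + np.pi, 0.0))

# As δ → 1 the denominator 1 + δ cos(4θ) nearly vanishes along the diagonals
# θ = ±π/4, ±3π/4, so the deformation concentrates there (ratio ≫ 1).
near_diag = np.pi / 4.0 - 0.01
print(abs(soft_mode_u_r(near_diag, 0.99)) / abs(soft_mode_u_r(near_diag, 0.0)))
```

For $\delta = 0$ this is the familiar quadrupolar angular dependence; for $\delta$ close to 1 nearly all of the amplitude sits along the diagonal directions, as described above.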
Field-based models \[sec:model\]
================================
To begin with, we recall how plasticity is introduced in Continuum Mechanics descriptions of disordered solids [@kartha95tweed; @puglisi2000mechanics; @onuki2003plastic; @lookman03ferro; @jagla07shear; @jagla2017non]. In the spirit of the works of Jagla [@jagla07shear], this is achieved by first writing the free energy of an elastic material and then incorporating the plastic disorder in it.
Strain variables and linear elasticity
--------------------------------------
Even though glassy materials are discrete at the atomic scale, they can be handled as continua as long as one is interested in length scales larger than a few particle diameters [@tsamados2009local]. To linear order, deformations in a continuous medium are quantified by the strain tensor $$\begin{aligned}
&\epsilon_{ij} = \frac12(\partial_j u_{i} + \partial_i u_{j}) \text{
---
author:
- Setsuko Wada
- Takashi Onaka
- Issei Yamamura
- Yoshitada Murata
- 'Alan T. Tokunaga'
date: 'Received 6 May 2003 / Accepted 2 June 2003'
title: '$^{13}$C isotope effects on infrared bands of quenched carbonaceous composite (QCC)'
---
Introduction
============
A wide range of the $^{12}$C/$^{13}$C ratio has been reported in various celestial objects. The ratio changes due to nucleosynthesis and mixing in the interior of stars. During the first dredge-up on the red giant branch (RGB) the convective envelope reaches regions abundant in $^{13}$C that was processed from $^{12}$C, and the $^{12}$C/$^{13}$C ratio decreases. Observations of RGB stars often show $^{12}$C/$^{13}$C ratios of 5–20, even lower than theoretically predicted, suggesting the presence of extra-mixing below the convective envelope (e.g. Gilroy [@gil89]). In the third dredge-up during the asymptotic giant branch (AGB) phase, an increase in $^{12}$C/$^{13}$C is generally expected, but the ratio could also decrease due to cool bottom processing for low-mass stars (Wasserburg et al. [@was95]; Nollett et al. [@nol03]) or to hot bottom burning for more massive stars (Frost et al. [@fro98]). Observations of five carbon-rich circumstellar envelopes indicate ratios of 30–65 (Kahane et al. [@kah92]), while ten carbon stars are shown to have ratios in the range 12–60 in their circumstellar envelopes (Greaves & Holland [@gre97]). Some carbon stars show $^{12}$C/$^{13}$C ratios as low as 3 (Lambert et al. [@lam86]; Ohnaka & Tsuji [@ohn96]; Schöier & Olofsson [@sch00]). The $^{12}$C/$^{13}$C ratio in post-AGB stars and planetary nebulae (PNe) reflects the cumulative effects of different mixing and nuclear processing events during the entire evolution of their progenitors. Lower limits of the ratio of 3 to 10 have been obtained for several objects in the post-AGB phase (Palla et al. [@pal00]; Greaves & Holland [@gre97]). Recently Josselin & Lébre ([@jos01]) estimated an upper limit of $^{12}$C/$^{13}$C of 5 for the post-AGB candidate HD179821, whereas a relatively large ratio of $72 \pm 26$ is reported for another post-AGB star, HD56126 (Bakker & Lambert [@bak98]). Clegg et al. ([@cle97]) found low ratios of 15 and 21 in two PNe. Further low values of the ratio, 2–30, have been reported in recent studies of several PNe (Palla et al. [@pal00]; Balser et al. [@bal02]; Josselin & Bachiller [@jos03]), suggesting that some stars undergo non-standard processing in the stellar interior and that a low $^{12}$C/$^{13}$C ratio can be expected during the late stage of their evolution. The solar system value is 89 (Anders & Grevesse [@and89]).
The $^{12}$C/$^{13}$C ratio also provides key information on the chemical evolution in the Galaxy (for a review, see Wilson [@wil99]). Observations of molecules and of solid CO$_2$ in the interstellar medium indicate that the ratio ranges from 10–100 and increases with the Galactocentric distance. The $^{12}$C/$^{13}$C ratio is suggested to be about 10–20 in the Galactic center region (Wilson [@wil99]; Boogert et al. [@boo00]; Savage et al. [@sav02]). The Galactic gradient is thought to be built by nucleosynthesis in the course of the Galactic chemical evolution, and the suggested ratio of 10–20 in the Galactic center indicates the presence of significant stellar sources of $^{13}$C. Interstellar graphite spherules in the Murchison meteorite show a range of the ratio of 7–1330 (Bernatowicz et al. [@ber91]). Some presolar SiC grains show very low $^{12}$C/$^{13}$C ratios of less than 10 and they are thought to originate from very $^{13}$C-rich stars in the AGB phase (Amari et al. [@ama01]). These observations suggest that low $^{12}$C/$^{13}$C environments are not uncommon among objects in the AGB, post-AGB, and PN phases as well as in some regions of the interstellar medium. Carbon-bearing species formed in these environments could thus show non-negligible carbon isotopic effects in their spectra.
A set of emission bands at 3.3, 6.2, 7.6–7.8, 8.6, and 11.2$\mu$m have been observed in various celestial objects and are called the unidentified infrared (UIR) bands. Fainter companion bands are also sometimes seen. The exact nature of the carriers has not yet been understood completely, but it is generally believed that the emitters are polycyclic aromatic hydrocarbons (PAHs) or carbonaceous materials containing PAH-like atomic groups, including nanodiamond grains (Léger & Puget [@leg84]; Allamandola et al. [@all85]; Sakata et al. [@sak84]; Papoular et al. [@pap89]; Arnoult et al. [@arn00]; Jones & d’Hendecourt [@jon00]). Alternatively, Holmlid ([@hol00]) has recently proposed de-excitation of Rydberg matter as a possible carrier. The UIR bands have been observed in a wide range of objects, including H[II]{} regions, reflection nebulae, post-AGB stars, and PNe (for a review, see Tokunaga [@tok97]). They have also been commonly seen in the diffuse Galactic emission (Tanaka et al. [@tan96]; Onaka et al. [@ona96]; Mattila et al. [@mat96]; Kahanpää et al. [@kah03]) as well as in external galaxies (e.g. Mattila et al. [@mat99]; Helou et al. [@hel00]; Reach et al. [@rea00]; Lu et al. [@lu03]), indicating that the carriers are a common constituent of the interstellar medium and are present in various environments. Carbon-rich objects in the evolutionary stages from post-AGB to PNe often show the emission bands, and thus isotopic effects should be detectable if the bands arise from carbonaceous materials of low $^{12}$C/$^{13}$C ratios. Simple calculations for a $^{13}$C-substituted benzene molecule suggest that the peak shift can be as much as 0.15$\mu$m for the C$-$C stretching mode (Appendix \[cal\]).
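The order of magnitude of such a shift follows from a simple harmonic-oscillator argument: $\nu \propto \sqrt{k/\mu}$, so replacing one $^{12}$C by $^{13}$C at fixed force constant raises the reduced mass and pushes the band to longer wavelength. A back-of-the-envelope sketch (a diatomic C$-$C model, not the benzene normal-mode calculation of the Appendix):

```python
import math

def cc_stretch_shift(lam_um, m1=12.0, m2=12.0, m1_new=12.0, m2_new=13.0):
    """Wavelength shift of a C-C stretch band when one 12C is replaced by 13C,
    assuming an unchanged force constant: nu ~ sqrt(k/mu), so lambda ~ sqrt(mu)."""
    mu_old = m1 * m2 / (m1 + m2)                   # 12C-12C reduced mass: 6.0 amu
    mu_new = m1_new * m2_new / (m1_new + m2_new)   # 12C-13C reduced mass: 6.24 amu
    return lam_um * (math.sqrt(mu_new / mu_old) - 1.0)

shift = cc_stretch_shift(6.2)  # applied to the 6.2 um UIR band
print(f"predicted shift: {shift:.2f} um")
```

The estimate gives a shift of roughly 0.1$\mu$m for a single substitution, the same order as the 0.15$\mu$m quoted above for the full $^{13}$C-benzene calculation.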
Observations with the Infrared Space Observatory (ISO; Kessler et al. [@kes96]) have provided a large database of UIR band spectra in various objects (e.g. Beintema et al. [@bei96]; Molster et al. [@mol96]; Verstraete et al. [@ver96; @ver01]; Boulanger [@bou98]; Cesarsky et al. [@ces00a; @ces00b]; Uchida et al. [@uch98; @uch00]; Moutou et al. [@mou00]; Hony et al. [@hon01]). Recently Peeters et al. ([@pee02]) have investigated in detail the 6–9$\mu$m spectra of 57 sources taken by the Short Wavelength Spectrometer (SWS; de Graauw et al. [@degr96]) on board the ISO and found that the 6.2, 7.7, and 8.6$\mu$m UIR bands show appreciable variations, particularly for post-AGB stars and PNe. On the other hand, the variations in the 11.2$\mu$m band are relatively modest and those in the 3.3$\mu$m band are less pronounced (Tokunaga et al. [@tok91]; Roche et al. [@roc91]; Hony et al. [@hon01]; van Diedenhoven et al. [@van03]). These variations can be interpreted in part in terms of nitrogen substitution in PAHs and of anharmonicity, but not all of the observed aspects of the UIR bands have yet been fully understood (Verstraete et al. [@ver01]; Pech et al. [@pec02]; Peeters et al. [@pee02]). Part of the observed variations could also originate from isotopic effects of the UIR band carriers, since the objects that show the variations are mostly post-AGB stars and PNe, in which small $^{12}$C/$^{13}$C ratios can be expected.
In the present paper we investigate $^{13}$C isotopic effects on the UIR bands experimentally. We synthesize a laboratory analogue of carbonaceous dust, the quenched carbonaceous composite (QCC; Sakata et al. [@sak84]), with various $^{12}$C/$^{13}$C ratios from a starting gas mixture of $^{12}$CH$_4$ and $^{13}$CH$_4$. The QCC shows infrared bands similar to the UIR bands, and shifts in the band peaks due to the $^{13}$C substitution are clearly detected. In Sect. 2 we describe the experimental procedure. The results are shown in Sect. 3 and discussed in comparison with observations in Sect. 4. A summary is given in Sect. 5.
Experimental
============
The experimental procedure for synthesizing the QCC is described in detail in Sakata et al. ([@sak84]). Methane (CH$_4$), the source gas of the QCC, is decomposed by the imposed microwave radiation and becomes a plasma. Carbonaceous condensates are formed in the injection beam of the plasma by quenching of the gas. Typically two types of the QCC are formed. One is a brown-black material (hereafter called dark-CC), which is collected on a substrate in the main injection beam. It has been shown to consist of a coagulation of carbon-onion-like particles (
Recently there has been much interest in the search for unconventional electron behavior deviating from the Fermi liquid picture[@unconv]. Besides this, the other paradigm that is well-established on theoretical grounds is the Luttinger liquid behavior of one-dimensional (1D) electron systems[@sol; @hal]. There have been suggestions that this behavior could be extended to two-dimensional (2D) systems, in the hope that it may explain some of the features of the copper-oxide materials[@and]. However, at least for the Luttinger model, the analytic continuation in the number $D$ of dimensions has shown that the Luttinger liquid behavior is lost as soon as one departs from $D = 1$[@ccm; @arri].
Several authors have also analyzed the possibility that singular interactions could lead to the breakdown of the Fermi liquid picture[@sing]. With regard to real low-dimensional systems, such as carbon nanotubes, the main electron interaction comes actually from the long-range Coulomb potential $V(|{\bf r}|) \sim
1/|{\bf r}| $. This is also the case for the 2D layers in graphite, which have a vanishing density of states at the Fermi level. Quite remarkably, a quasiparticle decay rate linear in energy has been measured experimentally in graphite[@exp], pointing at marginal Fermi liquid behavior in such 2D layers. Owing to the singular Coulomb interaction, the imaginary part of the electron self-energy in the 2D system behaves at weak coupling $g$ like $g^2 \omega $[@expl]. Crucially, however, the effective coupling scales at low energy as $g \sim 1/\log (\omega )$. This prevents the logarithmic suppression of the quasiparticle weight, which gets corrected by terms of order $g^2 \log (\omega ) \sim 1/\log (\omega )$[@marg].
In this letter we investigate whether the long-range Coulomb interaction may lead to the breakdown of the Fermi liquid behavior at any dimension between $D = 1$ and 2. The issue is significant for the purpose of comparing with recent experimental observations of power-law behavior of the tunneling conductance in multi-walled nanotubes[@mwnt]. These are systems whose description lies between that of a pure 1D system and the 2D graphite layer. It turns out, for instance, that the critical exponent measured for tunneling into the bulk of the multi-walled nanotubes is $\alpha \approx
0.3$. This value is close to the exponent found for single-walled nanotubes[@bock; @yao]. However, it is much larger than expected after taking into account the reduction due to screening ($\sim 1/\sqrt{N}$) in a wire with a large number $N$ of subbands, which points towards appreciable effects of the long-range Coulomb interaction in the system.
We develop the analytic continuation in the number of dimensions having in mind the low-energy modes of metallic nanotubes, which have linear branches crossing at the Fermi level. From this picture, we build at general dimension $D$ a manifold of linear branches in momentum space crossing at a given Fermi point. We consider the hamiltonian $$\begin{aligned}
H & = & v_F \int_0^{\Lambda } d p |{\bf p}|^{D-1}
\int \frac{d\Omega }{(2\pi )^D} \;
\Psi^{+} ({\bf p}) \; \mbox{\boldmath $\sigma
\cdot $} {\bf p} \; \Psi ({\bf p}) \nonumber \\
\lefteqn{ + e^2 \int_0^{\Lambda } d p |{\bf p}|^{D-1}
\int \frac{d\Omega }{(2\pi )^D} \;
\rho ({\bf p}) \; \frac{c(D)}{|{\bf p}|^{D-1}} \;
\rho (-{\bf p}) \;\;\;\;\;\; }
\label{ham}\end{aligned}$$ where the $\sigma_i $ matrices are defined formally by $ \{ \sigma_i , \sigma_j \} = 2\delta_{ij}$. Here $\rho ({\bf p})$ are density operators made of the electron modes $\Psi ({\bf p})$, and $ c(D)/|{\bf p}|^{D-1} $ corresponds to the Fourier transform of the Coulomb potential in dimension $D$. Its usual logarithmic dependence on $|{\bf p}|$ at $D = 1$ is obtained by taking the 1D limit with $ c(D) =
\Gamma ((D-1)/2)/(2\sqrt{\pi})^{3-D}$.
The dispersion relation $\varepsilon ({\bf p}) = \pm |{\bf p}|$ is that of Dirac fermions, with a vanishing density of states at the Fermi level above $D = 1$. This ensures that the Coulomb interaction remains unscreened in the analytic continuation. At $D = 2$ we recover the low-energy description of the electronic properties of a graphite layer, dominated by the presence of isolated Fermi points with conical dispersion relation at the corners of the Brillouin zone[@graph].
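The vanishing density of states for the linear dispersion above $D = 1$ is easy to illustrate numerically (our sketch, not from the paper): for $\varepsilon({\bf p}) \propto |{\bf p}|$ in $D$ dimensions, the density of states scales as $N(\varepsilon) \propto \varepsilon^{D-1}$, which at $D = 2$ vanishes linearly at the Fermi point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample momenta uniformly in a 2D disk of radius 1 and histogram eps = |p|:
# the density of states N(eps) ∝ eps^(D-1) vanishes linearly at eps = 0 for D = 2.
p = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))
eps = np.linalg.norm(p[np.sum(p**2, axis=1) <= 1.0], axis=1)

counts, edges = np.histogram(eps, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Least-squares slope of a line through the origin: the normalised density
# of eps on [0, 1] is exactly 2*eps, so the fitted slope should be close to 2.
slope = np.sum(counts * centers) / np.sum(centers**2)
print(slope)
```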
In the above picture, we are neglecting interactions that mix the two inequivalent Fermi points common to the low-energy spectra of graphite layers and metallic nanotubes. In the latter, such interactions have been considered in Refs. and , with the result that they have smaller relative strength ($\sim 0.1/N$, in terms of the number $N$ of subbands) and remain small down to extremely low energies. More recently, the question has been addressed regarding the interactions in the graphite layer, and it also turns out that phases with broken symmetry cannot be realized, unless the system is doped about half-filling[@nos] or it is in a strong coupling regime[@khves].
We will accomplish a self-consistent solution of the model by looking for fixed-points of the renormalization group transformations implemented by the reduction of the cutoff $\Lambda $[@sh]. As usual, the integration of high-energy modes at that scale leads to the cutoff dependence of the parameters in the low-energy effective theory. We will see that the Fermi velocity $v_F$ grows in general as the cutoff is reduced towards the Fermi point. On the other hand, the electron charge $e$ stays constant as $\Lambda
\rightarrow 0$. This comes from the fact that the polarizability $\Pi $ does not show any singular dependence on the high-energy cutoff $\Lambda $ for $D < 3$. The polarizability is then given by $$\Pi ({\bf k}, \omega_k) = b(D) \frac{v_F^{2-D} {\bf k}^2}
{ | v_F^2 {\bf k}^2 - \omega_k^2 |^{(3-D)/2} }\; ,\;$$ where $b(D) = \frac{2}{ \sqrt{\pi} } \frac{ \Gamma ( (D+1)/2 )^2
\Gamma ( (3-D)/2 ) }{ (2\sqrt{\pi})^D \Gamma (D+1) }$.
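As a consistency check on the analytic continuation (our illustration, not from the paper), the prefactor $b(D)$ can be evaluated numerically at the endpoints: at $D = 2$ one finds $b = 1/16$, matching the known one-loop polarizability of a single two-component Dirac fermion, $\Pi \propto {\bf k}^2/(16\sqrt{v_F^2{\bf k}^2-\omega^2})$, while at $D = 1$ one finds $b = 1/\pi$.

```python
from math import gamma, sqrt, pi

def b(D):
    """Prefactor b(D) of the polarizability at general dimension D, as given
    in the text. (Do not evaluate at D = 3, where Gamma((3-D)/2) diverges.)"""
    return (2.0 / sqrt(pi)) * gamma((D + 1) / 2) ** 2 * gamma((3 - D) / 2) \
           / ((2.0 * sqrt(pi)) ** D * gamma(D + 1))

print(b(2.0))  # ≈ 0.0625 = 1/16
print(b(1.0))  # ≈ 0.3183 = 1/pi
```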
The dependence of $v_F$ on the cutoff $\Lambda $ implies an incomplete cancellation between self-energy and vertex corrections to the polarizability. The dressed polarizability depends therefore on the effective Fermi velocity $v_F (\Lambda )$. The renormalized value of $v_F$ is determined by fixing it self-consistently to the value obtained in the electron propagator $G$ corrected by the self-energy contribution $$\begin{aligned}
\Sigma ({\bf k}, \omega_k) & = & - e^2 \int_0^{\Lambda }
d p |{\bf p}|^{D-1} \int \frac{d\Omega }{(2\pi )^D}
\int \frac{d \omega_p}{2\pi } \nonumber \\
\lefteqn{ G ({\bf k} - {\bf p}, \omega_k - \omega_p)
\frac{-i}{ \frac{|{\bf p}|^{D-1}}{c(D)} + e^2 \Pi ({\bf p},
\omega_p) } . }
\label{selfe}\end{aligned}$$
The fixed-points of the renormalization group in the limit $\Lambda \rightarrow 0$ determine the universality class to which the model belongs. At $D = 2$, we are bound to obtain the low-energy fixed-point at vanishing coupling of the model of Dirac fermions with Coulomb interaction[@marg]. On the other hand, at $D = 1$ there should presumably be a fixed-point corresponding to Luttinger liquid behavior. We note, however, that no solution of the model has been obtained yet without carrying a dependence on the transverse scale needed to define the 1D logarithmic potential. Our dimensional regularization overcomes the problem of introducing such an external parameter, which prevents a proper scaling behavior of the model[@wang].
At general $D$, the self-energy (\[selfe\]) shows a logarithmic dependence on the cutoff at small frequency $\omega_k$ and small momentum ${\bf k}$. This is the signature of the renormalization of the electron field scale and the Fermi velocity. In the low-energy theory with high-energy modes integrated out, the electron propagator becomes $$\begin{aligned}
\frac{1}{G} & = & \frac{1}{G_0} - \Sigma
\approx Z
---
abstract: |
The rotation curve for the IV galactic quadrant, within the solar circle, is derived from the Columbia University - U. de Chile CO(J=1$\to$0) survey of molecular gas. A new sampling, four times denser in longitude than in our previous analysis, is used to compute kinematical parameters that require derivatives with respect to galactocentric radius: the angular velocity $\Omega(R)$, the epicyclic frequency $\kappa(R)$, and the parameters $A(R)$ and $B(R)$ describing, respectively, gas shear and vorticity. The face-on surface density of molecular gas is computed from the CO data in galactocentric radial bins for the subcentral vicinity, the same spectral region used to derive the rotation curve, where the two-fold ambiguity in kinematical distances is minimal. The rate of massive star formation per unit area is derived, for the same radial bins, from the luminosity of IRAS point-like sources with FIR colors of UC H[II]{} regions detected in the CS(J=2$\to$1) line. Massive star formation occurs preferentially in three regions of high molecular gas density, coincident with lines of sight tangent to spiral arms. The molecular gas motion in these arms resembles that of a solid body, characterized by constant angular velocity and by low shear and vorticity. The formation of massive stars in the arms follows the Schmidt law, $\Sigma_{MSFR} \propto [\Sigma_{gas}]^n$, with an index of $n =
1.2 \pm 0.2$. Our results suggest that the large scale kinematics, through shear, regulate global star formation in the Galactic disk.
author:
- 'A. Luna, L. Bronfman, L. Carrasco, and J. May'
title: |
Molecular Gas, Kinematics, and OB Star Formation\
in the Spiral Arms of the Southern Milky Way
---
INTRODUCTION
============
The rotation curve, describing the circular speed of rotating material as a function of galactocentric radius, is a fundamental tool for the study of the kinematics of our Galaxy. It is best derived, because of interstellar extinction, from observations of atomic and molecular gas in radio and mm wavelengths. The derivation involves determining the [*terminal velocity*]{}, or maximum absolute radial velocity relative to the Sun, toward lines of sight that sample the Galaxy within the solar circle (quadrants I and IV). Such terminal velocities correspond, assuming pure circular motion, to the tangent points to circumferences around the galactic center, named [*subcentral points*]{}. These points subtend a circumference that connects the solar position with the galactic center. A detailed analysis of the rotation curve can reveal important physical characteristics of the rotating material, such as the amount of shear and vorticity at each galactocentric radius. These physical quantities regulate the gravitational stability of a differentially rotating gaseous disk and, consequently, the large scale distribution and properties of star formation in the galactic disk.
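The tangent-point construction just described can be sketched in a few lines of code. The solar constants and the toy flat rotation law below are illustrative assumptions, not the values or the fit adopted in this paper; the kinematic definitions ($\Omega = \Theta/R$, $A = \frac12(\Theta/R - d\Theta/dR)$, $B = -\frac12(\Theta/R + d\Theta/dR)$, $\kappa^2 = -4B\Omega$) are the standard ones.

```python
import numpy as np

# Illustrative solar constants (assumptions, not the paper's adopted values).
R0, Theta0 = 8.5, 220.0  # kpc, km/s

l = np.linspace(272.0, 358.0, 200)       # quadrant IV longitudes (deg)
sinl = np.abs(np.sin(np.radians(l)))

# Toy "observation": terminal velocities produced by a flat rotation curve.
v_ter = -(Theta0 - Theta0 * sinl)        # negative in quadrant IV

# Tangent-point method: each longitude samples one galactocentric radius.
R = R0 * sinl
Theta = np.abs(v_ter) + Theta0 * sinl    # recovered circular speed

order = np.argsort(R)
R, Theta = R[order], Theta[order]

Omega = Theta / R                        # angular velocity
dTheta_dR = np.gradient(Theta, R)
A = 0.5 * (Theta / R - dTheta_dR)        # shear
B = -0.5 * (Theta / R + dTheta_dR)       # vorticity
kappa = np.sqrt(-4.0 * B * Omega)        # epicyclic frequency
```

For the flat toy curve the recovered speed is constant, $\Theta(R) = \Theta_0$, and the epicyclic frequency satisfies the textbook relation $\kappa = \sqrt{2}\,\Omega$.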
The first derivation of the rotation curve for the IV galactic quadrant that made use of the CO(J=1$\to$0) line - the best tracer of molecular hydrogen in the interstellar medium - was presented by Alvarez, May, & Bronfman (1990). The spectral data used to determine the terminal velocities were taken from the Columbia - U. de Chile surveys [@grabelsky87; @bronf89], which have a sampling interval of 0$^{\circ}$.125 (roughly the beam size). However, the terminal velocities in @alvarez90 were measured only every 0$^{\circ}$.5 in galactic longitude, because of the difficulties involved in the visual examination of a very large number of spectra. A new derivation of the rotation curve, which uses a computer search code to examine all the available spectra ($\approx$15000), is presented here. The disk kinematic characteristics in the IV galactic quadrant are analyzed in detail from this new rotation curve. These characteristics, as a function of galactocentric radius, are compared with the molecular gas density and with the local rate of massive star formation.
A proper derivation of the spiral pattern of our Galaxy requires knowledge of the distances to the adopted tracers. These distances are also required to compute the masses and luminosities of such tracers. For the gas, kinematical distances can be obtained from radial velocity data of radio line observations, adopting a rotation curve, under the assumption of pure circular motions. For clouds within the solar circle, however, there is a two-fold ambiguity in the kinematic distance, which is difficult to circumvent and has to be resolved on a case-by-case basis. But in the vicinity of the subcentral points such ambiguity is minimal, since at the subcentral points themselves the kinematic distances are uniquely defined.
It is worth noting that large scale streaming motions in spiral arms, with amplitudes of $\sim$10 km/s, which produce deviations from pure rotation, have been observed in a number of regions of the Galaxy (Burton et al. 1988). Streaming motions of such amplitude may introduce uncertainties of up to 5% in the estimation of galactocentric radii when the streaming is along the line of sight. In such an unfavorable case, the corresponding uncertainties in the estimated distances, for the section of the Galaxy analyzed here, may range from 0.6 kpc to 1.7 kpc. In any case, for objects beyond $\sim$3 kpc from the Sun, because of optical extinction, kinematical distances are usually the only ones available.
Massive stars are formed within aggregates of molecular gas and dust of 10$^5$-10$^6$ solar masses, about 50-100 pc in size, which are commonly known as giant molecular clouds, or GMCs for short. The association between OB stars and the interstellar medium has been established through optical, infrared, and CO observations of GMCs close enough to be largely unaffected by extinction (Orion, Carina, etc). The physical conditions in GMCs control their rates of OB star formation, and are one of the main agents that regulate the evolution of the galactic disk [@evans99].
There is a close relationship between the galactic spiral structure and the formation of GMCs and, hence, with the formation rate of OB stars [@dame86; @solomon86]. Therefore, the GMCs and the regions of OB star formation provide a very good tool to trace the spiral arm pattern of a galaxy. An early description of the Milky Way spiral arm pattern was given by @gg76, who observed the H109$\alpha$ line emitted in H[II]{} regions associated with young massive stars. A four arm spiral pattern for the southern Milky Way was later proposed by @cyh87, using a larger observational database of hydrogen recombination lines (H109$\alpha$ & H110$\alpha$). The four arm spiral pattern is in general agreement with that obtained from H[I]{} and CO large scale observations of the Galaxy [@rob83; @grabelsky87; @bronf88; @alvarez90; @valle02].
Star formation is likely to occur in regions where the gas in the Galactic disk is unstable to the growth of gravitational perturbations. In a classical paper, @schm59 introduced a parametrization relating the volume density of star formation to the volume density of gas through a power law; such parametrization, known as the “Schmidt Law”, has been studied observationally [@kennic89; @wong02] and explained on theoretical grounds [@toomre64; @tan00]. A study of the gas stability in the galactic disk must include (a) comparison of the gas density with a critical value above which the gaseous aggregates undergo gravitational collapse [@toomre64; @kennic89] and (b) examination of the gas shear rate, which governs the process of destruction of molecular clouds (e.g. Kenney, Carlstrom, & Young 1993; Wong & Blitz 2002), presumably through the injection of turbulent motions [@maclow04].
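Operationally, the Schmidt index $n$ is the slope of a straight-line fit in log–log space. A minimal sketch on synthetic radial bins (the numbers below are invented purely for illustration; the fit to the actual data yields $n = 1.2 \pm 0.2$, as quoted in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic radial bins: Sigma_MSFR = C * Sigma_gas^1.2 with log-normal scatter.
sigma_gas = np.logspace(0.0, 1.5, 25)  # toy surface densities
sigma_sfr = 1e-3 * sigma_gas**1.2 * 10**rng.normal(0.0, 0.05, sigma_gas.size)

# Least-squares fit of log Sigma_MSFR = n * log Sigma_gas + const.
n, log_c = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(f"fitted Schmidt index n = {n:.2f}")
```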
The link between massive star formation and kinematical conditions in disks has been studied mostly for external spiral galaxies [@aalto99; @wong02; @bossier03], where the spatial resolution that can be achieved by the observations is not as good as for the Milky Way. The main goal of the present paper is, therefore, to accurately describe the spiral arm structure in the [*subcentral vicinity*]{} of our Galaxy, focusing on the molecular gas kinematics and density, and on the rate of massive star formation, with the hope of contributing to the understanding of the formation and evolution of disk galaxies in general. The analysis is carried out for the IV galactic quadrant, where the spiral structure is more evident [@bronf88] than in the I quadrant. Preliminary work has been presented by @cys83 and, more recently, by @aluna01.
Section §2 describes the observational datasets used, the most complete available of their kind. These data are used in Section §3 to derive the rotation curve and analyze the relation between molecular gas kinematics, molecular gas surface density, and the massive star formation rate. The validity of the Schmidt Law for the Milky Way is analyzed in Section §4, and a summary of the results is given in Section §5.
OBSERVATIONS
============
The data used to derive the rotation curve and the molecular gas surface density are part of the Columbia-U. Chile $^{12}$CO(J=1$\to$0) surveys. These surveys provide us with the most extensive and homogeneous observational dataset of CO emission in the galactic disk [@grabelsky87; @bronf89; @dame01]. The beam-size of the antenna in the CO line is 8$\arcmin$.8, and an angular sampling of 0$^\circ$.125 was adopted. The surveys cover the entire IV galactic quadrant in longitude, and $\pm 2^\
---
address: 'Department of Mathematics, Hokkaido University, Sapporo, 060-0810 Japan'
author:
- Toshiyuki Akita
title: |
A formula for the Euler characteristics of\
even dimensional triangulated manifolds
---
[^1]
A finite simplicial complex $K$ is called an [*Eulerian manifold*]{} (or a [*semi-Eulerian complex*]{} in the literature) if all of its maximal faces have the same dimension and, for every nonempty face $\sigma\in K$, $$\chi({\operatorname{Lk}}\sigma)=\chi(S^{\dim K-\dim\sigma-1})$$ holds, where ${\operatorname{Lk}}\sigma$ is the link of $\sigma$ in $K$ and $S^n$ is the $n$-dimensional sphere. Note that $K$ is not necessarily connected. Any triangulation of a closed manifold is an Eulerian manifold. More generally, a triangulation of a homology manifold without boundary provides an Eulerian manifold. The purpose of this short note is to prove the following alternative formula for the Euler characteristics of even dimensional Eulerian manifolds.
\[main\] Let $K$ be a $2m$-dimensional Eulerian manifold. Then $$\label{eq-main}
\chi(K)=\sum_{i=0}^{2m}\left(-\frac{1}{2}\right)^i f_i(K)$$ holds, where $f_i(K)$ is the number of $i$-simplices of $K$.
A finite simplicial complex $L$ is called a [*flag complex*]{} if every collection of vertices of $L$ which are pairwise adjacent spans a simplex of $L$. The formula was proved in [@akita] under the additional assumptions that $K$ is a PL-triangulation of a closed $2m$-manifold and is a flag complex. M. W. Davis pointed out that the formula follows from a result in [@davis], provided $K$ is a flag complex (see [*Note added in proof*]{} in [@akita]). Both results follow from the considerations of the Euler characteristics of Coxeter groups. In this note, we deduce the formula from the generalized Dehn-Sommerville equations proved by Klee [@klee].
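Before turning to the proof, the formula can be spot-checked numerically on boundaries of simplices: $\partial\Delta^{2m+1}$ triangulates $S^{2m}$ (so $\chi = 2$) and has $f_i = \binom{2m+2}{i+1}$. A short script (not part of the note):

```python
from math import comb

def euler_alt(f):
    """Right-hand side of the theorem: sum_i (-1/2)^i * f_i."""
    return sum((-0.5) ** i * fi for i, fi in enumerate(f))

def boundary_simplex_f(dim_sphere):
    """f-vector of the boundary of the (dim_sphere+1)-simplex, a triangulation
    of S^dim_sphere: f_i = C(dim_sphere + 2, i + 1)."""
    return [comb(dim_sphere + 2, i + 1) for i in range(dim_sphere + 1)]

for m in (1, 2, 3):  # S^2, S^4, S^6, each with Euler characteristic 2
    f = boundary_simplex_f(2 * m)
    print(2 * m, f, euler_alt(f))
```

For example $S^2 = \partial\Delta^3$ has $f = (4, 6, 4)$ and $4 - 6/2 + 4/4 = 2$, in agreement with $\chi(S^2) = 2$.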
Let $K$ be a finite $(d-1)$-dimensional simplicial complex and $f_i=f_i(K)$ the number of $i$-simplices of $K$ as before. The $d$-tuple $(f_0,f_1,\dots,f_{d-1})$ is called the [*$f$-vector*]{} of $K$. The [*$f$-polynomial*]{} $f_K(t)$ of $K$ is defined by $$f_K(t)=t^d+f_0t^{d-1}+\cdots+f_{d-2}t+f_{d-1}.$$ Define the [*$h$-polynomial*]{} $h_K(t)$ of $K$, $$h_K(t)=h_0t^d+h_1t^{d-1}+\cdots+h_{d-1}t+h_d,$$ by the rule $h_K(t)=f_K(t-1)$. The $(d+1)$-tuple $(h_0,h_1,\dots,h_d)$ is called the [*$h$-vector*]{} of $K$. The $h$-vector of $K$ satisfies the generalized Dehn-Sommerville equations, as stated below in Theorem \[DS\].
\[DS\] Let $K$ be a $(d-1)$-dimensional Eulerian manifold. Then $$h_{d-i}-h_i=(-1)^i\binom{d}{i}(\chi(K)-\chi(S^{d-1}))$$ holds for all $i$ $(0\leq i\leq d)$.
Klee stated the generalized Dehn-Sommerville equations in terms of the $f$-vector rather than the $h$-vector. The formulae quoted in Theorem \[DS\] are equivalent to those in [@klee] and can be found in [@ed]. Theorem \[DS\] was also proved in [@panov] by a quite different method, provided that $K$ is a triangulation of a closed manifold.
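As a concrete illustration (not in the original note): for the boundary of the tetrahedron, $d = 3$ and $f = (4, 6, 4)$, so $f_K(t) = t^3 + 4t^2 + 6t + 4$ and $h_K(t) = f_K(t-1) = t^3 + t^2 + t + 1$, giving the $h$-vector $(1, 1, 1, 1)$; the symmetry $h_{d-i} = h_i$ holds, consistent with Theorem \[DS\] since $\chi(K) = \chi(S^2) = 2$. The change of variables is mechanical:

```python
from math import comb

def h_vector(f):
    """h-vector of a (d-1)-dimensional complex from its f-vector, obtained by
    expanding h_K(t) = f_K(t - 1). The coefficient list of f_K in decreasing
    powers of t is (1, f_0, ..., f_{d-1})."""
    d = len(f)
    coeffs = [1] + list(f)          # c_j multiplies t^(d-j)
    h = [0] * (d + 1)
    for j, c in enumerate(coeffs):
        n = d - j                   # expand c * (t-1)^n
        for k in range(n + 1):
            # c * C(n,k) * (-1)^k contributes to the coefficient of t^(n-k),
            # i.e. to h_{j+k}
            h[j + k] += c * comb(n, k) * (-1) ** k
    return h

print(h_vector([4, 6, 4]))  # boundary of the tetrahedron -> [1, 1, 1, 1]
```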
Now we prove Theorem \[main\]. We have $$h_K(-1)=\sum_{i=0}^{2m+1}(-1)^{2m+1-i}h_i
=\sum_{i=0}^{m} (-1)^i (h_{2m+1-i}-h_i).$$ Now Theorem \[DS\] asserts that $$h_{2m+1-i}-h_i=(-1)^i\binom{2m+1}{i}(\chi(K)-2).$$ Hence we obtain $$\label{h-poly}
h_K(-1)=(\chi(K)-2)\sum_{i=0}^m\binom{2m+1}{i}
=2^{2m}(\chi(K)-2).$$ On the other hand, we have $$\label{f-poly}
f_K(-2)=(-2)^{2m+1}+\sum_{i=0}^{2m}(-2)^{2m-i}f_i
=2^{2m}\left( -2+\sum_{i=0}^{2m}\left(-\frac{1}{2}\right)^if_i
\right).$$ Since $h_K(-1)=f_K(-2)$ by the definition of the $h$-polynomial $h_K(t)$, Theorem \[main\] follows from and .
[1]{}
T. Akita, Euler characteristics of Coxeter groups, PL-triangulations of closed manifolds, and cohomology of subgroups of Artin groups, J. London Math. Soc. (2) 61 (2000), 721–736.
V. M. Buchstaber, T. E. Panov, [*Torus actions and their applications in topology and combinatorics*]{}, University Lecture Series 24, American Mathematical Society, Providence, 2002.
R. Charney, M. W. Davis, Reciprocity of growth functions of Coxeter groups, Geom. Dedicata 39 (1991), 373–378.
V. Klee, A combinatorial analogue of Poincaré’s duality theorem, Canad. J. Math. 16 (1964), 517–531.
E. Swartz, From spheres to manifolds, preprint (2005).
[^1]: Partially supported by the Grant-in-Aid for Scientific Research (C) (No.17560054) from the Japan Society for Promotion of Sciences.
---
abstract: 'We introduce an intuitive measure of genuine multipartite entanglement which is based on the well-known concurrence. We show how lower bounds on this measure can be derived that also meet important characteristics of an entanglement measure. These lower bounds are experimentally implementable in a feasible way, enabling the quantification of multipartite entanglement in a broad variety of cases.'
author:
- 'Zhi-Hao Ma$^{1}$, Zhi-Hua Chen$^{2}$, Jing-Ling Chen$^{3}$'
- 'Christoph Spengler, Andreas Gabriel, Marcus Huber'
title: Measure of genuine multipartite entanglement with computable lower bounds
---
Introduction
------------
Entanglement is an essential component in quantum information and at the same time a central feature of quantum mechanics [@Horodecki09; @Guhne09]. Its potential applications in quantum information processing vary from quantum cryptography [@Ekert91] and quantum teleportation [@Bennett93] to measurement-based quantum computing [@BRaussendorf01]. The use of entanglement as a resource not only bears the question of how it can be detected, but also how it can be quantified. For this purpose, several entanglement measures have been introduced, one of the most prominent of which is the concurrence [@Wootters98; @Horodecki09; @Guhne09]. However, beyond bipartite qubit systems [@Wootters98] and highly symmetric bipartite qudit states such as isotropic states and Werner states [@Terhal00; @Werner01] there exists no analytic method to compute the concurrence of arbitrary high-dimensional mixed states. For a bipartite pure state $|\psi\rangle$ in a finite-dimensional Hilbert space $\mathcal{H}
_1\otimes \mathcal{H}_2=\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ the concurrence is defined as [@Mintert05] $C(|\psi\rangle)=\sqrt{2\left(1-\texttt{Tr}\rho_1^2\right)}$ where $\rho_1=\texttt{Tr}_2\rho$ is the reduced density matrix of $\rho={\ensuremath{| \psi \rangle}}{\ensuremath{\langle \psi |}}$. For mixed states $\rho$ the concurrence is generalized via the convex roof construction $C(\rho)=\inf_{\{p_i,|\psi_i\rangle\}} \sum_i p_i C({\ensuremath{| \psi_i \rangle}})$ where the infimum is taken over all possible decompositions of $\rho$, i.e. $\rho=\sum_i p_i |\psi_i\rangle\langle\psi_i|$. This generalization is well-defined, however, as it involves a nontrivial optimization procedure it is not computable in general. The concurrence is a useful measure with respect to a broad variety of tasks in quantum information which exploit entanglement between two parties. However, considering multipartite systems, a generalization of the concurrence is needed that strictly quantifies the amount of genuine multipartite entanglement - the type of entanglement that not only is the key resource of measurement-based quantum computing [@Briegel09] and high-precision metrology [@Giovannetti04] but also plays a central role in biological systems [@Sarovar; @Caruso], quantum phase transitions [@Oliv; @Afshin] and quantum spin chains [@spinchains]. Although many criteria detecting genuine multipartite entanglement have been introduced (see e.g. Refs. [@Huber10; @Huberqic; @HuberDicke; @Krammer; @HuberClass; @Deng09; @Deng10; @Chen10; @Bancal; @Horodeckicrit; @Yucrit; @Hassancrit; @Seevinckcrit; @Uffink; @Collins; @Guehnecrit]), there is still no computable measure quantifying the amount of genuine multipartite entanglement present in a system. There are only few quantities available for pure states (a set of possible measures is given in Ref. [@HHK1]) which, however, are in general incomputable for mixed states and corresponding computable lower bounds have not been found so far. 
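For illustration, the pure-state formula is straightforward to evaluate numerically; the following sketch (state vectors chosen for illustration, not from the text) computes $C(|\psi\rangle)=\sqrt{2(1-\texttt{Tr}\rho_1^2)}$ for a Bell state and a product state:

```python
import numpy as np

def concurrence(psi, d1, d2):
    """C(|psi>) = sqrt(2(1 - Tr rho_1^2)), with rho_1 = Tr_2 |psi><psi|."""
    M = psi.reshape(d1, d2)        # coefficient matrix of |psi> in the product basis
    rho1 = M @ M.conj().T          # reduced density matrix of subsystem 1
    return float(np.sqrt(max(0.0, 2 * (1 - np.trace(rho1 @ rho1).real))))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.array([1.0, 0, 0, 0])              # |00>
assert np.isclose(concurrence(bell, 2, 2), 1.0)   # maximally entangled two-qubit state
assert np.isclose(concurrence(prod, 2, 2), 0.0)   # product state
```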
In this paper, we define a generalized concurrence (analogously to a measure proposed for pure states in Ref. [@Milburn]) for systems of arbitrarily many parties as an entanglement measure which distinguishes genuine multipartite entanglement from partial entanglement. As a main result we show that strong lower bounds on this measure can be derived by exploiting close analytic relations between this concurrence and recently introduced detection criteria for genuine multipartite entanglement.
Genuine multipartite entanglement
---------------------------------
An $n$-partite pure state $|\psi\rangle\in \mathcal{H}_1\otimes
\mathcal{H}_2\otimes\cdots\otimes\mathcal{H}_n$ is called biseparable if it can be written as $|\psi\rangle=|\psi_A\rangle \otimes |\psi_B\rangle$, where $|\psi_A\rangle \in \mathcal{H}_{A} = \mathcal{H}_{j_1}\otimes \ldots \otimes \mathcal{H}_{j_k}$ and $|\psi_B\rangle \in \mathcal{H}_{B} = \mathcal{H}_{j_{k+1}}\otimes \ldots \otimes \mathcal{H}_{j_n}$ under any bipartition of the Hilbert space, i.e. a particular order $\{j_1,j_2,\ldots j_{k}|j_{k+1},\cdots
j_n \}$ of $\{1,2,\cdots, n\}$ (for example, for a 4-partite state, $\{1,3|2,4\}$ is a partition of $\{1,2,3,4\}$). An $n$-partite mixed state $\rho$ is biseparable if it can be written as a convex combination of biseparable pure states $\rho=\sum\limits_{i}p_i|\psi_i\rangle \langle\psi_i|$, wherein the contained $\{|\psi_i\rangle\}$ can be biseparable with respect to different bipartitions (thus, a mixed biseparable state does not need to be separable w.r.t. any particular bipartition of the Hilbert space). If an $n$-partite state is not biseparable then it is called genuinely $n$-partite entangled.\
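The bipartitions entering this definition are easy to enumerate explicitly. A small illustrative sketch (fixing party $1$ on the $A$-side so that each unordered bipartition is produced exactly once, $2^{n-1}-1$ in total):

```python
from itertools import combinations

def bipartitions(n):
    """All unordered bipartitions {A|B} of {1,...,n} with A, B nonempty.
    Fixing 1 in A avoids counting {A|B} and {B|A} twice: 2**(n-1) - 1 in total."""
    rest = range(2, n + 1)
    for k in range(n):
        for extra in combinations(rest, k):
            A = (1,) + extra
            B = tuple(x for x in rest if x not in extra)
            if B:
                yield A, B

parts = list(bipartitions(4))
assert len(parts) == 7                    # 2**3 - 1 bipartitions of {1,2,3,4}
assert ((1, 3), (2, 4)) in parts          # the partition {1,3|2,4} from the text
```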
If we denote the set of all biseparable states by $\mathcal{S}_2$ and the set of all states by $\mathcal{S}_1$ we can illustrate the convex nested structure of multipartite entanglement (see Fig. \[fig\_convex\]).\
![Illustration of the convex nested structure of multipartite entanglement. The set of biseparable states $\mathcal{S}_2$ is convexly embedded within the set $\mathcal{S}_1$ of all states ($\mathcal{S}_2 \subset \mathcal{S}_1$).[]{data-label="fig_convex"}](zwiebel.eps)
A measure of genuine multipartite entanglement (g.m.e.) $E(\rho)$ should at least satisfy:
- $E(\rho)=0 \,\forall\,\rho\in \mathcal{S}_2$ (zero for all biseparable states)
- $E(\rho)>0 \,\forall\,\rho\in \mathcal{S}_1\setminus\mathcal{S}_2$ (detecting all g.m.e. states)
- $E(\sum_ip_i\rho_i)\leq \sum_ip_iE(\rho_i)$ (convex)
- $E(\Lambda_{LOCC}[\rho])\leq E(\rho)$ (non-increasing under local operations and classical communication)[^1]
- $E(U_{local}\rho U^\dagger_{local})= E(\rho)$ (invariant under local unitary transformations)
There are of course further possible conditions which are sometimes required (such as e.g. additivity), but this set of conditions constitutes the minimal requirement for any entanglement measure. For a more detailed analysis of such requirements consult e.g. Refs. [@Mintertrep05; @HHK1].
Concurrence for genuine $n$-partite entanglement
------------------------------------------------
Let us now introduce a measure of multipartite entanglement satisfying all necessary conditions (M1-M5) for being a multipartite entanglement measure.\
[**Definition 1.**]{} For $n$-partite pure states ${\ensuremath{| \Psi \rangle}} \in \mathcal{H}_{1}\otimes \mathcal{H}_{2}\otimes\cdots\otimes \mathcal{H}_{n}$, where $dim(\mathcal{H}_{i})=d_{i},i=1,2, \cdots ,n$ we define the gme-concurrence as $$\begin{aligned}
\label{gmeconcurrence}
C_{gme}({\ensuremath{| \Psi \rangle}}):=\min\limits_{\gamma_i \in \gamma} \sqrt{2(1-{\mbox{Tr}}(\rho^{2}_{A_{\gamma_i}}))}\ ,\end{aligned}$$ where $\gamma=\{\gamma_i\}$ represents the set of all possible bipartitions $\{A_i|B_i\}$ of $\{1,2,\ldots,n\}$. The gme-concurrence can be generalized for mixed states $\rho$ via a convex roof construction, i.e. $$\begin{aligned}
C_{gme}(\rho)=
---
abstract: 'We study the inertia stack of $[\mathcal{M}_{0,n}/S_n]$, the quotient stack of the moduli space of smooth genus $0$ curves with $n$ marked points via the action of the symmetric group $S_n$. Then we see how from this analysis we can obtain a description of the inertia stack of $\mathcal{H}_g$, the moduli stack of hyperelliptic curves of genus $g$. From this, we can compute additively the Chen–Ruan (or orbifold) cohomology of $\mathcal{H}_g$.'
address: 'KTH Matematik, Lindstedtsvägen 25, S-10044 Stockholm\'
author:
- Nicola Pagani
title: '**The orbifold cohomology of moduli of hyperelliptic curves**'
---
Introduction
============
A hyperelliptic curve of genus $g$ is a smooth algebraic curve that admits a $2:1$ map to $\mathbb{P}^1$, and thus has $2g+2$ branch points. From its very definition, it is clear that the moduli stack of genus $g$ hyperelliptic curves $\mathcal{H}_g$ admits a map onto the moduli stack $[\mathcal{M}_{0,2g+2}/S_{2g+2}]$, which is an isomorphism at the level of coarse moduli spaces. The foundations for moduli of hyperelliptic curves, as well as the precise definition of the previous map, can be found in [@lonsted] (in particular Theorem 5.5).
The last decade has seen tremendous improvements in our understanding of the moduli space of hyperelliptic curves $\mathcal{H}_g$. We mention here some of the recent achievements that are relevant to the present work. In the paper [@arsievistoli], $\mathcal{H}_g$ is described as a moduli stack of cyclic covers of the projective line. As a consequence of this description, the authors are able to determine its Picard group. Along these lines, the Picard group of the Deligne-Mumford compactification $\overline{\mathcal{H}}_g$ was computed (see [@cornpic]), and very recently the whole integral Chow ring of $\mathcal{H}_g$ was computed in [@fulghesu] (see also [@edidin], [@gorviv]). In recent years, much effort has also been made in studying the automorphism groups of hyperelliptic curves [@shaska1], [@shaska2], [@shaska3], [@shaska4].
In this paper we deal with rational cohomology and Chow group with rational coefficients. From both these points of view, the moduli stacks $\mathcal{H}_g$ are trivial. The triviality of $H^*(\mathcal{H}_g, \mathbb{Q})$ follows from [@kisinlehrer Theorem 2.13], while the triviality of $A^*_{\mathbb{Q}}(\mathcal{H}_g)$ follows from its description as finite quotient of the affine variety $\mathcal{M}_{0,n}$. Still some nontriviality can be measured with rational coefficients, but one has to consider instead the *orbifold cohomology* or the *stringy Chow group*. The orbifold cohomology as a vector space (or Chen–Ruan cohomology) of an orbifold $\mathcal{X}$ is obtained by adding to the usual cycles of $\mathcal{X}$ the cycles of all the *twisted sectors* of $\mathcal{X}$. The twisted sectors are orbifolds that parametrize pairs $(x, g)$ where $x$ is a point of $\mathcal{X}$ and $g \in \operatorname{Aut}(x)$. The new cycles are then given an unconventional degree, which is the sum of their original degree as cycles inside their twisted sector $Y$, plus a rational number (called *age* or *degree shifting number*) that depends on the normal bundle $N_Y \mathcal{X}$.
The orbifold cohomology of moduli spaces of curves is studied in [@pagani1], [@pagani2], [@spencer2] (see also the PhD thesis [@paganitesi], [@spencer]). The present work has some nontrivial intersection with [@pagani2] and [@spencer2], since in these two papers in particular the orbifold cohomology and stringy Chow group of $\mathcal{M}_2= \mathcal{H}_2$ are described.
The main result of this paper is Theorem \[principale\], where we give for any $g$ a closed formula for the *orbifold Poincaré polynomial* of $\mathcal{H}_g$, that is, a polynomial[^1] whose coefficient of $q^i$ corresponds to the dimension of the group $H^{i}$.
To achieve this result, we first describe in Section \[sezione2\] the twisted sectors of $[\mathcal{M}_{0,n}/S_n]$ as quotients of certain $\mathcal{M}_{0,k}$ modulo a subgroup of $S_k$.
Then, in Section \[sezione3\], we study the twisted sectors of $\mathcal{H}_g$. If $g$ is odd, we see that the twisted sectors of $\mathcal{H}_g$ are simply the twisted sectors of $[\mathcal{M}_{0,2g+2}/S_{2g+2}]$ repeated twice. If $g$ is even, most of the twisted sectors of $\mathcal{H}_g$ correspond to the twisted sectors of $[\mathcal{M}_{0,2g+2}/S_{2g+2}]$, whose distinguished automorphism is not an involution, repeated twice. The remaining few twisted sectors of $\mathcal{H}_g$ are still described as quotients of moduli of genus $0$, pointed curves modulo the action of a certain subgroup of the symmetric group on the marked points.
Finally, in Section \[sezione4\] we compute all the degree shifting numbers, and we write the explicit results by recollecting the results of the previous sections.
Notation
--------
We work over $\mathbb{C}$; cohomologies and Chow groups are taken with rational coefficients. Orbifold for us means smooth Deligne–Mumford stack, and we always work within the category of Deligne–Mumford stacks. If a finite group $G$ acts on a scheme (stack) $X$, $[X/G]$ is the stack quotient and $X/G$ is the quotient as a scheme. We call $\mu_N:= \mathbb{Z}_N^{\vee}$ the group of characters of $\mathbb{Z}_N$, and $\mu_N^*$ the subgroup whose elements are the invertible characters. We make an implicit use of the relative language of schemes. For instance, when no confusion can arise, we speak of a genus $g$ smooth curve, meaning a family of genus $g$ smooth curve over a certain base $S$.
Definition of Orbifold Cohomology
=================================
In this section we define orbifold cohomology. For a more detailed study of this topic, we address the reader to [@agv2 Section 3] for the various inertia stacks, and to [@agv2 Section 7.1] for the degree shifting number (the original reference is [@chenruan]). What we call orbifold cohomology is the graded vector space underlying the Chen–Ruan cohomology ring (or algebra): the latter is a more refined object that we will not introduce in this work.
We introduce the following natural stack associated to a Deligne–Mumford stack $X$, which points to where $X$ fails to be an algebraic space.
\[definertia\] ([@agv1 4.4], [@agv2 Definition 3.1.1]) Let $X$ be an algebraic stack. The *inertia stack* $I(X)$ of $X$ is defined as $$I(X) := \coprod_{N \in \mathbb{N}} I_N(X)$$ where $I_N(X)(S)$ is the following groupoid:
1. The objects are pairs $(\xi, \alpha)$, where $\xi$ is an object of $X$ over $S$, and $\alpha: \mu_N \to \operatorname{Aut}(\xi)$ is an injective homomorphism.
2. The morphisms are the morphisms $g: \xi \to \xi'$ of the groupoid $X(S)$, such that $g \cdot \alpha(1)= \alpha'(1) \cdot g$.
We also define $I_{TW}(X):= \coprod_{N > 1}I_N(X)$, in such a way that $$I(X)=I_1(X) \coprod I_{TW}(X).$$ The connected components of $I_{TW}(X)$ are called *twisted sectors* of the inertia stack of $X$, or also twisted sectors of $X$. The inertia stack comes with a natural forgetful map $f:I(X) \to X$.
We observe that, by our very definition, $I_N(X)$ is an open and closed substack of $I(X)$, but it rarely happens that it is connected. One special case is when $N$ equals $1$: in this case the map $f$ restricted to $I_1(X)$ induces an isomorphism of the latter with $X$. The connected component $I_1(X)$ will be referred to as the *untwisted sector*.
We also observe that given a generator of $\mu_N$, we obtain an isomorphism of $I(X)$ with $I'(X)$, where the latter is defined as the ($2
---
abstract: |
We study a crime hotspot model suggested by Short-Bertozzi-Brantingham [@sbb]. The aim of this work is to establish rigorously the formation of hotspots in this model representing concentrations of criminal activity. More precisely, for the one-dimensional system, we rigorously prove the existence of steady states with multiple spikes of the following types:
\(i) Multiple spikes of arbitrary number having the same amplitude (symmetric spikes),
\(ii) Multiple spikes having different amplitude for the case of one large and one small spike (asymmetric spikes).
We use an approach based on Liapunov-Schmidt reduction and extend it to the quasilinear crime hotspot model. Some novel results that allow us to carry out the Liapunov-Schmidt reduction are: (i) approximation of the quasilinear crime hotspot system on the large scale by the semilinear Schnakenberg model, (ii) estimate of the spatial dependence of the second component on the small scale which is dominated by the quasilinear part of the system.
The paper concludes with an extension to the anisotropic case.
author:
- 'Henri Berestycki [^1]'
- 'Juncheng Wei [^2]'
- 'Matthias Winter [^3]'
title: Existence of Symmetric and Asymmetric Spikes for a Crime Hotspot Model
---
[**Key words:**]{} crime model, reaction-diffusion systems, multiple spikes, symmetric and asymmetric, quasilinear chemotaxis system, Schnakenberg model, Liapunov-Schmidt reduction
[**AMS subject classification:**]{} Primary 35J25, 35B45; Secondary 36J47, 91D25
Introduction: The statement of the problem
==========================================
Pattern forming reaction-diffusion systems have been and are applied to many phenomena in the natural sciences. Recent works have also started to use such systems to describe macroscopic social phenomena. In this direction, Short, Bertozzi and Brantingham [@sbb] have proposed a system of non-linear parabolic partial differential equations to describe the formation of hotspots of criminal activity. Their equations are derived from an agent-based lattice model that incorporates the movement of criminals and a given scalar field representing the “attractiveness of crime”. The system in one dimension reads as follows: $$\begin{aligned}
\nonumber
A_{t} & =\varepsilon^{2}A_{xx}-A+\rho A+A_{0} (x), \ \mbox{in} \ (-L, L),\\
\rho_{t} & =D (\rho_{x}-2\frac{\rho}{A}A_{x})_{x}-\rho
A+\gamma (x), \ \mbox{in} \ (-L, L).
\label{sysoriginal}\end{aligned}$$ Here $A$ is the “attractiveness of crime” and $\rho$ denotes the density of criminals. The rate at which crimes occur is given by $\rho A$. When this rate increases, the number of criminals is reduced while the attractiveness increases. The second feature is related to the well documented occurrence of repeat offenses. The positive function $A_0 (x)$ is the intrinsic (static) attractiveness which is stationary in time but possibly variable in space. The positive function $\gamma (x)$ is the source term representing the introduction rate of offenders (per unit area). For the precise meanings of the functions $A_0 (x)$ and $\gamma (x)$, we refer to [@sbb; @sbbt; @soptbbc] and the references therein.
This paper is concerned with the mathematical analysis of the one-dimensional version of this system. Let us describe our approach. Setting $$v=\frac{\rho}{A^{2}},$$ the system is transformed into $$\begin{aligned}
\nonumber
A_{t} & =\varepsilon^{2}A_{xx}-A+vA^{3}+A_{0} (x) \ \mbox{in} \ (-L, L),\\
(A^2v)_{t} & =D\left( A^{2}v_{x}\right) _{x}-vA^{3}+\gamma (x) \ \mbox{in} \ (-L, L).
\label{sysdyn}\end{aligned}$$ We always consider Neumann boundary conditions $$A_x(-L)=A_x(L)=\rho_x(-L)=\rho_x(L)=v_{x}(-L)=v_x(L)=0.$$ Note that $v$ is well-defined and positive if $A$ and $\rho$ are both positive.
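The role of the substitution $v=\rho/A^{2}$ can be checked symbolically: with $\rho=vA^{2}$, the chemotactic flux $\rho_{x}-2\frac{\rho}{A}A_{x}$ reduces to $A^{2}v_{x}$, which yields the second equation of the transformed system. A minimal sympy sketch (spatial dependence only):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Function('A')(x)
v = sp.Function('v')(x)
rho = v * A**2                              # the substitution v = rho / A^2

# rho_x - 2 (rho/A) A_x should collapse to A^2 v_x
flux = sp.diff(rho, x) - 2 * (rho / A) * sp.diff(A, x)
assert sp.simplify(flux - A**2 * sp.diff(v, x)) == 0
```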
The parameter ${\varepsilon}^2>0$ represents nearest neighbor interactions in a lattice model for the attractiveness. We assume that it is very small, which corresponds to the temporal dependence of attractiveness dominating its spatial dependence. This models the case of attractiveness propagating rather slowly, i.e. much slower than individual criminals. It is a realistic assumption if the criminal spatial profile remains largely unchanged or, in other words, if the relative crime-intensity changes only very slowly. This appears to be a reasonable assumption since it typically takes decades for dangerous neighborhoods, i.e. those attracting criminals, to evolve into safe ones and vice versa.
Roughly speaking, a $k$ spike solution $(A, v)$ to (\[sysdyn\]) is such that the component $A$ has exactly $k$ local maximum points. In this paper, we address the issue of existence of steady states with multiple spikes in the following two cases: Symmetric spikes (same amplitudes) or asymmetric spikes (different amplitudes). Our approach is by rigorous nonlinear analysis. We apply Liapunov-Schmidt reduction to this quasilinear system.
In this approach, to establish the existence of spikes, we derive the following new results:
- Approximation of the crime hotspot system on the large scale of order one by the semi-linear Schnakenberg model (see Section 3, in particular equation (\[approx\])),
- Estimate of the spatial dependence of the second component on the small scale of order ${\varepsilon}$, dominated by the quasilinear part of the system (see Section 6, in particular inequalities (\[estw3\]) – (\[estw1\])).
We remark that asymmetric multiple spike steady states (of $k_1$ small and $k_2$ large spikes) are an intermediate state between two different symmetric multiple spike steady states of $k_1+k_2$ spikes (for which all spikes are fully developed) and $k_2$ spikes (for which the small spikes are gone). These rigorous results shed light on the formation of hotspots for the idealized model of criminal activity introduced in [@sbb].
Let us now comment on previous works. As far as we know, there are three mathematical works related to the crime model (\[sysdyn\]). Short, Bertozzi and Brantingham [@sbb] proposed this model based on mean field considerations. They have also performed a weakly nonlinear analysis on (\[sysoriginal\]) about the constant solution $$(A, \rho)= \left(\gamma +A_0, \frac{\gamma}{\gamma+A_0} \right)$$ assuming that both $A_0 (x)$ and $\gamma (x)$ are homogeneous. Rodriguez and Bertozzi have further shown local existence and uniqueness of solutions [@rb1]. In [@ccm], Cantrell, Cosner and Manasevich have given a rigorous proof of the bifurcations from this constant steady state. On the other hand, in the isotropic case, Kolokolnikov, Ward and Wei [@kww1] have studied existence and stability of multiple symmetric and asymmetric spikes for (\[sysdyn\]) using formal matched asymptotics. They derived qualitative results on competition instabilities and Hopf bifurcation and gave some extensions to two-space dimensions.
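For homogeneous $A_0$ and $\gamma$, one can verify directly that this constant pair annihilates both right-hand sides of the system. A short exact-arithmetic sketch (parameter values chosen for illustration):

```python
from fractions import Fraction

# check the constant steady state (A, rho) = (gamma + A_0, gamma/(gamma + A_0))
# of the spatially homogeneous system: both right-hand sides must vanish
for gamma, A0 in [(Fraction(1), Fraction(2)), (Fraction(3, 7), Fraction(5, 2))]:
    A = gamma + A0
    rho = gamma / (gamma + A0)
    assert -A + rho * A + A0 == 0    # attractiveness equation (A_t = 0, A_xx = 0)
    assert -rho * A + gamma == 0     # criminal density equation (rho_t = 0)
```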
The present paper provides rigorous justification for many of the results in [@kww1] and also derives some extensions. In particular, we establish here the following three new results: first, we reduce the quasilinear chemotaxis problems to a Schnakenberg type reaction-diffusion system and prove the existence of symmetric $k$ spikes. Second, this paper gives the first rigorous proof of the existence of asymmetric spikes in the isotropic case. Third, we study the pinning effect in an inhomogeneous setting $A_0 (x)$ and $\gamma (x)$. The stability of these spikes is an interesting issue which should be addressed in the future.
We should mention that another model of criminality has been proposed and analyzed by Berestycki and Nadal [@bn]. In a forthcoming paper [@bw], we shall study the existence and stability of hotspots (spikes) in this system as well. It is quite interesting to observe that both models admit hotspot (spike) solutions.
The structure of this paper is as follows. We formally construct a one-spike solution in Section 2 in which we state our main results. In Section 3 we show how to approximate the crime hotspot model by the Schnakenberg model. Section 4 is devoted to the computation of the amplitudes and positions of the spikes to leading order. Nondegeneracy conditions are derived in Section 5. These are required for the existence proof, given in Sections 6–8. In Section 6 we introduce and study the approximate solutions. In Section 7 we apply Liapunov-Schmidt reduction to this problem. Lastly, we solve the reduced problem in Section 8 and conclude the existence proof. In Section 9 we extend the proof of single spike solution to the case when both $A_0 (x)$ and $\gamma (x)$ are allowed to be inhomogeneous. Finally, in Section 10 we discuss our results and their significance and
---
abstract: 'We construct stationary finitely dependent colorings of the cycle which are analogous to the colorings of the integers recently constructed by Holroyd and Liggett. These colorings can be described by a simple necklace insertion procedure, and also in terms of an Eden growth model on a tree. Using these descriptions we obtain simpler and more direct proofs of the characterizations of the 1- and 2-color marginals.'
address:
- 'Alexander E. Holroyd'
- 'Tom Hutchcroft, Department of Mathematics, University of British Columbia'
- 'Avi Levy, Department of Mathematics, University of Washington'
author:
- 'Alexander E. Holroyd'
- Tom Hutchcroft
- Avi Levy
bibliography:
- 'coloring.bib'
date: 28 July 2017
title: Finitely dependent cycle coloring
---
Introduction
============
A random process indexed by the vertex set of a graph is $k$**-dependent** if its restrictions to any two sets of vertices at graph distance greater than $k$ are independent of each other. A process is **finitely dependent** if it is $k$-dependent for some finite $k$. For several decades it was not known whether every stationary finitely dependent process on $\Z$ is a **block factor** of an i.i.d. process, that is, the image of an i.i.d. process under a finite range function that commutes with translations. This question was raised by Ibragimov and Linnik [@ibragimov1965independent] in 1965 and resolved in the negative by Burton, Goulet, and Meester [@burton1993] in 1993. Recently Holroyd and Liggett [@HL] proved that *proper coloring* distinguishes between block factors and finitely dependent processes: block factor proper colorings of $\Z$ do not exist, but finitely dependent stationary proper colorings do. In fact, these colorings fit into a more general family constructed subsequently in [@hhlMalCol]. See also [@H; @HL2].
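A standard illustration of these notions is the descent process $X_i=\mathbbm{1}[U_i>U_{i+1}]$, a block factor of range $2$ that is $1$-dependent. Since the joint law of finitely many $X_i$ depends only on the relative order of the underlying i.i.d. variables $U_i$, such probabilities can be computed exactly by counting permutations (an illustrative sketch, not taken from the paper):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def prob(constraints, n):
    """P(U_i > U_j for all (i, j) in constraints), for i.i.d. continuous
    U_0, ..., U_{n-1}: all n! relative orders are equally likely."""
    hits = sum(all(u[i] > u[j] for i, j in constraints)
               for u in permutations(range(n)))
    return Fraction(hits, factorial(n))

# X_i = 1[U_i > U_{i+1}] is dependent at distance 1 ...
assert prob([(0, 1), (1, 2)], 3) == Fraction(1, 6)   # != (1/2)^2
# ... but independent at distance >= 2: disjoint blocks of U's (1-dependence)
assert prob([(0, 1), (2, 3)], 4) == Fraction(1, 4)   # == (1/2)^2
```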
The finitely dependent colorings of [@HL] have short but mysterious descriptions. An interesting feature of these colorings is that the supports of proper subsets of the colors can each be expressed as block factors of i.i.d. [@HL Theorem 4], even though the coloring as a whole cannot [@HL Proposition 2]. Moreover, their descriptions as block factors are remarkably simple and explicit. However, the proofs used to obtain these descriptions in [@HL] were in some cases quite involved. Here we introduce a more canonical construction via colorings of the $n$-cycle, which can be expressed in terms of a necklace insertion process akin to those in [@mallowsSheppNecklace; @nakataNecklace], or in terms of the classical Eden growth model on a 3-regular tree. Using this new construction, we are able to obtain simpler and more direct proofs of the statements concerning subsets of colors mentioned above.
The necklace insertion process is as follows. Suppose we have a necklace of colored beads. Start with 3 beads with uniformly random distinct colors from $\{1,\ldots,q\}$. At each step, pick a uniformly random gap between consecutive beads and insert a bead with uniformly random color differing from those of the two neighbors. After $n-1$ steps we have a coloring $(C_1,\ldots,C_{n+2})$ of the $(n+2)$-cycle.
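The insertion procedure translates directly into a few lines of code; the following illustrative sketch simulates it and checks that the output is always a proper coloring of the $(n+2)$-cycle:

```python
import random

def necklace_coloring(n, q=3):
    """Start with 3 beads of distinct uniform colors from {1,...,q}; insert
    n-1 beads into uniformly random gaps, each colored uniformly among the
    colors differing from both neighbors. Returns a list of length n+2."""
    beads = random.sample(range(1, q + 1), 3)
    for _ in range(n - 1):
        i = random.randrange(len(beads))                 # gap after bead i
        left, right = beads[i], beads[(i + 1) % len(beads)]
        allowed = [c for c in range(1, q + 1) if c not in (left, right)]
        beads.insert(i + 1, random.choice(allowed))
    return beads

c = necklace_coloring(50, q=3)
assert len(c) == 52
assert all(c[i] != c[(i + 1) % len(c)] for i in range(len(c)))   # proper coloring
```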
Here is a different description of the above process, which is easily seen to be equivalent (see Section \[sec:index\] for details). Consider a planar embedding of the 3-regular tree $\mathbb T$ (with a distinguished root vertex), together with its planar dual $\mathbb D$, which is an infinite-degree triangulation (see Figure \[fig:cycleColor\]). For a vertex $v$ of $\mathbb T$, let $\Delta(v)$ be the set of three vertices in $\mathbb D$ incident to the face dual to $v$. We consider the Eden growth model [@eden1961two] on $\mathbb T$, which is a random growing sequence of clusters $T_1,T_2,\ldots$ defined as follows. The initial cluster $T_1$ consists of the root vertex. Given $T_n$, we choose a vertex uniformly at random from those adjacent to but not lying in $T_n$, and add it to $T_n$ to form the new cluster $T_{n+1}$. Let $D_n$ be the subgraph of $\mathbb{D}$ induced by the vertex set $\bigcup_{v \in T_n} \Delta(v)$. The graph $D_n$ inherits a planar embedding from $\mathbb D$, in which there is one outer face containing all of its $n+2$ vertices, and all other faces are triangles. Let $q\geq 3$ be an integer. Conditional on $D_n$, choose a uniformly random proper $q$-coloring of $D_n$ and, independently, a uniform vertex $u$ of $D_n$. Let $C_1,\ldots,C_{n+2}$ be the sequence of colors of the vertices of the outer face in clockwise order starting from $u$.
![\[fig:cycleColor\] Two versions of the construction of the 2-dependent 3-coloring of the cycle, from a cluster of the Eden model on a 3-regular tree. On the left we uniformly properly 3-color the planar map comprising the dual triangles of the vertices of the cluster. The coloring is read clockwise around the outer face. On the right, we may alternatively fix a uniform proper coloring of the infinite dual map in advance. The coloring is read around the outer boundary of the cluster.](tree2.pdf "fig:"){width="49.00000%"} ![\[fig:cycleColor\] Two versions of the construction of the 2-dependent 3-coloring of the cycle, from a cluster of the Eden model on a 3-regular tree. On the left we uniformly properly 3-color the planar map comprising the dual triangles of the vertices of the cluster. The coloring is read clockwise around the outer face. On the right, we may alternatively fix a uniform proper coloring of the infinite dual map in advance. The coloring is read around the outer boundary of the cluster.](tree.pdf "fig:"){width="49.00000%"}
\[thm:main\] Fix $n\geq 1$ and $(k,q)\in \{(1,4),(2,3)\}$. The sequence $(C_1,\ldots,C_{n+2})$ constructed above is a $k$-dependent proper $q$-coloring of the $(n+2)$-cycle. The coloring is symmetric in law under rotations and reflections of the cycle and permutations of the colors. Moreover, the sequence $(C_1,\ldots,C_{n+2-k})$ is equal in law to $(X_1,\ldots,X_{n+2-k})$ where $(X_i)_{i\in\Z}$ is the stationary $k$-dependent $q$-coloring of $\Z$ constructed in [@HL].
A third description of our construction is as follows. It is possible to define the uniformly random proper $q$-coloring of the infinite graph $\mathbb D$ (by consistency). When $q=3$ this coloring is simply a uniform choice from the $3!$ proper $q$-colorings of $\mathbb D$, corresponding to the permutations of the colors. (On the other hand when $q=4$, there are uncountably many such colorings.) We can now choose the Eden model cluster $T_n$ independently of the coloring of $\mathbb D$. The coloring of Theorem \[thm:main\] then arises as the sequence of colors appearing in clockwise order around the outer boundary of $T_n$. See Figure \[fig:cycleColor\].
We next address the one- and two-color marginals of the colorings.
\[thm:onetwo\] Let $I$ be either $\Z$, or $\{1,\ldots,n\}$ for $n\geq 3$ (interpreted as the vertex set of a cycle). Let $X=(X_i)_{i\in I}$ be the 1-dependent 4-coloring and let $Y=(Y_i)_{i\in I}$ be the 2-dependent 3-coloring of $I$ (arising from [@HL] in the $\Z$ case, or from Theorem \[thm:main\] in the cycle case).
1. The process $(\mathbbm{1}[X_i\in\{1,2\}])_{i\in I}$ is equal in law to $(\mathbbm{1}[U_i>U_{i+1}])_{i\in I}$, where $(U_i)_{i\in I}$ are i.i.d. uniform on $[0,1]$.
2. The process $(\mathbbm{1}[Y_i=1])_{i\in I}$ is equal in law to $(\mathbbm{1}[U_{i-1}<U_i>U_{i+1}])_{i\in I}$, where $(U_i)_{i\in I}$ are i.i.d. uniform on $[0,1]$.
3. The process $(\mathbbm{1}[X_i=1])_{i\in I}$ is equal in law to $(\mathbbm{1}[B_i>B_{i+1}])_{i\in I}$, where $(B_i)_{i\in I}$ are i.i.d. taking values $0,1$ with
---
abstract: |
In this paper we propose a general construction formula for shape-color primitives, built from partial differentials of each color channel. Using the various shape-color primitives, shape-color differential moment invariants (SCDMIs) can be constructed very easily; these are invariant to shape affine and color affine transforms. In total, 50 instances of SCDMIs are obtained. In experiments, several commonly used color descriptors and the SCDMIs are applied to image classification and retrieval of color images, respectively. Comparing the experimental results, we find that the SCDMIs achieve better results.
Shape-color primitives, affine transform, partial differential, shape-color differential moment invariants
author:
- 'Hanlin Mo$^{1}$[^1]'
- 'Shirui Li$^{1}$'
- 'You Hao$^{1}$'
- 'Hua Li$^{1}$'
title: 'Shape-Color Differential Moment Invariants under Affine Transformations'
---
Introduction
============
Image classification and retrieval for color images are two active topics in pattern recognition. The key issue is how to extract effective features that are robust both to color variations caused by changes in the outdoor environment and to geometric deformations caused by viewpoint changes. The classical approach is to construct invariant features for color images, and moment invariants are among the most widely used invariant features.
Moment invariants were first proposed by Hu[@1] in 1962. He defined geometric moments and constructed 7 geometric moment invariants which are invariant under the similarity transform (rotation, scale and translation). Researchers applied Hu moments to many fields of pattern recognition and achieved good results[@2; @3]. Nearly 30 years later, Flusser et al.[@4] constructed the affine moment invariants (AMIs), which are invariant under the affine transform. The geometric deformations of an object caused by viewpoint changes can be represented by projective transforms. However, general projective transforms are complex nonlinear transformations, so it is difficult to construct projective moment invariants. When the distance between the camera and the object is much larger than the size of the object itself, the geometric deformations can be approximated by the affine transform. AMIs have been used in many practical applications, such as character recognition[@5] and expression recognition[@6]. In order to obtain more AMIs, researchers designed various methods. Suk et al.[@7] proposed a graph method which can be used to construct AMIs of arbitrary orders and degrees. Xu et al.[@8] proposed the concept of geometric primitives, including distance, area and volume; AMIs can be constructed by using various geometric primitives. This method gave the construction of moment invariants an intuitive geometric meaning.
The above-mentioned moment invariants are all designed for gray images. With the popularity of color images, moment invariants for color images began to appear gradually. Researchers wanted to construct moment invariants which are not only invariant under geometric deformations but also invariant under changes of the color space. Geusebroek et al.[@9] proved that the affine transform model is the best linear model to simulate the changes in color resulting from changes in the outdoor environment. Mindru et al.[@10] proposed moment invariants which are invariant under the shape affine transform and the color diagonal-offset transform. These invariants were constructed using concepts from Lie group theory, and some complex partial differential equations had to be solved; thus, their number was limited and the approach was difficult to generalize. Also, Suk et al.[@11] put forward affine moment invariants for color images by combining all color channels, but this approach was not intuitive and did not work well for the color affine transform. To solve these problems, Gong et al.[@12; @13; @14] constructed the color primitive by using the concept of geometric primitive proposed in [@8]. Combining the color primitive with some shape primitives, moment invariants that are invariant under the shape affine and color affine transforms can be constructed easily; these were named shape-color affine moment invariants (SCAMIs). In [@14], they obtained 25 SCAMIs which satisfy functional independence. However, we find that a large number of SCAMIs with simple structures and good properties are missed in [@14].
In this paper, we propose a general construction formula for shape-color primitives using partial differentials of each color channel. Then, we use two kinds of shape-color primitives to construct shape-color differential moment invariants (SCDMIs), which are invariant under the shape affine and color affine transforms. We find that the construction formula of SCAMIs proposed in [@14] is a special case of our method. Finally, commonly used image descriptors and SCDMIs are used for image classification and retrieval of color images, respectively. Comparing the experimental results, we find that the SCDMIs proposed in this paper achieve better results.
Related Work
============
In order to construct image features which are robust to color variations and geometric deformations, researchers have made various attempts. Among them, SCAMIs proposed in [@14] are worthy of special attention. SCAMIs are invariant under the shape affine and color affine transforms. Two kinds of affine transforms are defined by $$\left(
\begin{array}{c}
x^{'}\\
y^{'}\\
\end{array}
\right)
=SA \cdot
\left(
\begin{array}{c}
x\\
y\\
\end{array}
\right)
+ST
=
\left(
\begin{array}{cc}
\alpha_{1}& \alpha_{2}\\
\beta_{1}& \beta_{2}\\
\end{array}
\right)\cdot
\left(
\begin{array}{c}
x\\
y\\
\end{array}
\right)
+
\left(
\begin{array}{c}
O_{x}\\
O_{y}\\
\end{array}
\right)$$ $$\left(
\begin{array}{c}
R^{'}(x,y)\\
G^{'}(x,y)\\
B^{'}(x,y)\\
\end{array}
\right)
=CA \cdot
\left(
\begin{array}{c}
R(x,y)\\
G(x,y)\\
B(x,y)\\
\end{array}
\right)
+CT
=
\left(
\begin{array}{ccc}
\ a_{1}&a_{2}&a_{3}\\
\ b_{1}&b_{2}&b_{3}\\
\ c_{1}&c_{2}&c_{3}\\
\end{array}
\right)\cdot
\left(
\begin{array}{c}
R(x,y)\\
G(x,y)\\
B(x,y)\\
\end{array}
\right)
+
\left(
\begin{array}{c}
O_{R}\\
O_{G}\\
O_{B}\\
\end{array}
\right)$$ where SA and CA are nonsingular matrices.
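As a quick sanity check of the two transform models above, the following minimal NumPy sketch (with arbitrary, made-up matrices $SA$, $CA$ and offsets $ST$, $CT$) applies them row-wise to sampled coordinates and colors and verifies that nonsingularity makes both transforms invertible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up nonsingular shape-affine (SA, ST) and color-affine (CA, CT):
SA = np.array([[1.2, 0.3],
               [-0.1, 0.9]])
ST = np.array([5.0, -2.0])
CA = np.array([[0.9, 0.1, 0.0],
               [0.05, 0.8, 0.1],
               [0.0, 0.2, 1.1]])
CT = np.array([10.0, -5.0, 3.0])

pts = rng.uniform(0.0, 100.0, size=(50, 2))   # sampled (x, y) coordinates
rgb = rng.uniform(0.0, 255.0, size=(50, 3))   # (R, G, B) at those points

pts_t = pts @ SA.T + ST   # Eq. (1): x' = SA x + ST, applied row-wise
rgb_t = rgb @ CA.T + CT   # Eq. (2): c' = CA c + CT, applied row-wise

# Both SA and CA are nonsingular, so the originals can be recovered:
pts_back = (pts_t - ST) @ np.linalg.inv(SA).T
rgb_back = (rgb_t - CT) @ np.linalg.inv(CA).T
```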
For the color image $I(R(x,y),G(x,y),B(x,y))$, let $(x_{p},y_{p}),(x_{q},y_{q}),(x_{r},y_{r})$ be three arbitrary points in the domain of $I$. The shape primitive and the color primitive are defined by $$S(p,q)=
\left|
\begin{array}{cc}
(x_{p}-\bar{x})&(x_{q}-\bar{x})\\
(y_{p}-\bar{y})&(y_{q}-\bar{y})\\
\end{array}
\right|$$ $$\begin{split}
&C(p,q,r)=
\left|
\begin{array}{ccc}
(R(x_{p},y_{p})-\bar{R})&(R(x_{q},y_{q})-\bar{R})&(R(x_{r},y_{r})-\bar{R})\\
(G(x_{p},y_{p})-\bar{G})&(G(x_{q},y_{q})-\bar{G})&(G(x_{r},y_{r})-\bar{G})\\
(B(x_{p},y_{p})-\bar{B})&(B(x_{q},y_{q})-\bar{B})&(B(x_{r},y_{r})-\bar{B})\\
\end{array}
\right|
\end{split}$$ where $\bar{A}$ represents the mean value of $A$, $A \in \{x,y,R,G,B\}$.
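The primitives of Eqs. (3) and (4) are simply determinants of centered coordinates and colors. The illustrative Python sketch below (all matrices and points are made up) also checks the key property exploited when building invariants: because of the mean subtraction, the offsets $ST$ and $CT$ cancel and the primitives are multiplied only by $\det(SA)$ and $\det(CA)$, respectively:

```python
import numpy as np

def shape_primitive(pts, p, q):
    """S(p,q) of Eq. (3): 2x2 determinant of centroid-subtracted points."""
    c = pts - pts.mean(axis=0)
    return np.linalg.det(np.column_stack([c[p], c[q]]))

def color_primitive(cols, p, q, r):
    """C(p,q,r) of Eq. (4): 3x3 determinant of mean-subtracted colors."""
    c = cols - cols.mean(axis=0)
    return np.linalg.det(np.column_stack([c[p], c[q], c[r]]))

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 100.0, size=(20, 2))
cols = rng.uniform(0.0, 255.0, size=(20, 3))

# Made-up affine transforms (Eqs. 1-2):
SA = np.array([[1.5, 0.4], [0.2, 0.8]]); ST = np.array([3.0, -7.0])
CA = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 1.1]])
CT = np.array([5.0, 5.0, -5.0])

pts_t = pts @ SA.T + ST
cols_t = cols @ CA.T + CT

# Mean subtraction cancels the offsets, so each primitive transforms by
# a single multiplicative factor -- det(SA) and det(CA), respectively.
s0, s1 = shape_primitive(pts, 0, 1), shape_primitive(pts_t, 0, 1)
c0, c1 = color_primitive(cols, 0, 1, 2), color_primitive(cols_t, 0, 1, 2)
```

This multiplicative behavior is exactly what suitable ratios and normalizations of products of primitives are designed to cancel.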
Then, using Eqs. (3) and (4), the shape core can be defined by $$sCore(n,m;d_{1},d_{2},...,d_{n})=\underbrace{S(1,2)S(k,l)...S(r,n)}_m$$ where $n$ and $m$ indicate that the sCore is the product of $m$ shape primitives constructed from $n$ points $(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})$, with $k<l$, $r<n$, $k,l,r \in \left\{1,2,...,n\right\}$. $d_{i}$ denotes the number of times the point $(x_{i},y_{i})$ appears in the sCore
---
abstract: 'We consider multiple time scales systems of stochastic differential equations with small noise in random environments. We prove a quenched large deviations principle with explicit characterization of the action functional. The random medium is assumed to be stationary and ergodic. In the course of the proof we also prove related quenched ergodic theorems for controlled diffusion processes in random environments that are of independent interest. The proof relies entirely on probabilistic arguments, allowing us to obtain detailed information on how the rare event occurs. We derive a control, equivalently a change of measure, that leads to the large deviations lower bound. This information on the change of measure can motivate the design of asymptotically efficient Monte Carlo importance sampling schemes for multiscale systems in random environments.'
address: |
Department of Mathematics & Statistics\
Boston University\
Boston, MA 02215
author:
- Konstantinos Spiliopoulos
title: Quenched Large Deviations for Multiscale Diffusion Processes in Random Environments
---
Introduction {#S:Intro}
============
Let $0<\varepsilon,\delta\ll 1$ and consider the process $\left(X^{\epsilon}, Y^{\epsilon}\right)=\left\{\left(X^{\epsilon}_{t}, Y^{\epsilon}_{t}\right), t\in[0,T]\right\}$ taking values in the space $\mathbb{R}^{m}\times\mathbb{R}^{d-m}$ that satisfies the system of stochastic differential equations (SDEs)
$$\begin{aligned}
dX^{\epsilon}_{t}&=&\left[ \frac{\epsilon}{\delta}b\left(Y^{\epsilon}_{t},\gamma\right)+c\left( X^{\epsilon}_{t},Y^{\epsilon}_{t},\gamma\right)\right] dt+\sqrt{\epsilon}\sigma\left( X^{\epsilon}_{t},Y^{\epsilon}_{t},\gamma\right)
dW_{t},\nonumber\\
dY^{\epsilon}_{t}&=&\frac{1}{\delta}\left[ \frac{\epsilon}{\delta}f\left(Y^{\epsilon}_{t},\gamma\right) +g\left( X^{\epsilon}_{t},Y^{\epsilon}_{t},\gamma\right)\right] dt+\frac{\sqrt{\epsilon}}{\delta}\left[
\tau_{1}\left( Y^{\epsilon}_{t},\gamma\right)
dW_{t}+\tau_{2}\left(Y^{\epsilon}_{t},\gamma\right)dB_{t}\right], \label{Eq:Main}\\
X^{\epsilon}_{0}&=&x_{0},\hspace{0.2cm}Y^{\epsilon}_{0}=y_{0}\nonumber\end{aligned}$$
where $\delta=\delta(\epsilon)\downarrow0$ such that $\epsilon/\delta\uparrow\infty$ as $\epsilon\downarrow0$. Here, $(W_{t}, B_{t})$ is a $2\kappa-$dimensional standard Wiener process. We assume that for each fixed $x\in\mathbb{R}^{m}$, $b(\cdot,\gamma), c(x,\cdot,\gamma),\sigma(x,\cdot,\gamma),f(\cdot,\gamma)$, $g(x,\cdot,\gamma), \tau_{1}(\cdot,\gamma)$ and $\tau_{2}(\cdot,\gamma)$ are stationary and ergodic random fields. We denote by $\gamma\in\Gamma$ the element of the related probability space. If we want to emphasize the dependence on the initial point and on the random medium, we shall write $\left(X^{\epsilon,(x_{0},y_{0}),\gamma}, Y^{\epsilon,(x_{0},y_{0}),\gamma}\right)$ for the solution to (\[Eq:Main\]).
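To make the scaling structure of (\[Eq:Main\]) concrete, here is a toy one-dimensional Euler–Maruyama simulation with hypothetical deterministic coefficients chosen so that the fast motion is ergodic (an illustrative sketch only, not the general random-field setting of the paper): $b(y)=\sin y$, $c(x,y)=-x+\cos y$, $\sigma=1$, $f(y)=-y$, $g=0$, $\tau_1=1$, $\tau_2=0$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scales: delta -> 0 with eps/delta -> infinity, as in the paper.
eps = 0.05
delta = eps**1.5
T, dt = 1.0, 1e-4
n = int(T / dt)

x, y = 1.0, 0.0           # initial conditions x0, y0
xs = np.empty(n)
sqdt = np.sqrt(dt)
for k in range(n):
    dW = sqdt * rng.standard_normal()
    dB = sqdt * rng.standard_normal()
    # slow motion: (eps/delta)*b(y) + c(x,y) drift, sqrt(eps) noise
    x += ((eps / delta) * np.sin(y) + (-x + np.cos(y))) * dt \
         + np.sqrt(eps) * dW
    # fast motion: O(eps/delta**2) restoring drift, O(sqrt(eps)/delta) noise
    y += (eps / delta**2) * (-y) * dt + (np.sqrt(eps) / delta) * dB
    xs[k] = x
```

With $f(y)=-y$ the fast process is an ergodic Ornstein–Uhlenbeck motion whose stationary mean of $\sin y$ is zero, so the large $\frac{\epsilon}{\delta}b(Y^\epsilon_t)$ term averages out and the slow path stays bounded, mimicking the averaging effect behind the macroscopic problem.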
The system (\[Eq:Main\]) can be interpreted as a small-noise perturbation of dynamical systems with multiple scales. The slow component is $X$ and the fast component is $Y$. We study the regime where the homogenization parameter goes faster to zero than the strength of the noise does. The goal of this paper is to obtain the quenched large deviations principle associated to the component $X$, that is associated with the slow motion. The case of large deviations for such systems in periodic media for all possible interactions between $\epsilon$ and $\delta$, i.e., $\epsilon/\delta\rightarrow 0, c\in(0,\infty)$ or $\infty$, was studied in [@Spiliopoulos2013], see also [@Baldi; @DupuisSpiliopoulos; @FS]. In [@Spiliopoulos2013] (see also [@DupuisSpiliopoulosWang]), it was assumed that the coefficients are periodic with respect to the $y-$variable and based on the derived large deviations principle, asymptotically efficient importance sampling Monte Carlo methods for estimating rare event probabilities were obtained. In the current paper, we focus on quenched (i.e. almost sure with respect to the random environment) large deviations for the case $\epsilon/\delta\uparrow\infty$ and the situation is more complex when compared to the periodic case since the coefficients are now random fields themselves and the fast motion does not take values in a compact space.
We treat the large deviations problem via the lens of the weak convergence framework, [@DupuisEllis], using entirely probabilistic arguments. This framework transforms the large deviations problem to convergence of a stochastic control problem. The current work is certainly related to the literature in random homogenization, see [@KomorowskiLandimOlla2012; @KosyginaRezakhanlouVaradhan; @Kozlov1979; @Kozlov1989; @LionsSouganidis2006; @Olla1994; @OllaSiri2004; @Osada1983; @Osada1987; @PapanicolaouVaradhan1982; @Papaanicolaou1994; @Rhodes2009a]. Our work is most closely related to [@KosyginaRezakhanlouVaradhan; @LionsSouganidis2006], where stochastic homogenization for Hamilton-Jacobi-Bellman (HJB) equations was studied. The authors in [@KosyginaRezakhanlouVaradhan; @LionsSouganidis2006] consider the case $\delta=\epsilon$ with the fast motion being $Y=X/\delta$ and with the coefficients $b=f=0$ in a general Hamiltonian setting. In both papers the authors briefly discuss large deviations for diffusions (i.e., when the Hamiltonian is quadratic) and the action functional is given as the Legendre-Fenchel transform of the effective Hamiltonian and the case studied there is $\delta=\epsilon$. Moreover, in [@Kushner1; @VeretennikovSPA2000] the large deviations principle for systems like (\[Eq:Main\]) is considered in the case $\epsilon=\delta$ with the coefficients $b=f=0$. In [@Kushner1; @VeretennikovSPA2000] the coefficients are deterministic (i.e., not random fields as in our case) and stability type conditions for the fast process $Y$ are assumed in order to guarantee ergodicity. Lastly, related annealed homogenization results (i.e. on average and not almost sure with respect to the medium) for uncontrolled multiscale diffusions as in (\[Eq:Main\]) in the case $\epsilon=1$, $\delta\downarrow 0$ and $Y=X/\delta$ have been recently obtained in [@Rhodes2009a]. 
Under different assumptions on the structure of the coefficients, the opposite case to ours where $\epsilon/\delta\downarrow 0$ has been partially considered in [@DupuisSpiliopoulos; @FS; @Souganidis1999; @Spiliopoulos2013].
In contrast to most of the aforementioned literature, in this paper we study the case $\epsilon/\delta\uparrow \infty$ using entirely probabilistic arguments. Because $\epsilon/\delta\uparrow \infty$, we also need to consider the additional effect of the macroscopic problem (i.e., what is called the cell problem in the periodic homogenization literature) due to the highly oscillating term $\frac{\epsilon}{\delta}\int_{0}^{T}b\left(Y^{\epsilon}_{t},\gamma\right)dt$. Because the homogenization parameter goes to zero faster than the strength of the noise does, we are able to derive an explicit characterization of the quenched large deviations principle and detailed information on the change of measure leading to its proof, Theorem \[T:MainTheorem3\]. Due to the presence of the highly oscillatory term $\frac{\epsilon}{\delta}\int_{0}^{T}b\left(Y^{\epsilon}_{t},\gamma\right)dt$, the change of measure in question depends on the macroscopic problem, and we determine this dependence explicitly. Additionally, in the course of the proof, we obtain quenched (i.e., almost sure with respect to the random environment) ergodic theorems for uncontrolled and controlled random diffusion processes that may be of independent interest, Theorem \[T:MainTheorem1\] and Appendix \[S:QuenchedErgodicTheorems\]. It is of interest to note that, for the purposes of proving the Laplace principle, which is equivalent to the large deviations principle, one can constrain the variational problem associated with the stochastic control representation of exponential functionals to a class of $L^{2}$ controls with a specific dependence on $\delta,\epsilon$, Lemma \[L:RestrictingTheControl\].
Partial motivation for this work comes from chemical physics, molecular dynamics and climate modeling, e.g., [@EVMaidaTimofeyev2001; @DupuisSpiliopoulosWang2; @SchutteWalterHartmannHuisinga2005
---
abstract: |
The spectrum of $^9$He was studied by means of the $^8$He($d$,$p$)$^9$He reaction at a lab energy of 25 MeV/n and small center of mass (c.m.) angles. Energy and angular correlations were obtained for the $^9$He decay products by complete kinematical reconstruction. The data do not show narrow states at $\sim $1.3 and $\sim $2.4 MeV reported before for $^9$He. The lowest resonant state of $^9$He is found at about 2 MeV with a width of $\sim $2 MeV and is identified as $1/2^-$. The observed angular correlation pattern is uniquely explained by the interference of the $1/2^-$ resonance with a virtual state $1/2^+$ (limit on the scattering length is obtained as $a
> -20$ fm), and with the $5/2^+$ resonance at energy $\geq 4.2$ MeV.
author:
- 'M.S. Golovkov'
- 'L.V. Grigorenko'
- 'A.S. Fomichev'
- 'A.V. Gorshkov'
- 'V.A. Gorshkov'
- 'S.A. Krupko'
- 'Yu.Ts. Oganessian'
- 'A.M. Rodin'
- 'S.I. Sidorchuk'
- 'R.S. Slepnev'
- 'S.V. Stepantsov'
- 'G.M. Ter-Akopian'
- 'R. Wolski'
- 'A.A. Korsheninnikov'
- 'E.Yu. Nikolskii'
- 'V.A. Kuzmin'
- 'B.G. Novatskii'
- 'D.N. Stepanov'
- 'S. Fortier'
- 'P. Roussel-Chomaz'
- 'W. Mittig'
title: 'New insight into the low-energy $^9$He spectrum'
---
*Introduction.* — Since the first observation of $^9$He in the experiment [@set87], it has been studied in a relatively small number of works compared to the neighbouring exotic neutron-dripline nuclei. This can be connected, on one hand, to the fact that the technical difficulty of precision measurements grows rapidly as one moves away from the stability line. On the other hand, already in the first experiment (pion double charge exchange on the $^9$Be nucleus [@set87]) several narrow resonances were observed above the $^8$He+$n$ threshold. This observation was confirmed in Ref. [@boh99], where the $^9$Be($^{14}$C,$^{14}$O)$^9$He reaction was used, and the experimental situation with the low-energy spectrum of $^9$He is now considered to be well established. A new rise of interest in $^9$He was connected with the question of the possible location of a $2s$ state, in the framework of the shell-inversion problem in nuclei with large neutron excess. The recent experiment [@che01] was focused on the search for the virtual state in $^9$He. An upper limit on the scattering length, $a<-10$ fm, was established in this work. The properties of states in $^{9}$He were inferred in [@gol03] based on studies of isobaric partners in $^{9}$Li. The available results are summarized in Table \[tab:exp\].
Interpretation of the $^9$He spectrum as provided in [@set87; @boh99] faces certain difficulties which have not gone unnoticed (e.g. Ref. [@bar04]). Indeed, the ground $1/2^-$ state is expected to be a single-particle state with a width estimated as $0.8-1.3$ MeV at $E = 1.27$ MeV for typical channel radii of $3-6$ fm. This requires a spectroscopic factor $S\sim 0.1$, which contradicts the single-particle character of the state. F. Barker in Ref. [@bar04] concludes on this point that “some configuration mixing in either the $^{9}$He($1/2^-$) or $^{8}$He($0^+$) state or both is possible, but is unlikely to be large enough to reduce the calculated width to the experimental value”. The next, presumably $3/2^-$, state should be a complicated particle-hole excitation, as the $p_{3/2}$ subshell is occupied. However, a much larger spectroscopic factor, $S\sim 0.3-0.4$, is required for its width, found in the range $2.0-2.6$ MeV.
![Experimental setup, angles, and momenta.[]{data-label="fig:setup"}](setup){width="44.00000%"}
Having in mind the problematic issues mentioned above, we decided to study $^9$He in the “classical” one-neutron transfer ($d$,$p$) reaction, which populates single-particle states well. In contrast with the previous works, complete-kinematics studies were foreseen to reveal the low-energy $s$-wave mode. Following the experimental concept of [@gol04a; @gol05b], where correlation studies of the $^5$H continuum were accomplished by means of the $^3$H($t$,$p$) transfer reaction, this work was performed in the so-called “zero geometry”.
---------- ------------ ---------- ---------- ---------- ---------- ------------ ----------
            1/2$^+$
Ref.       $a$ (fm)     $E$        $\Gamma$   $E$        $\Gamma$   $E$          $\Gamma$
[@set87]                1.13(10)   small      2.3        small      4.9
[@boh99]                1.27(10)   0.1(0.6)   2.42(10)   0.7(2)     4.3          small
[@che01]   $< \! -10$
[@gol03]                1.1                   2.2                   4.0
Our        $> \! -20$   2.0(0.2)   2                                $\geq 4.2$   $> 1$
---------- ------------ ---------- ---------- ---------- ---------- ------------ ----------
: Experimental positions of states in $^9$He relative to the $^8$He+$n$ threshold (energies and widths are given in MeV).
\[tab:exp\]
{width="86.00000%"}
*Experiment.* — The experiment was performed at the U-400M cyclotron of the Flerov Laboratory of Nuclear Reactions, JINR (Dubna, Russia). A 34 MeV/nucleon $^{11}$B primary beam delivered by the cyclotron hit a 370 mg/cm$^2$ Be production target. The modified ACCULINNA fragment separator [@rod97] was used to produce a $^8$He secondary beam with a typical intensity of $2\times 10^4$ s$^{-1}$. The beam was focused on a cryogenic target [@yuk03] filled with deuterium at 1020 mPa pressure and cooled down to 25 K. The 4 mm thick target cell was equipped with 6 $\mu$m stainless steel windows, 30 mm in diameter.
The experimental setup and the kinematical diagram for the $^{2}$H($^8$He,$p$)$^9$He reaction are shown in Fig. \[fig:setup\]. Slow protons escaping from the target in the backward direction hit an annular 300 $\mu$m silicon detector with an active area of 82 mm outer and 32 mm inner diameter, and a 28 mm central hole. The detector was installed 100 mm upstream of the target. It was segmented in 16 rings on one side and 16 sectors on the other side, providing good position resolution. The detection threshold for the protons ($\sim $1.2 MeV) corresponded to a $\sim$5.5 MeV cutoff in the missing mass of $^9$He. We did not use particle identification here because, due to the kinematical constraints of the $^8$He+$^2$H collisions, only protons can be emitted in the backward direction. The main source of background was evaporation protons originating from the interaction of the $^8$He beam with the material of the target windows. This background was almost completely suppressed by requiring a coincidence with $^8$He. The detection of such coincidences fixed the complete kinematics for the experiment. Energy-momentum conservation was used for cleaning the spectra. Finally, comparison with an empty-target run has shown that only $\sim 2 \%$ of the events can be treated as background.
The $^8$He nuclei resulting from the $^9$He decay, focused in a narrow angular cone relative to the beam direction, were detected by a Si-CsI telescope mounted in air just behind the exit window of the scattering chamber. The 82 mm diameter exit window was closed by a 125 $\mu$m kapton foil. The Si-CsI telescope consisted of two 1 mm thick silicon detectors and 16 CsI crystals with photodiode readouts. The $6 \times 6$ cm Si detectors were segmented in 32 strips in both the horizontal and vertical directions, providing position resolution and particle identification by the $\Delta E$-$E$ method
---
abstract: '[ The dependence of the critical current of spin transfer torque-driven magnetization dynamics on the free-layer thickness was studied by taking into account both the finite penetration depth of the transverse spin current and spin pumping. We showed that the critical current remains finite in the zero-thickness limit of the free layer for both parallel and anti-parallel alignments. We also showed that the remaining value of the critical current of parallel to anti-parallel switching is larger than that of anti-parallel to parallel switching. ]{}'
author:
- 'Tomohiro Taniguchi${}^{1,2}$'
- 'Hiroshi Imamura${}^{1}$[^1]'
title: 'Dependence of critical current of spin transfer torque-driven magnetization dynamics on free layer thickness'
---
Spin transfer torque (STT)-driven magnetization dynamics is a promising technique to operate spin-electronics devices such as a non-volatile magnetic random access memory (MRAM) and a microwave generator [@slonczewski96; @berger96]. STT is the torque due to the transfer of the transverse (perpendicular to magnetization) spin angular momentum from the conducting electrons to the magnetization of the ferromagnetic metal. One of the most important quantities of STT-driven magnetization dynamics is the critical current over which the dynamics of the magnetization is induced. The typical value of the critical current density is on the order of $10^{6}-10^{8}$ \[A/cm${}^{2}$\] [@kiselev03; @seki06; @chen06]. Control of the value of the critical current is required to reduce the energy consumption of spin-electronics devices.
In Slonczewski’s theory of STT [@slonczewski96], the critical current of P-to-AP (AP-to-P) switching is expressed as [@sun00; @grollier03] $$I_{\rm c}^{{\rm P}\to{\rm AP}({\rm AP}\to{\rm P})}
=
\frac{2eMSd}{\hbar\gamma\eta_{\rm P(AP)}}
\alpha_{0}\omega_{\rm P(AP)}\ ,
\label{eq:critical_current}$$ where $e$ is the absolute value of the electron charge, $\hbar$ is the Dirac constant, and $M$, $\gamma$, $S$, $d$ and $\alpha_{0}$ are the magnetization, gyromagnetic ratio, cross-section area, thickness and the intrinsic Gilbert damping constant of the free layer, respectively [@chen06]. $\omega_{\rm P(AP)}$ is the angular frequency of the magnetization around the equilibrium point. The coefficient $\eta_{\rm P,AP}$ characterizes the strength of STT, and depends only on the relative angle between the magnetizations of the fixed and free layers [@slonczewski96; @sun00; @grollier03]. According to Eq. (\[eq:critical\_current\]), the critical current vanishes in the zero-thickness limit of the free layer, $d\!\to\!0$.
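Equation (\[eq:critical\_current\]) is linear in the free-layer thickness $d$, which is exactly why the naive theory predicts a vanishing critical current as $d\to 0$. A small numerical sketch makes this scaling explicit; all parameter values below are illustrative assumptions in CGS units, not values taken from the paper:

```python
import numpy as np

# Illustrative parameter values (assumptions, CGS units):
e      = 4.803e-10            # statC, electron charge
hbar   = 1.055e-27            # erg*s, Dirac constant
M      = 800.0                # emu/cm^3, free-layer magnetization
S      = np.pi * (50e-7)**2   # cm^2, 100 nm diameter pillar cross section
gamma  = 1.76e7               # rad/(s*Oe), gyromagnetic ratio
eta    = 0.3                  # assumed spin-torque efficiency eta_P(AP)
alpha0 = 0.01                 # intrinsic Gilbert damping
omega  = gamma * 1.0e3        # rad/s, precession frequency for ~1 kOe field

def I_c(d):
    """Critical current of Eq. (1) for free-layer thickness d (cm)."""
    return 2.0 * e * M * S * d / (hbar * gamma * eta) * alpha0 * omega

# Linear in d: halving the thickness halves the critical current,
# and the naive formula gives I_c -> 0 as d -> 0.
ratio = I_c(4e-7) / I_c(2e-7)   # 4 nm vs 2 nm free layer
```

The finite penetration depth and spin-pumping corrections discussed next modify precisely this naive $I_{\rm c}\propto d$ scaling at small $d$.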
{width="0.95\columnwidth"}
\[fig:fig1\]
However, recently, Chen *et al.* [@chen06] reported that the critical current of STT-driven magnetization dynamics of a CPP-GMR spin valve remains finite even in the zero-thickness limit of the free layer. What is missing from the above naive considerations based on Slonczewski’s theory are the effects of the finite penetration depth of the transverse spin current, $\lambda_{\rm t}$, [@zhang02; @zhang04; @taniguchi08a] and of spin pumping [@mizukami02b; @tserkovnyak02; @tserkovnyak03; @taniguchi07]. We investigated the critical current of STT-driven magnetization switching from AP to P alignment by taking into account both the finite penetration depth of the transverse spin current and spin pumping, and showed that the critical current remains finite in the zero-thickness limit of the free layer [@taniguchi08b]. We also showed that the remaining value of the critical current is mainly determined by spin pumping. Although our results [@taniguchi08b] agree well with the experimental results of Chen *et al.* [@chen06], we investigated only the critical current of AP-to-P switching, $I_{\rm c}^{{\rm AP}\to{\rm P}}$. For the manipulation of spin-electronics devices, the thickness dependence of the critical current of P-to-AP switching, $I_{\rm c}^{{\rm P}\to{\rm AP}}$, should also be investigated.
In this paper, we study the critical current of STT-driven magnetization switching both from P to AP alignment and from AP to P alignment by taking into account both the finite penetration depth of the transverse spin current and spin pumping. We show that both critical currents, $I_{\rm c}^{{\rm P}\to{\rm AP}}$ and $I_{\rm c}^{{\rm AP}\to{\rm P}}$, remain finite in the zero-thickness limit of the free layer. We also show that $I_{\rm c}^{{\rm P}\to{\rm AP}}$ is larger than $I_{\rm c}^{{\rm AP}\to{\rm P}}$ over the whole range of free-layer thicknesses, and thus the remaining value of $I_{\rm c}^{{\rm P}\to{\rm AP}}$ is larger than that of $I_{\rm c}^{{\rm AP}\to{\rm P}}$. The difference between the remaining values of the critical currents, $I_{\rm c}^{{\rm P}\to{\rm AP}}$ and $I_{\rm c}^{{\rm AP}\to{\rm P}}$, can be explained by considering how the strength of STT, $\eta$, depends on the magnetic alignment.
A schematic view of the system we consider is shown in Fig. \[fig:fig1\]. Two ferromagnetic layers (F${}_{1}$ and F${}_{2}$) are sandwiched by the nonmagnetic layers N${}_{i}$ $(i=1-7)$. The F${}_{1}$ and F${}_{2}$ layers correspond to the free and fixed layers, respectively. $\mathbf{m}_{k}$ $(k=1,2)$ is the unit vector pointing in the direction of the magnetization of the F${}_{k}$ layer. $I$ is the electric current flowing perpendicular to the film plane.
The electric current and pumped spin current at the F${}_{k}$/N${}_{i}$ interface (into N${}_{i}$) is obtained by using the circuit theory [@tserkovnyak02; @brataas01]: $$\begin{aligned}
&
I^{{\rm F}_{k}/{\rm N}_{i}}
\!=\!
\frac{eg}{2h}
\left[
2(\mu_{{\rm F}_{k}}-\mu_{{\rm N}_{i}})
\!+\!
p\mathbf{m}_{k}\!\cdot\!(\bm{\mu}_{{\rm F}_{k}}-\bm{\mu}_{{\rm N}_{i}})
\right]\ ,
\label{eq:electric_current}
\\
&
\mathbf{I}_{s}^{\rm pump}
\!=\!
\frac{\hbar}{4\pi}
\left(
g_{\rm r}^{\uparrow\downarrow}
\mathbf{m}_{1}\!\times\!
\frac{{{\rm d}}\mathbf{m}_{1}}{{{\rm d}}t}
\!+\!
g_{\rm i}^{\uparrow\downarrow}
\frac{{{\rm d}}\mathbf{m}_{1}}{{{\rm d}}t}
\right)\ ,
\label{eq:pump_current}\end{aligned}$$ where $h\!=\!2\pi\hbar$ is the Planck constant, $g\!=\!g^{\uparrow\uparrow}\!+\!g^{\downarrow\downarrow}$ is the sum of the spin-up and spin-down conductances, $p\!=\!(g^{\uparrow\uparrow}\!-\!g^{\downarrow\downarrow})/(g^{\uparrow\uparrow}\!+\!g^{\downarrow\downarrow})$ is the spin polarization of the conductances, and $g_{\rm r(i)}$ is the real (imaginary) part of the mixing conductance. $\mu_{{\rm N}_{i},{\rm F}_{k}}$ and $\bm{\mu}_{{\rm N}_{i},{\rm F}_{k}}$ are the charge and spin accumulation, respectively. The spin current at each F${}_{k}$/N${}_{i}$ and N${}_{i}$/N${}_{j}$ interface (into N${}_{i}$) is given by [@taniguchi08a; @brataas01] $$\begin{aligned}
\!\!\!\!
&\mathbf{I}_{s}^{{\rm F}_{k}/{\rm N}_{i}}
\!=
\frac{1}{4\pi}\!
\left[
g
\left\{\!
p(\mu_{{\rm F}_{k}}\!-\!\mu_{{\rm N}_{i}})
\!
---
abstract: 'The super-inflationary phase is predicted by Loop Quantum Cosmology. In this paper we study the creation of gravitational waves during this phase. We consider the inverse volume corrections to the equation for the tensor modes and calculate the spectrum of the produced gravitons. The amplitude of the obtained spectrum as well as the maximal energy of the gravitons strongly depend on the evolution of the Universe after the super-inflation. We show that a further standard inflationary phase is necessary to lower the amount of gravitons below the present bound. In the absence of a standard inflationary phase, the present intensity of gravitons would be extremely large. These considerations give us another motivation to introduce the standard phase of inflation.'
author:
- Jakub Mielczarek
- 'Marek Szyd[ł]{}owski'
title: 'Relic gravitons from super-inflation'
---
Introduction {#sec:intro}
============
The cosmological creation of gravitational waves was proposed by Grishchuk [@Grishchuk:1974ny] in the mid-seventies. Since that time this phenomenon has been studied extensively, especially in the context of inflation. The accelerating expansion phase provides the conditions for abundant creation of gravitational waves. Gravitons produced during inflation fill the entire space in the form of a stochastic background. Together with the scalar modes produced during inflation, they form the primordial perturbations leading to structure formation. The analysis of the cosmic microwave background (CMB) and of large-scale structures therefore gives the possibility of testing inflationary models. In the case of the CMB, the impact of the gravitational waves comes from the primordial spectrum and from the tensor Sachs-Wolfe effect. The Sachs-Wolfe effect is secondary and leads to CMB anisotropies as a result of the scattering of CMB photons on the relic gravitons. The form of these anisotropies is given by $$\left( \frac{\Delta \text{T}}{\text{T}} \right)_{\text{t}} =
-\frac{1}{2} \int_{\tau_1}^{\tau_{2}} d \tau \ h'_{ij} n^i n^j$$ where $h_{ij}$ describes the tensor modes and $n^i$ is the vector parallel to the unperturbed geodesics. The influence of the gravitational waves on the CMB is, however, too weak to be observed directly with the present observational abilities. Another possible method to detect gravitational waves is the use of antennas such as LIGO, VIRGO, TAMA or GEO600 [@Abbott:2003vs; @Cella:2007jh]. Although these detectors are now very sensitive, this is still not enough to detect the gravitational-wave background directly [@Abbott:2007wd]. This may look pessimistic; we hope, however, that further improvements in observational capabilities will bring us the observational evidence that is so needed for further theoretical progress.
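As an illustration of the tensor Sachs-Wolfe formula above, the following sketch evaluates the integral numerically for a single hypothetical plus-polarized mode propagating along $z$, with photons travelling along $n=(1,0,0)$, in which case $h'_{ij}n^i n^j$ reduces to $h'_{xx}$ (the mode amplitude and wavenumber are arbitrary illustrative values):

```python
import numpy as np

# Toy mode: h_xx = -h_yy = A*cos(k*tau)
A, k = 1e-5, 2.0
tau = np.linspace(0.0, 3.0, 20001)
h = A * np.cos(k * tau)
h_prime = np.gradient(h, tau)       # conformal-time derivative h'

# dT/T = -(1/2) * integral of h'_ij n^i n^j dtau  (trapezoidal rule)
dT_over_T = -0.5 * np.sum(0.5 * (h_prime[1:] + h_prime[:-1]) * np.diff(tau))

# For a single mode the integrand is a total derivative, so the result
# is just the boundary term -(1/2)*(h(tau2) - h(tau1)).
exact = -0.5 * (h[-1] - h[0])
```

The agreement between the quadrature and the boundary term checks the numerics; in a realistic calculation one would sum over modes and integrate along each photon geodesic.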
In this paper we consider a new type of inflation which occurs naturally in Loop Quantum Cosmology [@Bojowald:2006da]. This is the so-called super-inflationary scenario [@Bojowald:2002nz; @Copeland:2007qt], which is a result of the quantum nature of spacetime at the Planck scale. Namely, spacetime is discrete in the quantum regime and its evolution is governed by discrete equations. However, for scales greater than $a_i=\sqrt{\gamma}l_{\text{Pl}}$ (where $\gamma$ is the so-called Barbero-Immirzi parameter), the evolution of the spacetime can be described by the Einstein equations with quantum corrections. For typical values of the quantum numbers, the super-inflationary phase takes place in this semi-classical region.
Our goal is to describe the production of gravitational waves during super-inflation. This problem was preliminarily analysed in Ref. [@Mielczarek:2007zy], but quantum corrections to the equation for the tensor modes were not included in the calculation of the graviton spectrum. In this paper we include the so-called inverse volume corrections in the equation for the evolution of the tensor modes and then calculate the spectrum of the produced gravitational waves. The equations for the tensor modes were recently derived by Bojowald and Hossain [@Bojowald:2007cd], who analysed both the inverse volume corrections and the corrections from holonomies. Here we concentrate on the former. The quantum corrections are generally complicated functions, but they have simple asymptotic behaviours. To calculate the production of gravitons during some process we essentially need to know only the initial and final states, where the asymptotic solutions are a good approximation. In these regimes the calculations can be done analytically; we use numerical solutions to match them.
The text is organized as follows. In section II we fix the semi-classical background dynamics. In section III we consider the creation of gravitons on this background. In section IV we summarize the results.
Background dynamics
===================
The formulation of Loop Quantum Gravity is based on the Ashtekar variables [@Ashtekar:1987gu] and holonomies. The Ashtekar variables replace the spatial metric field $q_{ab}$ in the canonical formulation as follows $$\begin{aligned}
A^i_a &=& \Gamma^i_a+\gamma K_a^i , \\
E^a_i &=& \sqrt{|\det q|} e^{a}_i\end{aligned}$$ where $\Gamma^i_a$ is the spin connection defined as $$\Gamma^i_a = -\epsilon^{ijk}e^b_j(\partial_{[a}e^k_{b]}+\frac{1}{2}e^c_k e^l_a \partial_{[c}e^l_{b]} )$$ and $K_a^i$ is the extrinsic curvature. Here $e^{a}_i$ is the inverse of the co-triad $e^i_a$, defined by $q_{ab}=\delta_{ij}e_a^ie_b^j$. In terms of the Ashtekar variables the full Hamiltonian for general relativity is a sum of constraints $$H_{\text{G}}^{\text{tot}}= \int d^3 {\bf x} \, (N^i G_i + N^a C_a + N h_{\text{sc}}),$$ where $$\begin{aligned}
C_a &= E^b_i F^i_{ab} - (1-\gamma^2)K^i_a G_i ,\nonumber \\
G_i &= D_a E^a_i\end{aligned}$$ and the scalar constraint has a form $$\begin{aligned}
\label{ham}
&H_{\text{G}}:=\int d^3{\bf x} \, N(x) h_{\rm sc}= \nonumber \\
&\frac{1}{16 \pi G} \int d^3{\bf x} \, N(x)\left( \frac{E^a_i
E^b_j}{\sqrt{|\det E|}} {\varepsilon^{ij}}_k F_{ab}^k -
2(1+\gamma^2) \frac{E^a_i E^b_j}{\sqrt{|\det E|}} K^i_a
K^j_b \right) \end{aligned}$$ with $F=dA + \frac{1}{2}[A,A]$. The full Hamiltonian of the theory is the sum of the gravitational and matter parts. For convenience, as the matter part we choose a scalar field with the Hamiltonian $$H_{\phi}=\int d^3{\bf x} \, N(x)\left( \frac{1}{2}\frac{\pi^2_{\phi}}{\sqrt{|\det E|}} + \frac{1}{2} \frac{E^a_i
E^b_i \partial_a \phi \partial_b \phi }{\sqrt{|\det E|}} + \sqrt{|\det E|} V(\phi) \right).$$ We assume here that the field $\phi$ is homogeneous and starts its evolution from the minimum of the potential $V(\phi)$. The second assumption states that the contribution from the potential term is initially negligible, so the Hamiltonian density simplifies to the form $\mathcal{H}_{\phi}=(1/2)\pi^2_{\phi}/\sqrt{|\det E|}$. The term $1/\sqrt{|\det E|}$ for the classical FRW universe corresponds to $1/a^3$, where $a$ is the scale factor. At the quantum level the term $1/\sqrt{|\det E|}$ is quantised and has a discrete spectrum. In the regime $a \gg a_i$ we can, however, use the approximation $1/\sqrt{|\det E|}=D/a^3$ where $$D(q)=q^{3/2} \left\{ \frac{3}{2l} \left( \frac{1}{l+2}\left[(q+1)^{l+2}-|q-1|^{l+2} \right]-
\frac{q}{1+l}\left[(q+1)^{l+1}-\mbox{sgn}(q-1)|q-1|^{l+1} \right] \right) \right\}^{3/(2-2l)}
\label{correction}$$ and $q=(a/a_*)^2$ with $a_*=\sqrt{\gamma j / 3}\,l_{\text{Pl}}$. The function (\[correction\]) depends on the ambiguity parameter $l$. As shown by Bojowald [@Bojowald:2002ny], the value of this parameter is quantised according to $l_k=1
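The correction (\[correction\]) tends to the classical value $D \to 1$ for $q \gg 1$ and is strongly suppressed as $q \to 0$; this can be checked numerically. A minimal sketch (ours; the value $l = 1/2$ is an assumed sample choice of the ambiguity parameter, not fixed by the text above):

```python
import math

def D(q, l):
    """Inverse volume correction factor D(q) with ambiguity parameter l (0 < l < 1)."""
    term1 = ((q + 1) ** (l + 2) - abs(q - 1) ** (l + 2)) / (l + 2)
    term2 = q * ((q + 1) ** (l + 1)
                 - math.copysign(1.0, q - 1) * abs(q - 1) ** (l + 1)) / (l + 1)
    inner = 1.5 / l * (term1 - term2)   # the braced expression times 3/(2l)
    return q ** 1.5 * inner ** (3.0 / (2.0 - 2.0 * l))

l = 0.5  # assumed sample value
print(D(1e-2, l), D(1e4, l))  # strongly suppressed for q << 1; tends to 1 for q >> 1
```

Expanding $(q\pm1)^{l+2}$ for large $q$ shows the braced expression behaves as $q^{l-1}$, so the $q^{3/2}$ prefactor is cancelled and $D \to 1$, consistent with the classical limit.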
---
abstract: '[We prove a conjecture by W. Bergweiler and A. Eremenko on the traces of elements of modular group in this paper. ]{}'
address:
- 'Department of Mathematics, Changshu Institute of Technology, Changshu 215500, China'
- 'Department of Mathematics, University of Texas of the Permian Basin, Odessa, TX, 79762'
author:
- Bin Wang and Xinyun Zhu
title: On the traces of elements of modular group
---
Introduction
============
W. Bergweiler and A. Eremenko made a remarkable conjecture on the traces of elements of the modular group in [@E]. The main result of this paper is a proof of their conjecture. We expect this result to have future applications in fields such as control theory.
Let $A =\left( \begin{array}{cc}
1 & 2 \\
0 & 1
\end{array} \right)$ and $B =\left( \begin{array}{cc}
1 & 0 \\
-2 & 1
\end{array} \right)$. These two matrices generate the free group called $\Gamma(2)$, the principal congruence subgroup of level 2. For arbitrary integers $m_{j} \neq 0, n_{j} \neq 0$, consider the trace of the product $$p_k(m_1, n_1,..., m_k, n_k) = tr(A^{m_1}B^{n_1} \cdots A^{m_k}B^{n_{k}}).$$ It is easy to see that $p_k$ is a polynomial in $2k$ variables with integer coefficients. This polynomial can be written explicitly, though the formula is somewhat complicated.
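For $k=1$ one can check by hand that $p_1(m,n) = 2 - 4mn$, since $A^m$ and $B^n$ are unipotent, with $A^m = \left(\begin{smallmatrix} 1 & 2m \\ 0 & 1 \end{smallmatrix}\right)$ and $B^n = \left(\begin{smallmatrix} 1 & 0 \\ -2n & 1 \end{smallmatrix}\right)$. A short Python sketch (ours, purely illustrative) verifies this closed form; note that substituting $m = \pm(1+x)$, $n = \pm(1+y)$ into $2-4mn$ gives, e.g., $-2-4x-4y-4xy$ or $6+4x+4y+4xy$, with no sign changes in either case, as the main theorem below asserts for general $k$:

```python
def mat_mul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_pow(M, k):
    # integer matrix power; for a determinant-1 integer matrix the inverse is integral
    if k < 0:
        M, k = [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]], -k
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = mat_mul(R, M)
    return R

A = [[1, 2], [0, 1]]
B = [[1, 0], [-2, 1]]

# verify the closed form p_1(m, n) = tr(A^m B^n) = 2 - 4 m n
for m in range(-4, 5):
    for n in range(-4, 5):
        if m != 0 and n != 0:
            P = mat_mul(mat_pow(A, m), mat_pow(B, n))
            assert P[0][0] + P[1][1] == 2 - 4 * m * n
print("p_1(m, n) = 2 - 4 m n verified")
```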
Choosing an arbitrary sequence $\sigma$ of $2k$ signs $\pm$, we make the substitution $$p_{k}^{\sigma}(x_1, y_1,...x_k, y_k)=p_k(\pm (1+x_1), \pm (1+y_1),\cdots , \pm (1+x_k), \pm (1+y_k)).$$ Our main theorem is the following.
\[main\] The polynomial $p_k$, for every $k>0$, has the property that for every $\sigma$, all the coefficients of the polynomial $p_{k}^{\sigma}$ are of the same sign; that is, the sequence of coefficients of $p_{k}^{\sigma}$ has no sign changes.

This is precisely what was conjectured by W. Bergweiler and A. Eremenko in [@E].
We prove the theorem by induction on $k$. However, it is not easy to pass from level $k$ to level $k+1$, since the fact that $p_k$ has the above property does not directly imply that $p_{k+1}$ has it as well. The idea here is to replace the $p_k$'s with a suitable set of polynomials containing them so that the difficulty disappears. This idea is explained in section 2 (see Proposition \[p1\]) and the theorem is proved in section 3.
[**Acknowledgment**]{} We would like to thank Alex Eremenko for his helpful comments on an earlier draft of this paper. The first author is grateful to Jianming Chang for introducing the topic to him and for many helpful conversations.
Traces
======
Good polynomials
----------------
Set $$F_k = \begin{pmatrix} f_k &h_k \\ t_k &g_k \end{pmatrix}
= A^{x_1}B^{y_1}A^{x_2}B^{y_2}\cdots A^{x_k}B^{y_k}$$ where $A= \begin{pmatrix} 1 &2 \\ 0 &1 \end{pmatrix},\,B= \begin{pmatrix} 1 &0 \\ -2 &1 \end{pmatrix}.$ Then the trace $p_k = trF_k = f_k + g_k$, and $f_k, h_k, t_k, g_k$ are all polynomials in the $2k$ variables $x_1, y_1, \cdots x_k, y_k$ with integer coefficients, whose explicit formulas can be found in [@E].
A sequence $\sigma$ of $2k$ signs $\pm$ can be viewed as a function $\sigma : \{1, 2, \cdots 2k \}\rightarrow \{1, -1\}$. For any polynomial $f$ in variables $x_1, y_1, \cdots x_k, y_k$, set $$f^{\sigma}=f(\si (1)(1+x_1), \si (2)(1+y_1), \cdots , \si (2k-1)(1+x_k), \si (2k)(1+y_{k}))$$
[A polynomial $f$ in $2k$ variables is said to be *good* if for every sequence $\sigma$ of $2k$ signs, all the coefficients of $f^{\sigma}$ have the same sign. ]{}
Let $Mat(2,2)$ be the set of $2 \times 2$ matrices over $\R$, the set of real numbers. Denote by $F_k^{\sigma}$ the matrix $\begin{pmatrix} f_k^{\sigma} &h_k^{\sigma}\\ t_k^{\sigma} &g_k^{\sigma} \end{pmatrix}$. If $M= \begin{pmatrix} a &c \\ b &d \end{pmatrix} \in Mat(2,2)$, then $$\begin{array}{rl}
tr(F_kM) &= af_k + bh_k + ct_k +dg_k\\
tr(F_k^{\sigma}M) &= af_k^{\sigma} + bh_k^{\sigma} + ct_k^{\sigma} +dg_k^{\sigma}
\end{array}$$ Write
$$\begin{aligned}
A_1= \begin{pmatrix} 1 &0 \\ 0 &0 \end{pmatrix}&&
A_2= \begin{pmatrix} 2 &1 \\ 0 &0 \end{pmatrix}&&
A_3= \begin{pmatrix} 2 &-1 \\ 0 &0 \end{pmatrix}, \\
A_4= \begin{pmatrix} 3 &2 \\ -2 &-1 \end{pmatrix}&&
A_5= \begin{pmatrix} 5 &2 \\ 2 &1 \end{pmatrix}&&
A_6= \begin{pmatrix} 5 &-2 \\ -2 &1 \end{pmatrix}.\end{aligned}$$
Note that $$A_4 + A_5 = 4A_2, \, A_4 + A_6= 4A_3^t, \, A_4^t + A_5= 4A_2^t, \, A_4^t + A_6 = 4A_3, \, A_2 + A_3 = 4A_1,\label{semi}$$ $$A_4= -A^{-1}B^{-1},\, A^t_4=-AB, \, A_5= AB^{-1},\, A_6=A^{-1}B. \label{eq}$$ Let $S$ be a subset of $Mat(2,2)$; we have
\[p1\] If $S$ satisfies that
P1)
: $ a > 0$, for all $M= \begin{pmatrix} a &c \\ b &d \end{pmatrix} \in S$,
P2)
: $tr(CM)\geq 0$, for each $C \in \{A_4, A_{4}^t, A_5, A_6 \},\, M \in S$, where $D^t$ stands for the transpose of the matrix $D$,
P3)
: $CS\subseteq S, \text{for each} \,\, C \in \{A_4, A_{4}^t, A_5, A_6 \},$
then $af_k + bh_k + ct_k +dg_k$ is good, for every $M= \begin{pmatrix} a &c \\ b &d \end{pmatrix}\in S, k\geq 1.$
\[r1\] $S$ satisfies the conditions P1), P2), and P3) if and only if so does the cone $Cone(S)\triangleq \{ \sum a_iM_i \,|\, a_i\geq 0, M_i \in S \}$. Furthermore, a set $S$ satisfying P1) possesses the property that $af_k + bh_k + ct_k +dg_k$ is good for every $M= \begin{pmatrix} a &c \\ b &d \end{pmatrix}\in S, k\geq 1$, if and only if $Cone(S)$ possesses the same property. [The first assertion is obvious and the second one follows from the fact that the sign of the leading term of $tr(F_k^{\sigma}M) = af_k^{\sigma} + bh_k^{\sigma} + ct_k^{\sigma} +dg_k^{\sigma}$ with $a>0$ is independent of $a$ (see the proof of Lemma \[pass\]).]{}
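The identities (\[semi\]) and (\[eq\]) can be verified by direct matrix arithmetic; a short Python sketch (ours, purely a check):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def neg(X):     return [[-e for e in row] for row in X]
def add(X, Y):  return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def scal(c, X): return [[c * e for e in row] for row in X]
def T(X):       return [[X[0][0], X[1][0]], [X[0][1], X[1][1]]]

A = [[1, 2], [0, 1]];  Ainv = [[1, -2], [0, 1]]   # A^{-1}, integral since det A = 1
B = [[1, 0], [-2, 1]]; Binv = [[1, 0], [2, 1]]

A1 = [[1, 0], [0, 0]];   A2 = [[2, 1], [0, 0]];  A3 = [[2, -1], [0, 0]]
A4 = [[3, 2], [-2, -1]]; A5 = [[5, 2], [2, 1]];  A6 = [[5, -2], [-2, 1]]

# the product identities (eq)
assert A4 == neg(mul(Ainv, Binv)) and T(A4) == neg(mul(A, B))
assert A5 == mul(A, Binv) and A6 == mul(Ainv, B)

# the linear relations (semi)
assert add(A4, A5) == scal(4, A2) and add(A4, A6) == scal(4, T(A3))
assert add(T(A4), A5) == scal(4, T(A2)) and add(T(A4), A6) == scal(4, A3)
assert add(A2, A3) == scal(4, A1)
print("identities verified")
```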
---
abstract: 'The concept of a flock of a quadratic cone is generalized to arbitrary cones. Flocks whose planes contain a common point are called star flocks. Star flocks can be described in terms of their coordinate functions. If the cone is “big enough”, the star flocks it admits can be classified by means of a connection with minimal blocking sets of Rédei type. This connection can also be used to obtain examples of bilinear flocks of non-quadratic cones.'
address: 'Department of Mathematical and Statistical Sciences, University of Colorado Denver, Campus Box 170, P.O. Box 173364, Denver, CO 80217-3364, U.S.A.'
author:
- William Cherowitzo
title: 'Flocks of Cones: Star Flocks'
---
Introduction
============
This is the second (see [@WEC2]) in a series of articles devoted to providing a foundation for a theory of flocks of arbitrary cones in $PG(3,q)$. The desire to have such a theory stems from a need to better understand the very significant and applicable special case of flocks of quadratic cones in $PG(3,q)$. Flocks of quadratic cones have connections with several other geometrical objects, including certain types of generalized quadrangles, spreads, translation planes, hyperovals (in even characteristic), ovoids, inversive planes and quasi-fibrations of hyperbolic quadrics. This rich collection of interconnections is the basis for the strong interest in such flocks.
Cones and Flocks
================
Let $\pi_0$ be a plane and $V$ a point not on $\pi_0$ in $PG(3,q)$. Let $\mathcal{S}$ be any set of points in $\pi_0$ (including the empty set). A *cone*, $\Sigma =
\Sigma(V,\mathcal{S})$ is the union of all points of $PG(3,q)$ on the lines $VP$ where $P$ is a point of $\mathcal{S}$. $V$ is called the *vertex* and $\mathcal{S}$ is called the *carrier* of $\Sigma$. $\pi_0$ is the *carrier plane* and the lines $VP$ are the *generators* of $\Sigma$. In the event that $\mathcal{S} = \emptyset$ we call $\Sigma$ the *empty cone* and by convention consider it to consist of only the point $V$.
A *flock of planes* in $PG(3,q)$ is any set of $q$ *distinct* planes of $PG(3,q)$. As $q$ planes cannot cover all the points of $PG(3,q)$, there are always points of the space which do not lie in any of the planes in a flock of planes. If $\Sigma$ is a cone of $PG(3,q)$, then a flock of planes, $\mathcal{F}$, is said to be a *flock of* $\Sigma$ when the vertex of $\Sigma$ lies in no plane of $\mathcal{F}$ and no two planes of $\mathcal{F}$ intersect at a point of $\Sigma$. Any flock of planes is a flock of a cone, possibly only the empty cone. In general, however, a given flock of planes will be a flock of several cones. In the literature on flocks of quadratic cones, the approach is always to consider a fixed quadratic cone and study the flocks of that cone. We will change the viewpoint and consider, for a fixed flock of planes, the various cones of which it is a flock. In the sequel we shall refer to a flock of planes simply as a *flock* and it shall be understood that it is always a flock of a cone, even if the cone is not explicitly indicated. Furthermore, we shall always assume, unless explicitly stated otherwise, that a flock is a flock of a non-empty cone.
Let $\mathcal{F}$ be a flock. We can introduce coordinates in $PG(3,q)$ so that the plane $x_3 = 0$ is one of the planes of the flock and the point $V = ( 0,0,0,1 )$ is not in any plane of the flock. Since $V$ is not in any plane of $\mathcal{F}$, each of the planes of this flock has an equation of the form $Ax_0 + Bx_1 + Cx_2 - x_3 = 0$. We parameterize the planes of $\mathcal{F}$ with the elements of $GF(q)$ in an arbitrary way except that we will require that $0$ is the parameter assigned to the plane $x_3
= 0$. We can now describe the flock as, $\mathcal{F} = \{\pi_t\colon f(t)x_0 + g(t)x_1 + h(t)x_2 - x_3 = 0
\mid t \in GF(q)\}$ with $\pi_0 \colon x_3 = 0$. The functions $f,g \text{ and } h$ are called the *coordinate functions* of the flock. Note that the requirement on the parameter $0$ means that $f(0) = g(0) = h(0) = 0$. If $f,g \text{
and } h$ are the coordinate functions of the flock $\mathcal{F}$ we shall write $\mathcal{F} = \mathcal{F}(f,g,h)$. We remark that the coordinate functions of a flock depend on the parameterization of the flock.
As all cones under consideration have vertex $V = (
0,0,0,1 )$ and the plane $\pi_0$ as the carrier plane, a cone is determined when its carrier $\mathcal{S}$, a point set in $\pi_0$, is specified. Given a flock $\mathcal{F}$, there is a largest set $\mathcal{S}_0$ of $\pi_0$ such that $\mathcal{F}$ is a flock of the cone with carrier $\mathcal{S}_0$. This cone is called the *critical cone* of $\mathcal{F}$. If $\mathcal{C}$ is any subset of the carrier of the critical cone of a flock $\mathcal{F}$, then clearly $\mathcal{F}$ is also a flock of the cone with carrier $\mathcal{C}$. Thus, determining the critical cone of a flock implicitly determines all cones for which this flock of planes is a flock.
\[Th:herd\] A point $( a,b,c,0 )$ is in the carrier of the critical cone of the flock $\mathcal{F} = \mathcal{F}(f,g,h)$ in $PG(3,q)$ if and only if the function $t \mapsto af(t) + bg(t) + ch(t)$ is a permutation of $GF(q)$.
A point on the line $\langle (0,0,0,1), (a,b,c,0) \rangle$ other than the vertex has coordinates $( a,b,c,\lambda )$ with $\lambda \in GF(q)$. Such a point is on the plane $\pi_t$ of $\mathcal{F}$ if and only if $\lambda = af(t) + bg(t) + ch(t)$. No two planes of $\mathcal{F}$ will meet at the same point of this line if and only if $t \mapsto af(t) + bg(t) + ch(t)$ is a permutation of $GF(q)$. Thus, under this condition, the point $( a,b,c,0 )$ will be in the carrier of the critical cone of $\mathcal{F}$.
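Theorem \[Th:herd\] reduces membership in the carrier of the critical cone to a finite permutation check. A small sketch (ours, with hypothetical coordinate functions over $GF(5)$ chosen purely for illustration, satisfying $f(0)=g(0)=h(0)=0$) implements the test:

```python
q = 5  # toy example over the prime field GF(5)

# hypothetical coordinate functions (not from the paper)
f = lambda t: t % q
g = lambda t: (t * t) % q
h = lambda t: pow(t, 3, q)

def is_permutation(a, b, c):
    # (a, b, c, 0) is in the carrier of the critical cone
    # iff t -> a f(t) + b g(t) + c h(t) permutes GF(q)
    return len({(a * f(t) + b * g(t) + c * h(t)) % q for t in range(q)}) == q

print(is_permutation(1, 0, 0))  # True:  t -> t is a permutation
print(is_permutation(0, 1, 0))  # False: squaring is 2-to-1 on nonzero elements
print(is_permutation(0, 0, 1))  # True:  gcd(3, q - 1) = 1, so t -> t^3 permutes GF(5)
```

Scalar multiples of $(a,b,c)$ give the same projective point, so in practice one would run the test over a set of normalized representatives.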
The critical cone of a flock may be fairly “small”. Besides the empty cone, we will regard cones whose carriers consist of collinear points as “small”. Cones of this type are called *flat cones*. For the most part, we shall regard flocks whose critical cones are flat as uninteresting. For any nonempty set $S$ and any point $P$ of a projective plane, define $w_S(P)$ to be the number of lines through $P$ which contain an element of $S$. In the projective plane $\Pi$ we can define the *width* of a set $S$ to be $W_S = min \{ w_S(P) \mid P \in \Pi \}$. Clearly, $W_S = 1$ if and only if $S$ consists of a set of collinear points. If $S$ is an oval in a projective plane of order $q$ ($q+1$ points, no three of which are collinear), then $W_S = \frac{q+1}{2}$ if $q$ is odd or $W_S = \frac{q+2}{2}$ if $q$ is even. This can be written as $W_S = \lfloor \frac{q+2}{2} \rfloor$, independent of the parity of $q$. Using this idea we can provide an admittedly crude classification of critical cones. In $PG(3,q)$, if $S$ is the set of points of a carrier of a cone and $W_S < \lfloor \frac{q+2}{2} \rfloor$, we call the cone a *thin cone*. Cones which are not thin are called *wide*, and a wide cone with at least $q+1$ points in a carrier is called a *thick cone*. The class of thick cones contains the quadratic cones as well as all cones whose carrier contains an oval.
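The width of an oval can be checked by brute force in a small plane; the following sketch (ours) computes $W_S$ for the conic $y^2 = xz$ in $PG(2,5)$ and recovers $W_S = \frac{q+1}{2} = 3$:

```python
q = 5  # a small example: the projective plane PG(2,5)

# points in normalized homogeneous coordinates (first nonzero entry equals 1)
pts = [(0, 0, 1)] + [(0, 1, z) for z in range(q)] \
    + [(1, y, z) for y in range(q) for z in range(q)]
lines = pts  # by duality, lines carry the same normalized coordinate triples

def incident(P, L):
    return (P[0] * L[0] + P[1] * L[1] + P[2] * L[2]) % q == 0

# the conic y^2 = x z: an oval with q + 1 points
oval = [P for P in pts if (P[1] * P[1] - P[0] * P[2]) % q == 0]
assert len(oval) == q + 1

def w(P):
    # number of lines through P containing a point of the oval
    return sum(1 for L in lines if incident(P, L)
               and any(incident(Q, L) for Q in oval))

W = min(w(P) for P in pts)
print(W)  # (q + 1) / 2 = 3 for q = 5; the minimum is attained at internal points
```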
Definitions
---
abstract: 'Given number fields $L \supset K$, smooth projective curves $C$ defined over $L$ and $B$ defined over $K$, and a non-constant $L$-morphism $h \colon C \to B_L$, we consider the curve $C_h$ defined over $K$ whose $K$-rational points parametrize the $L$-rational points on $C$ whose images under $h$ are defined over $K$. Our construction gives precise criteria for deciding the applicability of Faltings’ Theorem and the Chabauty method to find the points of the curve $C_h$. We provide a framework which includes as a special case that used in Elliptic Curve Chabauty techniques and their higher genus versions. The set $C_h(K)$ can be infinite only when $C$ has genus at most $1$; we analyze completely the case when $C$ has genus 1.'
address:
- 'Mathematical Institute, University of Oxford, 24–29 St. Giles, Oxford OX1 3LB, United Kingdom'
- 'Mathematics Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, United Kingdom'
author:
- 'E.V. Flynn'
- 'D. Testa'
date: '30 September, 2012'
title: Finite Weil restriction of curves
---
Introduction {#introduction .unnumbered}
============
Let $L$ be a number field of degree $d$ over $\mathbb{Q}$, let $f \in L[x]$ be a polynomial with coefficients in $L$, and define $$A_f := \bigl\{ x \in L ~ \mid ~ f(x) \in \mathbb{Q} \bigr\} .$$ Choosing a basis of $L$ over $\mathbb{Q}$ and writing explicitly the conditions for an element of $L$ to lie in $A_f$, it is easy to see that the set $A_f$ is the set of rational solutions of $d-1$ polynomials in $d$ variables. Thus we expect the set $A_f$ to be the set of rational points of a (possibly reducible) curve $C_f$; indeed, this is always true when $f$ is non-constant. A basic question that we would like to answer is to find conditions on $L$ and $f$ that guarantee that the set $A_f$ is finite, and ideally to decide when standard techniques can be applied to explicitly determine this set.
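As a concrete example, take $L = \mathbb{Q}(\sqrt{2})$ and $f(x) = x^3$. Writing $x = u + v\sqrt{2}$ with $u,v \in \mathbb{Q}$, the condition $f(x) \in \mathbb{Q}$ reduces to the single equation $3u^2v + 2v^3 = 0$ ($d-1 = 1$ polynomial in $d = 2$ variables), whose only rational solutions have $v = 0$; so here $A_f = \mathbb{Q}$. A small sketch (ours) checking the arithmetic with exact rationals:

```python
from fractions import Fraction as F

# elements of L = Q(sqrt(2)) represented as pairs (a, b) meaning a + b*sqrt(2)
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def f(x):          # f(x) = x^3
    return mul(x, mul(x, x))

def in_Af(x):      # x lies in A_f iff the sqrt(2)-component of f(x) vanishes
    return f(x)[1] == 0

assert in_Af((F(3, 2), F(0)))       # rational x: x^3 is rational
assert not in_Af((F(1), F(1)))      # (1 + sqrt(2))^3 = 7 + 5*sqrt(2) is irrational
# the sqrt(2)-component of (u + v*sqrt(2))^3 equals 3*u^2*v + 2*v^3
u, v = F(2, 3), F(-5, 7)
assert f((u, v))[1] == 3 * u**2 * v + 2 * v**3
print("A_f = Q for f(x) = x^3 over Q(sqrt(2)) on these samples")
```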
We formalize and generalize the previous problem as follows. Let $L \supset K$ be a finite separable field extension, let $B \to {{\rm Spec}}(K)$ be a smooth projective curve defined over $K$ and let $C$ be a smooth projective curve defined over $L$. Denote by $B_L$ the base-change to $L$ of the curve $B$, and suppose that $h \colon C \to B_L$ is a non-constant morphism defined over $L$. Then there is a (possibly singular and reducible) curve $C_h$ defined over $K$ whose $K$-rational points parametrize the $L$-rational points $p \in C(L)$ such that $h(p) \in B(K) \subset B_L(L)$. In this context, Theorem \[cocha\] identifies an abelian subvariety $F$ of the Jacobian of the curve $C_h$ and provides a formula to compute the rank of the Mordell-Weil group of $F$: these are the ingredients needed to apply the Chabauty method to determine the rational points of the curve $C_h$. To see the relationship of this general problem with the initial motivating question, we let $C:=\mathbb{P}^1_L$ and $B:=\mathbb{P}^1_\mathbb{Q}$. The polynomial $f$ determines a morphism $h \colon \mathbb{P}^1_L \to (\mathbb{P}^1_{\mathbb{Q}})_L$ and the $L$-points of $\mathbb{P}^1_L$ (different from $\infty$) with image in $\mathbb{P}^1(\mathbb{Q})$ correspond to the set $A_f$.
Over an algebraic closure of the field $L$, the curve $C_h$ is isomorphic to the fibered product of morphisms $h_i \colon C_i \to B_L$ obtained from the initial morphism $h$ by taking Galois conjugates (Lemma \[profi\]). Thus we generalize further our setup: we concentrate our attention on the fibered product of finitely many morphisms $h_1 \colon C_1 \to B$, …, $h_n \colon C_n \to B$, where $C_1 , \ldots , C_n$ and $B$ are smooth curves and the morphisms $h_1,\ldots,h_n$ are finite and separable. We determine the geometric genus of the normalization of $C_h$, as well as a natural abelian subvariety $J_h$ of the Jacobian of $C_h$. Due to the nature of the problem and of the arguments, it is immediate to convert results over the algebraic closure to statements over the initial field of definition.
Suppose now that $K$ is a number field. We may use the Chabauty method to find the rational points on $C_h$, provided the abelian variety $J_h$ satisfies the condition that the rank of $J_h(K)$ is less than the genus of $C_h$; by Chabauty’s Theorem (see [@prolegom; @chab1; @chab2]) this guarantees that $C_h(K)$ is finite. Chabauty’s Theorem has been developed into a practical technique, which has been applied to a range of Diophantine problems, for example in [@colemanchab; @colemanpadic; @flynnchab; @lortuck]. Further, if the set $C_h(K)$ is infinite, then, by Faltings’ Theorem [@faltings], the curve $C_h$ contains a component of geometric genus at most one; thus the computation of the geometric genus of the normalization of $C_h$ is a first step towards answering the question of whether $C_h(K)$ is finite or not. Moreover, since all the irreducible components of $C_h$ dominate the curve $C$, it follows that the set $C_h(K)$ can be infinite only in the case in which the curve $C$ has geometric genus at most one. In the case in which $C$ has genus zero, results equivalent to special cases of this question have already been studied ([@az; @bt; @pa; @sch; @za]). We shall analyze completely the case in which the genus of $C$ is one and the curve $C_h$ has infinitely many rational points. This covers as a special case the method called Elliptic Curve Chabauty, which is commonly applied to an elliptic curve $E$ defined over a number field $L \supset K$ such that the rank of $E(L)$ is less than $[L : K]$, when we wish to find all $(x,y) \in E(L)$ subject to an arithmetic condition such as $x \in K$; see, for example, [@bruinth; @bruinchab; @flywet1; @flywet2; @wethth], and a hyperelliptic version in [@siksekchab].
Let $K$ be a number field and let $E : y^2 = (a_2 x^2 + a_1 x + a_0)(x + b_1 + b_2 \sqrt{d})$, with $a_0,a_1,a_2,b_1,b_2,d \in K$, be an elliptic curve defined over $L = K(\sqrt{d})$; suppose also that $b_2 \not= 0$, $d \not= 0$, $d \not\in (K^*)^2$, so that $E$ is not defined over $K$. We are interested in $(x,y) \in E(L)$ with $x \in K$. Let $y = r + s\sqrt{d}$, with $r,s \in K$. Equating coefficients of $1,\sqrt{d}$ gives $$r^2 + d s^2 = (a_2 x^2 + a_1 x + a_0) (x + b_1),\ \
2 r s = (a_2 x^2 + a_1 x + a_0) b_2.$$ Let $t = s^2/(a_2 x^2 + a_1 x + a_0)$. Eliminate $r$ to obtain $b_2^2/(4 t) + d t = x + b_1$, so that $x = x(t) := b_2^2/(4 t) + d t - b_1$. Hence $s,t$ satisfy the curve $C : (st)^2 = t^3(a_2 x(t)^2 + a_1 x(t) + a_0)$, for which the right hand side is a quintic in $t
---
abstract: 'The helicity of a vector field is a measure of the average linking of pairs of integral curves of the field. Computed by a six-dimensional integral, it is widely useful in the physics of fluids. For a divergence-free field tangent to the boundary of a domain in 3-space, helicity is known to be invariant under volume-preserving diffeomorphisms of the domain that are homotopic to the identity. We give a new construction of helicity for closed $(k+1)$-forms on a domain in $(2k+1)$-space that vanish when pulled back to the boundary of the domain. Our construction expresses helicity in terms of a cohomology class represented by the form when pulled back to the compactified configuration space of pairs of points in the domain. We show that our definition is equivalent to the standard one. We use our construction to give a new formula for computing helicity by a four-dimensional integral. We provide a Biot-Savart operator that computes a primitive for such forms; utilizing it, we obtain another formula for helicity. As a main result, we find a general formula for how much the value of helicity changes when the form is pushed forward by a diffeomorphism of the domain; it relies upon understanding the effect of the diffeomorphism on the homology of the domain and the de Rham cohomology class represented by the form. Our formula allows us to classify the helicity-preserving diffeomorphisms on a given domain, finding new helicity-preserving diffeomorphisms on the two-holed solid torus and proving that there are no new helicity-preserving diffeomorphisms on the standard solid torus. We conclude by defining helicities for forms on submanifolds of Euclidean space. In addition, we provide a detailed exposition of some standard ‘folk’ theorems about the cohomology of the boundary of domains in ${\mathbb{R}}^{2k+1}$.'
address:
- 'Department of Mathematics, University of Georgia, Athens, GA 30602'
- 'Department of Mathematics, Wake Forest University, Winston-Salem, NC 27109'
author:
- Jason Cantarella
- Jason Parsley
bibliography:
- 'helicity-forms.bib'
- 'cantarella.bib'
title: 'A new cohomological formula for helicity in ${\mathbb{R}}^{2k+1}$ reveals the effect of a diffeomorphism on helicity'
---
Introduction
============
The linking number of a pair of closed curves $a$ and $b$ in ${\mathbb{R}}^3$ is a topological measure of their entanglement. We can define the linking number as the degree of the Gauss map $g \co S^1 {\times}S^1 \rightarrow S^2$ given by $g(\theta,\phi) =\left( a(\theta) - b(\phi) \right) / {\left| a(\theta) - b(\phi) \right|}$. This degree can be written combinatorially, by counting signed crossings of $a$ and $b$, but we can also write this degree as an integral by pulling back the area form on $S^2$ via the Gauss map and integrating over the torus $S^1 {\times}S^1$. This “Gauss integral formula” for linking number yields $${\operatorname{Lk}}(a,b) = \frac{1}{{\operatorname{vol}}(S^2)} \int a'(\theta) {\times}b'(\phi) \cdot \frac{a(\theta) - b(\phi)}{{\left| a(\theta) - b(\phi) \right|}^3} d\theta \, d\phi.$$ The linking number is a knot invariant, so it is invariant under any ambient isotopy of ${\mathbb{R}}^3$ carrying the curves to new curves $\tilde{a}$ and $\tilde{b}$.
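The Gauss integral formula can be evaluated numerically. The sketch below (ours, purely illustrative) approximates it with a Riemann sum for two unit circles forming a Hopf link, for which the linking number is $\pm 1$; since the integrand is smooth and periodic, the rectangle rule converges rapidly:

```python
import math

def gauss_linking(a, da, b, db, N=200):
    """Riemann-sum approximation of the Gauss linking integral."""
    total = 0.0
    h = 2 * math.pi / N
    for i in range(N):
        t = i * h
        ax, ay, az = a(t)
        dax, day, daz = da(t)
        for j in range(N):
            s = j * h
            bx, by, bz = b(s)
            dbx, dby, dbz = db(s)
            # cross product a'(t) x b'(s)
            cx = day * dbz - daz * dby
            cy = daz * dbx - dax * dbz
            cz = dax * dby - day * dbx
            rx, ry, rz = ax - bx, ay - by, az - bz
            r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
            total += (cx * rx + cy * ry + cz * rz) / r3
    return total * h * h / (4 * math.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the
# xz-plane passing through the first circle's center
a  = lambda t: (math.cos(t), math.sin(t), 0.0)
da = lambda t: (-math.sin(t), math.cos(t), 0.0)
b  = lambda s: (1.0 + math.cos(s), 0.0, math.sin(s))
db = lambda s: (-math.sin(s), 0.0, math.cos(s))

print(abs(gauss_linking(a, da, b, db)))  # close to 1 for this linked pair
```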
Given a divergence-free vector field $V$ on a domain $\Omega \subset {\mathbb{R}}^3$ with smooth boundary, we can define an analogous integral invariant known as *helicity*. The six-dimensional helicity integral, which measures the average linking number of pairs of integral curves of the field [@MR891881], is given by: $$\label{mhelicity}
{\operatorname{H}}(V) = \frac{1}{{\operatorname{vol}}(S^2)} \int_{\Omega \times \Omega} {V(x) \times V(y) \cdot \frac{x-y}{|x-y|^3} \; {\operatorname{dvol}}_x \, {\operatorname{dvol}}_y }$$ Just as the linking number of a pair of curves is a knot invariant, we might expect the helicity of a vector field to be a diffeomorphism invariant. This is not always true, as we will demonstrate below, but it is true in enough cases to make helicity an important quantity in fluid dynamics and plasma physics [@MR819398].
The helicity invariant for vector fields was used in plasma physics as early as 1958 by L. Woltjer [@MR0096542]. Woltjer showed that helicity was an invariant of the equations of ideal magnetohydrodynamics for an isolated system, and as such it was immediately useful in the study of astrophysical plasmas. J.J. Moreau in 1961 [@MR0128195] first used the invariant to study fluid dynamics. In an influential 1969 paper [@mof1], Keith Moffatt proved that helicity is an invariant of the equations of ideal fluid flow, even in the presence of an external force on the fluid.
The invariance of helicity has been reproved in various physical contexts ever since. For instance, Peradzynski showed that helicity was invariant under the equations of motion for superfluid helium [@peradzynski]. The same invariant was associated to foliations by Godbillon and Vey in 1970 [@MR0283816], by defining the foliation as the kernel of a 1-form and measuring the helicity of the form. In 1973, V.I. Arnol’d defined helicity for 2-forms in a 3-manifold [@MR891881] (the paper was published in English translation in 1986). His may be the first proof of the invariance of helicity under arbitrary volume-preserving diffeomorphisms (on a simply-connected domain)[^1].
The most general invariance theorem for helicity known is:
\[classicalinvariance\] The helicity of a divergence-free vector field $V$ on a domain $\Omega \subset {\mathbb{R}}^3$ is invariant under any volume-preserving diffeomorphism of $\Omega$ which is homotopic to the identity. If $\Omega$ is simply connected, then helicity is invariant under any volume-preserving diffeomorphism of $\Omega$.
If $V$ is a null-homologous vector field (meaning that its dual 2-form is exact) on a compact manifold $M^3$ without boundary, then helicity is invariant under any volume-preserving diffeomorphism of $M$.
Also, if $V$ on $\Omega$ is fluxless (cf. section \[sec:fluxless\]) on a domain in ${\mathbb{R}}^3$ with boundary, then its helicity is invariant under any volume-preserving diffeomorphism [@MR1770976]. These invariance results leave open some natural questions: are these all of the helicity-preserving diffeomorphisms? If not, can we classify the diffeomorphisms that do preserve helicity? What is the effect of an arbitrary diffeomorphism on helicity?
Figure \[tori\] depicts a diffeomorphism which does not preserve helicity. Here, the domain is a solid torus, and the vector field following its longitudes is divergence-free and null-homologous[^2]. Applying a Dehn twist will preserve the volume form but changes the helicity of the field, which we will calculate by Theorem \[torus-h\].
[straight\_field]{}
[twisted\_field]{}
To answer the questions above, we notice that in the theory developed so far, there exists an asymmetry between linking number and helicity – while there are several useful ways to obtain linking number, including a “purely homological” expression as the degree of a map and a combinational expression as the sum of signed crossing numbers as well as an integral expression, so far the helicity has only been expressed as an integral. In this paper we try to restore the balance between linking number and helicity by providing a purely cohomological definition for the helicity of $(k+1)$-forms on domains $\Omega$ in ${\mathbb{R}}^{2k+1}$ (Definition \[fthelicity\]). We work with forms $\omega$ that are closed and satisfy the following definition:
A smooth $p$-form $\alpha$ defined on the domain $\Omega$ is a *Dirichlet form* if $\alpha$ vanishes when restricted to the boundary, i.e., if $V_1, \ldots, V_p$ are all tangent to ${\partial}\Omega$, then $\alpha(V_1, \ldots, V_p)=0$.
For domains in ${\mathbb{R}}^3$, closed Dirichlet forms are dual to vector fields that are divergence-free and tangent to the boundary. In Appendix \[hodge\], we examine decompositions of differential forms; in particular we characterize the set of closed Dirichlet forms. Proposition \[alphaisexact\] guarantees that every closed Dirichlet form is exact.
Arnol’d defined helicity as the integral of the wedge product of an exact form with a primitive for that form [@MR1612569
---
abstract: |
The univalence axiom expresses the principle of extensionality for dependent type theory. However, if we simply add the univalence axiom to type theory, then we lose the property of *canonicity* — that every closed term computes to a canonical form. A computation becomes ‘stuck’ when it reaches the point that it needs to evaluate a proof term that is an application of the univalence axiom. So we wish to find a way to compute with the univalence axiom. While this problem has been solved with the formulation of cubical type theory, where the computations are expressed using a nominal extension of lambda-calculus, it may be interesting to explore alternative solutions, which do not require such an extension.
As a first step, we present here a system of predicative higher-order minimal logic (PHOML). There are three kinds of typing judgement in PHOML. There are *terms* which inhabit *types*, which are the simple types over $\Omega$. There are *proofs* which inhabit *propositions*, which are the terms of type $\Omega$. The canonical propositions are those constructed from $\bot$ by implication $\supset$. Thirdly, there are *paths* which inhabit *equations* $M =_A N$, where $M$ and $N$ are terms of type $A$. There are two ways to prove an equality: reflexivity, and *propositional extensionality* — logically equivalent propositions are equal. This system allows for some definitional equalities that are not present in cubical type theory, namely that transport along the trivial path is identity.
We present a call-by-name reduction relation for this system, and prove that the system satisfies canonicity: every closed typable term head-reduces to a canonical form. This work has been formalised in Agda.
author:
- Robin Adams
- Marc Bezem
- Thierry Coquand
bibliography:
- '../../../../type.bib'
title: 'A Normalizing Computation Rule for Propositional Extensionality in Higher-Order Minimal Logic '
---
Introduction
============
The *univalence axiom* of Homotopy Type Theory (HoTT) [@hottbook] postulates a constant $${\ensuremath{\mathsf{isotoid}}}: A \simeq B \rightarrow A = B$$ that is an inverse to the obvious function $A = B \rightarrow A \simeq B$. However, if we simply add this constant to Martin-Löf type theory, then we lose the important property of *canonicity* — that every closed term of type $A$ computes to a unique canonical object of type $A$. When a computation reaches a point where we eliminate a path (proof of equality) formed by ${\ensuremath{\mathsf{isotoid}}}$, it gets ‘stuck’.
As possible solutions to this problem, we may try to make do with a weaker property than canonicity, such as *propositional canonicity*: that every closed term of type $\mathbb{N}$ is *propositionally* equal to a numeral, as conjectured by Voevodsky. Alternatively, we may attempt to change the definition of equality to make ${\ensuremath{\mathsf{isotoid}}}$ definable [@Polonsky14a], or add a nominal extension to the syntax of the type theory (e.g. Cubical Type Theory [@cchm:cubical]).
We could also try a more conservative approach, and simply attempt to find a reduction relation for a type theory involving ${\ensuremath{\mathsf{isotoid}}}$ that preserves the desirable properties, in particular canonicity. There seems to be no reason *a priori* to believe this is not possible, but it is difficult to do because the full Homotopy Type Theory is a complex and interdependent system. We can tackle the problem by adding univalence to a much simpler system, finding a well-behaved reduction relation, then doing the same for more and more complex systems, gradually approaching the full strength of HoTT.
In this paper, we present a system we call PHOML, or predicative higher-order minimal logic. It is a type theory with three kinds of typing judgement. There are *terms* which inhabit *types*, which are the simple types over $\Omega$. There are *proofs* which inhabit *propositions*, which are the terms of type $\Omega$. The canonical propositions are those constructed from $\bot$ by implication $\supset$. Thirdly, there are *paths* which inhabit *equations* $M =_A N$, where $M$ and $N$ are terms of type $A$.
There are two canonical forms for proofs of $M =_\Omega N$. For any term $\varphi : \Omega$, we have ${\ensuremath{\mathrm{ref} \left( {\varphi} \right)}} : \varphi =_\Omega \varphi$. We also add univalence for this system, in this form: if $\delta : \varphi \supset \psi$ and $\epsilon : \psi \supset\varphi$, then ${\ensuremath{\mathrm{univ}_{{\varphi}, {\psi}} \left({\delta} , {\epsilon} \right)}} : \varphi =_\Omega \psi$.
This entails that in PHOML, two propositions that are logically equivalent are equal. Every function of type $\Omega \rightarrow \Omega$ that can be constructed in PHOML must therefore respect logical equivalence. That is, for any $F$ and logically equivalent $x$, $y$ we must have that $Fx$ and $Fy$ are logically equivalent. Moreover, if for $x:\Omega$ we have that $Fx$ is logically equivalent to $Gx$, then $F =_{\Omega\to\Omega} G$. Every function of type $(\Omega \rightarrow \Omega) \rightarrow \Omega$ must respect this equality; and so on. This is the manifestation in PHOML of the principle that only homotopy invariant constructions can be performed in homotopy type theory. (See Section \[section:exampletwo\].)
We present a call-by-name reduction relation for this system, and prove that every typable term reduces to a canonical form. From this, it follows that the system is consistent.
For the future, we wish to include the equations in $\Omega$, allowing for propositions such as $M =_A N \supset N =_A M$. We wish to expand the system with universal quantification, and expand it to a 2-dimensional system (with equations between proofs). We then wish to add more inductive types and more dimensions, getting ever closer to full homotopy type theory.
Another system with many of the same aims is cubical type theory [@cchm:cubical]. The system PHOML is almost a subsystem of cubical type theory. We can attempt to embed PHOML into cubical type theory, mapping $\Omega$ to the universe $U$, and an equation $M =_A N$ to either the type ${\ensuremath{\mathsf{Path} \, {A} \, {M} \, {N}}}$ or to $\mathrm{Id}\ A\ M\ N$. However, PHOML has more definitional equalities than the relevant fragment of cubical type theory; that is, there are definitionally equal terms in PHOML that are mapped to terms that are not definitionally equal in cubical type theory. In particular, ${\ensuremath{\mathrm{ref} \left( {x} \right)}}^+ p$ and $p$ are definitionally equal, whereas the terms $\mathrm{comp}^i x [] p$ and $p$ are not definitionally equal in cubical type theory (but they are propositionally equal). See Section \[section:cubical\] for more information.
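The extra definitional equality can be illustrated with a toy term rewriter (a sketch of our own in Python, unrelated to the paper's Agda formalisation): transporting a proof along a reflexivity path head-reduces to the proof itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ref:            # ref(M), the reflexivity path M =_A M
    term: str

@dataclass(frozen=True)
class Plus:           # P^+, transport of a proof along the path P
    path: object
    arg: object

def head_reduce(proof):
    """One head-reduction step; returns the proof unchanged if no rule
    applies.  The only rule modelled here is  ref(M)^+ delta  ->  delta."""
    if isinstance(proof, Plus) and isinstance(proof.path, Ref):
        return proof.arg
    return proof

print(head_reduce(Plus(Ref("phi"), "delta")))  # -> delta
```

In cubical type theory the corresponding pair of terms is only propositionally, not definitionally, equal, which is the distinction the paragraph above draws.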
The proofs in this paper have been formalized in Agda. The formalization is available at `https://github.com/radams78/TYPES2016`.
Predicative Higher-Order Minimal Logic with Extensional Equality
================================================================
We call the following type theory PHOML, or *predicative higher-order minimal logic with extensional equality*.
Syntax
------
Fix three disjoint, infinite sets of variables, which we shall call *term variables*, *proof variables* and *path variables*. We shall use $x$ and $y$ as term variables, $p$ and $q$ as proof variables, $e$ as a path variable, and $z$ for a variable that may come from any of these three sets.
The syntax of PHOML is given by the grammar:
$$\begin{array}{lrcl}
\text{Type} & A,B,C & ::= & \Omega \mid A \rightarrow B \\
\text{Term} & L,M,N, \varphi,\psi,\chi & ::= & x \mid \bot \mid \varphi \supset \psi \mid \lambda x:A.M \mid MN \\
\text{Proof} & \delta, \epsilon & ::= & p \mid \lambda p:\varphi.\delta \mid \delta \epsilon \mid P^+ \mid P^- \\
\text{Path} & P, Q & ::= & e \mid {\ensuremath{\mathrm{ref} \left( {M} \right)}} \mid P \supset^* Q \mid {\ensuremath{\mathrm{univ}_{{\varphi}, {\psi}} \left({P} , {Q} \right)}} \mid \\
& & & {\ensuremath{\lambda \!\! \lambda \!\! \lambda}}e : x =_A y. P \mid P_{MN} Q \\
\text{Context}
---
author:
- 'A. Rothkegel'
- 'K. Lehnertz'
title: 'Synchronization in populations of sparsely connected pulse-coupled oscillators'
---
The collective dynamics of interacting oscillatory systems has been studied in many different contexts in the natural and life sciences [@Winfree1967; @Kuramoto1984; @Pikovsky_Book2001; @Arenas2008]. In the thermodynamic limit, evolution equations for the population density proved to be a useful description [@Desai1978; @Omurtag2000; @Acebron2005], in particular to characterize the stability of synchronous and asynchronous states (see, e.g., [@Mirollo1990; @Strogatz1991; @Treves1993; @Abbott1993; @Strogatz2000; @Vreeswijk2000; @Gerstner2000; @Ly2010; @Newhall2010; @Louca2013]). Usually, dense or all-to-all-coupled networks are considered for these descriptions. Motivated by natural systems in which constituents interact with few others only, investigations of complex networks have revealed a large influence of the degree and sparseness of connectivity on network dynamics [@Hopfield1995; @Golomb2000; @Boergers2003; @Zillmer2006; @Zillmer2009; @Rothkegel2011; @Luccioli2012; @Tessone2012]. Especially when knowledge about the connection structure is limited, it is natural to assume random connections (as in [Erdős-Rényi ]{}networks) or random interactions (where excitations are assigned randomly to target oscillators [@Omurtag2000; @Sirovich2006; @Dumont2013; @Nicola2013]). Both approaches often yield comparable dynamics (e.g. [@Ferreira2012; @Tattini2012]), whereas random interactions represent a substantial simplification from a mathematical point of view, allowing one to describe the networks in terms of evolution equations for the phase density. These equations are usually posed as the starting point for the commonly applied mean-driven or fluctuation-driven limits. However, they are rarely studied in full, although it can be expected that sparseness largely influences the collective dynamics, as has been discussed for excitable systems [@Sirovich2006].
In this Letter, we propose a population model of $\delta$-pulse coupled oscillators with sparse connectivity, derive the governing equations from a general definition of the density flux, and characterize existence and uniqueness of stationary solutions. For integrate-and-fire-like oscillators, the latter may either disappear with diverging firing rate or lose stability at a supercritical Andronov-Hopf bifurcation (AHB). This is in contrast to the global convergence to complete synchrony for all-to-all coupling that has been shown for finite [@Mirollo1990] and for infinite [@Mauroy2013] number of oscillators.
Consider a population of $N$ oscillators $n = 1, \ldots, N$ with cyclic phases $\phi_n(t) \in [ 0,1 )$ and intrinsic dynamics $\dot{\phi}_n(t) = 1$. If for some time $t_f$ the phase of some oscillator $n$ reaches 1, the oscillator fires and we introduce a phase jump in each other oscillator $n'$ independently with probability $p= m/N$ [@DeVille2008; @Olmi2010]. Here, $m$ is the number of recurrent connections per oscillator. The height of the phase jump is defined by the phase response curve $\Delta(\phi)$ (PRC) (or equivalently by the phase transition curve $R(\phi)$): $$\label{eq:interactionSingleOsci}
\phi_{n'}(t_f^+) = \phi_{n'}(t_f) + \Delta\left(\phi_{n'}(t_f)\right) = R( \phi_{n'} ( t_f)).$$ The model can be interpreted as an all-to-all coupled network in which connections are not reliable and mediate interactions between oscillators only with a small probability ($p$). It can also be interpreted as an approximation to an [Erdős-Rényi ]{}network in which the quenched disorder, imposed by its construction, is replaced by a dynamic coupling structure which takes the form of an ongoing random influence.
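Before passing to the continuum limit below, the finite-$N$ model can be simulated directly with an event-driven scheme. The following sketch is our own; in particular, the multiplicative phase response $\Delta(\phi) = \epsilon\phi$ (an integrate-and-fire-like, convex PRC) and all parameter values are illustrative choices, not values fixed by the text.

```python
import numpy as np

def simulate(N=500, m=4, eps=0.1, n_firings=5000, seed=0):
    """Event-driven simulation of the sparse pulse-coupled model: each
    firing hits every other oscillator independently with probability
    p = m/N, and hit oscillators receive the jump phi -> phi + eps*phi."""
    rng = np.random.default_rng(seed)
    phi = rng.random(N)        # random initial phases in [0, 1)
    t = 0.0
    for _ in range(n_firings):
        j = np.argmax(phi)
        dt = 1.0 - phi[j]      # time until the next oscillator reaches 1
        t += dt
        phi += dt              # intrinsic dynamics  dphi/dt = 1
        phi[j] = 0.0           # the firing oscillator resets
        hit = rng.random(N) < m / N
        hit[j] = False
        # apply the PRC; phases pushed past 1 are held just below threshold
        phi[hit] = np.minimum(phi[hit] * (1.0 + eps), 1.0 - 1e-9)
    return n_firings / (N * t)  # mean firing rate per oscillator

print(f"mean firing rate per oscillator: {simulate():.3f}")
```

Such direct simulations are a useful sanity check for the stationary solutions and bifurcations derived from the population equations.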
For the limit of large sparse networks ($N \rightarrow \infty, m = \mbox{const.}$), we represent the network dynamics by a continuity equation for the phase density $\rho(\phi,t)$ $$\label{eq:continuity}
\partial_t \rho(\phi,t) + \partial_\phi J(\phi,t) = 0$$ with $\rho (\phi,t) \geq 0$ and $\int_0^1 \rho(\phi,t) d\phi = 1$. We assume the probability flux $J(\phi,t)$ to be continuous and define both $\rho$ and $J$ at phases $\phi \in [0,1)$. Evaluations at $\phi = 1$ are meant as left-sided limits towards $\phi = 1$. $J(1,t)$ is the firing rate. Every oscillator is subject to Poisson excitations $\eta_{\lambda(t)}$ with inhomogeneous rate $\lambda(t) = m J(1,t)$ and we can describe its phase variable by the stochastic differential equation $\partial_t \phi(t) = 1 + \eta_{\lambda(t)}$. To shorten our notation, we will omit in the following the time $t$ as argument of $\rho$, $\lambda$, and $J$. As we expect $R(\phi)$ to be non-invertible and to map intervals to a single phase, we have to take care in which way $\rho$ and $J$ are interpreted at these phases. Given some distribution of oscillators phases, we consider $\rho(\phi,t) d\phi$ as the fraction of oscillators which are contained in a small interval whose left boundary is fixed to $\phi$. With this definition, $\rho(\phi,t)$ is continuous for right-sided limits and the corresponding $J(\phi,t)$ is defined by the oscillators which pass an imaginary boundary which is infinitely close to $\phi$ and right to $\phi$. The flux can be formalized in the following way: $$\label{eq:fluxGeneralForm}
J(\phi) = \rho(\phi) + \lambda \left(\int \limits_{I_>(\phi)} \rho(\tilde{\phi}) d\tilde{\phi} - \int \limits_{I_\leq(\phi)} \rho(\tilde{\phi}) d\tilde{\phi} \right),$$ where $I_>(\phi) := \{ \tilde{\phi} < \phi | R(\tilde{\phi}) > \phi\}$ is the set of phases smaller than $\phi$ which are mapped by $R(\phi)$ to a phase larger than $\phi$, and $I_\leq(\phi) := \{ \tilde {\phi} > \phi | R(\tilde{\phi}) \leq\phi\} $ is defined analogously (the order relations in these formulas are interpreted for unwrapped phases). The first term on the right-hand side represents convection due to the intrinsic dynamics of the oscillators. The integrals represent the fractions of oscillators which are moved across phase $\phi$ by an excitation, either to smaller or larger values.
PRCs which are derived from limit cycle oscillators by phase reduction usually have invertible phase transition curves [@Brown2004a]. However, the flux definition above holds even if $R(\phi)$ is not invertible and has no or uncountably many inverse images. For phases $\phi$ at which $R(\phi)$ has at most countably many inverse images, we can represent the sets $I_>(\phi)$ and $I_\leq(\phi)$ by a product of two Heaviside functions and, differentiating the latter into $\delta$-functions, derive the following expression: $$\partial_\phi J(\phi) = \partial_\phi \rho(\phi) + \lambda \int_0^1 \rho(\tilde{\phi})\left( \delta ( \phi - \tilde{\phi} )- \delta (\phi - R(\tilde{\phi})) \right) d\tilde{\phi}.$$ Denoting with $(R_i^{-1}(\phi) | i \in I)$ an enumeration of the inverse images of $R(\phi)$ at phase $\phi$ for an appropriate index set $I$, the continuity equation reads: $$\label{eq:sparseLimit}
\partial_t \rho(\phi) = - \partial_\phi \rho(\phi) - \lambda \rho(\phi) + \lambda \sum_{i \in I} \frac{\rho(R_i^{-1}(\phi)) }{R'(R_i^{-1}(\phi))}.$$ For uncountably many inverse images of some phase $\varphi$, they will be contained in $I_>(\varphi)$ or $I_\leq (\varphi)$ but not in $I_>(\varphi^-)$ and $I_\leq(\varphi^-)$. In this case, we obtain a discontinuity between $\rho(\varphi^-)$ and $\rho(\varphi)$ which can be expressed by requiring continuity of the flux for left-sided limits at $\phi = \varphi$ ($J(\varphi^-) = J(\varphi)$). Note that the flux definition automatically ensures continuity for right-sided limits. Setting $\phi = 1$ in the flux definition, we obtain the following relationship for the excitation rate $\lambda= m J(1)$ $$\label{eq:firingRate}
\
---
abstract: 'We consider distributed and dynamic caching of coded content at small base stations (SBSs) in an area served by a macro base station (MBS). Specifically, content is encoded using a maximum distance separable code and cached according to a time-to-live (TTL) cache eviction policy, which allows coded packets to be removed from the caches at periodic times. Mobile users requesting a particular content download coded packets from SBSs within communication range. If additional packets are required to decode the file, these are downloaded from the MBS. We formulate an optimization problem that is efficiently solved numerically, providing TTL caching policies minimizing the overall network load. We demonstrate that distributed coded caching using TTL caching policies can offer significant reductions in terms of network load when request arrivals are bursty. We show how the distributed coded caching problem utilizing TTL caching policies can be analyzed as a specific single cache, convex optimization problem. Our problem encompasses static caching and the single cache as special cases. We prove that, interestingly, static caching is optimal under a Poisson request process, and that for a single cache the optimization problem has a surprisingly simple solution.'
author:
- |
Jesper Pedersen, Alexandre Graell i Amat, , Jasper Goseling, ,\
Fredrik Brännström, , Iryna Andriyanova, , and Eirik Rosnes, [^1] [^2] [^3] [^4] [^5]
bibliography:
- 'confs-jrnls.bib'
- 'IEEEabrv.bib'
- 'library.bib'
title: Dynamic Coded Caching in Wireless Networks
---
Caching, content delivery networks, erasure correcting codes, TTL.
Introduction
============
Distributed wireless caching has attracted a significant amount of attention in the last few years as a promising technology to alleviate the load on backhaul links [@Boccardi2014]. Content may be cached in a distributed fashion across small base stations (SBSs) such that users can download requested content directly from them. For distributed caching, the use of erasure correcting codes has been shown to reduce the download delay as well as the network load [@Shanmugam2013; @Bioglio2015]. Content may also be cached directly in mobile devices such that users can download content from neighboring devices using device-to-device communication. Similar to the SBS caching case, the use of erasure correcting codes has been demonstrated to reduce the network load also for this scenario [@Pedersen2016; @Pedersen2019; @Wang2017]. Caching furthermore facilitates index-coded broadcasts to multiple users requesting different content, which has been shown to drastically reduce the amount of data that has to be transmitted over the SBS-to-device downlink [@Maddah-Ali2014]. All these works consider the cached content to be static for a period of time (e.g., a day) according to a given file popularity distribution.
Dynamic cache eviction policies, e.g., first-in-first-out (FIFO), least-recently-used (LRU), least-frequently-used (LFU), and random (RND), may be beneficial to use when the file library or file popularity profile is dynamic, or when users request content according to a renewal process [@Gelenbe1973]. Due to the complexity in analyzing such policies, timer-based policies that are significantly more tractable have been suggested. One such policy is time-to-live (TTL) where a request for a particular piece of content triggers it to be cached and then evicted after the expiration of a timer. The TTL policy has been shown to yield similar performance to FIFO, LRU, LFU, and RND policies in [@Che2002; @Fricker2012; @Bianchi2013; @Dehghan2019]. Goseling and Simeone extended the TTL policy to cache fractions of files, referred to as fractional TTL (FTTL), and showed that this can improve performance under a renewal request process [@Goseling2019]. Decreasing the fraction of a file that is cached over time, termed soft TTL (STTL), can further improve the performance. Optimal STTL caching policies are obtained through a convex optimization problem [@Goseling2019]. All previous works on TTL policies assume either a single cache or a number of caches, e.g., structured into lines or hierarchies, where users access a single cache. For these scenarios, coded caching does not bring any benefits. However, if users can access several caches, the use of erasure correcting codes can be beneficial. Hence, merging distributed coded caching with the TTL schemes in [@Goseling2019], which have both independently been shown to bring performance improvements, is an intriguing prospect.
In this paper, we generalize the TTL policies in [@Goseling2019] to a distributed coded caching scenario. Specifically, we consider the scenario where content is encoded using a maximum distance separable (MDS) code and cached in a distributed fashion across several SBSs. Coded content is evicted from the caches in accordance with the TTL policies in [@Goseling2019]. Users requesting a particular piece of content download coded packets from SBSs within communication range and, if necessary, download additional packets from a macro base station (MBS). We formulate a network load minimization problem, where the network load is defined as a sum of data rates over various network links, weighted by a cost representing, e.g., transmission delay or energy consumption of transmitting data over these links. We then rewrite the optimization problem as a mixed integer linear program (MILP) that is efficiently solved numerically. We furthermore prove that the distributed coded caching problem can equivalently be analyzed as a single cache problem with a specific decreasing and convex cost function. This is an important result because it shows that such a function, previously studied for the single cache case due to its analytical tractability [@Goseling2019], arises naturally in a distributed caching scenario. For SBSs deployed according to a Poisson point process [@Chiu2013 Ch. 2.3], we derive the cost function explicitly. We analyze two important special cases of the network load minimization problem. In particular, we show that our problem has the static coded caching problem where content is never updated (considered in, e.g., [@Shanmugam2013; @Bioglio2015]), as a special case. We furthermore prove that static coded caching is optimal under the assumption of a Poisson request process. Moreover, for the special case of users accessing a single cache, we prove that the STTL problem is a fractional knapsack problem with a greedy optimal solution. 
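The greedy solution for the single-cache special case follows the standard fractional-knapsack recipe: sort items by value-to-cost density and fill the budget in that order, splitting only the last item taken. A generic sketch (the numbers and names here are ours, not the paper's notation):

```python
def fractional_knapsack(values, costs, budget):
    """Greedy optimum of the fractional knapsack: take items in decreasing
    value-to-cost density; x[i] in [0, 1] is the fraction of item i taken."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    x = [0.0] * len(values)
    for i in order:
        if budget <= 0:
            break
        x[i] = min(1.0, budget / costs[i])
        budget -= x[i] * costs[i]
    return x

# Three items with densities 6, 5 and 4 and a budget of 5:
# the first two fit entirely, then 2/3 of the last one.
print(fractional_knapsack([6, 10, 12], [1, 2, 3], 5))  # [1.0, 1.0, 0.666...]
```

Unlike the 0/1 knapsack, the fractional variant admits this exact greedy optimum, which is what makes the single-cache STTL problem tractable.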
The performance of TTL, FTTL, and STTL, in terms of network load, is evaluated for a renewal process, specifically when the times between requests follow a Weibull distribution. We show that distributed coded caching using TTL caching policies can offer significant reductions in network load, especially for bursty renewal request processes.
Distributed caching of coded content utilizing TTL cache eviction policies was also investigated in [@Chen2019]. Compared to the problem studied in this paper, the work in [@Chen2019] is significantly different in a number of ways. Specifically, we consider an STTL policy with optimized TTL timers under a renewal request process, which was not considered in [@Chen2019]. Furthermore, a dynamic library of files with location-dependent popularity is considered in [@Chen2019], which is typically considered to be more general than a static file library and is not in the scope of our work. However, it is reasonable to consider scenarios where the file library remains fixed for a considerable amount of time, e.g., a day, and focus on an area with homogeneous file popularity.
System Model {#sec:model}
============
We consider an area served by an MBS that always has access to a file library of $N$ files, where file ${i}= 1, 2, \ldots, N$ has size $s_i$. Mobile users request files from the library according to independent renewal processes. Specifically, we denote the independent and identically distributed times between requests for file ${i}$ by $X_i$, the cumulative distribution function (CDF) of $X_i$ by $${F_{X_i}(t)} \triangleq \Pr(X_i \le t),$$ and the request rate of file ${i}$ by $$\omega_i \triangleq \operatorname*{\mathbb{E}}[X_i]^{-1}.$$ We let $p_i = \omega_i/\omega$, where $\omega = \sum_{i=1}^N \omega_i$ is the aggregate request rate in the area. For a Poisson request process, i.e., exponentially distributed $X_i$, $p_i$ can be interpreted as the probability that file ${i}$ is requested. The request rates $\omega_i$ are assumed to be constant over a sufficiently long period of time, e.g., not changing during the course of one day. For such scenarios, file popularity predictions and content allocation optimization can be carried out during periods of low network traffic, e.g., during night time. ${B}$ SBSs are deployed in the area and each SBS has a cache with storage capacity $C$. We assume that a user can download content from an SBS if it is within a range ${r_\text{SBS}}$ and we denote by $\gamma_b$ the probability that a user is within range of $b$ SBSs at any given time. The model considered in this paper is illustrated in Fig. \[fig:model\].
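As a concrete reading of this model, the expected number of packets a request draws from the MBS under static MDS caching can be written directly in terms of $p_i$ and $\gamma_b$. In this sketch, $m_i$ (coded packets of file $i$ stored per SBS) and $k_i$ (packets needed to decode file $i$) are illustrative variables of ours, not notation from the paper:

```python
def mbs_load(p, gamma, m, k):
    """Expected packets per request fetched from the MBS under static MDS
    caching: a user within range of b SBSs collects b*m[i] distinct coded
    packets of file i and downloads the missing max(k[i] - b*m[i], 0) from
    the MBS.  p[i]: request probability, gamma[b]: P(b SBSs in range)."""
    return sum(
        p[i] * sum(g * max(k[i] - b * m[i], 0) for b, g in enumerate(gamma))
        for i in range(len(p))
    )

# Two equally popular files split into k = 4 packets, one coded packet per
# SBS, and 0/1/2 SBSs in range with probabilities 0.2/0.5/0.3.
print(round(mbs_load(p=[0.5, 0.5], gamma=[0.2, 0.5, 0.3], m=[1, 1], k=[4, 4]), 6))  # 2.9
```

The TTL policies of the paper make the cached fractions time-dependent; the static expression above is the special case in which they are constant.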
Caching Policy {#sec:policy}
--------------
Each file ${i
---
abstract: 'Blebs are cell protrusions generated by local membrane–cortex detachments followed by expansion of the plasma membrane. Blebs are formed by some migrating cells, for example primordial germ cells of the zebrafish. While blebs occur randomly at each part of the membrane in unpolarized cells, a polarization process guarantees the occurrence of blebs at a preferential site and thereby facilitates migration towards a specified direction. Little is known about the factors involved in development and maintenance of a polarized state, yet recent studies revealed the influence of an intracellular flow and the stabilizing role of the membrane-cortex linker molecule Ezrin. Based on this information, we develop and analyse a coupled bulk-surface model describing a potential cellular mechanism by which a bleb could be induced at a controlled site. The model rests upon intracellular Darcy flow and a diffusion-advection-reaction system, describing the temporal evolution from an unpolarized to a stable polarized Ezrin distribution. We prove the well-posedness of the mathematical model and show that simulations qualitatively correspond to experimental observations, suggesting that indeed the interaction of an intracellular flow with membrane proteins can be the cause of the cell polarization.'
author:
- Carolin Dirks
- Paul Striewski
- Benedikt Wirth
- Anne Aalto
- 'Adan Olguin-Olguin'
- Erez Raz
bibliography:
- 'mybibliography.bib'
title: A mathematical model for cell polarization in zebrafish primordial germ cells
---
Introduction
============
Several recent studies investigated the directional cell migration process via local membrane protrusions, so-called blebs. While the mechanisms of the actual bleb formation are quite well understood, the process of cell polarization leading to stable *directional* blebbing still remains unexplained. In some recent works (such as [@Ref:PaluchRaz2013 Paluch, Raz, 2013], [@Ref:FritzscheEtAl2014 Fritzsche, Thorogate et al., 2014]), researchers suggested that the membrane-cortex linker Ezrin reduces the probability of bleb formation in regions with a high Ezrin concentration. In addition, a directed intracellular flow has been observed during cell polarization that seems to be related to the occurrence of so-called actin brushes, filamentous actin structures forming at the front side of the cell [@Ref:KardashEtAl2010 Kardash, Reichmann-Fried, 2010]. In this article, we take up these observations and hypothesize that shear stresses induced by the intracellular flow may lead to a local destabilization of the Ezrin linkages between membrane and cortex, resulting in a redistribution of membranous Ezrin and bleb formation. This hypothesis is tested using a mathematical model for the time interval between actin brush formation and the onset of blebbing. The model incorporates an intracellular flow driven by actin brushes and a description of the flow-controlled membranous Ezrin concentration, including turnover rates between active (membrane-bound) and inactive (cytosolic) Ezrin. The experimentally observed Ezrin depletion at the front and accumulation at the back of the cell can be reproduced by the model. Thereby our model positively answers the question of whether there could be a mechanical basis for Ezrin polarization, in our case an actin-induced flow.
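The core of the hypothesis, a tangential flow redistributing membrane-bound Ezrin away from the front, can be caricatured in one dimension. The following sketch is our own toy advection-diffusion computation on a periodic "membrane" coordinate; the prescribed flow profile and all parameters are illustrative and not fitted to the model developed below.

```python
import numpy as np

def polarize(n=200, D=0.2, v0=0.5, dt=4e-4, steps=20000):
    """Evolve a membrane-bound Ezrin density e(s) on s in [0, 2*pi) under
    de/dt = D*e'' - (v*e)'  with a prescribed tangential flow
    v(s) = v0*sin(s), i.e. directed from the 'front' (s = 0) towards the
    'back' (s = pi).  Explicit Euler with periodic central differences."""
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ds = s[1] - s[0]
    e = np.ones(n)                 # unpolarized initial distribution
    v = v0 * np.sin(s)
    for _ in range(steps):
        flux = v * e
        adv = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * ds)
        diff = (np.roll(e, -1) - 2.0 * e + np.roll(e, 1)) / ds**2
        e = e + dt * (D * diff - adv)
    return s, e

s, e = polarize()
# Ezrin accumulates at the back (s = pi) and is depleted at the front (s = 0)
print(e[len(e) // 2] > e[0])  # True
```

The full model of the paper couples such a surface equation to an intracellular Darcy flow and binding/unbinding kinetics; the toy computation only illustrates how a front-to-back flow alone already produces the observed asymmetry.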
This work is organized as follows. We start with providing a brief overview of the biological context and the related work, and we introduce the notation used throughout this article. In \[sec:Model\], we describe our model used to simulate the temporal behaviour of the active Ezrin. The corresponding model analysis is presented in \[sec:Analysis\], where we prove well-posedness of the surface equation. Finally, we describe the numerical treatment of the coupled bulk-surface equation system and compare simulation results to experiments in \[sec:Experiments\].
Biological setting
------------------
The process of directional cell migration is an important and extensively studied mechanism in early embryonic development. A widely used model for in vivo studies is provided by primordial germ cells (PGCs). These cells are specified within the embryo and have to travel a certain distance to reach their destination, namely the site where the gonad develops [@Ref:DoitsidouEtAl2002 Doitsidou, Reichmann-Fried et al., 2002]. This migration process is performed via blebs, local detachments of the cell membrane from the cortex which move the cell in a direction specified by a chemical gradient. Little is known about the signaling process within the cell in the time interval between the arrival of the chemical signal and the actual directed movement, in which the cell changes from an unpolarized to a polarized state. However, several factors have been shown to play a role in the polarization process [@Ref:PaluchRaz2013 Paluch, Raz, 2013].
Blebbing is produced by an increase in the intracellular pressure coupled to detachment of the cell membrane from the cell cortex. While migrating, PGCs go through two different phases, named “run” and “tumble”. During the “tumble” state, PGCs are apolar and blebs are formed at random sites around the cell perimeter. When the PGCs are in the “run” state the cells are polarized such that blebs form predominantly in one direction which is defined as the leading edge [@Ref:KardashEtAl2010 Kardash, Reichmann-Fried, 2010], [@Ref:PaksaRaz2015 Paksa, Raz, 2015].
Although the entire process of PGC polarization has not yet been fully understood, some factors have been identified that exert a strong influence. In the polarized state, a preferential polymerization of filamentous actin structures, so-called actin brushes, at the front edge of the cell has been reported, whereas such structures are absent in unpolarized cells [@Ref:KardashEtAl2010 Kardash, Reichmann-Fried, 2010]. The actin brushes are considered to be responsible for a recruitment of myosin, which leads to an increase of the contractility, favouring the corresponding side of the cell as the leading edge [@Ref:PaluchRaz2013 Paluch, Raz, 2013]. The accumulation of actin brushes is furthermore assumed to be correlated with a flow of cytoplasm towards the expanding bleb on the one hand and a strong retrograde flow of cortical actin on the other [@Ref:KardashEtAl2010 Kardash, Reichmann-Fried, 2010], [@Ref:ReigEtAl2014 Reig, Pulgar et al., 2014]. Moreover, a frequently reported feature of polarized blebbing cells is a local decrease of the membrane-cortex attachment at the front edge in combination with an increase at the back. Hence, a negative correlation between the propensity for blebbing and the stability of membrane-cortex attachment is assumed. A plausible candidate for regulating the membrane-cortex attachment is the linker molecule Ezrin [@Ref:PaluchRaz2013 Paluch, Raz, 2013]. Experiments have shown that in polarized cells, Ezrin accumulates at the back [@Lorentzen1256 Lorentzen, Bamber et al., 2011]. Besides, the linker molecule is able to switch between an active and an inactive state. In its active form, it links the cell cortex to the membrane via two binding terminals (the membrane-binding N-terminal and the actin-binding C-terminal), whereas in its inactive form, these terminals interact with each other, causing the molecule to diffuse within the cytoplasm.
Ezrin constantly keeps turning from one state to the other, resulting in a frequent change between binding to and detaching from the membrane [@Ref:FritzscheEtAl2014 Fritzsche, Thorogate et al., 2014], [@Ref:BruecknerEtAl2015 Brückner, Pietuch et al., 2015].
To get a better understanding of the intracellular events involved in the polarization process, we develop a mathematical model expressing the interaction of different factors which are known to play a role in the emergence and maintenance of a polarized state. The model focuses on the role and regulation of the active and inactive Ezrin concentrations, including the influence of the cytoplasmic flow driven by localised actin-myosin contraction. In particular, we present a potential model for the binding and unbinding dynamics along the cell membrane, incorporating the reported observations together with some additional hypotheses.
Related work
------------
A variety of models for cell polarization have been proposed, many of them based on reaction-diffusion equations, suggesting that diffusive instabilities are involved in the process of cell polarization [@Ref:Levine Levine et al. 2006], [@Ref:OnsumRao Onsum, Rao, 2007], [@Ref:RaetzRoeger2012 Rätz, Röger, 2012], [@Ref:RaetzRoeger2014 Rätz, Röger, 2014]. Cell polarization induced by active transport of polarization markers was, for example, studied by [@Ref:Hawkins Hawkins, Bénichou et al., 2009], [@Ref:Calvez Calvez, Hawkins et al., 2012]. In both articles, the presented models account for active transport of polarization markers along the cytoskeleton. [@hausberg2018well Hausberg, Röger, 2018] described the activity of GTPases by a system of three coupled bulk-surface advection-reaction-diffusion equations. The system models the interconversion of active and inactive GTPase, lateral drift and diffusion of molecules along the membrane, and also the diffusion of inactive molecules into the cytosol. In contrast to our approach, Hausberg and Röger suggest flow-independent reaction terms and assume the geometry of the cell to be more regular than we do.
[@Ref:GarckeKampmann2015 Garcke, Kampmann et al., 2015] proposed a model for lipid raft formation in cellular membranes and their interaction with intracellular cholesterol. Although not directly linked to cell polarization, their model comprises phase separation and interaction energies, which are similar to those presented in this article. In their work, a
---
abstract: 'Let $\mathbb{N}$ be the set of natural numbers. The symmetric inverse semigroup $R_\infty$ is the semigroup of all infinite 0-1 matrices $\left[ g_{ij}\right]_{i,j\in \mathbb{N}}$ with at most one 1 in each row and each column such that $g_{ii}=1$ on the complement of a finite set. The binary operation in $R_\infty$ is the ordinary matrix multiplication. It is clear that the infinite symmetric group $\mathfrak{S}_\infty$ is a subgroup of $R_\infty$. The map $\star:\left[ g_{ij}\right]\mapsto\left[ g_{ji}\right]$ is an involution on $R_\infty$. We call a function $f$ on $R_\infty$ positive definite if for all $r_1, r_2, \ldots, r_n\in R_\infty$ the matrix $\left[ f\left( r_ir_j^\star\right)\right]$ is Hermitian and non-negative definite. A function $f$ is said to be indecomposable if the corresponding $\star$-representation $\pi_f$ is a factor-representation. The class of $R_\infty$-central functions (characters) is defined by the condition $f(rs)=f(sr)$ for all $r,s\in R_\infty$. In this paper we classify all factor-representations of $R_\infty$ that correspond to the $R_\infty$-central positive definite functions.'
author:
- 'N.I. Nessonov'
---
Introduction
============
Let $R_n$ be the set of all $n\times n$ matrices that contain at most one entry of one in each column and row and zeroes elsewhere. Under matrix multiplication, $R_n$ has the structure of a semigroup with identity, that is, a set with an associative binary operation and an identity element. The number of ${\rm rank}\,k$ matrices in $R_n$ is ${{\displaystyle{{{n}\choose{k}}}^2}}k!$ and hence $R_n$ has a total of $\sum\limits_{k=0}^n{{\displaystyle{{{n}\choose{k}}}^2}}k!$ elements. Note that the set of ${\rm rank}\,n$ matrices in the semigroup $R_n$ is isomorphic to $\mathfrak{S}_n$, the symmetric group on $n$ letters.
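The counting formula above is easy to confirm by brute force for small $n$; the following Python sketch (ours, not part of the paper) enumerates all 0-1 matrices with at most one 1 per row and column by recording, for each row, either the column of its unique 1 or the fact that the row is empty:

```python
from itertools import product
from math import comb, factorial

def count_Rn_bruteforce(n):
    """Count n-by-n 0-1 matrices with at most one 1 per row and column.

    Each matrix is encoded by assigning to every row either a column
    index (the position of its unique 1) or -1 (an empty row); validity
    only requires the chosen columns to be pairwise distinct."""
    total = 0
    for rows in product(range(-1, n), repeat=n):
        cols = [c for c in rows if c >= 0]
        if len(cols) == len(set(cols)):
            total += 1
    return total

def count_Rn_formula(n):
    # sum over the rank k: choose k rows, k columns, and a bijection
    return sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1))

for n in range(5):
    assert count_Rn_bruteforce(n) == count_Rn_formula(n)
```

For instance, $R_3$ has $1+9+18+6=34$ elements.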
The semigroup $R_\infty$ is the inductive limit of the chain $R_n$, $n = 1, 2, \ldots$, with the natural embeddings: $ R_n\ni r=\left[ r_{ij} \right]\mapsto \hat{r}=\left[ \hat{r}_{ij} \right]\in R_{n+1}$, where $r_{ij}=\hat{r}_{ij}$ for all $i,j\leq n$ and $\hat{r}_{n+1\,n+1}=1$. Respectively, the group $\mathfrak{S}_\infty\subset R_\infty$ is the inductive limit of the chain $\mathfrak{S}_n$, $n=1,2,\ldots$. For convenience we will use the matrix representation of the elements of $R_\infty$. Namely, if $r=\left[ r_{ij} \right]\in R_\infty$ then the matrix $\left[ r_{ij} \right]$ contains at most one entry of one in each column and row and $r_{nn}=1$ for all sufficiently large $n$. Denote by $D_\infty\subset R_\infty$ the abelian subsemigroup of the diagonal matrices. For any subset $\mathbb{A}\subset\mathbb{N}$ denote by $\epsilon_\mathbb{A}$ the matrix $\left[ \epsilon_{ij} \right]\in D_\infty$ such that $\epsilon_{ii} =\left\{
\begin{array}{rl}
0, &\text{ if } i\in \mathbb{A}\\
1, &\text{ if } i\notin \mathbb{A}.
\end{array}\right.$ For example, $\epsilon_{\{2\}}=
\left[\begin{matrix}
1&0&0&\cdots&\\
0&0&0&\cdots\\
0&0&1&\cdots\\
\cdots&\cdots&\cdots&\cdots
\end{matrix}\right]$.
The ordinary transposition of matrices defines an involution on $R_\infty:\left[ r_{ij} \right]^{\star}=\left[ r_{ji} \right]$.
Let $\mathcal{B}(\mathcal{H})$ be the algebra of all bounded operators in a Hilbert space $\mathcal{H}$. By a $\star$-representation of $R_\infty$ we mean a homomorphism $\pi$ of $R_\infty$ into the multiplicative semigroup of the algebra $\mathcal{B}(\mathcal{H})$ such that $\pi(r^\star)=\left( \pi(r) \right)^*$, where $\left( \pi(r) \right)^*$ is the Hermitian adjoint of the operator $\pi(r)$. It follows immediately that $\pi(s)$ is a unitary operator when $s\in\mathfrak{S}_\infty$, and $\pi(d)$ is a self-adjoint projection for $d\in D_\infty$.
Recall the notion of quasiequivalent representations.
Let $\mathcal{N}_1$ and $\mathcal{N}_2$ be the $w^*$-algebras generated by the operators of the representations $\pi_1$ and $\pi_2$, respectively, of a group or semigroup $G$. The representations $\pi_1$ and $\pi_2$ are quasiequivalent if there exists an isomorphism $\theta:\mathcal{N}_1\to\mathcal{N}_2$ such that $\theta\left(\pi_1(g)\right)=\pi_2(g)$ for all $g\in G$.
\[support\_of\_el\_semigroup\] Given an element $r=[r_{mn}]\in R_\infty$, let ${\rm supp}\,r$ be the complement of the set $\left\{n\in \mathbb{N}:r_{nn}=1\right\}$.
By the definition of $R_\infty$, ${\rm supp}\,r$ is a finite set. Let $c=\left( n_1\;n_2\;\cdots\;n_k\right)$ be a cycle in $\mathfrak{S}_\infty$. If $\mathbb{A}\subseteq{\rm supp}\,c$, then $q=c\cdot \epsilon_{\mathbb{A}}$ is called a [*quasicycle*]{}. Notice that $\epsilon_{\{k\}}$ is a quasicycle. Two quasicycles $q_1$ and $q_2$ are called [*independent*]{} if $({\rm supp}\,q_1)\cap({\rm supp}\,q_2)=\emptyset$. Each $r\in R_\infty$ can be decomposed into a product of independent quasicycles\[quasicycle\]: $$\begin{aligned}
\label{decomposition_into_product}
r=q_1\cdot q_2\cdots q_k, \text{ where } {\rm supp}\, q_i\,\cap \,{\rm supp}\, q_j=\emptyset \text{ for all } i\neq j.\end{aligned}$$ In general, this decomposition is not unique.
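As a concrete illustration (a small numerical sketch of ours, not taken from the paper), elements of $R_n$ can be represented as 0-1 matrices, a quasicycle obtained by the ordinary matrix product $q=c\cdot\epsilon_{\mathbb{A}}$, and the commutativity of independent quasicycles verified directly:

```python
import numpy as np

def cycle_matrix(n, cyc):
    """0-1 matrix of the cycle (cyc[0] cyc[1] ...) on {0,...,n-1}:
    row i carries a 1 in column j iff the cycle maps i to j."""
    m = np.eye(n, dtype=int)
    for i, j in zip(cyc, cyc[1:] + cyc[:1]):
        m[i] = 0
        m[i, j] = 1
    return m

def eps(n, A):
    """Diagonal idempotent eps_A: zeroes at the positions in A, ones elsewhere."""
    d = np.ones(n, dtype=int)
    d[list(A)] = 0
    return np.diag(d)

n = 6
q1 = cycle_matrix(n, [0, 1, 2]) @ eps(n, {1})   # quasicycle, supp inside {0,1,2}
q2 = cycle_matrix(n, [3, 4]) @ eps(n, {4})      # quasicycle, supp inside {3,4}

# every row and column of a quasicycle carries at most one 1
assert q1.sum(axis=0).max() <= 1 and q1.sum(axis=1).max() <= 1
# independent quasicycles (disjoint supports) commute
assert np.array_equal(q1 @ q2, q2 @ q1)
```

Here `eps(4, {1})` reproduces (a finite corner of) the matrix $\epsilon_{\{2\}}$ displayed above, with 0-based indexing.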
In this paper we study the $\star$-representations of $R_\infty$. The main results are the construction of a list of ${\rm II}_1$-factor representations and the proof that this list is complete.
The finite semigroup $R_n$, its semigroup algebra and the corresponding representation theory were investigated by various authors in [@Munn_1; @Munn_2; @Solomon_1]. The irreducible representations of $R_n$ are indexed by the set of all Young diagrams with at most $n$ cells. An analog of the Specht modules for the finite symmetric semigroup was built by C. Grood in [@Grood].
The main motivation of this paper is due to A. M. Vershik and P. P. Nikitin. Using the branching rule for the representations of the semigroups $R_n$, they found some class of the characters of $R_\infty$ [@VN]. Our elementary approach is based on the study of limiting operators, proposed by A. Okounkov [@Ok1; @Ok2].
The examples of $\star$-representations of $R_\infty$.
------------------------------------------------------
Given $r\in R_\infty$ define in the space $l^2(\mathbb{N})$ the map $$\begin{aligned}
l^2(\mathbb{N})\ni(c_1,c_2,\ldots,c_n,\ldots)\stackrel{\mathfrak{N}(r)}{\mapsto} (c_1,c_2,\ldots,c_n,\ldots)r\in l^2(\mathbb{N}).
\end{aligned}$$ It is easy to check that the following statement holds.
The operators $\mathfrak{N}(r)$ generate the [*irreducible*]{} $\star$-representation of $R_\infty$.
The next important representation is called the [*left regular representation* ]{} of $R_\infty$ [@APat]. The formula for the action of the corresponding operators in the space $l^2(R_\infty)$ is given by $$\begin{aligned}
\label{Left_Reg_Repr}
\
---
abstract: 'We present a classification of non-hermitian random matrices based on implementing commuting discrete symmetries. It contains 43 classes. This generalizes the classification of hermitian random matrices due to Altland-Zirnbauer and it also extends the Ginibre ensembles of non-hermitian matrices [@engarde].'
author:
- Denis Bernard
- André LeClair
---
Random matrix theory originates from the work of Wigner and Dyson on random hamiltonians [@Dyson]. Since then it has been applied to a large variety of problems, ranging from enumerative topology, combinatorics, and localization phenomena to fluctuating surfaces and integrable or chaotic systems. Non-hermitian random matrices also have applications to interesting quantum problems such as open chaotic scattering, dissipative quantum maps, and non-hermitian localization. See e.g. ref. [@revue] for an introduction. The aim of this short note is to extend the Dyson [@Dyson] and Altland-Zirnbauer [@AltZirn] classifications of hermitian random matrix ensembles to the non-hermitian ones.
What are the rules?
===================
As usual, random matrix ensembles are constructed by selecting classes of matrices with specified properties under discrete symmetries [@Dyson; @Mehta]. To define these ensembles we have to specify (i) what are the discrete symmetries, (ii) what are the equivalence relations among the matrices, and (iii) what are the probability measures for each class.
(i)\
Let $h$ denote a complex matrix. We demand that the transformations specifying random matrix classes are involutions: their actions are of order two. So we consider the following set of symmetries: $$\begin{aligned}
\label{Csym}
{\rm C\ sym.}&:& h = \epsilon_c\, c\, h^{T} c^{-1},\qquad c^{T}c^{-1}=\pm\mathbf{1} \\
\label{Psym}
{\rm P\ sym.}&:& h = -\, p\, h\, p^{-1},\qquad p^{2}=\mathbf{1} \\
\label{Qsym}
{\rm Q\ sym.}&:& h = q\, h^{\dagger} q^{-1},\qquad q^{\dagger}q^{-1}=\mathbf{1} \\
\label{Ksym}
{\rm K\ sym.}&:& h = k\, h^{*} k^{-1},\qquad kk^{*}=\pm\mathbf{1}\end{aligned}$$ $h^T$ denotes the transposed matrix of $h$, $h^*$ its complex conjugate and $h^{\dag}$ its hermitian conjugate. The factor $\epsilon_c$ is just a sign $\epsilon_c=\pm$. We could have introduced similar signs in the definitions of type $Q$ and type $K$ symmetries; however they can be removed by redefining $h \to ih$.
We also demand that these transformations are implemented by [*unitary*]{} transformations: $$cc^{\dagger}=\mathbf{1},\qquad pp^{\dagger}=\mathbf{1},\qquad qq^{\dagger}=\mathbf{1},\qquad kk^{\dagger}=\mathbf{1}$$
In the case of hermitian matrices one refers to type $C$ symmetries as particle/hole symmetries or time reversal symmetries depending on whether $\epsilon_c=-$ or $\epsilon_c=+$ respectively. Matrices with type $P$ symmetry are said to be chiral. Both type $Q$ and type $K$ symmetries impose reality conditions on $h$ and they are redundant for hermitian matrices.
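For concreteness, here is a small numerical check (our own sketch, with an arbitrarily chosen involution $p$) that projecting a generic complex matrix onto a symmetry class enforces the defining relation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# a unitary involution (p^2 = 1) implementing a type P (chiral) symmetry
p = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)
assert np.allclose(p @ p, np.eye(n))

# projecting m onto the chiral class enforces h = -p h p^{-1}
h = 0.5 * (m - p @ m @ np.linalg.inv(p))
assert np.allclose(h, -p @ h @ np.linalg.inv(p))

# a type C symmetry with c = 1 and eps_c = -1 is plain antisymmetry: h = -h^T
hc = 0.5 * (m - m.T)
assert np.allclose(hc, -hc.T)
```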
(ii)\
We consider matrices up to [*unitary*]{} changes of basis, $$h \to u\, h\, u^{\dagger} \label{equiv}$$ In other words, matrices linked by unitary similarity transformations are said to be gauge equivalent. For the symmetries (\[Csym\]–\[Ksym\]), this gauge equivalence translates into: $$c\to u c u^{T},\qquad p\to u p u^{-1},\qquad q\to u q u^{\dagger},\qquad k\to u k u^{*\,-1} \label{symgauge}$$
The classification relies heavily on this rule and on the assumed unitary implementations of the discrete symmetries.
We shall only classify minimal classes, which by definition are those whose matrices do not all commute with a common fixed matrix.
(iii)\
Since each of the classes we shall describe below is a subset of the space of complex matrices, the simplest probability measure $\mu(dh)$ one may choose is obtained by restriction of the gaussian one defined by $$\mu(dh) = \mathcal{N}\, \exp\left( - {\rm Tr}\, h h^{\dagger} \right)\, dh \label{gauss}$$ with $\mathcal{N}$ a normalization factor. It is invariant under the map (\[equiv\]).
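The invariance claim is easy to confirm numerically; the sketch below (ours, not from the paper) draws a random unitary from a QR decomposition and checks that ${\rm Tr}\,hh^{\dagger}$, and hence the gaussian weight, is unchanged under the gauge transformation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# random unitary from the QR decomposition of a complex gaussian matrix
u, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
assert np.allclose(u @ u.conj().T, np.eye(n))

h2 = u @ h @ u.conj().T   # the gauge transformation h -> u h u†
# Tr(h h†), and therefore exp(-Tr h h†), is invariant
assert np.isclose(np.trace(h @ h.conj().T).real,
                  np.trace(h2 @ h2.conj().T).real)
```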
There is of course some degree of arbitrariness in formulating these rules, in particular concerning the choice of the gauge equivalence (\[equiv\]). It however originates on one hand from requiring the gaussian measure (\[gauss\]) to be invariant, and on the other hand from considering auxiliary hermitian matrices $\mathcal{H}$ obtained by doubling the vector spaces on which the matrices $h$ are acting. These doubled matrices are defined by: $$\mathcal{H} = \begin{pmatrix} 0 & h\\ h^{\dagger} & 0 \end{pmatrix} \label{double}$$ They are always chiral as they anticommute with $\gamma_5={\rm diag}(1,-1)$. Any similarity transformation $h\to uhu^{-1}$ is mapped into $\mathcal{H}\to\mathcal{U}\mathcal{H}\mathcal{U}^{\dag}$ with $\mathcal{U}={\rm diag}(u,u^{\dag\,-1})$. So, demanding that these transformations also act by similarity on $\mathcal{H}$ imposes $u$ to be unitary.
On $\mathcal{H}$, both type $P$ and $Q$ symmetries act as chiral transformations, $\mathcal{H}\to -\mathcal{P}\mathcal{H}\mathcal{P}^{-1}$ with $\mathcal{P}={\rm diag}(p,p)$ and $\mathcal{H}\to \mathcal{Q}\mathcal{H}\mathcal{Q}^{-1}$ with $\mathcal{Q}=\begin{pmatrix}0&q\\ q&0\end{pmatrix}$, and $\mathcal{H}$ may be block diagonalized if $h$ is $Q$ or $P$ symmetric. Indeed, if $h$ is $Q$ symmetric then $\mathcal{Q}$ and $\mathcal{H}$ may be simultaneously diagonalized since they commute. If $h$ is $P$ symmetric, $\mathcal{H}$ commutes with the product $\gamma_5\mathcal{P}$.
Type $C$ and $K$ symmetries both act as particle/hole symmetries relating $\mathcal{H}$ to its transpose $\mathcal{H}^T$. The classification of the doubled hamiltonians $\mathcal{H}$ thus reduces to that of chiral random matrices, cf. [@AltZirn]. However, the spectra of $h$ and $\mathcal{H}$ may differ significantly, so that we need a finer classification involving $h$ per se.
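Taking the doubled matrix to have the standard block form $\mathcal{H}=\begin{pmatrix}0&h\\ h^{\dagger}&0\end{pmatrix}$ (an assumption of ours, consistent with the stated properties), its hermiticity, chirality and paired spectrum can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# doubled matrix H = [[0, h], [h†, 0]] (assumed block form of the doubling)
H = np.block([[np.zeros((n, n)), h],
              [h.conj().T, np.zeros((n, n))]])
g5 = np.kron(np.diag([1.0, -1.0]), np.eye(n))   # gamma_5 = diag(1, -1) blockwise

assert np.allclose(H, H.conj().T)               # H is hermitian
assert np.allclose(g5 @ H + H @ g5, 0)          # H anticommutes with gamma_5

# the eigenvalues of H are the singular values of h, in +/- pairs,
# which illustrates how the spectra of h and H may differ
ev = np.sort(np.linalg.eigvalsh(H))
sv = np.linalg.svd(h, compute_uv=False)
assert np.allclose(ev[:n], -np.sort(sv)[::-1])
```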
Intrinsic definition of classes.
================================
To specify classes we demand that the matrices belonging to a given class be invariant under one or more of the symmetries (\[Csym\]–\[Ksym\]). It is important to bear in mind that when imposing two or more symmetries it is the group generated by these symmetries which is meaningful. Indeed, these groups may be presented in various ways depending on which generators one picks. For instance, if a matrix possesses both a type $P$ and a type $C$ symmetry, then it automatically has another type $C$ symmetry with $c'=pc$ and $\epsilon'_c=-\epsilon_c$.
The intrinsic classification concerns the classification of the symmetry groups generated by the transformations (\[Csym\]–\[Ksym\]).
We demand, as usual, that the transformations (\[Csym\]–\[Ksym\]) commute. For any pair of symmetries the commutativity conditions read: $$\begin{aligned}
\label{com1}
c=\pm\, pcp^{T}\;;\qquad p^{*}=\pm\, k^{-1}pk\;;\qquad q=\pm\, pqp^{\dagger} \\
q^{T}=\pm\, c^{\dagger}q^{-1}c\;;\qquad q^{*}=\pm\, k^{-1}qk^{-1}\;;\qquad k^{T}c^{-1}kc^{*}=\pm\,\mathbf{1}\end{aligned}$$ The signs $\pm$ are arbitrary; they shall correspond to different groups.
This arises if no type $Q$ and no type $K$ symmetry is imposed, so that no reality condition is specified and $h$ is simply a complex matrix. We may then impose either a type $P$ or a type $C$ symmetry or both. Not all groups generated by a type $P$ and a type $C$ symmetry are distinct since, as mentioned above, the product of these symmetries is another type $C$ symmetry but with an opposite sign $\epsilon_c$. The list of inequivalent symmetry groups, together with the inequivalent choices of the sign $\epsilon_c$, is the following:
[ccc]{} Generators & Discrete symmetry group & Number\
& Defining relations & of classes\
\
No sym & no condition & 1\
$P$ sym & $p^2=1$ & 1\
$C$ sym & $c^T=\pm c,\ \epsilon_c$ & 4\
$P$, $C$ sym & $p^2=1,\ c^T=\pm c,\ pcp^T=c$ & 2\
$P$, $C$ sym & $p^2=1,\ c^T=\pm \epsilon_c \, c,\ pcp^T=-c$ & 2\
& \
If the sign $\epsilon_c$ does not appear as an entry, it means that the value of this sign is irrelevant: opposite values correspond to identical groups. The sign factors $\pm$ written explicitly are relevant, meaning that e.g. the groups generated by a type $C$ symmetry with $c^T=c$ or $c^T=-c$ are inequivalent. The equivalences among the defining relations for groups generated by a type $P$ and a type $C$ symmetry are the following:
------------------------------------------------- --------- -------------------------------------------------
$(p^2=1,c^T=\pm c, c = pcp^T)_{\ep_c}$ $\cong$ $(p^2
---
abstract: |
Identification schemes are interactive protocols typically involving two parties, a *prover*, who wants to provide evidence of his or her identity, and a *verifier*, who checks the provided evidence and decides whether or not it comes from the intended prover.
In this paper, we comment on a recent proposal for quantum identity authentication from Zawadzki [@Zawadzki19], and give a concrete attack upholding theoretical impossibility results from Lo [@Lo97] and Buhrman et al. [@Buhrman12]. More precisely, we show that using a simple strategy an adversary may indeed obtain non-negligible information on the shared identification secret. While the security of a quantum identity authentication scheme is not formally defined in [@Zawadzki19], it is clear that such a definition should somehow imply that an external entity may gain no information on the shared identification secret (even if he actively participates by injecting messages in a protocol execution, which is not assumed in our attack strategy).
author:
- 'Carlos E. González-Guillén[^1]'
- 'María Isabel González Vasco[^2]'
- 'Floyd Johnson[^3]'
- 'Ángel L. Pérez del Pozo[^4]'
bibliography:
- 'QIA.bib'
title: Concerning Quantum Identity Authentication Without Entanglement
---
Introduction
============
One of the major goals of cryptography is authentication in different flavours, namely, providing guarantees that a certain interaction actually involves certain parties from a designated set of presumed users. In the two-party scenario, cryptographic constructions towards this goal are called *identity authentication schemes*, and have been extensively studied in classical cryptography. The advent of quantum computers, however, spells the possible end for many of these protocols.
Since Wiesner proposed using quantum mechanics in cryptography in the 1970s, multiple directions using this concept have undergone serious research. One major role quantum mechanics has played in cryptography is the development of quantum key distribution (QKD), where two parties can securely share a one-time pad using quantum mechanics, for example via the seminal protocol BB84 [@BB84]. One drawback most of these protocols share is the need for authentication, which is traditionally done over an authenticated classical channel.
Classically, there are different ways of defining so-called *identification schemes*, for mutual authentication of peers, mainly depending on whether the involved parties share some secret information (such as a password) or should rely on different (often certified) keys provided by a trusted third party. In the quantum scenario, different identification protocols have been introduced following the first approach, e.g., assuming that two parties may obtain authentication evidence from the common knowledge of a shared secret. These kinds of constructions, often called *quantum identity authentication schemes* (or just *quantum identification schemes*), are thus closely related to protocols for *quantum equality tests* and *quantum private comparison*. All these constructions are concrete examples of two-party computations with asymmetric output, i.e. allowing only one of the two parties involved to learn the result of a computation on two inputs. Without posing restrictions on an adversary, it was shown by Lo in [@Lo97] and Buhrman et al. in [@Buhrman12] that these constructions are impossible, even in a quantum setting. As a consequence, constructions for generic unrestricted adversaries in the quantum setting are doomed to failure.
All in all, the necessity for authentication in QKD has led many authors to consider approaches which are strictly quantum in nature, such as those in [@Penghao16; @Zeng00; @Huang11], which are based on entanglement, or more recently [@Zawadzki19; @Hong17], which do not rely on entanglement. These are known as *quantum identity authentication* (QIA) protocols. For protocols such as BB84 that do not rely on entanglement, it is more appealing not to rely on entanglement for entity authentication purposes either.
[*Our Contribution.*]{} Recently, an original work about authentication without entanglement by Hong et al. in [@Hong17] was improved by Zawadzki using tools from classical cryptography in [@Zawadzki19]. We start this contribution by summarizing in section \[sec:impossibility\] the impossibility results from Lo [@Lo97] and Buhrman et al. [@Buhrman12], concerning generic quantum two-party protocols. Further, we present and discuss the Zawadzki protocol in section \[sec:zawadzki\_protocol\] and show how it succumbs to a simple attack, which we outline in section \[sec:attack\]. Our attack evidences the practical implications of the proven impossibility of identification schemes as conceived in Zawadzki’s design, and thus we stress that fundamental changes to the original proposal, beyond preventing our attack, would be needed in order to derive a secure identification scheme.
Quantum Equality Tests are Impossible {#sec:impossibility}
=====================================
A *one-sided equality test* is a cryptographic protocol in which one party, Alice, convinces another, Bob, that they share a common key, revealing nothing to either party except the equality (or inequality) of the keys, which is learned by Bob. Formally, we define a key space $K$ and a function $F:K^2\to \{0,1\}$ which checks for equality. Let $i\in K$ be Alice’s key and $j\in K$ be Bob’s key. The goals of a one-sided equality test are as follows:
1\) $F(i,j)=1$ if and only if $i=j$.
2\) Alice learns nothing about $j$ nor about $F(i,j)$.
3\) Bob learns $F(i,j)$ with certainty. If $F(i,j)=0$ then Bob learns nothing about $i$ except $i\neq j$. The above is a specific case of a one-sided two-party secure computation protocol as described in [@Lo97]. In this work, a very general result is proven, indicating that any protocol realising a one-sided two-party secure computation task is impossible, even in a quantum setting. In particular, Lo shows in [@Lo97] that if a protocol satisfies 1) and 2), then Bob can learn the output of $F(i,j)$ for any $j$. Furthermore, the one-sided equality test with some small relaxations on points 1) and 3) is also proven impossible. Hence, any one-sided QIA protocol which validates identities using equality tests by use of quantum mechanics is impossible without imposing restrictions on an adversary.
Note that the above argument says nothing about protocols with built-in adversarial assumptions such as those presented in [@Damgard14; @Bouman13]. Further, note that many of the QIA schemes end with a round where Bob accepts or rejects, which makes Alice aware of the success or failure of the protocol. Indeed, those schemes can be straightforwardly turned into one-sided equality tests by suppressing Bob’s final message announcing the result. Hence, they are clearly insecure against a dishonest Bob. However, note that if any such protocol can be modified so that Alice may obtain information on the identification output at some point before the last protocol round, it is unclear how Lo’s impossibility result would apply. Nevertheless, if they are built upon equality tests we can get impossibility from another well-known result by Buhrman et al. [@Buhrman12]. Certainly, two-sided QIA schemes, in which both Alice and Bob learn the result of the protocol, are a particular case of two-sided two-party computations. It is shown in [@Buhrman12] that a correct quantum protocol for a classical two-sided two-party computation that is secure against one of the parties is completely insecure against the other. For equality tests, if one of the parties, say Alice, learns nothing else than $F(i,j)$, the other party, Bob, will indeed be able to compute $F(i,j)$ for all possible inputs $j$. Thus, any two-sided QIA protocol which validates identities using equality tests is also impossible without imposing further restrictions on the adversary.
QIA without Entanglement {#sec:zawadzki_protocol}
========================
Here we will outline the protocol proposed in [@Zawadzki19] with some minor modifications, discussed afterword. Suppose Alice and Bob have keys $k_a$ and $k_b$ respectively. Bob wishes to verify that $k_b=k_a$ without leaking any information about $k_b$ or $k_a$. Bob randomly generates a nonce $r$ from a designated domain and generates a universal hash function $H:\{0,1\}^N\to \{0,1\}^{2d}$. This hash function may be chosen by Bob or sampled at random, in the below description we sample from a space of universal hash functions with image $\{0,1\}^{2d}$ called $\mathbb{H}$. Bob sends Alice $r$ and $H$. Alice then calculates the value $h_a=H(r||k_a)$. Alice then acts on pairs in $h_a$ with an embedding function $Q:\{0,1\}^2\to \CC ^2$. This function $Q$ uses the first of the two binary values to determine the measurement basis (horizontal/vertical or diagonal/antidiagonal) and the second to determine the specific qubit in $\{|0\rangle, |1\rangle , |+\rangle,
---
abstract: 'We study the ground states of the single- and two-qubit asymmetric Rabi models, in which the qubit-oscillator coupling strengths for the counterrotating-wave and corotating-wave interactions are unequal. We take the transformation method to obtain the approximately analytical ground states for both models and numerically verify its validity for a wide range of parameters under the near-resonance condition. We find that the ground-state energy in either the single- or two-qubit asymmetric Rabi model has an approximately quadratic dependence on the coupling strengths stemming from different contributions of the counterrotating-wave and corotating-wave interactions. For both models, we show that the ground-state energy is mainly contributed by the counterrotating-wave interaction. Interestingly, for the two-qubit asymmetric Rabi model, we find that, with the increase of the coupling strength in the counterrotating-wave or corotating-wave interaction, the two-qubit entanglement first reaches its maximum then drops to zero. Furthermore, the maximum of the two-qubit entanglement in the two-qubit asymmetric Rabi model can be much larger than that in the two-qubit symmetric Rabi model.'
author:
- 'Li-Tuo Shen'
- 'Zhen-Biao Yang'
- Mei Lu
- 'Rong-Xin Chen'
- 'Huai-Zhi Wu'
title: Ground state of the asymmetric Rabi model in the ultrastrong coupling regime
---
Introduction
============
The Rabi model [@PR-49-324-1936], describing the interaction between a two-level system and a quantized harmonic oscillator, is a fundamental model in quantum optics. In cavity quantum electrodynamics (QED) experiments, the qubit-oscillator coupling strength of the Rabi model is far smaller than the oscillator’s frequency and the rotating-wave approximation (RWA) works well, bringing in the ubiquitous Jaynes-Cummings model [@IEEE-51-89-1963; @JMO-40-1195-1993; @PRL-87-037902-2001; @PRA-71-013817-2005]. With recent experimental progress on Rabi models [@PT-58-42-2005; @Science-326-108-2009; @PR-492-1-2010; @RPP-74-104401-2011; @Nature-474-589-2011; @RMP-84-1-2012; @RMP-85-623-2013; @arxiv1308-6253-2014] in the ultrastrong coupling regime [@PRB-78-180502-2008; @PRB-79-201303-2009; @Nature-458-178-2009; @Nature-6-772-2010; @PRL-105-237001-2010; @PRL-105-196402-2010; @PRL-106-196405-2011; @Science-335-1323-2012; @PRL-108-163601-2012; @PRB-86-045408-2012; @NatureCommun-4-1420-2013], in which the qubit-oscillator coupling strength becomes a considerable fraction of the oscillator’s or qubit’s frequency, the RWA breaks down but relatively complex quantum dynamics arises, bringing about many fascinating quantum phenomena [@NJP-13-073002-2011; @PRL-109-193602-2012; @PRA-81-042311-2010; @PRA-87-013826-2013; @PRA-59-4589-1999; @PRA-62-033807-2000; @PRB-72-195410-2005; @PRA-74-033811-2006; @PRA-77-053808-2008; @PRA-82-022119-2010; @PRL-107-190402-2011; @PRL-108-180401-2012; @PLA-376-349-2012].
An explicitly analytic solution to the Rabi model beyond the RWA is hard to obtain due to the non-integrability of the model in its infinite-dimensional Hilbert space. Since it is difficult to capture the physics through numerical solution [@JPA-29-4035-1996; @EPL-96-14003-2011], various approximately analytical methods for obtaining the ground states of the symmetric Rabi model (SRM) have been tried [@RPB-40-11326-1989; @PRB-42-6704-1990; @PRL-99-173601-2007; @EPL-86-54003-2009; @PRA-80-033846-2009; @PRL-105-263603-2010; @PRA-82-025802-2010; @EPJD-66-1-2012; @PRA-86-015803-2012; @PRA-85-043815-2012; @PRA-86-023822-2012; @EPJB-38-559-2004; @PRB-75-054302-2007; @EPJD-59-473-2010; @arXiv-1303-3367v2-2013; @arXiv-1305-1226-2013; @PRA-87-022124-2013; @PRA-86-014303-2012; @arXiv-1305-6782-2013]. In particular, Braak [@PRL-107-100401-2011] used a method based on the $Z_{2}$ symmetry to analytically determine the spectrum of the single-qubit Rabi model; this solution depends on a composite transcendental function defined through its power series, but does not yield the concrete form of the system’s ground state. In Ref. [@PRA-81-042311-2010], Ashhab *et al.* applied the method of adiabatic approximation to treat two extreme situations to obtain the eigenstates and eigenenergies of the single-qubit SRM, i.e., the situation with a high-frequency oscillator or a high-frequency qubit. Ashhab [@PRA-87-013826-2013] used different order parameters to identify the phase regions of the single-qubit SRM and found that phase-transition-like behavior appears when the oscillator’s frequency is much lower than the qubit’s frequency. Lee and Law [@arXiv-1303-3367v2-2013] used the transformation method to seek the approximately analytical ground state of the two-qubit SRM in the near-resonance regime, and found that the two-qubit entanglement drops as the coupling strength increases further after reaching its maximum.
Previous studies consider the ground state of the SRM, i.e., the case where the qubit-oscillator coupling strengths of the counterrotating-wave and corotating-wave interactions are equal. In this paper, we study the asymmetric Rabi model (ASRM), i.e., the case where the coupling strengths for the counterrotating-wave and corotating-wave interactions are unequal, which helps to gain deeper insight into the fundamental physical properties of such models. Different from Refs. [@PRA-81-042311-2010; @PRA-87-013826-2013], we here use the transformation method to obtain the ground state of the single-qubit ASRM in the near-resonance situation, where the oscillator’s frequency approximates the qubit’s frequency. Differing further from Ref. [@arXiv-1303-3367v2-2013], our investigation of the two-qubit ASRM intuitively identifies the collective contribution to its ground-state entanglement caused by the corotating-wave and counterrotating-wave interactions.
We investigate the single- and two-qubit ASRMs and show that their approximately analytical ground states agree well with the exact numerical solutions for a wide range of parameters in the near-resonance situation, and that the ground-state energy has an approximately quadratic dependence on the coupling strengths stemming from contributions of the counterrotating-wave and corotating-wave interactions. Besides, we show that the ground-state energy is mainly contributed by the counterrotating-wave interaction in both models. For the two-qubit ASRM, we obtain the approximately analytical negativity. Interestingly, for the two-qubit ASRM, we find that, with the increase of the coupling strength in the counterrotating-wave or corotating-wave interaction, the two-qubit entanglement first reaches its maximum and then drops to zero.
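These statements can be probed by exact numerical diagonalization in a truncated Fock basis. The sketch below is our own; the single-qubit Hamiltonian form $H=\omega a^{\dagger}a+\tfrac{\Delta}{2}\sigma_z+g_{1}(a\sigma_{+}+a^{\dagger}\sigma_{-})+g_{2}(a^{\dagger}\sigma_{+}+a\sigma_{-})$ is an assumption (with $g_1$ the corotating and $g_2$ the counterrotating coupling), not taken from the paper. It illustrates that at resonance the corotating-wave interaction alone leaves the ground-state energy at $-\Delta/2$, while the counterrotating-wave interaction lowers it:

```python
import numpy as np

def ground_energy(g_co, g_cr, omega=1.0, delta=1.0, n_max=40):
    """Ground-state energy of a single-qubit asymmetric Rabi Hamiltonian,
    diagonalized exactly in a Fock basis truncated at n_max photons."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)   # annihilation operator
    ad = a.T
    In, I2 = np.eye(n_max), np.eye(2)
    sz = np.diag([1.0, -1.0])                      # qubit basis ordered (e, g)
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma_+
    sm = sp.T                                      # sigma_-
    H = (omega * np.kron(ad @ a, I2)
         + 0.5 * delta * np.kron(In, sz)
         + g_co * (np.kron(a, sp) + np.kron(ad, sm))     # corotating terms
         + g_cr * (np.kron(ad, sp) + np.kron(a, sm)))    # counterrotating terms
    return np.linalg.eigvalsh(H)[0]

# corotating coupling alone keeps the ground state |0, g> at energy -delta/2
assert abs(ground_energy(0.5, 0.0) + 0.5) < 1e-9
# the counterrotating coupling lowers the ground-state energy below -delta/2
assert ground_energy(0.0, 0.5) < -0.5 - 1e-3
```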
The advantages of our approach are that the collective contributions to the ground state of the ASRM from the corotating-wave and counterrotating-wave interactions can be determined approximately, and that the contribution of the counterrotating-wave interaction to the ground-state energy is larger than that of the corotating-wave interaction. We find that the maximal two-qubit entanglement of the ASRM is larger than that of the SRM. However, the transformation method used here is applicable to the ASRM only in the near-resonance regime, where the oscillator’s frequency is close to the qubit’s frequency. When the corotating-wave and counterrotating-wave coupling constants are large enough, the result obtained by the transformation method deviates significantly from the exact numerical one. The investigation can also be generalized to the more complex cases of three- and more-qubit ASRMs. Note that the ASRM can in theory be realized by using two unbalanced Raman channels between two atomic ground states induced by a cavity mode and two classical fields [@PRA-75-013804
---
abstract: 'We consider the $\Lambda N\to NN$ weak transition, responsible for a large fraction of the non-mesonic weak decay of hypernuclei. We build on the previously derived effective field theory and compute the next-to-leading one-loop corrections. Explicit expressions for all diagrams are provided, which result in contributions to all relevant partial waves.'
author:
- 'A. Pérez-Obiol'
- 'D. R. Entem'
- 'B. Juliá-Díaz'
- 'A. Parreño'
title: 'One-loop contributions in the EFT for the $\Lambda N \to NN$ transition'
---
Introduction
============
One of the major challenges in nuclear physics is to understand the interactions among hadrons from first principles. For more than twenty years, many research groups have directed their efforts toward developing Effective Field Theories (EFT), working with the idea of separating the nuclear force into long-range and short-range components. The underlying premise is that low-energy processes, such as the ones encountered in nuclear physics, should not be affected by the specific details of the high-energy physics.
The typical energies associated with nuclear phenomena suggest that the appropriate degrees of freedom are nucleons and pions (or the ground-state baryon and pseudoscalar octets for processes involving strangeness), interacting derivatively as dictated by the effective chiral Lagrangian. The nuclear interaction is characterized by the presence of very different scales, going from the values of the masses of the light pseudoscalar bosons to those of the ground-state octet baryons. The EFT formalism makes use of this separation of scales to construct an expansion of the Lagrangian in terms of a parameter built up from ratios of these scales. For example, in the study of the low-energy nucleon-nucleon interaction, a clear separation of scales is seen between the external momentum of the interacting nucleons, a soft scale which typically takes values up to the pion mass, and a hard scale corresponding to the nucleon mass. While the long-range part of this interaction is governed by the light scale through the pion-exchange mechanism, short-range forces are accounted for by zero-range contact operators, organized according to an increasing number of derivatives. These contact terms, which respect chiral symmetry, have values which are not constrained by the chiral Lagrangian, and therefore their relative strength (encapsulated in the size of the low-energy coefficients, LECs) has to be obtained from a fit to nuclear observables. The large amount of experimental data for the interaction among pions and nucleons has made it possible to perform successful EFT calculations of the strong nucleon-nucleon interaction up to fourth order in the momentum expansion (${\cal O}(p^4)$), at next-to-next-to-next-to-leading order (N$^3$LO) in the heavy-baryon formalism [@Epelbaum:2012vx; @entem]. In the weak sector, the study of nucleon-nucleon Parity Violation (PV) with an Effective Field Theory at leading order has been undertaken in Ref. [@nucPV05], where the authors discuss existing and possible few-body measurements that can help in constraining the relevant (five) low-energy constants at order $p$ in the momentum expansion and the ones associated with dynamical pions.
In the strange sector, the experimental situation is less favorable due to the short lifetime of hyperons, which are unstable against the weak interaction. This fact complicates the extraction of information regarding the strong interaction among baryons in free space away from the nucleonic sector. Nevertheless, SU(3) extensions of the EFT for nucleons and pions have been developed at leading order (LO) [@SW96; @KDT01; @H02; @BBPS05] and next-to-leading order (NLO) [@PHM06]. In the present work we consider the weak four-body $\Lambda N \to NN$ interaction, which is accessible experimentally by looking at the decay of $\Lambda-$hypernuclei, bound systems composed of nucleons and one $\Lambda$ hyperon. These aggregates decay weakly through mesonic ($\Lambda \to N \pi$) and non-mesonic ($\Lambda N \to NN$) modes, the former being suppressed for mass numbers of the order of or larger than 5, due to the Pauli blocking effect acting on the outgoing nucleon. In contrast to the weak NN PV interaction, which is masked by the much stronger Parity Conserving (PC) strong NN signal, the weak $|\Delta S|=1 \, \Lambda N$ interaction has the advantage of presenting a change of flavor as a signature, favoring its detection in the presence of the strong interaction.
The first studies of the weak $\Lambda N$ interaction using a lowest order effective theory were presented in Refs. [@Jun; @PBH05; @PPJ11]. These works included the exchange of the lighter pseudoscalar mesons while parametrizing the short-range part of the interaction with contact terms at order ${\cal O}(q^0)$, where $q$ denotes the momentum exchanged between the interacting baryons. While the results of Ref. [@PPJ11] show that it is possible to reproduce the hypernuclear decay data with the lowest order effective Lagrangian, the stability of the momentum expansion has to be checked by including the next order in the EFT. If an effective field theory can be built for the weak $\Lambda N \to NN$ transition, the values for the LECs of the theory, which encode the high-energy components of the interaction, should vary within a reasonable and natural range when one includes higher orders in the calculation. Compared to the LO calculation, which involves two LECs, the unknown baryon-baryon-kaon vertices and the pseudoscalar cut-off parameter in the form factor, the NLO calculation introduces additional unknowns: the parameters associated with the new contact terms (three, when one neglects the small momentum of the initial particles, a nucleon and a hyperon bound in the hypernucleus, in front of the momentum of the two outgoing nucleons) and the couplings appearing in the two-pion exchange diagrams. Therefore, in order to constrain the EFT at NLO, one needs to collect enough data, either through accurate measurements of hypernuclear decay observables, or through measurement of the inverse reaction in free space, $n p \to \Lambda p$. Unfortunately, the small values of the cross sections for the weak strangeness production mechanism, of the order of $10^{-12}$ mb [@Haidenbauer1995; @Parreno1998; @Inoue2001], have prevented, for the time being, its consideration as part of the experimental data set, despite the effort invested in extracting different polarization observables for this process [@Kishimoto2000; @Ajimura2001]. At present, quantitative experimental information on the $|\Delta S|=1$ weak interaction in the baryonic sector comes from the measurement of the total and partial decay rates of hypernuclei, and of an asymmetry in the number of protons detected parallel and antiparallel to the polarization axis, which arises from the interference between the PC and PV weak amplitudes. Since observables from one hypernucleus to another can be related through hypernuclear structure coefficients, one has to be careful in selecting the data that can be used in the EFT calculation. For example, while one may indeed expect measurements from different p-shell hypernuclei, say A=12 and 16, to provide the same constraint, the situation is different when including data from s-shell hypernuclei like A=5. For the latter, the initial $\Lambda N$ pair can only be in a relative s-state, while for the former, relative p-states are allowed as well. In this paper we present the analytic expressions to be included at next-to-leading order in the effective theory for the weak $\Lambda N$ interaction. These expressions have been derived by considering four-fermion contact terms with a derivative operator insertion, together with the two-pion exchange mechanism.
The paper is organized as follows. In Section II we introduce the Lagrangians and the power counting scheme we use to calculate the relevant Feynman diagrams. In Sections \[ss:loc\] and \[ss:nloc\] we present the LO and NLO potentials for the $\Lambda N\rightarrow NN$ transition, and a comparison between both contributions is performed in Section \[sec:bc\]. We conclude and summarize in Section \[sec:conclusions\].
Interaction Lagrangians and counting scheme {#sec2}
===========================================
The non-mesonic weak decay of the $\Lambda$ involves both the strong and electroweak interactions. The $\Lambda$ decay is mediated by the presence of a nucleon which, in the simplest meson-exchange picture, exchanges a meson, e.g. $\pi$, $K$, with the $\Lambda$. Thus, computing the transition requires the knowledge of the strong and weak Lagrangians involving all the hadrons entering the process. In this section we describe the strong and weak Lagrangians entering at leading order (LO) and next-to-leading order (NLO) in the $\Lambda N\to NN$ interaction.
![Weak vertices for the $\Lambda N\pi$, $\Sigma N\pi$ and $NNK$ couplings stemming from the Lagrangians in Eq. (\[eq:weakl\]). The weak vertex is represented by a solid black circle. \[vf2\]](nnpw "fig:")
---
abstract: 'In this paper, by means of the Fourier transform method and the similarity method, we solve the Dirichlet problem for a multidimensional equation which is a generalization of the Tricomi, Gellerstedt and Keldysh equations in the half-space, where the equation is of elliptic type, with the boundary condition posed on the boundary hyperplane where the equation degenerates. The solution is presented in the form of an integral with a simple kernel which is an approximation to the identity and a self-similar solution of the Tricomi-Keldysh type equation. In particular, this formula contains the Poisson formula, which gives the solution of the Dirichlet problem for the Laplace equation in the half-space. If the given boundary value is a generalized function of slow growth, the solution of the Dirichlet problem can be written as a convolution of this function with the kernel (if the convolution exists).'
author:
- '**Oleg D. Algazin**'
date: Bauman Moscow State Technical University
title: 'Exact solution to the Dirichlet problem for degenerating on the boundary elliptic equation of Tricomi-Keldysh type in the half-space'
---
MSC2010: 35Q99, 35J25, 35J70
**Keywords**: Fourier transform, Tricomi equation, Dirichlet problem, approximation to the identity, self-similar solution, the similarity method, the generalized functions of slow growth.
Introduction {#introduction .unnumbered}
============
In this paper we consider the multidimensional elliptic equation in the half-space $$y^m\Delta_xu+u_{yy}=0,~~ y>0,~~ m>-2,\eqno{(\textup{T})}$$ where $x=(x_1,x_2,\dots,x_n)\in\mathbb{R}^n$, $u=u(x,y)$ is a function of the variables $(x,y)\in\mathbb{R}^{n+1}$, and $$\Delta_x=\frac{\partial^2 }{\partial x_1^2}+\dots+\frac{\partial^2 }{\partial x_n^2}$$ is the Laplace operator in the variable $x$.
1. If $n=1, m=1$ we obtain the Tricomi equation $$yu_{xx}+u_{yy}=0.$$
2. If $n=1, m>0$ we obtain the Gellerstedt equation $$y^mu_{xx}+u_{yy}=0,~~m>0.$$
3. If $n=1, m<0$, the equation (T) can be written as $$u_{xx}+y^{-m}u_{yy}=0,~~0<-m<2,$$ which is a special case of the Keldysh equation [@Kel].
These equations are used in transonic gas dynamics [@Ber], and in mathematical models of cold plasma [@Otw].
4. If $m=0$ we obtain the Laplace equation $$\Delta u(x,y)=0.$$
A bounded (as $y\to\infty$) solution of the Dirichlet problem for the Laplace equation in the half-space $$\Delta u(x,y)=0,~~x\in\mathbb{R}^n,~~y>0,$$ $$u(x,0)=\psi(x),~~x\in\mathbb{R}^n,$$ is given by the Poisson integral [@Bit],[@Ste] $$u(x,y)=\frac{\Gamma((n+1)/2)}{\pi^{(n+1)/2}}\int_{\mathbb{R}^n}\frac{y\psi(t)}{(|x-t|^2+y^2)^{(n+1)/2}}dt.$$ In this paper we derive a similar formula for the solution of the Dirichlet problem for the Tricomi-Keldysh type equation (T): by means of the Fourier transform with respect to the variables in the boundary hyperplane $y=0$ in the case $m=1$, and by the similarity method in the case $m>-2$. This formula contains, in particular, the Poisson integral formula ($m=0$), which can also be obtained using the Fourier transform. For the case $-2<m<0$ this formula was obtained earlier by L.S. Parasyuk via the Fourier transform [@Par]. In the case $m>0$, the calculation of the multidimensional Fourier transforms presents great difficulties (except for the case $m=1$, which we consider in section 2). Therefore, we apply the similarity method. With it, in section 3, we find a self-similar solution of the equation of Tricomi-Keldysh type for any $m>-2$, which is an approximation to the identity in the space of integrable functions. The solution of the Dirichlet problem is represented as a convolution of this self-similar solution with the boundary function (if the convolution exists). The general properties of the approximation to the identity imply that, in the case of a bounded piecewise continuous boundary function, this convolution can be written in the form of an integral and gives the classical solution of the Dirichlet problem, i.e., the boundary values of the integral coincide with the boundary function at all points of continuity. In the case of a boundary function which is a generalized function of slow growth, the convolution gives a generalized solution of the Dirichlet problem, i.e., it converges weakly in the space of generalized functions of slow growth to the given boundary generalized function. In particular, the kernel is the solution of the Dirichlet problem whose boundary function is the Dirac delta function.
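As a quick numerical illustration of the $m=0$ case above (a sketch, not part of the paper's derivation), the one-dimensional Poisson kernel can be checked to behave as an approximation to the identity: it integrates to one for every $y>0$, and its convolution with a bounded continuous boundary function recovers that function as $y\to 0^{+}$. The truncation of the integration domain to $[-200,200]$ is an ad hoc choice for the sketch.

```python
import numpy as np
from scipy.integrate import quad

# Half-space Poisson kernel for n = 1 (the m = 0 case):
# K(x, y) = (1/pi) * y / (x^2 + y^2)
def poisson_kernel(x, y):
    return y / (np.pi * (x ** 2 + y ** 2))

# approximation to the identity: the kernel integrates to (nearly) 1
def total_mass(y):
    val, _ = quad(lambda x: poisson_kernel(x, y), -200.0, 200.0,
                  points=[0.0], limit=500)
    return val

# convolution with a bounded continuous boundary function psi
def u(x, y, psi):
    val, _ = quad(lambda t: poisson_kernel(x - t, y) * psi(t), -200.0, 200.0,
                  points=[x], limit=500)
    return val
```

For small $y$, `u(x, y, psi)` stays close to `psi(x)`, which is the classical boundary-value property discussed above.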
We note that the fundamental solutions for the Tricomi operator ($m = 1$) were obtained by the similarity method in the works of J. Barros-Neto and I.M. Gelfand [@Gel1],[@Gel2],[@Gel3], and by the Fourier transform method in the work of J. Barros-Neto and F. Cardoso [@Car].
Earlier, using the Fourier transform method, we solved the Dirichlet and Dirichlet-Neumann problems for the Laplace and Poisson equations in a multidimensional infinite layer in our joint works [@Alg1],[@Alg2].
Notations and statement of the problem
======================================
We introduce the following notations: $$x=(x_1,\dots,x_n)\in\mathbb{R}^n,~~(x,y)=(x_1,\dots,x_n,y)\in\mathbb{R}^{n+1},~~y\in\mathbb{R},$$ $$|x|=\sqrt{x_1^2+\dots+x_n^2},~~xt=x_1t_1+\dots+x_nt_n,~~dx=dx_1\dots dx_n,$$ $$F(t)=\mathscr{F}[f](t)=\int_{\mathbb{R}^n}f(x)e^{ixt} dx$$ is the Fourier transform of an integrable function $f(x)$. If an integrable in $x$ function $f(x,y)$ depends on the variables $x$ and $y$, then its Fourier transform with respect to $x$ will be denoted $$\mathscr{F}_x[f](t,y)=\int_{\mathbb{R}^n}f(x,y)e^{ixt} dx.$$ Similarly, we define the inverse Fourier transform of an integrable function $F(t)$, $$f(x)=\mathscr{F}^{-1}[F](x)=\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}F(t)e^{-ixt} dt,$$ and of an integrable in $t$ function $F(t,y)$, $$\mathscr{F}_t^{-1}[F](x,y)=\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}F(t,y)e^{-ixt} dt.$$ For the definition of the Fourier transform of generalized functions of slow growth, see [@Vla].
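A one-dimensional sanity check of these conventions (an illustrative sketch only): under the convention $F(t)=\int f(x)e^{ixt}\,dx$, with the opposite sign in the inverse kernel, the Gaussian $e^{-x^2/2}$ maps to $\sqrt{2\pi}\,e^{-t^2/2}$. Since the Gaussian is even, only the cosine part of $e^{ixt}$ contributes, so real integrals suffice.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x ** 2 / 2.0)

# forward transform F(t) = \int f(x) e^{ixt} dx (cosine part, f even)
def fourier(t):
    val, _ = quad(lambda x: f(x) * np.cos(x * t), -np.inf, np.inf)
    return val

# inverse transform f(x) = (1/2pi) \int F(t) e^{-ixt} dt (cosine part)
def inverse(x, F):
    val, _ = quad(lambda t: F(t) * np.cos(x * t), -np.inf, np.inf)
    return val / (2.0 * np.pi)
```

The pair round-trips: applying `inverse` to the exact transform $\sqrt{2\pi}\,e^{-t^2/2}$ recovers `f`.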
Consider the Dirichlet problem for the Tricomi-Keldysh type equation: $$y^m\Delta_xu+u_{yy}=0,~~x\in\mathbb{R}^n,~~ y>0,~~ m>-2,\eqno{(1.1)}$$ $$u(x,0)=\psi(x),~~x\in\mathbb{R}^n,\eqno{(1.2)}$$ $$u(x,y)~\text{is bounded as}~y\to\infty.\eqno{(1.3)}$$
Solution of the Dirichlet problem for an equation of Tricomi type in the case $m=1$ by means of the Fourier transform method
======================================================================================================================
$$y\Delta_xu+u_{yy}=0,~~x\in\mathbb{R}^n,~~ y>0,\eqno{(2.1)}$$ $$u(x,0)=\psi(x),~~x\in\mathbb{R}^n,\eqno{(2.2)}$$ $$u(x,y)~\text{is bounded as}~y\to\infty.\eqno{(2.3)}$$ Applying the Fourier transform with respect to x to the equation (2.1) and denoting $$U(t,y)=\mathscr{F}_x[u](t,y),~~\Psi(t)=\mathscr{F}[\psi](t).$$ we get the boundary value problem for ordinary differential equation with the parameter $t\in\mathbb{R}^n$: $$-y|t|^2U(t,y)+U_{yy}(t,y)=0,$$ $$U(
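For fixed $t$, the transformed equation $U_{yy}=y|t|^2U$ is an Airy equation in the stretched variable $s=|t|^{2/3}y$. A short numerical sketch (assuming, as is standard for this ODE, that the solution bounded as $y\to\infty$ is proportional to the Airy function $\mathrm{Ai}$) checks this:

```python
import numpy as np
from scipy.special import airy

# U_yy = y |t|^2 U becomes Ai''(s) = s Ai(s) with s = |t|^(2/3) y;
# Ai gives the solution that stays bounded as y -> infinity.
def U(y, t_norm):
    ai, _, _, _ = airy(t_norm ** (2.0 / 3.0) * y)
    ai0, _, _, _ = airy(0.0)
    return ai / ai0  # normalized so that U(0) = 1

def ode_residual(y, t_norm, h=1e-3):
    # central second difference minus y |t|^2 U; should vanish up to O(h^2)
    d2 = (U(y + h, t_norm) - 2.0 * U(y, t_norm) + U(y - h, t_norm)) / h ** 2
    return d2 - y * t_norm ** 2 * U(y, t_norm)
```

With the boundary condition $U(t,0)=\Psi(t)$, the normalization $U(0)=1$ is the relevant building block; $U$ also decays rapidly for large $y$, consistent with the boundedness requirement (2.3).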
---
abstract: |
Internet-scale distributed systems often replicate data within and across data centers to provide low latency and high availability despite node and network failures. Replicas are required to accept updates without coordination with each other, and the updates are then propagated asynchronously. This brings the issue of conflict resolution among concurrent updates, which is often challenging and error-prone. The Conflict-free Replicated Data Type (CRDT) framework provides a principled approach to address this challenge.
This work focuses on a special type of CRDT, namely the Conflict-free Replicated Data Collection (CRDC), e.g. list and queue. A CRDC can have complex and compound data items, which are organized in structures of rich semantics. Complex CRDCs can greatly ease the development of upper-layer applications, but they also make conflict resolution notoriously difficult. This explains why existing CRDC designs are tricky and hard to generalize to other data types. A design framework is in great need to guide the systematic design of new CRDCs.
To address the challenges above, we propose the Remove-Win Design Framework. The remove-win strategy for conflict resolution is simple but powerful: the remove operation simply wipes out the data item, no matter how complex its value is. The user of the CRDC only needs to specify conflict resolution for non-remove operations. This resolution is decomposed into three basic cases, which are left as open terms in the CRDC design skeleton. Stubs containing user-specified conflict-resolution logic are plugged into the skeleton to obtain concrete CRDC designs. We demonstrate the effectiveness of our design framework via a case study of designing a conflict-free replicated priority queue. Performance measurements also show the efficiency of the design derived from our design framework.
author:
- |
Yuqi Zhang, Yu Huang, Hengfeng Wei, Jian Lu\
\
\
bibliography:
- 'rwf.bib'
title: 'Remove-Win: a Design Framework for Conflict-free Replicated Data Collections'
---
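To make the remove-win idea of the abstract concrete, here is a deliberately minimal, hypothetical sketch (not the paper's CRDC skeleton): a replicated set in which each key tracks the largest logical timestamps of its adds and removes, merge takes pointwise maxima, and a remove wins whenever the timestamps tie, i.e. whenever the operations conflict.

```python
# Minimal remove-win replicated set (illustrative only; real CRDCs use
# richer metadata and handle compound values).
class RemoveWinSet:
    def __init__(self):
        self.add_ts = {}   # key -> largest add timestamp seen
        self.rem_ts = {}   # key -> largest remove timestamp seen

    def add(self, key, ts):
        self.add_ts[key] = max(self.add_ts.get(key, 0), ts)

    def remove(self, key, ts):
        self.rem_ts[key] = max(self.rem_ts.get(key, 0), ts)

    def merge(self, other):
        # pointwise maxima: commutative, associative, idempotent
        for k, t in other.add_ts.items():
            self.add_ts[k] = max(self.add_ts.get(k, 0), t)
        for k, t in other.rem_ts.items():
            self.rem_ts[k] = max(self.rem_ts.get(k, 0), t)

    def lookup(self, key):
        # remove wins on a tie: the add must be strictly newer to survive
        return self.add_ts.get(key, 0) > self.rem_ts.get(key, 0)
```

After two replicas perform a concurrent add and remove of the same key and then merge, both converge to the same state and the key is absent, reflecting the remove-win resolution; a later add can reintroduce it.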
David G. CHARLTON$^{a}$, Guido MONTAGNA$^{b,c}$,\
Oreste NICROSINI$^{c,b}$ and Fulvio PICCININI$^{c}$
[*$^a$Royal Society University Research Fellow, School of Physics and Space Research, University of Birmingham, Birmingham B15 2TT, UK*]{}\
[*$^b$Dipartimento di Fisica Nucleare e Teorica, Università di Pavia, via A. Bassi n. 6 - 27100 PAVIA - ITALY*]{}\
[*$^c$ INFN, Sezione di Pavia, via A. Bassi n. 6 - 27100 PAVIA - ITALY*]{}
Program classification: 11.1\
[The Monte Carlo program [WWGENPV]{}, designed for computing distributions and generating events for four-fermion production in $e^+ e^- $ collisions, is described. The new version, 2.0, includes the full set of the electroweak (EW) tree-level matrix elements for double- and single-$W$ production, initial- and final-state photonic radiation including $p_T / p_L$ effects in the Structure Function formalism, all the relevant non-QED corrections (Coulomb correction, naive QCD, leading EW corrections). An hadronisation interface to [JETSET]{} is also provided. The program can be used in a three-fold way: as a Monte Carlo integrator for weighted events, providing predictions for several observables relevant for $W$ physics; as an adaptive integrator, giving predictions for cross sections, energy and invariant mass losses with high numerical precision; as an event generator for unweighted events, both at partonic and hadronic level. In all the branches, the code can provide accurate and fast results. ]{}
[*Program obtainable from:*]{} CPC Program Library, Queen’s University of Belfast, N. Ireland (see application form in this issue)
[*Reference to original program:*]{} [WWGENPV]{}; [*Cat. no.:*]{} ACNT; [*Ref. in CPC:* ]{} [**90**]{} (1995) 141
[*Authors of original program:*]{} Guido Montagna, Oreste Nicrosini and Fulvio Piccinini
[*The new version supersedes the original program*]{}
[*Computer for which the new version is designed:*]{} DEC ALPHA 3000, HP 9000/700 series; [*Installation:*]{} INFN, Sezione di Pavia, via A. Bassi 6, 27100 Pavia, Italy
[*Keywords:*]{} $e^+ e^-$ collisions, LEP, $W$-mass measurement, radiative corrections, QED corrections, QCD corrections, Minimal Standard Model, four-fermion final states, electron structure functions, Monte Carlo integration/simulation, hadronisation.
The precise measurement of the $W$-boson mass $M_W$ constitutes a primary task of the forthcoming experiments at the high energy electron–positron collider LEP2 ($2 M_W \leq \sqrt{s} \leq 210$ GeV). A meaningful comparison between theory and experiment requires an accurate description of the fully exclusive processes $e^+ e^- \to 4f$, including the main effects of radiative corrections, with the final goal of providing predictions for the distributions measured by the experiments.
Same as in the original program, as far as weighted event integration and unweighted event generation are concerned. Adaptive Monte Carlo integration for high numerical precision purposes is added. Optional hadronic interface in the generation branch is supplied.
The most promising methods for measuring the $W$-boson mass at LEP2 are the so called “threshold” and “direct reconstruction” methods \[5\]. For the first one, a precise evaluation of the threshold cross section is required. For the second one, a precise description of the invariant-mass shape of the hadronic system in semileptonic decays is mandatory. In order to meet these requirements, the previous version of the program has been improved by extending the class of the tree-level EW diagrams taken into account, by including $p_T / p_L $ effects both in initial- and final-state QED radiation, by supplying an hadronic interface in the generation branch.
While the semileptonic decay channels are complete at the level of the Born approximation EW diagrams ([CC11/CC20]{} diagrams), neutral current backgrounds are neglected in the fully hadronic and leptonic decay channels. QED radiation is treated at the leading logarithmic level. Due to the absence of a complete ${\cal O}
(\alpha)$/${\cal O} (\alpha_s)$ diagrammatic calculation, the most relevant EW and QCD corrections are effectively incorporated according to the recipe given in \[6\]. No anomalous coupling effects are at present taken into account.
As an adaptive integrator, the code provides the cross section and the energy and invariant-mass losses with a relative accuracy of about 1% in 8 min on an HP 9000/735. As an integrator of weighted events, the code produces about $10^5$ events/min on the same system. The generation of a sample of $10^3$ hadronised unweighted events requires about 8 min on the same system.
Subroutines from the library of mathematical subprograms [NAGLIB]{} \[3\] for the numerical integrations are used in the program, when the adaptive integration branch is selected.
\[1\] CERN Program Library, CN Division, CERN, Geneva.\
\[2\] NAG Fortran Library Manual Mark 16 (Numerical Algorithms Group, Oxford, 1991).\
\[3\] T. Sjöstrand, Comp. Phys. Commun. [**82**]{} (1994) 74; Lund University Report LU TP 95-20 (1995).\
\[4\] F. James, Comput. Phys. Commun. 79 (1994) 111.\
\[5\] [*Physics at LEP2*]{}, CERN Report 96-01, Theoretical Physics and Particle Physics Experiments Divisions, G. Altarelli, T. Sjöstrand and F. Zwirner, eds., Vols. 1 and 2, Geneva, 19 February 1996.\
\[6\] W. Beenakker, F. Berends et al., “WW cross-sections and distributions”, in \[5\], Vol. 1, p. 79.
Introduction
============
The precise measurement of the $W$-boson mass $M_W$ constitutes a primary task of the forthcoming experiments at the high energy electron–positron collider LEP2 ($2 M_W \leq \sqrt{s} \leq 210$ GeV). A meaningful comparison between theory and experiment requires an accurate description of the fully exclusive processes $e^+ e^- \to 4f$, including the main effects of radiative corrections, with the final goal of providing predictions for the distributions measured by the experiments. A large effort in the direction of developing tools dedicated to the investigation of this item has been spent within the Workshop “Physics at LEP2”, held at CERN during 1995. Such an effort has led to the development of several independent four-fermion codes, both semianalytical and Monte Carlo, extensively documented in [@wweg]. [WWGENPV]{} is one of these codes, and the aim of the present paper is to describe in some detail the developments performed with respect to the original version [@cpcww], where a description of the formalism adopted and the physical ideas behind it can be found.
As discussed in [@wmass], the most promising methods for measuring the $W$-boson mass at LEP2 are the so called “threshold” and “direct reconstruction” methods. For the first one, a precise evaluation of the threshold cross section is required. For the second one, a precise description of the invariant-mass shape of the hadronic system in semileptonic and hadronic decays is mandatory. In order to meet these requirements, the previous version of the program has been improved, both from the technical and physical point of view.
On the technical side, in addition to the “weighted event integration” and “unweighted event generation” branches, the present version can also be run as an “adaptive Monte Carlo” integrator, in order to obtain high numerical precision results for cross sections and other relevant observables. In the “weighted event integration” branch, a “canonical” output can be selected, in which several observables are processed in parallel together with their most relevant moments [@wweg]. Moreover, the program offers the possibility of generating events according to a specific flavour quantum number assignment for the final-state fermions, or of generating “mixed samples”, namely a fully leptonic, fully hadronic or semileptonic sample.
On the physical side, the class of tree-level EW diagrams taken into account has been extended to include all the single resonant diagrams ([CC11/CC20]{}), in such a way that all the charged current processes are covered. Motivated by the physical relevance of keeping under control the effects of the transverse degrees of freedom of photonic radiation, both for the $W$ mass measurement and for the detection of anomalous couplings, the contribution of QED radiation has been fully developed in the leading logarithmic approximation, going beyond the initial-state, strictly collinear approximation, to include $p_T / p_L $ effects both for initial- and final-state photons. Last, an hadronic interface to [JETSET]{} in the generation branch has been added.
In the present version the neutral current backgrounds are neglected in the fully hadronic and leptonic decay channels, but this is not a severe limitation of the program since, at least
---
author:
- '[Nikolai Nadirashvili[^1], Serge Vlăduţ[^2]]{}'
title: '[Singular Solutions of Hessian Elliptic Equations in Five Dimensions]{}'
---
Introduction
============
In this paper we study a class of fully nonlinear second-order elliptic equations of the form $$F(D^2u)=0\leqno(1.1)$$ defined in a domain of ${ \R}^n$. Here $D^2u$ denotes the Hessian of the function $u$. We assume that $F$ is a Lipschitz function defined on the space $ S^2({ \R}^n)$ of ${n\times n}$ symmetric matrices satisfying the uniform ellipticity condition, i.e. there exists a constant $C=C(F)\ge 1$ (called an [*ellipticity constant*]{}) such that $$C^{-1}||N||\le F(M+N)-F(M) \le C||N||\;
\leqno(1.2)$$ for any non-negative definite symmetric matrix $N$; if $F\in C^1(S^2({ \R}^n))$ then this condition is equivalent to $$\frac{1}{ C'}|\xi|^2\le F_{u_{ij}}\xi_i\xi_j\le C' |\xi |^2\;,
\forall\xi\in { \R}^n\;.\leqno(1.2')$$ Here, $u_{ij}$ denotes the partial derivative $\pt^2 u/\pt x_i\pt x_j$. A function $u$ is called a [*classical*]{} solution of (1.1) if $u\in C^2(\Om)$ and $u$ satisfies (1.1). Actually, any classical solution of (1.1) is a smooth ($C^{\alpha +3}$) solution, provided that $F$ is a smooth ($C^\alpha$) function of its arguments.
For a matrix $S \in S^2({ \R}^n)$ we denote by $\lambda(S)=\{
\lambda_i : \lambda_1\leq...\leq\lambda_n\}
\in { \R}^n$ the (ordered) set of eigenvalues of the matrix $S$. Equation (1.1) is called a Hessian equation (\[T1\],\[T2\] cf. \[CNS\]) if the function $F(S)$ depends only on the eigenvalues $\lambda(S)$ of the matrix $S$, i.e., if $$F(S)=f(\lambda(S)),$$ for some function $f$ on ${ \R}^n$ invariant under permutations of the coordinates.
In other words the equation (1.1) is called Hessian if it is invariant under the action of the group $O(n)$ on $S^2({ \R}^n)$: $$\forall O\in O(n),\; F({^t O}\cdot S\cdot O)=F(S) \;.\leqno(1.3)$$ The Hessian invariance relation (1.3) implies the following:
\(a) $F$ is a smooth (real-analytic) function of its arguments if and only if $f$ is a smooth (real-analytic) function.
\(b) Inequalities (1.2) are equivalent to the inequalities $${\mu\over C_0} \leq { f ( \lambda_i+\mu)-f ( \lambda_i) } \leq C_0 \mu,
\; \forall \mu\ge 0,$$ $\forall i=1,...,n$, for some positive constant $C_0$.
\(c) $F$ is a concave function if and only if $f$ is concave.
Well known examples of the Hessian equations are Laplace, Monge-Ampère, Bellman, Isaacs and Special Lagrangian equations.
Bellman and Isaacs equations appear in the theory of controlled diffusion processes, see \[F\]. Both are fully nonlinear uniformly elliptic equations of the form (1.1). The Bellman equation is concave in $D^2u \in S^2({ \R}^n)$ variables. However, Isaacs operators are, in general, neither concave nor convex. In a simple homogeneous form the Isaacs equation can be written as follows: $$F(D^2u)=\sup_b \inf_a L_{ab}u =0, \leqno (1.4)$$ where $L_{ab}$ is a family of linear uniformly elliptic operators of type $$L= \sum a_{ij} {\partial^2 \over \partial x_i \partial x_j } \leqno (1.5)$$ with an ellipticity constant $C>0$ which depends on two parameters $a,b$.
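A toy, finite-family version of (1.4)-(1.5) (an illustrative assumption; genuine Isaacs operators range over general parameter families) makes the monotonicity built into the sup-inf construction easy to check numerically: since each coefficient matrix is positive definite, adding a positive semidefinite matrix to the Hessian argument can only increase the operator value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Isaacs operator F(M) = max_b min_a tr(A[a][b] M); each coefficient
# matrix is shifted to have smallest eigenvalue exactly 1, a crude stand-in
# for the uniform ellipticity condition (1.2).
def make_coeff(n):
    B = rng.normal(size=(n, n))
    S = B @ B.T
    S += (1.0 - np.linalg.eigvalsh(S).min()) * np.eye(n)
    return S

n = 3
A = [[make_coeff(n) for _ in range(2)] for _ in range(2)]

def isaacs(M):
    return max(min(np.trace(A[a][b] @ M) for a in range(2)) for b in range(2))
```

Because $\operatorname{tr}(AN)\ge\lambda_{\min}(A)\operatorname{tr}(N)$ for positive semidefinite $N$, the discrete analogue of (1.2) holds: $F(M+N)-F(M)\ge\operatorname{tr}(N)$ here.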
Consider the Dirichlet problem $$\label{dir}\begin{cases} F(D^2u, Du, u, x)=0 &\text{in}\;
\Om \cr \quad \quad u=\vph & \text{on}\; \pt\Om\;,\cr\end{cases}$$ where $\Omega \subset {\R}^n$ is a bounded domain with a smooth boundary $\partial \Omega$ and $\vph$ is a continuous function on $\pt\Om$.
We are interested in the problem of existence and regularity of solutions to the Dirichlet problem (1.6) for Hessian equations and the Isaacs equation. The problem (1.6) always has a unique viscosity (weak) solution for fully nonlinear elliptic equations (not necessarily Hessian equations). The viscosity solutions satisfy the equation (1.1) in a weak sense, and the best known interior regularity (\[C\],\[CC\],\[T3\]) for them is $C^{1+\epsilon}$ for some $\epsilon > 0$. For more details see \[CC\], \[CIL\]. Until recently it remained unclear whether non-smooth viscosity solutions exist. In the recent papers \[NV1\], \[NV2\], \[NV3\], \[NV4\] the authors first proved the existence of non-classical viscosity solutions to a fully nonlinear elliptic equation, and then of singular solutions to uniformly elliptic Hessian equations in all dimensions beginning from 12. Those papers use the functions
\delta\in [1,2[,$$ with $P_{12}(x),P_{24}(x)$ being cubic forms as follows: $$P_{12}(x)=Re (q_1q_2q_3),\; x=(q_1,q_2,q_3)\in {\H}^3={ \R}^{12},$$ $ {\H}$ being Hamiltonian quaternions, $$P_{24}(x)={Re((o_1\cdot o_2)\cdot o_3)}={Re(o_1\cdot(o_2\cdot o_3))},\; x=(o_1,o_2,o_3)\in {\O}^3={\R}^{24}$$ $\O$ being the algebra of Caley octonions.
Finally, the paper \[NTV\] gives a construction of a non-smooth viscosity solution in 5 dimensions which is homogeneous of order 2, again for Hessian equations; the function $$w_5(x)={P_{5} (x)\over |x|},\;$$ is such a solution for the Cartan minimal cubic $$P_{5}(x)=x_1^3+\frac{3 x_1}2\left(z_1^2 + z_2^2-2 z_3^2-2x_2^2\right)+\frac{3\sqrt 3}2\left(x_2z_1^2-x_2z_2^2 + 2z_1z_2z_3\right)$$ in 5 dimensions.
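The computations with $w_5$ rest on the classical isoparametric identities of the Cartan cubic, $\Delta P_{5}=0$ and $|\nabla P_{5}|^{2}=9|x|^{4}$, which can be verified numerically; a minimal sketch with hand-coded analytic derivatives:

```python
import math
import random

SQ3 = math.sqrt(3.0)

def P5(x1, x2, z1, z2, z3):
    # Cartan's isoparametric minimal cubic on R^5.
    return (x1**3
            + 1.5 * x1 * (z1**2 + z2**2 - 2*z3**2 - 2*x2**2)
            + 1.5 * SQ3 * (x2*z1**2 - x2*z2**2 + 2*z1*z2*z3))

def grad_P5(x1, x2, z1, z2, z3):
    # Analytic gradient of P5.
    return (3*x1**2 + 1.5*(z1**2 + z2**2) - 3*z3**2 - 3*x2**2,
            -6*x1*x2 + 1.5*SQ3*(z1**2 - z2**2),
            3*x1*z1 + 3*SQ3*x2*z1 + 3*SQ3*z2*z3,
            3*x1*z2 - 3*SQ3*x2*z2 + 3*SQ3*z1*z3,
            -6*x1*z3 + 3*SQ3*z1*z2)

def laplacian_P5(x1, x2, z1, z2, z3):
    # Sum of the five analytic second partials (it cancels identically).
    return 6*x1 - 6*x1 + (3*x1 + 3*SQ3*x2) + (3*x1 - 3*SQ3*x2) - 6*x1

random.seed(0)
for _ in range(100):
    p = [random.uniform(-1.0, 1.0) for _ in range(5)]
    r2 = sum(c * c for c in p)
    g = grad_P5(*p)
    assert abs(laplacian_P5(*p)) < 1e-12                       # harmonic
    assert abs(sum(gi * gi for gi in g) - 9.0 * r2**2) < 1e-9  # eikonal identity
print("Cartan cubic: Delta P5 = 0 and |grad P5|^2 = 9|x|^4 verified")
```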
However, the methods of \[NTV\] do not work for the function $w_{5,\delta}(x)=P_{5} (x)/ |x|^{\delta }, \;\delta>1$, and thus do not give singular (i.e. not in $C^{1,1}$) viscosity solutions to fully nonlinear equations in 5 dimensions.
In the present paper we fill the gap and prove
[*The function $$w_{5,\delta}(x)=P_{5} (x)/ |x|^{1+\delta }, \;\delta\in [0,1[$$ is a viscosity solution to a uniformly elliptic Hessian equation $(1.1)$ with a smooth functional $F$ in a unit ball $B\subset {\R}^{5}$ for the isoparametric Cartan cubic form $$P_{5}(x)=x_1^3+\frac{3 x_1}2\left(z_1^2 + z_2^2-2 z_3^2-2x_2^2\right)+\frac{3\sqrt 3}2\left(x_2z_1^2-x_2z_2^2 + 2z_1z_2z_3\right)$$ with $x=(x_1,x_2,z_1,z_2,z_3)$.*]{}
In particular one gets the optimality of the interior $C^{1,\alpha}$-regularity of viscosity solutions to
---
abstract: 'This paper presents the natural extension of the Buckley-Feuring method proposed in [@BuckleyFeuring99] for solving fuzzy partial differential equations (FPDE) in a non-polynomial relation, such as the operator $\varphi(D_{x_1}, D_{x_2})$, which maps to the quotient of the two partial derivatives. The new assumptions and conditions proceeding from this consideration are given in this document.'
address:
- 'Facultad de Matemáticas. Universidad de Sevilla, 41012 Sevilla, Spain'
- '*Keywords: Fuzzy differential equations, Buckley-Feuring solution, non-polynomial*'
- '*2000 Mathematics Subject Classification: 03E72, 46S40*'
author:
- |
D. Gálvez and J. L. Pino\
[Departamento de Estadística e Investigación Operativa]{}\
[Universidad de Sevilla]{}
title: 'The extension of Buckley-Feuring solutions for non-polynomial fuzzy partial differential equations'
---
Introduction {#introduction .unnumbered}
============
Many approaches for obtaining non-numerical solutions of fuzzy differential equations (FDE) have been developed since the introduction of the fuzzy set concept by Zadeh [@Zadeh65]. These approaches give a diversity of definitions for an FDE solution based on different notions of fuzzy derivative, such as the Seikkala derivative, Buckley-Feuring derivative, Puri-Ralescu derivative, Kandel-Friedman-Ming derivative, Goetschel-Voxman derivative, or Dubois-Prade derivative. Some relations between these derivatives are presented by Buckley and Feuring in [@BuckleyFeuring00]. However, only a few of these fuzzy derivatives are valid in some contexts as FDE solutions. For example, the Goetschel-Voxman derivative or the Dubois-Prade derivative may provide solutions that are not fuzzy numbers, whereas the Puri-Ralescu derivative and the Kandel-Friedman-Ming derivative always exist and provide a fuzzy number as the solution of the FDE, but they make use of abstract subtractions of fuzzy concepts in their definitions, making these solutions difficult to interpret in some real applications.
This paper uses the Buckley-Feuring derivative for solving FPDE. This derivative does not always exist, but when it does, it provides a fuzzy number solution that is easily understandable in the context in which a specific FPDE has been developed.
The authors who proposed this concept of derivative developed a methodology for solving constant-coefficient polynomial FPDE in [@BuckleyFeuring99]. This paper presents the extension of this methodology to a non-polynomial expression in the partial fuzzy derivatives.\
\
In the following lines, the components of a FPDE are enumerated:
- $x_i,\quad i=1, 2,\quad
x_1\in S_1\subset I_1=(0, M_1],\quad x_2\in S_2\subset I_2=(0, M_2]$. Other domain limits can be established in these subsets, such as $ x_1>x_2$.
- $\tilde{\boldsymbol{\beta}}= (\tilde{\beta}_1,
\tilde{\beta}_2,...,\tilde{\beta}_k)$, a triangular fuzzy number vector.
- $\mu(\beta_j)$ is the membership function of $\beta_j\in \tilde{\beta_j}.$
- $\mu_{\beta_j}(\alpha)=\{\beta_j\mid \mu(\beta_j)\geq\alpha,\quad\alpha\in(0,1)\}$, a set called the $\alpha$-cut.\
\
These sets are closed and bounded, so it is possible to define, for a fuzzy number $\tilde{\beta_j}: \tilde{\beta_j}[\alpha]=
[b_1(\alpha), b_2(\alpha)]$, where:
- $b_1(\alpha)$ is the lowest value of $\beta_j$ for which $\mu(\beta_j)\geq \alpha,\quad \beta_j\in \tilde{\beta}_j$.
- $b_2(\alpha)$ is the highest value of $\beta_j$ for which $\mu(\beta_j)\geq \alpha,\quad \beta_j\in \tilde{\beta}_j$.
and $\tilde{\boldsymbol{\beta}}[\alpha]=
\prod_j\tilde{\beta_j}[\alpha]$
- $\tilde{V}(x_1,x_2,\tilde{\boldsymbol{\beta}})$ is a positive and continuous function in $(x_1, x_2)\in S_1 \times S_2$ with partials $D_{x_1},
D_{x_2}$. This function must also be strictly increasing or strictly decreasing in $x_2\in S_2$, that is, $\tilde{V}(k,x_2,\tilde{\boldsymbol{\beta}})$ is strictly increasing or strictly decreasing for all constant $k\in\mathbb{R}$.\
The fuzzy character of $\tilde{V}(x_1,x_2,\tilde{\boldsymbol{\beta}})$, shown by the tilde placed over $V$, is fixed by $\tilde{\boldsymbol{\beta}}$, and supports the use of the Buckley-Feuring derivative for solving FPDE.
- $\varphi(D_{x_1}, D_{x_2})$ is an expression with constant coefficients in $(D_{x_1},
D_{x_2})$ applied to $\tilde{V}(x_1,x_2,\tilde{\boldsymbol{\beta}})$.
- $F(x_1, x_2,\tilde{\boldsymbol{\beta}})$ is a continuous function in $(x_1, x_2)\in
S_1 \times S_2.$
The specific FPDE treated in this paper has the following form according to this notation:
$$\varphi(D_{x_1}, D_{x_2})\tilde{V}(x_1,x_2,\tilde{\boldsymbol{\beta}})= \frac{\partial \tilde{V}/\partial x_1}{\partial \tilde{V}/\partial
x_2}=F(x_1, x_2,\tilde{\boldsymbol{\beta}})$$
The Buckley-Feuring (B-F) solution {#sec:B-F}
==================================
The Buckley-Feuring (B-F) solution uses a solution of the crisp partial differential equation:
$$V(x_1,x_2)= G(x_1,x_2,\boldsymbol{\beta}),$$ with $G$ continuous $\forall (x_1,x_2)\in S_1\times S_2$.\
The next step is the fuzzification of $G$: $$\tilde{Y}(x_1,x_2)= \tilde{G}(x_1,x_2,\tilde{\boldsymbol{\beta}}),$$ with $\tilde{G}$ continuous $\forall (x_1,x_2)\in S_1 \times S_2$ and strictly monotone for $x_2\in S_2$. Note that $\tilde{Y}(x_1,x_2)$ is only the fuzzy representation of $G$, but not necessarily the solution of the fuzzy partial differential equation. If this is indeed the case and $\tilde{Y}(x_1,x_2)$ is a B-F solution, then $\tilde{V}(x_1,x_2,\tilde{\boldsymbol{\beta}})=\tilde{Y}(x_1,x_2)$.
With this notation, it is possible to see that:\
\
$\tilde{Y}(x_1,x_2)[\alpha]= [y_1(x_1,x_2,\alpha),
y_2(x_1,x_2,\alpha)]$, and\
\
$\tilde{F}(x_1,x_2,\tilde{\boldsymbol{\beta}})[\alpha]=
[f_1(x_1,x_2,\alpha), f_2(x_1,x_2,\alpha)], \forall\alpha$.\
and, by definition:
$y_1(x_1,x_2,\alpha)= \min\{G(x_1,x_2,\boldsymbol{\beta}),\quad
\boldsymbol{\beta}\in\tilde{\boldsymbol{\beta}}[\alpha]\}$,\
$y_2(x_1,x_2,\alpha)= \max\{G(x_1,x_2,\boldsymbol{\beta}),\quad
\boldsymbol{\beta}\in\tilde{\boldsymbol{\beta}}[\alpha]\}$, and\
\
$f_1(x_1,x_2,\alpha)= \min\{F(x_1,x_2,\boldsymbol{\beta}),\quad
\boldsymbol{\beta}\in\tilde{\boldsymbol{\beta}}[\alpha]\}$,\
$f_2(x_1,x_2,\alpha)= \max\{F(x_1,x_2,\boldsymbol{\beta}),\quad
\boldsymbol{\beta}\in\tilde{\boldsymbol{\beta}}[\alpha]\}$,\
$\forall x_1, x_2, \alpha$.\
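These $\alpha$-cut extrema are straightforward to compute for triangular fuzzy numbers; the sketch below uses a hypothetical crisp solution $G$ (for illustration only, not one of the solutions treated in this paper):

```python
import itertools

def tri_alpha_cut(a, b, c, alpha):
    # alpha-cut [b1(alpha), b2(alpha)] of a triangular fuzzy number (a, b, c).
    return (a + alpha * (b - a), c - alpha * (c - b))

def box_min_max(G, x1, x2, cuts):
    # Approximate y1, y2 by evaluating G on the corners of the box
    # prod_j beta_j[alpha]; this is exact when G is monotone in each
    # beta_j, as it is for the toy G below.
    vals = [G(x1, x2, beta) for beta in itertools.product(*cuts)]
    return min(vals), max(vals)

# Hypothetical crisp solution, for illustration only:
G = lambda x1, x2, beta: beta[0] * x1 + beta[1] * x2

cuts = [tri_alpha_cut(1, 2, 3, 0.5), tri_alpha_cut(0, 1, 2, 0.5)]
y1, y2 = box_min_max(G, 2.0, 1.0, cuts)
print(y1, y2)  # prints 3.5 6.5
```

For non-monotone $G$ a grid or an optimizer over the $\alpha$-cut box would be needed instead of the corner evaluation.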
If it is possible to apply the $\varphi(D_{x_1}, D_{x_2})$ operator to $y_i, i=1,
**$Sp(N_c)$ Gauge Theories and M Theory Fivebrane**
Changhyun Ahn$^{a,}$[^1], Kyungho Oh$^{b,}$[^2] and Radu Tatar$^{c,}$[^3]
[*$^a$ Dept. of Physics, Seoul National University, Seoul 151-742, Korea*]{}
[ *$^b$ Dept. of Mathematics, University of Missouri-St. Louis, St. Louis, Missouri 63121, USA*]{}
[*$^c$ Dept. of Physics, University of Miami, Coral Gables, Florida 33146, USA*]{}
Abstract
We analyze the M theory fivebrane in order to study the moduli space of vacua of $N=1$ supersymmetric $Sp(N_c)$ gauge theories with $N_f$ flavors in four dimensions. We show how the $N=2$ Higgs branch can be encoded in M theory by studying the orientifold, which plays a crucial role in our work. When all the quark masses are the same, the surface of the M theory spacetime representing a nontrivial ${\bf S^1}$ bundle over ${\bf R^3}$ develops $A_{N_f-1}$ type singularities at two points where D6 branes are located. Furthermore, by turning off the masses, the two singular points on the surface collide and produce an $A_{2N_f-1}$ type singularity. The sum of the multiplicities of rational curves on the resolved surface gives the dimension of the $N=2$ Higgs branch, which agrees with the counting from the brane configuration picture of type IIA string theory. By rotating M theory fivebranes we get the strongly coupled dynamics of the $N=1$ theory and describe the vacuum expectation values of the meson field parameterizing the Higgs branch, which are in complete agreement with the field theory results. Finally, we take the limit where the mass of the adjoint chiral multiplet goes to infinity and compare with field theory results. For the massive case, we comment on some relations with recent work which deals with $N=1$ duality in the context of M theory.
Introduction
============
One of the most interesting tools used to study nonperturbative dynamics of low energy supersymmetric gauge theories is to understand the D(irichlet) brane dynamics where the gauge theory is realized on the worldvolume of D brane.
This work was pioneered by Hanany and Witten [@hw], where the mirror symmetry of $N=4$ gauge theory in 3 dimensions was interpreted by changing the position of the Neveu-Schwarz (NS) 5 brane in spacetime (see also [@bo1][@bo2]). They took a configuration of type IIB string theory which preserves 1/4 of the supersymmetry and consists of parallel NS5 branes with D3 branes suspended between them and D5 branes located between them. A new aspect of brane dynamics was the creation of a D3 brane whenever a D5 brane and an NS5 brane cross through each other. This was due to the conservation of the linking number (defined as a total magnetic charge for the gauge field coupled with the worldvolumes of both types of NS and D branes).
By T-dualizing the above configuration on one space coordinate, the passage to $N=2$ gauge theory in 4 dimensions can be described as two parallel NS5 branes and D4 branes suspended between them in a flat space in type IIA string theory. When one changes the relative orientation of the two NS5 branes [@bar] while keeping their common 4 spacetime dimensions intact, the $N=2$ supersymmetry is broken to $N=1$. The brane configuration[@egk; @egkrs] preserves 1/8 of the supersymmetry, and this corresponds to turning on a mass of the adjoint field because the distances between D4 branes suspended between the NS5 branes relate to the vacuum expectation values (vevs). The configuration of D4 branes gives the gauge group while the D6 branes give the global flavor group. Using this configuration they described and checked a stringy derivation of Seiberg’s duality for $N=1$ supersymmetric gauge theory with gauge group $SU(N_c)$ and $N_f$ flavors in the fundamental representation, which was previously conjectured in [@se1]. This result was generalized to brane configurations with orientifolds, which then give $N=1$ supersymmetric theories with gauge group $SO(N_c)$ or $Sp(N_c)$ [@eva; @egkrs]. In this case the NS5 branes have to pass over each other and some strong coupling phenomena have to be considered. Similar results were obtained in [@bh; @bsty; @t] where the moduli space of the supersymmetric gauge theories is geometrically encoded in the brane setup.
Another approach was initiated by Ooguri and Vafa [@ov], who considered the compactification of IIA string theory on a double elliptically fibered Calabi-Yau threefold. They wrapped D6 branes around three-cycles of the Calabi-Yau threefold, also filling a 4 dimensional spacetime. The transition between the electric theory and its magnetic dual appears when a change in the moduli space of the Calabi-Yau threefold occurs. Their results were generalized in the papers [@ao; @a; @ar; @aot] to various other models which reproduce field theory results studied previously.
So far the branes in string theory were considered to be rigid, without any bending. When branes intersect each other, a singularity occurs. In order to avoid that kind of singularity, a very nice simplification was obtained by reinterpreting the brane configuration in string theory from the point of view of M theory, as was shown by Witten in [@w1]. Then both the D4 branes and NS5 branes come from the fivebranes of M theory (the former is an M theory fivebrane wrapped over $\bf{S^1}$ and the latter is an M theory fivebrane on $\bf{R^{10} \times S^1}$). That is, the D4 brane’s worldvolume projects to a five manifold in $\bf{R^{10}}$ and the NS5 brane’s worldvolume is placed at a point in $\bf{S^1}$ and fills a six manifold in $\bf{R^{10}}$. To obtain D6 branes one has to use a multiple Taub-NUT space whose metric is complete and smooth. The $N=2$ supersymmetry in four dimensions requires that the worldvolume of the M theory fivebrane is $\bf{R^{1,3}}\times \Sigma$ where $\Sigma$ is uniquely identified with the curves [@sw] that appear in the solutions to the Coulomb branch of the field theory. The configurations involving orientifolds were considered in [@lll; @bsty1]. The method of brane dynamics was used to study supersymmetric field theories in several dimensions by many authors [@ah; @ba; @k; @cvj1; @hov; @hz; @mmm; @fs; @w2; @biksy; @gomez; @cvj2; @hk; @hsz; @nos; @hy; @noyy; @ss; @bo; @mi].
The original work [@w1] was suited to study the moduli space for $N=2$ supersymmetric theories. By rotating one of the NS5 branes, the $N=2$ supersymmetry is broken to $N=1$ [@bar]. In [@w2; @hoo] (see also [@biksy; @ss; @bo]) this was seen from the point of view of the M theory interpretation, by considering the possible deformations of the curve $\Sigma$. In field theory, the supersymmetry is broken by giving a mass to the adjoint field, and if this mass is finite, the $N=1$ field theory can be compared with the previous results obtained in [@ads]. These papers considered the case of unitary groups.
Recently, the exact low energy description of $N=2$ supersymmetric $SU(N_c)$ gauge theories with $N_f$ flavors in 4 dimensions in the framework of M theory fivebrane have been found in [@hoo]. They constructed M fivebrane configuration which encodes the information of Affleck-Dine-Seiberg superpotential [@ads] for $N_f < N_c$. Later, this approach has been used to study the moduli space of vacua of confining phase of $N=1$ supersymmetric gauge theories in four dimensions [@bo]. In terms of brane configuration of IIA string theory, this corresponds to the picture of [@egk] by taking multiples of NS’5 branes rather than a single NS’5 brane.
In the present paper we generalize to the case of symplectic group $Sp(N_c)$ with $N_f$ flavors. The new ingredient that is introduced is the orientifold. We find an interesting picture which differs from the one obtained for unitary group $SU(N
---
abstract: 'We show that the Right-Angled Coxeter group $C=C(G)$ associated to a random graph $G\sim \mathcal{G}(n,p)$ with $\frac{\log n + \log\log n + \omega(1)}{n} \leq p < 1- \omega(n^{-2})$ virtually algebraically fibers. This means that $C$ has a finite index subgroup $C''$ and a finitely generated normal subgroup $N\subset C''$ such that $C''/N \cong \mathbb{Z}$. We also obtain the corresponding hitting time statements, more precisely, we show that as soon as $G$ has minimum degree at least 2 and as long as it is not the complete graph, then $C(G)$ virtually algebraically fibers. The result builds upon the work of Jankiewicz, Norin, and Wise and it is essentially best possible.'
author:
- Gonzalo Fiz Pontiveros
- Roman Glebov
- Ilan Karpas
bibliography:
- 'legal.bib'
nocite: '[@*]'
title: 'Virtually Fibering Random Right-Angled Coxeter Groups'
---
Introduction
============
A group $K$ *virtually algebraically fibers* if there is a finite index subgroup $K'$ admitting a surjective homomorphism $K'\to \ZZ$ with finitely generated kernel. This notion arises from topology: a $3$-manifold $M$ is virtually a surface bundle over a circle precisely when the fundamental group of $M$ virtually algebraically fibers (see the result of Stallings [@Sta61]).
A *Right-Angled Coxeter group* (RACG) $K$ is a group given by a presentation of the form $$\left\langle x_1, x_2, \ldots x_n \;|\;x_i^2, [x_i, x_j]^{\sigma_{ij}}\;: 1\leq i< j\leq n\right\rangle$$ where $\sigma_{ij}\in \{0,1\}$ for each $1\leq i <j\leq n$. One can encode this information with a graph $\Gamma_{K}$ whose vertices are the generators $x_1,\ldots, x_n$ and $x_i\sim x_j$ if and only if $\sigma_{ij}=1$. Conversely given a graph $G$ on $n$ vertices, we will denote the corresponding RACG by $K(G)$.
Random Coxeter groups have been of heightened recent interest, see for instance Charney and Farber [@charney2012random], Davis and Kahle [@davis2014random], and Behrstock, Falgas-Ravry, Hagen, and Susse [@behrstock2015global].
Recently, Jankiewicz, Norin, and Wise [@Jankiewicz_Virtually] developed a framework to show virtual fibering of a RACG using Bestvina-Brady Morse theory [@Bestvina_Morse_1997] and ultimately translated the virtual fibering problem for $K$ into a combinatorial game on the graph $\Gamma_K$. The method was successful in many special cases and also allowed them to construct examples where Bestvina-Brady theory cannot be applied to find a virtual algebraic fibering.
A natural question to consider is whether this approach is successful for a ‘generic’ RACG, i.e., given a probability measure $\mu_n$ on the set of RACG’s of rank at most $n$, is it true that a.a.s. as $n\to \infty$, a group sampled from $\mu_n$ virtually algebraically fibers? This question is also considered in [@Jankiewicz_Virtually]; specifically, they consider sampling $\Gamma_K$ from the Erdős-Rényi random graph model $\mathcal{G}(n,p)$ and they prove the following result:
\[JNW\] Assume that $$\frac{(2\log{n})^{\frac{1}{2}}+\omega(n)}{n^{\frac{1}{2}}}\leq p < 1 -\omega(n^{-2}),$$ and let $G$ be sampled from $\mathcal{G}(n,p)$. Then, asymptotically almost surely, the associated Right-Angled Coxeter group $K(G)$ virtually algebraically fibers.
In this paper we extend this result to the smallest possible range of $p$; in fact, we prove a hitting time type result. Namely, we show that as soon as $\Gamma_K$ has minimum degree $2$ then a.a.s. $K$ virtually algebraically fibers.
\[Main\] Let $G_0,G_1,\ldots, G_{\binom{n}{2}}$ denote the random graph graph process on $n$ vertices where $G_{i+1}= G_i\cup \{e_i\}$ and $e_i$ is picked uniformly at random from the non-edges of $G_i$. Let $T=\min_{t}\;\{t\;: \delta(G_t)=2\}$, then a.a.s. the random graph process is such that $K(G_m)$ virtually algebraically fibers if and only if $T\leq m <\binom{n}{2}$. In particular for any $p$ satisfying $$\frac{\log{n}+\log\log{n}+\omega(n)}{n}\leq p < 1-\omega(n^{-2})$$ and $G~\mathcal{G}(n,p)$, the random Right-Angled Coxeter group $K(G)$ virtually algebraically fibers a.a.s.
The paper is structured as follows. In Section \[sec:legalsystems\], we establish the graph-theoretic framework used in the remainder of the paper, and show that the minimum degree condition is in fact necessary for $n\geq 3$ and hence Theorem \[Main\] is best possible.
In Section \[sec:dense\], we look at the opposite extreme and prove Theorem \[Main\] for very large $p$. The proof presented in Section \[WeakBound\] mainly serves to provide the reader with the concepts and the intuition used later; it shows Theorem \[Main\] for most of the range of the edge probability. In Section \[sec:construction\], we present the construction used for the final part of the proof of Theorem \[Main\]. Then in Section \[sec:pseudorandom\] we prove Theorem \[Main\] in the remaining case in the pseudorandom setting, i.e., we prove the statement for every graph satisfying certain (deterministic) properties. Finally, in Section \[sec:proof\] we put the pieces together, and show that indeed in the remaining interval for $p$ in Theorem \[Main\], the random graph a.a.s. satisfies the conditions required in Section \[sec:pseudorandom\], thus completing the proof.
Notation
--------
Throughout, $V$ denotes the vertex set. Floor and ceiling signs are omitted whenever they are not crucial. We use the standard coupling between $\mathcal{G}(n,p)$ and the random graph process. All logarithms are base $e$.
Legal Systems {#sec:legalsystems}
=============
In this section we follow [@Jankiewicz_Virtually] and present the combinatorial game introduced there, which is used to construct virtual algebraic fiberings of Right-Angled Coxeter groups.
Let $G=(V,E)$ be a graph. We say that a subset $S\subset V$ is a *legal state* if both $S$ and $V\setminus S$ are non-empty [*connected subsets*]{} of $V$, i.e., the corresponding induced graphs are connected and non-empty.
For each $v \in V$, a *move at $v$* is a set $M_v\subseteq V$ satisfying the following:
- $v\in M_v$
- $N(v)\cap M_v=\emptyset$
Let $\mathcal{M}=\{M_v \; : v \in V\}$ denote a set of moves.
We will identify subsets of $V$ with elements of ${\mathbb{Z}_{2}}^{V}$ in the obvious way. Thus each state and each move corresponds to an element of ${\mathbb{Z}_{2}}^{V}$, and we will think of moves acting on states via group multiplication (or addition in this case).
For a graph $G$, a state $S\subseteq V(G)$, and a set of moves $\mathcal{M}=\{M_v \; : v \in V\}$, the triple $(G, S, \mathcal{M})$ is a *legal system* if for any element $g \in \langle\mathcal{M}\rangle$, $g(S)$ is a legal state of $G$.
If $(G,S,\mathcal{M})$ is a legal system, then the RACG $K(G)$ virtually algebraically fibers.
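The definition of a legal system can be tested by brute force on small graphs, since the orbit of $S$ under $\langle\mathcal{M}\rangle$ is $S$ plus the $\mathbb{Z}_2$-span of the moves; a sketch, with the 4-cycle as a hypothetical example (the move at each vertex flips that vertex together with its antipode):

```python
from itertools import combinations

def is_connected(adj, verts):
    # Is the subgraph induced on `verts` non-empty and connected?
    verts = set(verts)
    if not verts:
        return False
    stack = [next(iter(verts))]
    seen = {stack[0]}
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in verts and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def is_legal_state(adj, S):
    V = set(adj)
    return is_connected(adj, S) and is_connected(adj, V - S)

def is_legal_system(adj, S, moves):
    # Check that S + g is legal for every g in the Z_2-span of the moves.
    for r in range(len(moves) + 1):
        for combo in combinations(moves, r):
            g = set()
            for M in combo:
                g ^= set(M)  # symmetric difference = addition in Z_2^V
            if not is_legal_state(adj, set(S) ^ g):
                return False
    return True

# Hypothetical example: the 4-cycle with S = {0, 1}.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
moves = [frozenset({0, 2}), frozenset({1, 3})]
print(is_legal_system(adj, {0, 1}, moves))  # prints True
```

Each move here contains its vertex and avoids the vertex's neighbourhood, as the definition of a move requires.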
To elucidate the notion of a legal system, let us look at some toy examples (see Figure \[fig:examples\]) and ask whether each of these graphs contains a legal system.
\[fig:examples\]
\[cherry\] Let $G=(V,E)$ be a graph with three vertices $V=\{v,u_1,u_2\}$ and two edges $E=\
---
abstract: 'We prove that each superinjective simplicial map of the complex of curves of a compact, connected, nonorientable surface is induced by a homeomorphism of the surface, if $g + n \leq 3$ or $g + n \geq 5$, where $g$ is the genus of the surface and $n$ is the number of the boundary components.'
author:
- Elmas Irmak
title: Superinjective Simplicial Maps of the Complexes of Curves on Nonorientable Surfaces
---
Key words: Mapping class groups, simplicial maps, nonorientable surfaces
MSC: 32G15, 20F38, 30F10, 57M99
Introduction
============
Let $N$ be a compact, connected, nonorientable surface of genus $g$ (connected sum of $g$ copies of projective planes) with $n$ boundary components. The mapping class group, $Mod_N$, of $N$ is defined to be the group of isotopy classes of all self-homeomorphisms of $N$. The *complex of curves*, $\mathcal{C}(N)$, on $N$ is an abstract simplicial complex defined as follows: A simple closed curve on $N$ is called [*nontrivial*]{} if it does not bound a disk or a Möbius band, and it is not isotopic to a boundary component of $N$. The vertex set, $A$, of $\mathcal{C}(N)$ is the set of isotopy classes of nontrivial simple closed curves on $N$. A set of vertices forms a simplex in $\mathcal{C}(N)$ if they can be represented by pairwise disjoint simple closed curves. The geometric intersection number $i([a], [b])$ of $[a]$, $[b] \in A$ is the minimum number of points of $x \cap y$ where $x \in [a]$ and $y \in [b]$. A simplicial map $\lambda : \mathcal{C}(N) \rightarrow \mathcal{C}(N)$ is called [*superinjective*]{} if the following condition holds: if $[a], [b]$ are two vertices in $\mathcal{C}(N)$ such that $i([a], [b]) \neq 0$, then $i(\lambda([a]), \lambda([b])) \neq 0$. The main result of this paper is the following:
Let $N$ be a compact, connected, nonorientable surface of genus $g$ with $n$ boundary components. Suppose that either $g + n \leq 3$ or $g + n \geq 5$. If $\lambda : \mathcal{C}(N) \rightarrow \mathcal{C}(N)$ is a superinjective simplicial map, then $\lambda$ is induced by a homeomorphism $h : N \rightarrow N$ (i.e $\lambda([a]) = [h(a)]$ for every vertex $[a]$ in $\mathcal{C}(N)$).
------------------------------------------------------------------------
[The author was supported by Faculty Research Incentive Grant, BGSU.]{}
The mapping class groups and complex of curves on orientable surfaces are defined similarly as follows: Let $R$ be a compact, connected, orientable surface. Mapping class group, $Mod_R$, of $R$ is defined to be the group of isotopy classes of orientation preserving homeomorphisms of $R$. Extended mapping class group, $Mod_R^*$, of $R$ is defined to be the group of isotopy classes of all self-homeomorphisms of $R$. The complex of curves, $\mathcal{C}(R)$, on $R$ is defined as an abstract simplicial complex. The vertex set is the set of isotopy classes of nontrivial simple closed curves, where here nontrivial means it does not bound a disk and it is not isotopic to a boundary component of $R$. A set of vertices forms a simplex in $\mathcal{C}(R)$ if they can be represented by pairwise disjoint simple closed curves.
Ivanov proved that the automorphism group of the curve complex is isomorphic to the extended mapping class group on orientable surfaces. As an application he proved that isomorphisms between any two finite index subgroups are geometric. Ivanov’s results were proven by Korkmaz in [@K1] for lower genus cases. Luo gave a different proof of these results for all cases in [@L]. After Ivanov’s work, the mapping class group was viewed as the automorphism group of various geometric objects on orientable surfaces. These objects include Schaller’s complex (see [@Sc] by Schaller), the complex of pants decompositions (see [@M] by Margalit), the complex of nonseparating curves (see [@Ir3] by Irmak), the complex of separating curves (see [@BM1] by Brendle-Margalit, and [@MV] by McCarthy-Vautaw), the complex of Torelli geometry (see [@FIv] by Farb-Ivanov), the Hatcher-Thurston complex (see [@IrK] by Irmak-Korkmaz), and the complex of arcs (see [@IrM] by Irmak-McCarthy). As applications, Farb-Ivanov proved that the automorphism group of the Torelli subgroup is isomorphic to the mapping class group in [@FIv], and McCarthy-Vautaw extended this result to $g \geq 3$ in [@MV].
On orientable surfaces: Irmak proved, for genus at least two, that superinjective simplicial maps of the curve complex are induced by homeomorphisms of the surface, and used this to classify injective homomorphisms from finite index subgroups of the mapping class group to the whole group (they are geometric except for the closed genus two surface) in [@Ir1], [@Ir2], [@Ir3]. Behrstock-Margalit and Bell-Margalit proved these results for lower genus cases in [@BhM] and in [@BeM]. Brendle-Margalit proved that superinjective simplicial maps of the separating curve complex are induced by homeomorphisms, and using this they proved that an injection from a finite index subgroup of $K$ to the Torelli group, where $K$ is the subgroup of the mapping class group generated by Dehn twists about separating curves, is induced by a homeomorphism in [@BM1]. Shackleton proved that injective simplicial maps of the curve complex are induced by homeomorphisms in [@Sc] (he also considers maps between different surfaces), and he obtained strong local co-Hopfian results for mapping class groups.
On nonorientable surfaces: For odd genus cases, Atalan proved that the automorphism group of the curve complex is isomorphic to the mapping class group if $g + r \geq 6$ in [@A]. Irmak proved that each injective simplicial map from the arc complex of a compact, connected, nonorientable surface with nonempty boundary to itself is induced by a homeomorphism of the surface in [@Ir4]. She also proved that the automorphism group of the arc complex is isomorphic to the quotient of the mapping class group of the surface by its center. Atalan-Korkmaz proved that the automorphism group of the curve complex is isomorphic to the mapping class group in [@AK] if $g + r \geq 5$. They also proved that two curve complexes are isomorphic if and only if the underlying surfaces are homeomorphic. In this paper we use some results from [@AK]. Our techniques give simpler proofs of some of the results in [@AK]. Since an automorphism of $\mathcal{C}(N)$ is a superinjective simplicial map, our result implies that automorphisms of $\mathcal{C}(N)$ are induced by homeomorphisms of $N$, which was proved in [@AK].
Some small genus cases
======================
In this section we will prove our main results for $(g, n) \in \{(1, 0), (1, 1), (1, 2),$ $ (2, 0), (2, 1)\}$. We note that since simplicial maps preserve geometric intersection zero, superinjective simplicial maps preserve both the zero and the nonzero geometric intersection properties.
\[A\] Let $N$ be a compact, connected, nonorientable surface of genus $g$ with $n$ boundary components. Suppose that $(g, n) \in \{(1, 0), (1, 1), (1, 2),$ $(2, 0), (2, 1)\}$. If $\lambda : \mathcal{C}(N) \rightarrow \mathcal{C}(N)$ is a superinjective simplicial map, then $\lambda$ is induced by a homeomorphism $h : N \rightarrow N$ (i.e $\lambda([a]) = [h(a)]$ for every vertex $[a]$ in $\mathcal{C}(N)$).
If $(g, n) = (1, 0)$, $N$ is the projective plane. There is only one element (the isotopy class of a 1-sided curve) in the curve complex. Hence, any superinjective simplicial map is induced by the identity homeomorphism. If $(g, n) = (1, 1)$, $N$ is the Möbius band. There is only one element (the isotopy class of a 1-sided curve) in the curve complex. Hence, any superinjective simplicial map is induced by the identity homeomorphism.
If $(g, n) = (1, 2)$, there are only two elements in the curve complex (see [@Sc]). They are the isotopy classes of $a$ and $b$ as shown in Figure \[figure1\]. We see that $i([a], [b]) = 1$. So, the superinjective simplicial map cannot send both
---
abstract: 'Monolayer graphene contains two inequivalent local minima, valleys, located at $K$ and $K''$ in the Brillouin zone. There has been considerable interest in the use of these two valleys as a doublet for information processing. Herein I propose a method to resolve valley currents spatially, using only a weak magnetic field. Due to the trigonal warping of the valleys, a spatial offset appears in the guiding centre co-ordinate and is strongly enhanced by collimation. This can be exploited to spatially separate valley states. Based on current experimental devices, spatial separation is possible for densities well within current experimental limits. Using numerical simulations, I demonstrate the spatial separation of the valley states.'
author:
- 'Samuel S. R. Bladwell'
bibliography:
- 'Biblio.bib'
title: Valley separation via trigonal warping
---
[*Introduction*]{}: Due to the particular symmetry of the honeycomb lattice of monolayer graphene, the valence and conduction bands meet at 6 points. In the immediate vicinity of these points, the dispersion is linear and the Fermi surface consists of two inequivalent cones at points $K$ and $K'$ in the Brillouin zone. These valleys are independent and degenerate, and several works have proposed using them as a doublet for information processing, referred to as valleytronics, in analogy with spintronics[@Rycerz2007; @Schaibley2016]. Since the first proposal over a decade ago, considerable progress has been made with regard to the generation of both static valley polarisations and valley polarised currents[@Garcia2008; @Gunlycke2011; @Jiang2013; @Settnes2016]. Detecting valley polarisation, on the other hand, has proved to be difficult. An early proposal suggested using superconducting contacts[@Akhmerov2007]. More recently it has been shown that static valley polarisations can be induced and detected via second harmonic generation[@Golub2011; @Wehling2015]. Nonetheless, the detection of valley polarised currents remains an ongoing challenge, with implications for a wide variety of phenomena beyond valleytronics.
In this letter, motivated by recent developments in electron optics in graphene, I propose an approach for the detection of valley polarised currents. Over the past decade, a variety of improvements in material processing have allowed for high mobilities, with mean free paths of tens of microns[@Banszerus2016]. Very recently, several groups have considered how to form highly collimated electron beams in graphene; Barnard [*et al*]{} using absorptive metal contacts to form a pinhole aperture, and Liu [*et al*]{} using a parabolic [*p-n*]{} junction as a refinement of the Veselago lens[@Barnard2017; @Liu2017]. Herein I show that collimation, combined with the trigonal warping of the Dirac cone, results in a significant enhancement of the spatial separation between ballistic valley polarised currents. Combined with an appropriate device layout, this spatial separation can be exploited to individually address distinct valley states. Due to the significant enhancement, I find that the required trigonal warping is small, and the required density is well within current experimental limits. This thereby provides a novel method of detecting ballistic valley polarised currents.
[*Valley separation*]{}: The effective Hamiltonian for graphene near the charge neutrality point is Dirac-like, ${\cal H} \sim {\bm k} \cdot \boldsymbol{\sigma}$, where $\boldsymbol{\sigma}$ are the usual Pauli matrices and reflect the two constituent sub-lattices. At low densities, there is a four-fold degeneracy, due to spin and valley degrees of freedom. The two inequivalent valleys are located at $K$ and $K'$ respectively in the Brillouin zone, and close to the charge neutrality point are cylindrically symmetric. For higher densities, the Fermi surface in each valley $K$ and $K'$ exhibits trigonal warping, the emergence of which is shown in Fig. \[trigonal\].
With an applied transverse magnetic field, ${\bm p } \rightarrow \boldsymbol{\pi} = \bm p + e \bm A$, where $\bm A$ is the vector potential. If the applied field is weak, and the electron or hole density is high, the charge carrier dynamics can be described semi-classically, starting from the Heisenberg equation of motion for the operators, $$\begin{aligned}
\dot{\hat{\boldsymbol{\pi}}}= \frac{i}{\hbar}\left[{\cal H}, \hat{\boldsymbol{\pi}}\right] = e B \hat{\bm v }\times {\bf n}
\label{eqmotion}\end{aligned}$$ where ${\bf n}$ is the unit vector normal to the graphene plane, and $B$ is the magnitude of the applied magnetic field. Note that $\hat{\boldsymbol{\pi}}$ and $\hat{\bm v}$ are operators. Eq.  is general, and holds for a variety of dispersion relations[@Bladwell2015]. In the semiclassical limit, the operator equation, Eq. , is converted to a classical equation for the expectation values, which can then be trivially integrated to yield the real-space motion of an electron under an applied transverse magnetic field, $$\begin{aligned}
{\bm r}(t) = \frac{\boldsymbol{\pi} \times \bm n}{eB}
\label{eqmotion1}\end{aligned}$$ where $\bm r = \left< {\hat{ \bm r}}\right>$ and $\boldsymbol{\pi} = \left<\hat{\boldsymbol{\pi}}\right> $. This is the equation of cyclotron motion, with the electron following the equienergetic contours of the Fermi surface. Thus, at high densities, the semi-classical cyclotron orbits in graphene are trigonally warped.
[![The emergence of trigonal warping in the two valleys, with $K$ ($K'$) indicated in red (blue).[]{data-label="trigonal"}](Valley11.pdf "fig:"){width="11.00000%"}]{} (Seven further panels, Valley12.pdf–Valley24.pdf, share this caption.)
In Eq. , the guiding centres of the two valleys are identically located at $(x, y) = (0, 0)$. For an electron optics device, for example the pin-hole collimator designed in Ref. [@Barnard2017], the initial position of the wave packet is at $(0,0)$, the location of the injector. In addition, for a perfectly collimated beam of electrons, the initial velocity is fully aligned along the $x$ axis, parallel to the channel of the injector. For a cylindrically symmetric Fermi surface, the location of the guiding centre co-ordinate is unchanged. When the Fermi contour becomes trigonally warped, the guiding centre co-ordinate becomes offset, in proportion to the magnitude of the trigonal warping. This offset effect is presented in Fig. \[fig2\]. Since the two valleys exhibit trigonal warping with opposite signs, the guiding centre co-ordinates are offset to either side. The magnitude of the offset is proportional to the magnitude of the trigonal warping; as the trigonal warping increases, so does the guiding centre co-ordinate offset.
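As an illustration, the following numerical sketch traces the real-space orbit of Eq.  for a warped Fermi contour of the assumed form $p(\theta)=k_0(1+su\sin 3\theta)$ and locates the orbit centre of a perfectly collimated beam injected at the origin. All parameter values ($k_0=eB=1$, $u=0.1$) are illustrative rather than taken from a device:

```python
import numpy as np

def orbit(s, k0=1.0, u=0.1, eB=1.0, num=3601):
    """Real-space cyclotron orbit r = (pi x n)/eB for valley index s = +1 (K)
    or -1 (K'), assuming a warped Fermi contour p(theta) = k0*(1 + s*u*sin(3*theta))."""
    theta = np.linspace(0.0, 2.0*np.pi, num)
    p = k0*(1.0 + s*u*np.sin(3.0*theta))
    px, py = p*np.cos(theta), p*np.sin(theta)
    # the momentum-space contour rotated by a quarter turn, scaled by 1/eB
    return py/eB, -px/eB

def guiding_centre(s, **kw):
    """Orbit centre for a perfectly collimated beam injected at the origin
    with velocity along +x (the velocity is tangent to the real-space orbit)."""
    x, y = orbit(s, **kw)
    dx, dy = np.gradient(x), np.gradient(y)
    # injection point: motion along +x, i.e. dy ~ 0 with dx > 0
    i = np.argmin(np.abs(dy) + np.where(dx > 0.0, 0.0, np.inf))
    # shifting the orbit so the injection point sits at the origin puts
    # the centre (the average of the shifted contour) at -r_i
    return -x[i], -y[i]

cK, cKp = guiding_centre(+1), guiding_centre(-1)
```

By the mirror symmetry of the two contours, the two centres come out offset by equal and opposite amounts in one coordinate, while the cyclotron radius is identical for both valleys.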
To illustrate the effect analytically, I consider the following approximation for the Fermi momentum, $p_F$, $$\begin{aligned}
p_F \approx \hbar k_0 (1 + s u \sin3\theta )
\label{momentum}\end{aligned}$$ where $\theta$ is the polar angle, $\theta = \tan^{-1} k_y/k_x$, and $k_0 = \sqrt{\pi n}$. Here $k_x$ and $k_y$ are chosen according to Fig. \[brill\]. The valley index is $s=\pm1$, and $u = k_0 a/4$, where $a$ is the lattice constant of graphene. It is important to note that this analytic approach is only valid while $3u \ll 1$, that is, $u \sim 0.1$. The transverse velocity must vanish for col
---
abstract: 'Let $k$ be a knot in $S^3$. In [@HS], H.N. Howards and J. Schultens introduced a method to construct a manifold decomposition of the double branched cover of $(S^3, k)$ from a thin position of $k$. In this article, we will prove that if a thin position of $k$ induces a thin decomposition of the double branched cover of $(S^3,k)$ by Howards and Schultens’ method, then the thin position is the sum of prime summands obtained by stacking a thin position of one of the prime summands of $k$ on top of a thin position of another prime summand, and so on. Therefore, $k$ satisfies the nearly additivity of knot width (i.e. for $k=k_1\#k_2$, $w(k)=w(k_1)+w(k_2)-2$) in this case. Moreover, we will generalize the hypothesis to the property that a thin position induces a manifold decomposition whose thick surfaces consist of strongly irreducible or critical surfaces (so topologically minimal.)'
author:
- Jungsoo Kim
date: '13, Jan, 2010'
title: A note on the nearly additivity of knot width
---
Introduction and result
=======================
Let $k$ be a knot in $S^3$. In [@HS], H.N. Howards and J. Schultens introduced a method to construct a manifold decomposition of the double branched cover (abbreviated as *DBC*; we call the method *the H-S method*) of $(S^3, k)$ (see section \[section-DBC\]), and they proved that for $2$-bridge and $3$-bridge knots in thin position the DBC inherits a thin manifold decomposition (note that a knot in thin position may not induce a thin manifold decomposition by the H-S method in general, see [@HS] and [@HRS].) Indeed, if $k$ is a non-prime $3$-bridge knot, then $k=k_1\#k_2$ for $2$-bridge knots $k_1$ and $k_2$, and a thin position of $k$ is the sum of $k_1$ and $k_2$ obtained by stacking a thin position of one of the knots on top of a thin position of the other (see Corollary 2.5 of [@HS].) So $w(k)=w(k_1)+w(k_2)-2$, i.e. $k$ satisfies the *nearly additivity* of knot width (for $k=k_1\#k_2$, $w(k)=w(k_1)+w(k_2)-2$; see [@ST2] for more details on “the nearly additivity of knot width”.)
This raises the question of whether the property that a thin position of a knot induces a thin manifold decomposition of the DBC by the H-S method implies the nearly additivity of knot width in general. If a thin position of $k$ is the sum of an ordered stack of prime summands of $k$ where each summand is in a thin position (“*a sum of an ordered stack of prime summands*” means like the left of Figure \[figure-thin\], where the bottom summand is a Montesinos knot $M(0; (2,1), (3,1), (3, 1), (5, 1))$ in a thin position (this figure is borrowed from Figure 5.2.(c) of [@HK]) and the top summands are trefoils in thin position,) then the sum of an ordered stack like that in a different order also determines a thin position of $k$. So $k$ must satisfy the nearly additivity of knot width by the uniqueness of the prime factorization of $k$. In [@RS], Y. Rieck and E. Sedgwick proved that a thin position of the sum of small knots is the sum of an ordered stack in that manner, i.e. it satisfies the nearly additivity of knot width. But M. Scharlemann and A. Thompson proposed a way to construct an example contradicting the nearly additivity of knot width (see [@ST2].) Although R. Blair and M. Tomova proved that most of Scharlemann and Thompson’s constructions do not produce counterexamples to the nearly additivity of knot width (see [@BT],) the answer to the question is not obvious.
In this article, we will prove that the answer to this question is affirmative.
\[theorem-1\] If a thin position of a knot $k$ induces a thin manifold decomposition of the double branched cover of $(S^3,k)$ by Howards and Schultens’ method, then the thin position is the sum of prime summands obtained by stacking a thin position of one of the prime summands of $k$ on top of a thin position of another prime summand, and so on. Therefore, $k$ satisfies the nearly additivity of knot width in this case.
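The width arithmetic in the conclusion can be illustrated with a toy computation (an illustration of the count only, not of the proof): model a position of a knot by the list of its intersection counts with regular levels chosen between consecutive critical levels, a convention under which the trefoil in its thin (= bridge) position has counts $[2,4,2]$ and width $8$. Stacking identifies the top $2$-punctured level of one summand with the bottom one of the next, which is exactly the source of the $-2$:

```python
def width(counts):
    """Width of a position of a knot, given its intersection counts with
    regular levels chosen between consecutive critical levels."""
    return sum(counts)

def stack(c1, c2):
    """Stack a position with counts c2 on top of one with counts c1: the top
    level of the lower summand and the bottom level of the upper summand
    become a single 2-punctured level, counted once."""
    assert c1[-1] == 2 and c2[0] == 2
    return c1 + c2[1:]

trefoil = [2, 4, 2]               # thin (= bridge) position of the trefoil, width 8
granny = stack(trefoil, trefoil)  # [2, 4, 2, 4, 2], width 14 = 8 + 8 - 2
```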
In section \[section-critical\], we will generalize Theorem 1.1 by using the concept of a *critical surface*, which originates with D. Bachman (see [@Bachman1] and [@Bachman3] for the original definition and the recently modified definition of “critical surface”.) So we will obtain the following corollary.\
[**Corollary 5.3.**]{} *If a thin position of a knot $k$ induces a manifold decomposition of the double branched cover $M$ of $(S^3,k)$ by Howards and Schultens’ method in which each thick surface $H_+$ of the manifold decomposition of $M$ is strongly irreducible or critical in $M(H_+)$, then the thin position of $k$ is the sum of prime summands obtained by stacking a thin position of one of the prime summands on top of a thin position of another prime summand, and so on. Therefore, $k$ satisfies the nearly additivity of knot width in this case.*
Generalized Heegaard splittings
===============================
In this section, we will introduce some definitions about generalized Heegaard splittings. We use the notations and definitions by D. Bachman in [@Bachman3] through this section for convenience.
A *compression body* (a *punctured compression body* resp.) is a $3$-manifold which can be obtained by starting with some closed, orientable, connected surface, $H$, forming the product $H\times I$, attaching some number of $2$-handles to $H\times\{1\}$ and capping off all resulting $2$-sphere boundary components (some $2$-sphere boundaries resp.) that are not contained in $H\times\{0\}$ with $3$-balls. The boundary component $H\times\{0\}$ is referred to as $\partial_+$. The rest of the boundary is referred to as $\partial_-$.
A *Heegaard splitting* of a $3$-manifold $M$ is an expression of $M$ as a union $V\cup_H W$, where $V$ and $W$ are compression bodies that intersect in a transversally oriented surface $H=\partial_+V=\partial_+ W$. If $V\cup_H W$ is a Heegaard splitting of $M$ then we say $H$ is a *Heegaard surface*.
Let $V\cup_H W$ be a Heegaard splitting of a $3$-manifold $M$. Then we say the pair $(V,W)$ is a *weak reducing pair* for $H$ if $V$ and $W$ are disjoint compressing disks on opposite sides of $H$. A Heegaard surface is *strongly irreducible* if it is compressible to both sides but has no weak reducing pairs.
\[definition-GHS\] A *generalized Heegaard splitting* (GHS)[^1] H of a $3$-manifold $M$ is a pair of sets of pairwise disjoint, transversally oriented, connected surfaces, $\operatorname{Thick}(H)$ and $\operatorname{Thin}(H)$ (in this article, we will call the elements of each of both *thick surfaces* and *thin surfaces*, resp.), which satisfies the following conditions.
1. Each component $M'$ of $M-\operatorname{Thin}(H)$ meets a unique element $H_+$ of $\operatorname{Thick}(H)$ and $H_+$ is a Heegaard surface in $M'$. Henceforth we will denote the closure of the component of $M-\operatorname{Thin}(H)$ that contains an element $H_+\in\operatorname{Thick}(H)$ as $M(H_+)$.
2. As each Heegaard surface $H_+\subset M(H_+)$ is transversally oriented, we can consistently talk about the points of $M(H_+)$ that are “above” $H_+$ or “below” $H_+$. Suppose $H_-\in \operatorname{Thin}(H)$. Let $M(H_+)$ and $M(H'_+)$ be the submanifolds on each side of $H_-$. Then $H_-$ is below $H_+$ if and only if it is above $H'_+$.
3. There is a partial ordering on the elements of $\operatorname{Thin}(H)$ which satisfies the following: Suppose $H_+$ is an element of $\operatorname{Thick}(H)$, $H_-$ is a component of $\partial M(H_+)$ above $H_+$ and $H'_-$ is
---
abstract: 'We consider Gaussian multiple-input multiple-output (MIMO) channels with discrete input alphabets. We propose a non-diagonal precoder based on the X-Codes in [@Xcodes_paper] to increase the mutual information. The MIMO channel is transformed into a set of parallel subchannels using Singular Value Decomposition (SVD) and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and a $2\times 2$ real rotation matrix for each pair (parameterized with a single angle). This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, is solved by [*i*]{}) optimizing the rotation angle and the power allocation within each pair and [*ii*]{}) finding the optimal pairing and power allocation among the pairs. It is shown that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder by Cruz [*et al.*]{}, and is significantly better than the Mercury/waterfilling strategy by Lozano [*et al.*]{}. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.'
author:
- 'Saif Khan Mohammed, Emanuele Viterbo, Yi Hong, and Ananthanarayanan Chockalingam, [^1] [^2]'
title: |
Precoding by Pairing Subchannels to Increase\
MIMO Capacity with Discrete Input Alphabets
---
Mutual information, MIMO, OFDM, precoding, singular value decomposition, condition number.
Introduction
============
Many modern communication channels are modeled as a Gaussian multiple-input multiple-output (MIMO) channel. Examples include multi-tone digital subscriber line (DSL), orthogonal frequency division multiplexing (OFDM) and multiple transmit-receive antenna systems. It is known that the capacity of the Gaussian MIMO channel is achieved by beamforming a [*Gaussian input alphabet*]{} along the right singular vectors of the MIMO channel. The received vector is projected along the left singular vectors, resulting in a set of parallel Gaussian subchannels. Optimal power allocation between the subchannels is achieved by waterfilling [@Cover]. In practice, the input alphabet is [*not Gaussian*]{} and is generally chosen from a finite signal set.
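For reference, the SVD-plus-waterfilling recipe just described can be sketched in a few lines; the channel matrix and power budget below are illustrative, and `waterfill` finds the water level by bisection:

```python
import numpy as np

def waterfill(gains, P, iters=200):
    """Waterfilling over parallel Gaussian subchannels with power gains g_i:
    p_i = max(mu - 1/g_i, 0), with the water level mu found by bisection
    so that the total allocated power sums to P."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P + (1.0/g).max()
    for _ in range(iters):
        mu = 0.5*(lo + hi)
        if np.maximum(mu - 1.0/g, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5*(lo + hi) - 1.0/g, 0.0)

# illustrative 2x2 channel: decompose, pour power, sum the subchannel rates
H = np.array([[1.0, 0.5],
              [0.2, 1.5]])
g = np.linalg.svd(H, compute_uv=False)**2    # subchannel power gains
p = waterfill(g, P=2.0)
capacity = np.sum(np.log2(1.0 + g*p))        # bits per channel use
```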
We distinguish between two kinds of MIMO channels: [*i*]{}) [*diagonal*]{} (or parallel) channels and [*ii*]{}) [*non-diagonal*]{} channels.
For a diagonal MIMO channel with discrete input alphabets, assuming only power allocation on each subchannel (i.e., a diagonal precoder), Mercury/waterfilling was shown to be optimal by Lozano [*et al.*]{} in [@Lozano]. With discrete input alphabets, Cruz [*et al.*]{} later proved in [@cruz] that the optimal precoder is, however, non-diagonal, i.e., precoding needs to be performed across all the subchannels.
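The quantities being compared in these works are discrete-input mutual informations. As a sketch, the scalar-channel mutual information can be estimated by Monte Carlo (alphabet and sample size below are illustrative):

```python
import numpy as np

def mi_discrete_awgn(alphabet, snr, n=100000, seed=0):
    """Monte Carlo estimate, in bits, of I(X;Y) for y = sqrt(snr)*x + w,
    w ~ CN(0,1), x uniform over a discrete alphabet, using
    I = log2(M) - E[ log2 sum_{x'} exp(|y - s*x|^2 - |y - s*x'|^2) ], s = sqrt(snr)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(alphabet, dtype=complex)
    x = rng.choice(a, size=n)
    w = (rng.standard_normal(n) + 1j*rng.standard_normal(n))/np.sqrt(2.0)
    y = np.sqrt(snr)*x + w
    d = np.abs(y[:, None] - np.sqrt(snr)*a[None, :])**2   # distances to all symbols
    d0 = np.abs(y - np.sqrt(snr)*x)**2                    # distance to the sent symbol
    return np.log2(len(a)) - np.mean(np.log2(np.exp(d0[:, None] - d).sum(axis=1)))

qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j])/np.sqrt(2.0)
```

Unlike the Gaussian-input case, this quantity saturates at $\log_2 M$ bits as the SNR grows, which is what makes the power and precoder optimization nontrivial.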
For a general non-diagonal Gaussian MIMO channel, it was also shown in [@cruz] that the optimal precoder is non-diagonal. Such an optimal precoder is given by a fixed point equation, which requires a high complexity numeric evaluation. Since the precoder jointly codes all the $n$ inputs, joint decoding is also required at the receiver. Thus, the decoding complexity can be very high, specially for large $n$, as in the case of DSL and OFDM applications. This motivates our quest for a practical low complexity precoding scheme achieving near optimal capacity.
In this paper, we consider a general MIMO channel and a non-diagonal precoder based on X-Codes [@Xcodes_paper]. The MIMO channel is transformed into a set of parallel subchannels using Singular Value Decomposition (SVD) and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and the 2-dimensional real rotation matrices for each pair. These rotation matrices are parameterized with a single angle. This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, can be split into two tractable problems: [*i*]{}) optimizing the rotation angle and the power allocation within each pair and [*ii*]{}) finding the optimal pairing and power allocation among the pairs. It is shown by simulation that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder in [@cruz], and is significantly better than the Mercury/waterfilling strategy in [@Lozano]. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.
The rest of the paper is organized as follows. Section \[SMPrecoding\] introduces the system model and SVD precoding. In Section \[optimalprecoding\], we provide a brief review of the optimal precoding with discrete inputs in [@cruz] and the relevant MIMO capacity. In Section \[PrecodingX\], we present the precoding using X-Codes with discrete inputs and the relevant capacity expressions. In Section \[two\_subch\], we consider the first problem, which is to find the optimal rotation angle and power allocation for a given pair. This problem is equivalent to optimizing the mutual information for a Gaussian MIMO channel with two subchannels. In Section \[multi\_subch\], using the results from Section \[two\_subch\], we attempt to optimize the mutual information for a Gaussian MIMO channel with $n$ subchannels, where $n>2$. In Section \[sec\_ofdm\], we discuss the application of our precoding to OFDM systems. Finally, conclusions are drawn in Section \[conclusions\].
[*Notations*]{}: The field of complex numbers is denoted by $\mathbb{C}$, and ${\mathbb R}^+$ denotes the positive real numbers. Superscripts $^T$ and $^{\dag}$ denote transposition and Hermitian transposition, respectively. The $n\times n$ identity matrix is denoted by $\mathbf{I}_{n}$, and the zero matrix is denoted by $\mathbf{0}$. ${{\mathbb E}}[\cdot]$ denotes the expectation operator, $\Vert \cdot \Vert$ the Euclidean norm of a vector, and $\|\cdot\|_F$ the Frobenius norm of a matrix. Finally, we let $\mbox{tr}(\cdot)$ be the trace of a matrix.
System model and Precoding with Gaussian inputs {#SMPrecoding}
===============================================
We consider a $n_t\times n_r$ MIMO channel, where the channel state information (CSI) is known perfectly at both transmitter and receiver. Let ${\bf x} = (x_1,\cdots, x_{n_t})^T$ be the vector of input symbols to the channel, and let ${\bf H}=\{h_{ij}\}$, $i=1, \cdots, n_r$, $j=1, \cdots, n_t$, be a full rank $n_r\times n_t$ channel coefficient matrix, with $h_{ij}$ representing the complex channel gain between the $j$-th input symbol and the $i$-th output symbol. The vector of $n_r$ channel output symbols is given by $$\label{system_modeleq}
{\bf y} = \sqrt {P_T}{\bf H}{\bf x} + {\bf w}$$ where ${\bf w}$ is an uncorrelated Gaussian noise vector, such that ${{\mathbb E}}[{\bf w}{\bf w}^\dag]= {\bf I}_{n_r}$, and $P_T$ is the total transmitted power. The power constraint is given by $$\label{tx_pow}
{{\mathbb E}}[\Vert {\bf x} \Vert^2] = 1.$$ The maximum multiplexing gain of this channel is $n = \min(n_r,n_t)$. Let ${\bf u}=(u_1,\cdots, u_{n})^T \in{\mathbb C}^{n}$ be the vector of $n$ information symbols to be sent through the MIMO channel, with ${{\mathbb E}}[\vert u_i \vert^2] = 1, i = 1, \cdots, n$. Then the vector ${\bf u}$ can be precoded using an $n_t \times n$ matrix ${\bf T}$, resulting in ${\bf x}={\bf T}{\bf u}$.
The capacity of the deterministic Gaussian MIMO channel is then achieved by solving
\[cap\_gaussian\] $$\begin{aligned}
\label{cap_gaussian_mimo}
C({\bf H},P_T) & = & \max_{ {\bf K}_{\bf x} | \mbox{tr}({\bf K}_{\bf x} ) = 1} I({\bf x} ; {\bf y} | {\bf H}) \\ \nonumber
& \geq & \max_{ {\bf K}_{\bf u}, {\bf T} \,| \,
\mbox{tr}({\bf T}{\bf K}_{\bf u}{\bf T}^\dag) = 1} I({\bf u} ; {\bf y} | {\bf H})\end{aligned}$$
where $I({\bf x} ; {\bf y} | {\bf H})$ is the mutual information between ${\bf x}$ and ${\bf y}$, and ${\bf K}_{\bf x} {\stackrel
---
abstract: 'In the $\phi $-mapping theory, the topological current constructed from the order parameters can possess different inner structures. A difference in topology must correspond to a difference in physical structure. The transition between different structures happens at the bifurcation points of the topological current. In a self-interacting two-level system, the change of topological particles corresponds to a change of energy levels.'
address: |
$^1$ Institute of Applied Physics and Computational Mathematics,\
P.O. Box 8009(28), Beijing 100088, P.R. China\
$^2$ Institute of Theoretical Physics, Department of Physics,\
Lanzhou University, 730000, P. R. China
author:
- 'Li-Bin Fu$^1$, Jie Liu$^1$, Shi-Gang Chen$^1$, and Yi-Shi Duan$^2$'
title: 'The configuration of a topological current and physical structure: an application and paradigmatic evidence'
---
In recent years, topology has established itself as an important part of the physicist’s mathematical arsenal [@zh1]. The concepts of the topological particle and its current have been widely used in particle physics [@duan1; @hha] and topological defect theory [@zh4]. Here, topological particles are regarded as abstract particles, such as monopoles and point defects.
In this paper, we give a new understanding of the relation between topology and physics. Many physical systems can be described by employing order parameters. By making use of the $\phi $-mapping theory, we find that the topological current constructed from the order parameters can possess different inner structures. Topological properties are basic properties of a physical system, so a difference in the configuration of the topological current must correspond to a difference in physical structure.
Consider an $(n+1)$-dimensional system with an $n$-component vector order parameter field $\vec \phi ({\bf x})$, where ${\bf x}=(x^0,x^1,x^2,\cdots, x^n)$ are local coordinates. The unit direction field of $\vec \phi $ is defined by $$n^a=\frac{\phi ^a}{||\phi ||},\quad a=1,2,\cdots, n \label{c1unit}$$ where $$||\phi ||=(\phi ^a\phi ^a)^{1/2}.$$ The topological current of this system is defined by $$j^\mu (x)=\frac{\epsilon ^{\mu \mu _1\cdots \mu _n}}{A(S^{n-1})(n-1)!}\epsilon _{a_1\cdots a_n}\partial _{\mu _1}n^{a_1}\cdots \partial _{\mu _n}n^{a_n} \label{firstcurr}$$ where $A(S^{n-1})$ is the surface area of the $(n-1)$-dimensional unit sphere $S^{n-1}$. Obviously, the current is identically conserved, $$\partial _\mu j^\mu =0.$$ If we define the Jacobian by $$\epsilon ^{a_1\cdots a_n}D^\mu \left(\frac \phi x\right)=\epsilon ^{\mu \mu _1\cdots \mu _n}\partial _{\mu _1}\phi ^{a_1}\cdots \partial _{\mu _n}\phi ^{a_n}, \label{firstjac}$$ then, as has been proved before [@topc], this current takes the form $$j^\mu =\delta (\vec \phi )D^\mu \left(\frac \phi x\right). \label{deltfirstcurr}$$ From this we obtain $$j^\mu =\sum_{i=1}^l\beta _i\eta _i\delta (\vec x-\vec z_i(x^0))\frac{dz_i^\mu }{dx^0},$$ in which $z_i(x^0)$ are the zero lines where $\vec \phi ({\bf x})=0$, the positive integer $\beta _i$ and $\eta _i=\mathrm{sgn}\,D(\frac \phi {\vec x})$ are the Hopf index and the Brouwer degree of the $\phi $-mapping [@zh17], respectively, and $l$ is the total number of zero lines. This current is analogous to a current of point particles, the $i$-th carrying charge $\beta _i\eta _i$, and the zero lines $z_i(x)$ are just the trajectories of the particles; for convenience we call these point particles topological particles. The total topological charge of the system is then $$Q=\int_Mj^0d^nx=\sum_{i=1}^l\beta _i\eta _i,$$ where $M$ is an $n$-dimensional spatial section for a given $x^0$. This is a topological invariant and corresponds to some basic conditions of the physical system. However, it is important that the inner structure of the topological invariant can be realized in different configurations, i.e., the number of topological particles and their charges can change. Such a change in the configuration of the topological current must correspond to some change in the physical structure.
All of the above discussion is based on the condition that $$D\left( \frac \phi x\right) =\left. D^0\left( \frac \phi x\right) \right|
_{z_i}\neq 0.$$ When $\left. D\left( \frac \phi x\right) \right| _{z_i}=0$ at some points $%
p_i^{*}=z^{*}(x_c^0)$ at $x^0=x_c^0$ along the zero line $z_i(x^0)$, it has been shown that there exist several crucial cases of branch processes, which correspond to topological particles generating or annihilating at limit points and splitting, encountering or merging at bifurcation points. A vast amount of literature has been devoted to discussing these features of the evolution of topological particles [@bifur]. Here, we will not dwell on describing these evolutions, but instead focus on the physical substance of these processes.
As we have seen, all of these branch processes keep the total topological charge conserved, but it is very important that they change the number and the charges of the topological particles, i.e., they change the inner structure of the topological current. From our point of view, different configurations of the topological current correspond to different physical structures.
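For the planar case $n=2$, the Brouwer degree $\eta_i$ of an isolated zero can be computed numerically as the winding of the unit field $n^a$ around the zero. The sketch below (field and contour choices are illustrative) recovers the expected charges for a vortex, an antivortex, and a double vortex:

```python
import numpy as np

def winding_number(phi, center=(0.0, 0.0), radius=0.5, num=2000):
    """Brouwer degree of an isolated zero of a 2-component field phi(x, y):
    the net rotation of the unit field n = phi/|phi| along a small circle
    enclosing the zero, in units of 2*pi."""
    t = np.linspace(0.0, 2.0*np.pi, num, endpoint=False)
    xs = center[0] + radius*np.cos(t)
    ys = center[1] + radius*np.sin(t)
    ang = np.array([np.arctan2(phi(x, y)[1], phi(x, y)[0]) for x, y in zip(xs, ys)])
    dang = np.diff(np.concatenate([ang, ang[:1]]))
    dang = (dang + np.pi) % (2.0*np.pi) - np.pi   # wrap each step into (-pi, pi]
    return int(round(dang.sum()/(2.0*np.pi)))
```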
We consider $x^0$ as a parameter $\lambda $ of a physical system. Let us define $$f_i(\lambda )=\left. D^0\left( \frac \phi x\right) \right| _{z_i}.$$ As $\lambda $ changes, the value of $f_i(\lambda )$ changes along the zero lines $z_i(\lambda )$. At a critical point $\lambda =\lambda _c$ where $f_i(\lambda _c)=0$, the inner structure of the topological current will be changed in some way, and at the same time the physical structure will also be changed; i.e., the physical structure for $\lambda <\lambda _c$ will be different from that for $\lambda >\lambda _c$. The transition between these structures occurs at the bifurcation points where $f_i(\lambda )=0$.
As an application and example, let us consider a self-interacting two-level model introduced in Ref. [@wuniu]. The nonlinear two-level system is described by the dimensionless Schrödinger equation $$i\frac \partial {\partial t}\left(
\begin{array}{c}
a \\
b
\end{array}
\right) =H(\gamma )\left(
\begin{array}{c}
a \\
b
\end{array}
\right) \label{a}$$ with the Hamiltonian given by $$H(\gamma )=\left(
\begin{array}{cc}
\frac \gamma 2+\frac C2(|b|^2-|a|^2) & \frac V2 \\
\frac V2 & -\frac \gamma 2-\frac C2(|b|^2-|a|^2)
\end{array}
\right) , \label{b}$$ in which $\gamma $ is the level separation, $V$ is the coupling constant between the two levels, and $C$ is the nonlinear parameter describing the interaction. The total probability $|a|^2+|b|^2$ is conserved and is set to be $1$.
We write $a=|a|e^{i\varphi _1(t)}$ and $b=|b|e^{i\varphi _2(t)}$; the fractional population imbalance and relative phase are defined by $$z(t)=|b|^2-|a|^2,\;\;\;\;\;\varphi (t)=\varphi _2(t)-\varphi _1(t).
\label{bal}$$ From Eqs. (\[a\]) and (\[b\]), we obtain $$\frac d{dt}z(t)=-V\sqrt{1-z^2(t)}\sin [\varphi (t)] \label{ceq1}$$ $$\frac d{dt}\varphi (t)=\gamma +Cz(t)+\frac{Vz(t)}{\sqrt{1-z^2(t)}}\cos
[\varphi (t)]. \label{ceq2}$$
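Equations (\[ceq1\]) and (\[ceq2\]), or equivalently Eq. (\[a\]), are straightforward to integrate numerically. The following sketch (a fourth-order Runge-Kutta integrator with illustrative parameter values) conserves the total probability and, for $C=0$, reproduces the linear Rabi oscillation $z(t)=-\cos(Vt)$ for $a(0)=1$, $\gamma=0$:

```python
import numpy as np

def hamiltonian(psi, gamma, V, C):
    """Nonlinear Hamiltonian of the form of Eq. (b) in the text."""
    a, b = psi
    d = gamma/2.0 + (C/2.0)*(abs(b)**2 - abs(a)**2)
    return np.array([[d, V/2.0], [V/2.0, -d]], dtype=complex)

def rhs(psi, gamma, V, C):
    return -1j*hamiltonian(psi, gamma, V, C) @ psi

def evolve(psi0, gamma, V, C, t, dt=1e-3):
    """Fourth-order Runge-Kutta integration of i d(psi)/dt = H(gamma) psi."""
    psi = np.asarray(psi0, dtype=complex)
    for _ in range(int(round(t/dt))):
        k1 = rhs(psi, gamma, V, C)
        k2 = rhs(psi + 0.5*dt*k1, gamma, V, C)
        k3 = rhs(psi + 0.5*dt*k2, gamma, V, C)
        k4 = rhs(psi + dt*k3, gamma, V, C)
        psi = psi + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
    return psi

psi = evolve([1.0, 0.0], gamma=0.0, V=1.0, C=2.0, t=1.0)
z = abs(psi[1])**2 - abs(psi[0])**2   # population imbalance z = |b|^2 - |a|^2
```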
If we choose $x=2|a||b|\cos (\varphi ),$ $y=2|a||b|\sin (\varphi )$, it is easy to see that $x^2+y^2+z^
---
abstract: 'In Part I of this work we defined a generalization of the concept of effective resistance to directed graphs, and we explored some of the properties of this new definition. Here, we use the theory developed in Part I to compute effective resistances in some prototypical directed graphs. This exploration highlights cases where our notion of effective resistance for directed graphs behaves analogously to our experience from undirected graphs, as well as cases where it behaves in unexpected ways.'
author:
- 'George Forrest Young, Luca Scardovi, and Naomi Ehrich Leonard, [^1][^2][^3]'
bibliography:
- 'REFabrv.bib'
- 'ReferenceList.bib'
title: 'A New Notion of Effective Resistance for Directed Graphs—Part II: Computing Resistances'
---
Graph theory, networks, networked control systems, directed graphs, effective resistance
Introduction {#sec:intro}
============
In the companion paper to this work, [@Young13I], we presented a generalization of the concept of effective resistance to directed graphs. This extension was constructed algebraically to preserve, for directed graphs, the relationships that exist in undirected graphs between effective resistances and control-theoretic properties, including robustness of linear consensus to noise [@Young10; @Young11], and node certainty in networks of stochastic decision-makers [@Poulakakis12]. Further applications of this concept to directed graphs should be possible in formation control [@Barooah06], distributed estimation [@Barooah07; @Barooah08] and optimal leader selection in networked control systems [@Patterson10; @Clark11; @Fardad11].
Effective resistances have proved to be important in the study of networked systems because they relate global network properties to the individual connections between nodes, and they relate local network changes (e.g. the addition or deletion of an edge, or the change of an edge weight) to global properties without the need to re-compute these properties for the entire network (since only resistances that depend on the edge in question will change). Accordingly, the concept of effective resistance for directed graphs will be most useful if the resistance of any given connection within a graph can be computed, and if it is understood how to combine resistances from multiple connections. Computation and combination of resistances are possible for undirected graphs using the familiar rules for combining resistors in series and parallel.
In this paper, we address the problems of computing and combining effective resistances for directed graphs. In Section \[sec:back\] we review our definition of effective resistance for directed graphs from [@Young13I]. In Section \[sec:equalres\] we develop some theory to identify directed graphs that have the same resistances as an equivalent undirected graph. We use these results in Section \[sec:direct\] to recover the series-resistance formula for nodes connected by one directed path and the parallel-resistance formula for nodes connected by two directed paths in the form of a directed cycle. In Section \[sec:indirect\] we examine nodes connected by a directed tree and derive a resistance formula that has no analogue from undirected graphs.
Background and notation {#sec:back}
=======================
We present below some basic definitions of directed graph theory, as well as our definition of effective resistance. For more detail, the reader is referred to the companion paper [@Young13I].
A *graph* $\mathcal{G}$ consists of the triple $\left(\mathcal{V}, \mathcal{E}, A \right)$, where $\mathcal{V} = \left\{1, 2, \ldots, N \right\}$ is the set of nodes, $\mathcal{E} \subseteq \mathcal{V}\times\mathcal{V}$ is the set of edges and $A \in \mathbb{R}^{N\times N}$ is a weighted adjacency matrix with non-negative entries $a_{i,j}$. Each $a_{i,j}$ will be positive if and only if $\left( i,j \right) \in \mathcal{E}$, otherwise $a_{i,j} = 0$. The graph $\mathcal{G}$ is said to be *undirected* if $\left(i,j\right) \in \mathcal{E}$ implies $\left(j,i\right) \in \mathcal{E}$ and $a_{i,j} = a_{j,i}$. Thus, a graph will be undirected if and only if its adjacency matrix is symmetric.
The *out-degree* of node $k$ is defined as $d_k^{\text{\emph{out}}} = \sum_{j=1}^N{a_{k,j}}$. $\mathcal{G}$ has an associated *Laplacian* matrix $L$, defined by $L = D - A$, where $D$ is the diagonal matrix of node out-degrees.
A *connection* in $\mathcal{G}$ between nodes $k$ and $j$ consists of two paths, one starting at $k$ and the other at $j$ and which both terminate at the same node. A *direct connection* between nodes $k$ and $j$ is a connection in which one path is trivial (i.e. either only node $k$ or only node $j$) - thus a direct connection is equivalent to a path. Conversely, an *indirect connection* is one in which the terminal node of the two paths is neither node $k$ nor node $j$.
The graph $\mathcal{G}$ is *connected* if it contains a globally reachable node. Equivalently, $\mathcal{G}$ is connected if and only if a connection exists between any pair of nodes.
A *connection subgraph* between nodes $k$ and $j$ in the graph $\mathcal{G}$ is a maximal connected subgraph of $\mathcal{G}$ in which every node and edge form part of a connection between nodes $k$ and $j$ in $\mathcal{G}$. If only one connection subgraph exists in $\mathcal{G}$ between nodes $k$ and $j$, it is referred to as *the* connection subgraph and is denoted by $\mathcal{C}_\mathcal{G}(k,j)$.
Let $Q \in \mathbb{R}^{(N-1)\times N}$ be a matrix that satisfies $$\label{eqn:propq}
Q\mathbf{1}_N = \mathbf{0}, \; QQ^T = I_{N-1} \text{ and } Q^TQ = \Pi.$$
Using $Q$, we can compute the reduced Laplacian matrix for any graph as $$\label{eqn:lbar}
\overline{L} = QLQ^T,$$ and then for connected graphs we can find the unique solution $\Sigma$ to the Lyapunov equation $$\label{eqn:lyap}
\overline{L}\Sigma + \Sigma \overline{L}^T = I_{N-1}.$$ If we let $$\label{eqn:xdef}
X \mathrel{\mathop :}= 2Q^T \Sigma Q,$$ the resistance between two nodes in a graph can be computed as $$\label{eqn:dirres}
r_{k,j} \!=\! \left(\mathbf{e}_N^{(k)} \!-\! \mathbf{e}_N^{(j)}\right)^{\!T}\!\!\! X \!\left(\mathbf{e}_N^{(k)} \!-\! \mathbf{e}_N^{(j)}\right) \!=\! x_{k,k} + x_{j,j} - 2x_{k,j}.$$
Note that Definition \[P1:def:generalres\] in the companion paper [@Young13I] extends effective resistance computations to disconnected graphs as well.
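As a concrete illustration of Eqs. (\[eqn:propq\])–(\[eqn:dirres\]), the following sketch computes $r_{k,j}$ numerically for a connected graph. It builds $Q$ from an SVD, solves the Lyapunov equation by vectorization (for large graphs a dedicated Lyapunov solver would be preferable), and the function name is ours, not from the paper.

```python
import numpy as np

def effective_resistance(A, k, j):
    """Effective resistance between nodes k and j of a connected directed
    graph with adjacency matrix A, following Eqs. (propq)-(dirres)."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A  # Laplacian from node out-degrees
    # Rows of Q: orthonormal basis of the subspace orthogonal to the all-ones
    # vector, so that Q 1 = 0, Q Q^T = I_{N-1} and Q^T Q = Pi.
    _, _, vt = np.linalg.svd(np.ones((1, N)))
    Q = vt[1:]
    Lbar = Q @ L @ Q.T  # reduced Laplacian
    # Solve Lbar Sigma + Sigma Lbar^T = I by vectorizing the equation.
    n = N - 1
    M = np.kron(np.eye(n), Lbar) + np.kron(Lbar, np.eye(n))
    s = np.linalg.solve(M, np.eye(n).flatten(order="F"))
    Sigma = s.reshape((n, n), order="F")
    X = 2.0 * Q.T @ Sigma @ Q
    return X[k, k] + X[j, j] - 2.0 * X[k, j]
```

For an undirected graph this reduces to the classical effective resistance: on the unit-weight path $1\!-\!2\!-\!3$ the code returns $r_{1,3}=2$, the familiar series value.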
In some of the following results, we make use of *binomial coefficients*, defined as $$\label{eqn:bincoef}
\binom{n}{k} = \frac{n!}{k! \left(n-k\right)!}, \quad n,k \in \mathbb{Z}, \; 0 \leq k \leq n.$$
Directed and undirected graphs with equal effective resistances {#sec:equalres}
===============================================================
In this section we prove Proposition \[prop:DAP\], which provides sufficient conditions for the resistances in a directed graph to be the same as the resistances in an equivalent undirected graph. The proof relies on two lemmas, which we prove first.
Recall that a *permutation matrix* is a square matrix containing precisely one entry of $1$ in each row and column with every other entry being $0$.
\[lem:Pprop\] Let $P$ be a permutation matrix. Then $P$ has the following properties
1. \[eqn:Porthog\] $P^{-1} = P^T$,
2. \[eqn:PPi\] $P\Pi = \Pi P$,
3. \[eqn:PIPi\] $(P - I)\Pi = \Pi (P - I) = P - I$.
<!-- -->
1. This follows from the fact that the rows (or columns) of $P$ form an orthonormal set. See, e.g. Theorem 2.1.4 in [@Horn85].
2. Since $P$ contains precisely one $1$ in each row and column, $P\mathbf{1}_N = \mathbf{1}_N$ and $\mathbf{1}_N^T = \mathbf{1}_N^TP$. Thus $P\Pi = P -
---
abstract: 'We performed a radio recombination line (RRL) survey to construct a high-mass star-forming region (HMSFR) sample in the Milky Way based on the all-sky Wide-Field Infrared Survey Explorer (*All-WISE*) point source catalog. The survey was conducted with the Shanghai 65m Tianma radio telescope (TMRT), covering 10 hydrogen RRL transitions ranging from H98$\alpha$ to H113$\alpha$ (corresponding to rest frequencies of 4.5$-$6.9 GHz) simultaneously. Out of 3348 selected targets, we identified an HMSFR sample consisting of 517 sources traced by RRLs; a large fraction of this sample (486 sources) is located near the Galactic plane ($|$*b*$|$ $<$ 2$\degr$). In addition to the hydrogen RRLs, we also detected helium and carbon RRLs towards 49 and 23 sources, respectively. We cross-matched the RRL detections with the 6.7 GHz methanol maser sample built up in previous works for the same targets; as a result, 103 HMSFR sources were found to harbor both emissions. In this paper, we present the HMSFR catalog accompanied by the measured RRL line properties and a correlation with our methanol maser sample, which is believed to trace massive stars at earlier stages. The construction of an HMSFR sample consisting of sources in various evolutionary stages indicated by different tracers is fundamental for future studies of high-mass star formation in such regions.'
author:
- 'Hong-Ying Chen'
- Xi Chen
- 'Jun-Zhi Wang'
- 'Zhi-Qiang Shen'
- Kai Yang
bibliography:
- 'ref.bib'
title: 'A 4-6 GHz Radio Recombination Line Survey in the Milky Way[^1]'
---
Introduction\[1\]
=================
Formation of high-mass stars in the giant molecular clouds, though intensively studied, remains mysterious (see review papers, e.g., @ZY2007 [@tan2014]). To reveal the intrinsic nature of high-mass star formation (HMSF) at the very early stage, the fundamental and vital step is to construct a complete sample of high-mass star-forming regions (HMSFRs). Ultra-compact H II regions (UCH II regions) ($<$ 0.1 pc) are pockets of hot ionized gas surrounding an exciting central high-mass star. Such regions are excited by an early O- or B-type star whose ultraviolet photons are energetic enough to ionize neutral hydrogen. H II regions are widespread on a Galactic scale and have strong luminosity across multiple wavebands (ultraviolet, visible, infrared and radio); therefore, they are ideal tracers of HMSFRs.
H II region surveys in the Milky Way were first conducted at visible wavelengths [@sharpless1953; @sharpless1959; @gum1955; @rodgers1960]. However, extinction in the optical largely limited the reach of such studies. Extinction-free radio observations are therefore needed to construct a more complete sample of Galactic H II regions.
In 1965, radio recombination lines (RRLs) were first detected by @HM1965 from M 17 and Orion A. Their low optical depth at centimeter wavelengths makes them an optimal tracer of H II regions. RRL surveys were then performed over the next few decades, e.g. @MH1967 [@wilson1970; @reifenstein1970; @downes1980; @CH1987] and @lockman1989. The properties of the Galactic RRLs, such as their spatial distribution, line widths, LSR velocities and intensities, are probes of the morphological, chemical and dynamical state of the Milky Way (see @anderson2011). Thus, RRLs are important in a range of astrophysical topics, such as the Galactic structure (e.g. @HH2015 [@downes1980; @AB2009]) and the metallicity gradient across the Galactic disk, which helps in understanding the Galactic chemical evolution (GCE) [@wink1983; @shaver1983; @quireza2006; @balser2011].
More recent RRL surveys were performed with high-sensitivity facilities (e.g. @liu2013 [@alves2015; @anderson2011; @anderson2014]). In particular, the recent Green Bank Telescope (GBT) H II Region Discovery Survey (HRDS) detected 603 discrete RRL components from 448 targets considered to be H II regions, thus doubling the number of known Galactic H II regions [@anderson2011]. With the demonstration that H II regions can be reliably identified by their mid-infrared (MIR) morphology, @anderson2014 extended the HRDS sample to $\sim~8000$ candidate sources based on the *all-sky Wide-Field Infrared Survey Explorer* (*WISE*) MIR images (hereafter the catalog). The catalog contains $\sim~1500$ confirmed H II regions with observed RRL data in the literature, making it the most complete sample of H II regions to date.
The *WISE* data have four MIR bands: 3.4 $\mu$m, 4.6 $\mu$m, 12 $\mu$m and 22 $\mu$m, with angular resolutions of 6$\arcsec$.1, 6$\arcsec$.4, 6$\arcsec$.5 and 12$\arcsec$, respectively, which are sensitive to HMSFRs. Its complete sky coverage and up-to-date database provide an optimal target sample for identifying HMSFR candidates. To further extend the HMSFR sample traced by RRLs beyond the catalog, we conducted an RRL survey with the Shanghai 65m Tianma Radio Telescope (TMRT) based on the *WISE* point source catalog rather than the *WISE* MIR images. Since H II regions expand as they form and evolve, selecting targets from the point source catalog allows our sample to include more compact, and therefore younger, sources.
Compared to other single-dish RRL surveys, we concentrate more on the correlation and association with methanol masers to signpost different periods of star-forming processes. The class II methanol maser is a powerful tracer of the hot molecular cloud phase of HMSFRs [@minier2003; @ellingsen2006; @xu2008], when there is significant mass accretion. H II regions generally appear in more evolved phases of star formation, well before the main sequence [@walsh1998; @BM1996]. As suggested by @churchwell2002, due to beam-blending and thick optical depth, the densest and earliest H II regions are heavily obscured; more extended detectable UCH II regions probably only form once the central star reaches the main sequence and is no longer accreting significant mass. By cross-matching the RRL and class II methanol maser samples, the evolutionary stages of their hosts may be specified more accurately. Therefore, simultaneous observations of both the RRLs and 6.7 GHz methanol masers were conducted to investigate their associations. Notably, due to beam dilution, RRL emission from dense UCH II regions at earlier stages will be undetectable, thus our detected RRL sources will trace more extended and evolved H II regions. Previous studies have demonstrated that 6.7 GHz methanol masers can be excited in UCH II regions, including both extended and compact sources identified by radio continuum data (e.g @hu2016). Since RRL-detected UCH II regions are generally more evolved than those without RRL emission, RRL studies will be helpful for identifying which methanol maser sources are at more evolved stages.
In this paper, we report the RRL detections with the measured line parameters, as well as the results of a correlation with the 6.7 GHz methanol maser sample towards the same targets built by [@yang2017; @yang2019]. Section \[2\] describes the sample selection and observations. Section \[f3\] presents the results of the survey, followed by a discussion in Section \[4\]. We summarize our main conclusions in Section \[5\].\
Observations and Data Reduction\[2\]
====================================
Source Selection\[2.1\]
-----------------------
RRLs and 6.7 GHz methanol masers were observed simultaneously in our survey. The targets were selected with the following methodology: firstly, a cross-match was performed between the 6.7 GHz methanol maser catalog created by the Methanol Multibeam (MMB) Survey conducted with the Parkes telescope [@caswell2010; @caswell2011; @green2010; @green2012; @breen2015] and the *All-WISE* point source catalog. As a result, there are 502 MMB maser sources which have a *WISE* counterpart with a spatial offset within 7$\arcsec$. We only kept 473 sources with *WISE* data available from all four bands. A magnitude and color-color analysis was then applied to those 473 sources (see @yang2017 for details). 73$\%$ of those sources fell in the color region with well-constrained *WISE* color criteria: \[3.4\] $<$ 14 mag; \[4.6\] $<$ 12 mag; \[12\] $<$ 11 mag; \[22\] $<$ 5.5 mag; \[3.4\] - \[4.6\] $>$ 2, and \[12\] - \[22\] $>$ 2. To avoid repetition, we excluded sources located in the MMB survey region ($20\degr < l < 186\degr$ and $|$*b*$|$ $>$ 2$\degr$). Due to the limitation of the observing range, we also excluded those with a declination below $-30 \degr$. In total, 3348 *WISE* point sources were selected to search for RRL and methanol maser emission. In this sample, 1473 sources are located at a high Galactic latitude region with $|$*b*$|$ $>$ 2
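The magnitude and color cuts quoted above are straightforward to express in code. The sketch below is an illustrative filter only (the function name and the test magnitudes are ours); the actual selection of @yang2017 may involve additional quality checks.

```python
def passes_wise_cuts(w1, w2, w3, w4):
    """Apply the magnitude and color-color criteria quoted above to the four
    WISE band magnitudes [3.4], [4.6], [12] and [22] (all in mag)."""
    return (w1 < 14.0 and w2 < 12.0 and w3 < 11.0 and w4 < 5.5
            and (w1 - w2) > 2.0 and (w3 - w4) > 2.0)
```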
---
author:
- |
Nicholas T. Jones and S.-H. Henry Tye\
Laboratory for Elementary-Particle Physics, Cornell University, Ithaca, NY 14853.\
E-mail: ,
title: 'An Improved Brane Anti-Brane Action from Boundary Superstring Field Theory and Multi-Vortex Solutions'
---
Introduction
============
D-branes play a crucial role in string theory [@Polchinski:1998rq]. To understand the D-brane anti-D-brane ($\DD$) system involves off-shell physics. A powerful way to study it is to write down its effective space-time action from background-independent or boundary string field theory (BSFT) [@Witten:1992qy; @Shatashvili:1993kk; @Gerasimov:2000zp]. Following the work on the non-BPS D-brane effective action in open superstring theory [@Kutasov:2000aq], this program was carried out by two groups (KL [@Kraus:2000nj] and TTU [@Takayanagi:2000rz]). Here we seek to improve on their effective $\DD$ action and study its properties.
The effective action in Refs[@Kraus:2000nj; @Takayanagi:2000rz] has a number of interesting properties. It includes all powers of the single derivative of the tachyon field $T$, a feature very important for time dependent, or rolling tachyon, solutions [@Sen:2002nu; @Sugimoto:2002fp]. This feature is also necessary to lead to the fact that the lower dimensional branes appear as soliton solutions in tachyon condensation. In particular, KL/TTU find a codimension-two BPS brane as a solitonic solution, with the correct brane tension and the correct RR charge [@Sen:1999mg; @Witten:1998cd]. However, that vortex solution does not have “magnetic” flux inside it, contrary to our intuition from the Abelian Higgs model. As written, the KL/TTU effective action that involves all powers of the first derivative of $T$ does not respect the $U(1)\times U(1)$ gauge symmetry of the $\DD$ system; the derivatives of $T$ do not generalize to covariant derivatives, as is necessary since the complex tachyon field $T$ is charged under the relative $U(1)$. Without the correct gauge covariant action, it is not clear whether the vortex solution, and more generally the multi-vortex solutions, should have “magnetic” flux inside them or not.
We improve the $\DD$ effective action by restoring the covariance and the $U(1)\times U(1)$ gauge symmetry of the system so the tachyon field couples to one of the gauge fields as expected. This improved action is summarized in Eq.(\[action\]). Starting with this $\DD$ action we find analytic multi-vortex multi-anti-vortex solutions (all parallel with arbitrary positions and constant velocities), summarized in Eq.(\[general\_solution\]). The solution with $n$ vortices (*i.e.* $n$ parallel codimension-2 branes) and $m$ anti-vortices has total tension $\varepsilon_{p-2} = (n+m) \tau_{p-2}$ and Ramond-Ramond (RR) charge $\mu_{p-2} = (n-m) \tau_{p-2}g_s$ under the spacetime $(p-1)$-form potential. Here $\tau_{p-2}$ is the D$(p-2)$-brane tension and $g_s$ is the string coupling constant. The simplicity of the solution suggests that the $\DD$ effective action may be useful to study the brane dynamics. For $m=0$ and an appropriate choice of the magnetic flux, the solution is supersymmetric and corresponds to $n$ BPS D$(p-2)$-branes.
These solutions have a curious degeneracy. Each unit of winding (*i.e.* a vortex corresponding to a D-brane) can have up to one unit of “magnetic” flux inside it. That is, both the tension and the RR charge are independent of the presence (or absence) of the “magnetic” flux. We expect this degeneracy to be lifted by the quantum corrections to the $\DD$ action and/or the corrections from the higher derivative and gauge field-strength terms. However, it is not clear exactly how the degeneracy will be lifted.
One motivation to understand the $\DD$ system better is its role in cosmology. D-brane interaction in the brane world scenario provides a natural setting for an inflationary epoch in the early universe [@Dvali:1998pa; @Burgess:2001fx; @Garcia-Bellido:2001ky; @Jones:2002cv; @Herdeiro:2001zb] (see also [@Quevedo:2002xw] for a review and extensive list of references). There, the inflaton is simply the brane-brane separation while the inflaton potential comes from their interaction. The simplest such scenario involves a brane-anti-brane pair[@Dvali:2001fw; @Burgess:2001fx]. Toward the end of inflation, as the brane and the anti-brane approach each other and collide, a tachyon emerges and tachyon condensation (*i.e.* the tachyon field rolling down its potential) is expected to reheat the universe and produce solitons (even codimensional branes) that appear as cosmic strings in our universe [@Jones:2002cv]. The cosmic string density is estimated to be compatible with present day observations, but will be critically tested by cosmic microwave background radiation and gravitational wave detectors in the near future[@Sarangi:2002yt]. To study inflation and how it ends, we also construct the $(\DD)_p$ effective action when the brane is separated from the anti-brane. We find a separation dependent tachyon potential which predicts that the $\DD$ system is classically stable when the brane and anti-brane are further than $\frac{2\pi^2\ap}{2\ln2}$ apart, but can quantum mechanically decay with the tachyon tunneling through its potential. The critical separation agrees with the result known from other methods [@Banks:1995ch] aside from the factor of $2\ln2$.
The paper is organized as follows. In §2, we briefly review the BSFT derivation of the $\DD$ action. Then we use Lorentz and gauge symmetry to complete the terms in the effective action. As a check, we expand it to next to leading order and show agreement with known results. In §3, we present the general multi-vortex multi-anti-vortex solutions, with zero and non-zero gauge field strengths inside the vortices. We calculate the RR charge and the total energy of these solutions and reveal the degeneracy. We discuss how this degeneracy may be lifted. In §4, we construct the effective action when the D$p$-brane and the ${\overline{\text{D}}}p$-brane are separated. The barrier potential for tunneling is evaluated. §5 is the conclusion.
Brane Anti-Brane Effective Actions
==================================
Linear Tachyon Action from BSFT
-------------------------------
We summarize the brane anti-brane effective action from BSFT calculated by KL and TTU [@Kraus:2000nj; @Takayanagi:2000rz]. We restrict attention to D9-branes in type IIB theory, and generalize using T-Duality later. BSFT essentially extends the sigma-model approach to string theory[@Tseytlin:1989rr], in that (under certain conditions [@Witten:1992qy; @Gerasimov:2000zp]) the disc world-sheet partition function with appropriate boundary insertions gives the classical spacetime action. This framework for the bosonic BSFT was extended to the open superstring in [@Kutasov:2000aq] and formally justified in [@Marino:2001qc]. In the NS sector the spacetime action is $$\begin{aligned}
\label{definition_S}
S_{\text{spacetime}} &= -\int \mathcal DX\mathcal D\psi
\mathcal D\tilde\psi\;e^{-S_\Sigma-S_{\partial\Sigma}}.\end{aligned}$$ where $\Sigma$ is the worldsheet disc and $\partial\Sigma$ is its boundary. The worldsheet bulk disc action is the usual one $$\begin{aligned}
S_\Sigma &= \frac1{2\pi\ap}\int d^2z\; \partial X^\mu{\overline{\partial}} X_\mu + \frac1{4\pi}\int d^2z\left(\psi^\mu{\overline{\partial}}\psi_\mu + \tilde\psi^\mu\partial\tilde\psi_\mu\right)\\
&= \hf\sum_{n=1}^\infty nX_{-n}^\mu X_{n\;\mu} + i\sum_{r=\hf}^\infty\psi_{-r}^\mu\psi_{r\;\mu},\end{aligned}$$ after expanding the fields in the standard modes. To reproduce the Dirac-Born-Infeld (DBI) action for a single brane, the appropriate boundary insertion is the boundary pullback of the $U(1)$ gauge superfield to which the open string ends couple; for the $N$ brane $M$ anti-brane system, the string ends couple to the superconnection [@Quillen; @Witten:1998cd], hence the
---
author:
- 'J.-B. Delisle'
bibliography:
- 'laplace.bib'
title: |
Analytical model of multi-planetary resonant chains\
and constraints on migration scenarios
---
Introduction {#sec:introduction}
============
Mean motion resonances (MMR) between two planets are a natural outcome of the convergent migration of planets in a gas-disk [e.g., @weidenschilling_orbital_1985]. The planets initially form farther away from each other, and planet-disk interactions induce a migration of the planets. The period ratio between the planets decreases until they get captured in a MMR. The planets then continue to migrate whilst maintaining their period ratio at a rational value (2/1, 3/2, etc.). The eccentricities increase due to the resonant interactions, until they reach an equilibrium between the migration torque and the eccentricity damping exerted by the disk. The argument of the resonance, which is a combination of the mean longitudes of the two planets, enters into libration (oscillations around an equilibrium value).
For systems of three and more planets, once a pair of planets has been captured in a MMR, the other planets might also join this couple to form a chain of resonances. Each time a planet gets captured in the chain, it enters into a MMR (and thus maintains a constant and rational period ratio) with each of the other planets of the chain. The eccentricities of the planets and the resonant arguments of each pair find a new equilibrium. Such multi-planetary resonant chains are expected from simulations of planet migration [e.g., @cresswell_evolution_2006]. Recently, @mills_resonant_2016 showed that the four planets in the Kepler-223 system are in a 3:4:6:8 resonant chain (period ratios of 4/3, 3/2, and 4/3 between consecutive pairs of planets). Using transit timing variations (TTVs), the authors observed that the Laplace angles of the system are librating with small amplitudes. The Laplace angles are combinations of the mean longitudes of three planets in the chain, and the observation of their libration is evidence that the system is indeed captured in the resonant chain. Using numerical simulations, @mills_resonant_2016 showed that the observed orbital configuration is very well reproduced by a smooth convergent migration of the planets.
In this article, I present an analytical model of resonant chains. Analytical models have already been proposed, in particular to study the dynamics of the Laplace resonance (1:2:4 chain) between Io, Europa, and Ganymede [e.g., @henrard_orbital_1983]. However, while several numerical studies have been dedicated to the capture of planets in various resonant chains [e.g., @cresswell_evolution_2006; @papaloizou_dynamics_2010; @libert_trapping_2011; @papaloizou_consequences_2016], general analytical models have not yet been proposed. Recently, @papaloizou_three_2015 proposed a semi-analytical model of three-planet resonances taking into account only the interactions between consecutive planets in the chain, with a particular focus on the system [12:15:20 resonant chain, see also @steffen_transit_2012; @gozdziewski_laplace_2016]. This model is very similar to the studies of the Laplace resonance between the Galilean moons, but is not well suited in the general case. For instance, four-planet (or more) resonances are not considered. Moreover, for some three-planet resonances, the interactions between non-consecutive planets cannot be neglected. For instance, in a 3:4:6 resonant chain, each planet is locked in a first-order resonance with each of the other planets. In particular, the innermost and outermost planets are involved in a 2/1 MMR that strongly influences the dynamics of the system. I describe here a general model of resonant chains, with any number of planets, valid for any resonance order. I particularly focus on finding the equilibrium configurations (eccentricities, resonant arguments, etc.) around which a resonant system should librate. While a real system may be observed with significant amplitude of libration around the equilibrium, or could even have some angles circulating, the position of the equilibria still provides useful insights into the dynamics of the system. In Sect. \[sec:model\], I describe this analytical model, and the method I use to find the equilibrium configurations. In Sect. \[sec:application\], I apply the model to . I show that six equilibrium configurations exist for this resonant chain, and that the system is observed to be librating around one of them. I also show that knowing the current configuration of the system allows for interesting constraints to be put on its migration scenario, and in particular on the order in which the planets have been captured in the chain.
Model {#sec:model}
=====
I consider a planetary system with $n$ planets (which I denote with indices $1,...,n$ from the innermost to the outermost) orbiting around a star (index 0). I assume that the system is coplanar and is locked in a chain of resonances. In such a resonant chain, each pair of planets is locked in a MMR. For two planets $i<j$, I denote by $k_{j,i}/k_{i,j}$ the resonant ratio, such that $$k_{j,i} n_j - k_{i,j} n_i \approx 0,$$ where $n_i$ ($n_j$) is the mean motion of planet $i$ ($j$). I also introduce the degree of the resonance between planet $i$ and planet $j$ $$q_{i,j} = k_{j,i} - k_{i,j}.$$ At low eccentricities, resonances of a lower degree have a stronger influence on the dynamics of the system.
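As an illustration of these definitions, the sketch below recovers the resonant ratios $k_{j,i}/k_{i,j}$ and degrees $q_{i,j}$ for consecutive pairs in a chain by approximating observed period ratios with small rationals. The periods in the test are approximate literature values for a Kepler-223-like 3:4:6:8 chain and serve only as an example.

```python
from fractions import Fraction

def resonant_ratios(periods, max_int=10):
    """For consecutive planet pairs, approximate each period ratio
    P_out/P_in = k_out/k_in by a rational with small integers, and return
    (ratio, degree q = k_out - k_in) for each pair."""
    result = []
    for p_in, p_out in zip(periods, periods[1:]):
        ratio = Fraction(p_out / p_in).limit_denominator(max_int)
        result.append((ratio, ratio.numerator - ratio.denominator))
    return result
```

Feeding it the four approximate periods 7.384, 9.846, 14.789 and 19.726 days recovers the consecutive ratios 4/3, 3/2 and 4/3 of the 3:4:6:8 chain, each of degree $q=1$.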
In order to study the dynamics of these resonant chains, I generalize to $n$ planets the method developed in the case of two-planet resonances [@delisle_dissipation_2012; @delisle_resonance_2014]. The Hamiltonian of the system takes the form [@laskar_analytical_1991] $$\label{eq:hamposvel}
\H = -\sum_{i=1}^n \G\frac{m_0m_i}{2 a_i}
+ \sum_{1\leq i<j\leq n} \left(-\G\frac{m_i m_j}{||\vec{r}_i-\vec{r}_j||}
+ \frac{\vec{\tilde{r}}_i.\vec{\tilde{r}}_j}{m_0} \right),$$ where $\G$ is the gravitational constant, $m_i$ is the mass of body $i$, $a_i$ is the semi-major axis, $\vec{r}_i$ the position vector, and $\vec{\tilde{r}}_i$ the canonically conjugated momentum of planet $i$ [in astrocentric coordinates, see @laskar_analytical_1991]. The first sum on the right-hand side of Eq. (\[eq:hamposvel\]) is the Keplerian part of the Hamiltonian (planet-star interactions), while the second sum is the perturbative part (planet-planet interactions).
In the coplanar case (which I assume here), the system has $2 n$ degrees of freedom (DOF), with 2 DOF (4 coordinates) associated with each planet. As for two-planet resonances [e.g., @delisle_dissipation_2012], the number of DOF can be reduced by using the conservation of the total angular momentum (1 DOF), and by averaging over the fast angles (1 DOF). Therefore, the problem can be reduced to $2(n-1)$ DOF. Even with these reductions, the phase space is still very complex, especially for systems of many planets such as (chain of 4 planets, 6 DOF), and the problem is, in most cases, non-integrable. In this study, I focus on finding the fixed points of the averaged problem, which provide useful insight into the dynamics of the system, and especially into the values around which the angles of a resonant system should librate. The method described in the following is a generalization of the method presented in @delisle_dissipation_2012, which focuses on finding the fixed points for two-planet MMRs.
I denote by $\lambda_i$ and $\varpi_i$ the mean longitude and longitude of periastron of planet $i$ (in astrocentric coordinates), respectively. The actions canonically conjugated to the angles $\lambda_i$ and $-\varpi_i$ are the circular angular momentum $\Lambda_i$ and the angular momentum deficit [AMD, see @laskar_spacing_2000] $D_i$, respectively. These actions are defined as follows $$\begin{aligned}
\Lambda_i &=& \beta_i\sqrt{\mu_i a_i},\\
D_i &=& \Lambda_i-G_i = \Lambda_i \left(1-\sqrt{1-e_i^2}\right),\end{aligned}$$ where $G_i=\Lambda_i\sqrt{1-e_i^2}$ is the angular momentum of planet $i$, $\beta_i = m_i m_0/(m_0+m_i)$, $\mu_i = \G (m_0+m_
---
abstract: 'We introduce the Connection Scan Algorithm (CSA) to efficiently answer queries to timetable information systems. The input consists, in the simplest setting, of a source position and a desired target position. The output is a sequence of vehicles such as trains or buses that a traveler should take to get from the source to the target. We study several problem variations such as the earliest arrival and profile problems. We present algorithm variants that only optimize the arrival time or additionally optimize the number of transfers in the Pareto sense. An advantage of CSA is that it can easily adjust to changes in the timetable, allowing the easy incorporation of known vehicle delays. We additionally introduce the Minimum Expected Arrival Time (MEAT) problem to handle possible, uncertain, future vehicle delays. We present a solution to the MEAT problem that is based upon CSA. Finally, we extend CSA using the multilevel overlay paradigm to answer complex queries on nation-wide integrated timetables with trains and buses.'
author:
- |
Julian Dibbelt, Thomas Pajor, Ben Strasser, Dorothea Wagner\
Karlsruhe Institute of Technology (KIT), Germany\
work done while at KIT\
`algo@dibbelt.de` `thomas@tpajor.com`\
`strasser@kit.edu` `dorothea.wagner@kit.edu`
date: March 2017
title: 'Connection Scan Algorithm[^1]'
---
Introduction
============
We study the problem of efficiently answering queries to timetable information systems. Efficient algorithms are needed as the foundation of complex web services such as Google Transit or bahn.de, the German national railroad company's website. To use these websites, the user enters a desired departure stop, an arrival stop, and a vague moment in time, and the system computes a journey telling the user when to take which train. In practice, trains do not adhere perfectly to the timetable, and therefore it is necessary to be able to quickly adjust the scheduled timetable to the actual situation or to account in advance for possible delays.
At its core, the studied problem setting consists of the classical shortest path problem. This problem is usually solved using Dijkstra’s algorithm [@d-ntpcg-59], which is built around a priority queue. Algorithmic solutions that reduce timetable information systems to variations of the shortest path problem, solved with extensions of Dijkstra’s algorithm, are therefore common. The time-dependent and time-expanded graph [@pswz-emtip-08] approaches are prominent examples.
In this work, we present an alternative approach to the problem, namely the *Connection Scan Algorithm* (CSA). The core idea consists of doing away with the priority queue and replacing it with a list of trains sorted by departure time. Contrary to most competitors, CSA is therefore not built upon Dijkstra’s algorithm. The resulting algorithm is comparatively simple because the complexity inherent to the queue is absent. Further, Dijkstra’s algorithm spends most of its execution time in queue operations. Our approach replaces these with faster, more elementary operations on arrays. The resulting algorithm is therefore also capable of achieving low query running times. A further advantage of our approach is that the data structure consists primarily of an array of trains sorted by departure time. Maintaining a sorted array is easy even when train schedules change.
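The core idea can be sketched in a few lines. The following is a minimal earliest-arrival sketch under simplifying assumptions (no footpaths, no minimum change times, invented timetable data); it is not the paper's full implementation:

```python
# A connection is (dep_stop, arr_stop, dep_time, arr_time); the whole
# timetable is a single array of connections sorted by departure time.
# One linear scan over this array answers an earliest-arrival query,
# with no priority queue involved.
INF = float("inf")

def earliest_arrival(connections, source, target, dep_time):
    arrival = {source: dep_time}
    for dep_stop, arr_stop, t_dep, t_arr in connections:
        # The connection can be taken if we reach its departure stop in time
        # and it improves the arrival time at its arrival stop.
        if arrival.get(dep_stop, INF) <= t_dep and t_arr < arrival.get(arr_stop, INF):
            arrival[arr_stop] = t_arr
    return arrival.get(target, INF)

timetable = sorted([
    ("A", "B", 8, 9),
    ("A", "C", 8, 12),   # slow direct vehicle
    ("B", "C", 10, 11),  # faster with one transfer
], key=lambda c: c[2])

# earliest_arrival(timetable, "A", "C", 8) -> 11 (via B)
```

Note how updating the `arrival` map is a constant-time array-style operation, which is exactly the replacement for the queue operations discussed above.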
Modern timetable information systems do not only optimize the arrival time. A common approach consists of optimizing several criteria in the Pareto sense [@mswz-tima-07; @dms-mcspt-08; @bm-somcs-09]. The practicality of this approach was shown in [@mw-pspof-01]. The most common second criterion is the number of transfers. Another often requested criterion is the price [@ms-pltcm-06], but we omit this criterion from our study because of very complex real-world pricing schemes. A further commonly considered problem variant consists of profile queries. In this variant the input does not contain a departure time. Instead, the output should contain all optimal journeys between two stops over all possible departure times. As a further problem variant, we propose and study the minimum expected arrival time (MEAT) problem setting to compute delay-robust journeys.
CSA does not possess a heavyweight preprocessing step. This makes the algorithm comparatively simple, but it also makes the running time inherently dependent on the timetable’s size. For very large instances this can be a problem. We therefore study an algorithmic extension called Connection Scan Accelerated (CSAccel), which combines a multilevel overlay approach [@sww-daola-99; @hsw-emlog-08; @dgpw-crprn-13] with CSA.
#### Related Work.
Finding routes in transportation networks is the focus of many research projects and thus many publications on this subject exist. The published papers can be roughly divided into two categories depending on whether the studied network is timetable-based. As our research focuses on timetable routing, we restrict our exposition to it and refer to a recent survey [@bdgmpsww-rptn-16] for other routing problems.
Some techniques are preprocessing-based and have an expensive and slow startup phase. The advantage of preprocessing is that it decreases query running times. A major problem with preprocessing-based techniques is that the preprocessing needs to be rerun each time the timetable changes. We start by providing an overview of techniques without preprocessing and afterwards describe the preprocessing-based techniques.
The traditional approach consists of extending Dijkstra’s algorithm. Two common methods exist and are called the time-dependent and time-expanded graph models [@pswz-emtip-08]. In [@dkp-pcbcp-12] the time-dependent model has been refined by coloring graph elements. The authors further introduce SPCS, an efficient algorithm to answer earliest arrival profile queries. A parallel version called PSPCS is also introduced. We experimentally compare CSA to SPCS, to the colored time-dependent model and the basic time-expanded model.
Another interesting preprocessing-less technique is called RAPTOR and was introduced in [@dpw-rbptr-14]. Just as CSA it does not employ a priority queue and therefore is not based on Dijkstra’s algorithm. It inherently supports optimizing the number of transfers in the Pareto-sense in addition to the arrival time. A profile extension called rRAPTOR also exists. We experimentally compare CSA with RAPTOR and rRAPTOR.
Adjusting the time-dependent and time-expanded graphs to account for realtime delays is conceptually straightforward but the details are non-trivial and difficult as the studies of [@ms-etipd-09] and [@cddfgpz-egbmd-14] show.
In [@bgm-fdsut-10] SUBITO was introduced. It is an acceleration of Dijkstra’s algorithm applied to the time-dependent graph model. It works by using lower bounds on the travel time between stops to prune the search. As slowing down trains does not invalidate the lower bounds, most real-world train delays can be incorporated. However, CSA supports more flexible timetable updates. For example, contrary to SUBITO, CSA supports the efficient insertion of connections between stops that were previously not directly connected.
In [@w-tbptr-15] trip-based routing (TB) was introduced. It works by computing all possible transfers between trains in a preprocessing step. The preprocessing running times are still well below those of other preprocessing-based techniques but non-negligible. Unfortunately, the achieved query speedup lags behind techniques with more extensive preprocessing. In [@w-tbptr-16] the technique was extended with a significantly more heavy-weight preprocessing algorithm that stores a large number of trees to achieve higher speedups.
Many more preprocessing-based techniques exist. For example, in [@g-ctnrt-10] Contraction Hierarchy, a very successful technique for road routing, was adapted for timetable-based routing. In [@ddpw-ptl-15], Hub-labeling, another successful technique for roads, was also adapted for timetable-based routing. Another labeling-based approach was proposed in [@wlyxz-erppt-15]. In addition to SUBITO, [@bgm-fdsut-10] introduces $k$-flags. $k$-flags is an adaptation of Arc-Flags [@l-aefea-04], a further successful technique for roads, to timetables. Another well-known preprocessing-based technique is called Transfer Patterns (TP). It was introduced in [@bceghrv-frvlp-10] and was refined since then over the course of several papers. In [@bs-fbspt-14] the authors combined frequency-based compression with routing and used it to decrease the TP preprocessing running times. In [@bhs-stp-16] TP was combined with a bilevel overlay approach to further decrease preprocessing running times. CSAccel is not the first technique to combine multilevel routing with timetables. This was already done in [@swz-umlgt-02].
We postpone giving an overview over the existing papers related to the MEAT problem until Section \[sec:related-work-meat\], as the details of the MEAT problem are described in Section \[sec:
---
abstract: |
A scheme for conditionally generating Hermite polynomial excited squeezed vacuum states (HESVS) is proposed. Injecting a two-mode squeezed vacuum state (TMSVS) into a beam splitter (BS) and counting the photons in one of the output channels, the conditional state in the other output channel is just a HESVS. To exhibit a number of nonclassical effects and the non-Gaussianity, we mainly investigate the photon number distribution, sub-Poissonian distribution, quadrature component distribution, and quasi-probability distribution of the HESVS. We find that its nonclassicality closely relates to the control parameter of the BS, the squeezing parameter of the TMSVS, and the photon number of the conditional measurement. These results further demonstrate that performing a conditional measurement on a BS is an effective approach to generate non-Gaussian states.
**ocis:** (270.5570) Quantum detectors; (270.4180) Multiphoton processes; (270.5290) Photon statistics
**Keywords:** Conditional measurement; beam splitter; Wigner function; nonclassicality
author:
- 'Xue-xiang Xu$^{1,\dag }$, Hong-chun Yuan$^{2}$ and Hong-yi Fan$^{3}$'
title: Generating Hermite polynomial excited squeezed states by means of conditional measurements on a beam splitter
---
Introduction
============
Quantum state engineering has been a subject of increasing interest for constructing various novel nonclassical states in quantum optics and quantum information processing[@a1; @a2]. From a theoretical point of view, the simplest way of generating nonclassical field states is to apply the photon creation operation to classical states such as the thermal and coherent states[@a3; @a4; @a5]. Such nonclassical states, e.g., the single-photon-added coherent state[@a6] and the single-photon-added thermal state[@a7], have been realized experimentally. Subsequently, it has been demonstrated that states obtained by subtracting photons from traditional quantum states exhibit an abundance of nonclassical properties[@a8; @a9; @a10; @a11]. Photon subtraction or addition can improve entanglement between Gaussian states[@a12], loophole-free tests of Bell’s inequality[@a13], and quantum computing[@a14].
To meet the requirements of the development of quantum optics and quantum information tasks, some nonclassical states are explored by performing different combinations of photon subtraction and photon addition[@a15; @a16; @a17; @a18; @a19; @a20], which have different properties. Kim et al.[@a21] discussed single-photon adding-then-subtracting (or single-photon subtracting-then-adding) coherent states (or thermal states) to probe the quantum commutation rule $\left[ a,a^{\dag}\right] =1$. Lee et al.[@a17] investigated the nonclassicality of field states when the photon subtraction-then-addition operation or the photon addition-then-subtraction operation is applied to the coherent state (or thermal state), respectively. Yang and Li[@a18] analyzed multiphoton addition followed by multiphoton subtraction ($a^{l}a^{\dag k}$) and its inverse ($a^{\dag l}a^{k}$) on an arbitrary state. Recently, Lee and Nha[@a23] proposed a coherent superposition of photon addition and subtraction, $ta+ra^{\dag}$ ($\left\vert t\right\vert ^{2}+\left\vert r\right\vert ^{2}=1$), acting on a coherent state and a thermal state. More recently, we investigated the nonclassical properties of optical fields generated by Hermite-excited coherent states[@a24] and Hermite-excited squeezed thermal states[@a25].
On the other hand, another promising method for generating highly nonclassical states of optical fields is conditional measurement[@a25a; @a26; @a27; @a28; @a28a]. Namely, when a system is prepared in an entangled state of two subsystems and a measurement is performed on one subsystem, the quantum state of the other subsystem is reduced to a new state. In particular, it turned out that conditional measurement on a beam splitter may be advantageously used for generating new classes of quantum states[@a28; @a28a]. Dakna’s group used conditional measurement on the BS to generate cat-like states[@a29]. Podoshvedov et al.[@a27] proposed an optical scheme for generating both a displaced photon and a displaced qubit via conditional measurement. In Ref.[@a30], they proposed to create arbitrary Fock states via conditional measurement on the BS. In addition, conditional output measurement on the BS may be used to produce photon-added states for a large class of signal-mode quantum states, such as thermal, coherent, and squeezed states[@a31]. Similarly, photon-subtracted states can be produced by means of conditional measurement on the BS[@a32]. Therefore, based on conditional measurement on the BS, it is possible to generate and manipulate various nonclassical optical fields in a real laboratory.
In this paper, we study the Hermite polynomial excited squeezed vacuum state (HESVS), a kind of non-Gaussian quantum state, generated by conditional output measurement on a BS. The calculations show that when a two-mode squeezed vacuum state (TMSVS) is injected into the input channels and the photon number of the mode in one of the output channels is measured, the mode in the other output channel is prepared in a conditional state that has the typical features of a Hermite polynomial excited squeezed state. To exhibit the nonclassical properties of this conditional state, we mainly analyze the state in terms of the photon number distribution, sub-Poissonian distribution, quadrature component distribution, and quasi-probability distribution, including the Wigner function (WF) and Husimi function (HF). The paper is organized as follows. Section 2 presents the basic scheme for generation of the HESVS and its normalization, related to a Legendre polynomial. The nonclassical properties of the HESVS are analytically and numerically studied in Sections 3 and 4. The results indicate that the conditional HESVS is strongly nonclassical and non-Gaussian due to the presence of a partially negative WF. Finally, a summary and concluding remarks are given in Section 5.
Generation of Hermite polynomial excited squeezed state
=======================================================
It is well known that the input-output relations at a lossless beam splitter can be characterized by the SU(2) Lie algebra. In the Schrödinger picture, the role played by the beam splitter (BS) upon the input state $\rho _{in}$ results in the output state $$\rho _{out}=\hat{B}\rho _{in}\hat{B}^{\dag }, \label{1.1}$$where $\hat{B}=\exp \left[ \theta \left( a^{\dag }b-ab^{\dag }\right) \right]$ corresponds to the unitary operator in terms of the creation (annihilation) operators $a^{\dag }$ ($a$) and $b^{\dag }$ ($b$) for modes $a$ and $b$, whose transformations satisfy[@a30] $$\begin{aligned}
\hat{B}a\hat{B}^{\dag }& =a\cos \theta -b\sin \theta , \notag \\
\hat{B}b\hat{B}^{\dag }& =a\sin \theta +b\cos \theta . \label{1.1a}\end{aligned}$$Moreover, $\cos \theta $ and $\sin \theta $ are the transmittance and reflectance of the beam splitter, respectively. Note that the global phase factor of the BS may be omitted without loss of generality. For the sake of simplicity, we also assume that $\theta $ is tunable in the range $\left[ 0,\pi /2\right]$. Under special circumstances, when $\theta =0$ or $\theta =\pi /2$, the BS corresponds to total transmission and total reflection, respectively. For $\theta =\pi /4$, the BS is symmetric, i.e., a 50/50 BS.
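The transformation in Eq. (\ref{1.1a}) can be verified numerically. The sketch below uses a truncated Fock space (the cutoff and the value of $\theta$ are arbitrary choices for illustration); since the BS generator conserves the total photon number, the identity is exact on few-photon states despite the truncation:

```python
import numpy as np
from scipy.linalg import expm

N = 8                                         # Fock-space cutoff per mode
a1 = np.diag(np.sqrt(np.arange(1, N)), 1)     # single-mode annihilation operator
I = np.eye(N)
a = np.kron(a1, I)                            # mode a on the two-mode space
b = np.kron(I, a1)                            # mode b

theta = 0.3
B = expm(theta * (a.conj().T @ b - a @ b.conj().T))
lhs = B @ a @ B.conj().T                      # B a B^dag
rhs = np.cos(theta) * a - np.sin(theta) * b   # a cos(theta) - b sin(theta)

# Compare both sides on the one-photon state |0,1> (basis index n*N + m):
v = np.zeros(N * N)
v[1] = 1.0
# Both lhs @ v and rhs @ v give -sin(theta) |0,0>.
```

This kind of truncated-Fock-space check is a standard sanity test when working with BS and squeezing operators numerically.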
Hermite polynomial excited squeezed state
-----------------------------------------
A two-mode squeezed vacuum state (TMSVS) is the correlated state of two field modes $a$ and $b$ (signal and idler) that can be generated by a nonlinear medium. Theoretically, the TMSVS is obtained by applying the unitary operator $S_{2}\left( r\right)$ to the two-mode vacuum, $$\left\vert \Psi \right\rangle _{ab}=S_{2}\left( r\right) \left\vert 0,0\right\rangle =\cosh ^{-1}r\,e^{a^{\dag }b^{\dag }\tanh r}\left\vert 0,0\right\rangle , \label{1.2}$$where $S_{2}\left( r\right) =\exp \left[ r\left( a^{\dag }b^{\dag }-ab\right) \right]$ is the two-mode squeezing operator and the value of $r$ determines the degree of squeezing. The larger $r$, the more the state is squeezed. In particular, when $r=0$, $\left\vert \Psi \right\rangle _{ab}$ reduces to the two-mode vacuum state $\left\vert 0,0\right\rangle $.
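Expanding the exponential in Eq. (\ref{1.2}) gives the Schmidt form $\left\vert \Psi \right\rangle _{ab}=\cosh ^{-1}r\sum_{n}\tanh ^{n}r\left\vert n,n\right\rangle$, so the joint photon-number distribution is $p(n)=\tanh ^{2n}r/\cosh ^{2}r$. A quick numerical sketch (the truncation `nmax` and the value of $r$ are illustrative choices):

```python
import math

def tmsvs_pn(r, nmax=200):
    """Photon-number distribution p(n) = tanh(r)^(2n) / cosh(r)^2 of a TMSVS."""
    t2, c2 = math.tanh(r) ** 2, math.cosh(r) ** 2
    return [t2 ** n / c2 for n in range(nmax)]

p = tmsvs_pn(r=1.0)
norm = sum(p)                                   # geometric series, sums to 1
mean_n = sum(n * pn for n, pn in enumerate(p))  # equals sinh(r)^2 per mode
```

The mean photon number $\sinh^2 r$ per mode makes concrete the statement that larger $r$ means a more strongly squeezed (and more excited) state.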
\[Fig1-1\] ![[Preparation scheme of HPESVS. When a TMSVS is mixed by a beam splitter and the number
---
abstract: 'The CELESTE atmospheric Cherenkov detector ran until June 2004. It has observed the blazars Mrk 421, 1ES 1426+428 and Mrk 501. We significantly improved our understanding of the atmosphere using a LIDAR, and of the optical throughput of the detector using stellar photometry. The new data analysis provides better background rejection. We present our light curve for Mrk 421 for the 2002-2004 season and a comparison with X-ray data and the 2004 observation of 1ES 1426+428. The new analysis will allow a more sensitive search for a signal from Mrk 501.'
address: 'CENBG, Domaine du Haut-Vigneau, BP 120, 33175 Gradignan Cedex, France'
author:
- 'Brion, E.'
- and the CELESTE collaboration
title: 'Blazar observations above 60 GeV: the Influence of CELESTE’s Energy Scale on the Study of Flares and Spectra'
---
CELESTE,Cherenkov,Mrk 421,1ES 1426+428,Mrk 501 95.85.Pw ,98.54.Cm
Introduction
============
CELESTE was a Cherenkov experiment using 53 heliostats of the former Électricité de France solar plant at the Thémis site in the French Pyrenees. It detected Cherenkov light from electromagnetic showers produced in the atmosphere by $\gamma$-rays coming from high energy astrophysical sources. The light is reflected to secondary optics and photomultipliers installed at the top of the tower. Finally, the signal is sampled for analysis (Paré 2002).
To constrain the energy scale of the experiment we improved the optics simulation, which is now in good agreement with the data. The data analysis has also been improved, giving better background rejection. We present the light curve for Mrk 421.
Constraining the energy scale {#sec:simulation}
=============================
The simulation has been reexamined to reduce the uncertainties on the energy scale of the experiment (Brion 2003). The LIDAR operating on the site for atmospheric monitoring provided a better determination of the atmospheric extinction (Bussóns Gordo 2004). A stellar photometry study, focussing on the comparison between simulations and data on bright stars’ currents, has been done on the star 51 UMa (M$_\mathrm{B}=6.16$) which is in the field of view (FOV) of Mrk 421. This showed that the old simulation was too optimistic. All mirror reflectivities were decreased subsequent to new measurements. The nominal focussing of the heliostats was degraded after a study of star image sizes. We also verified the photomultiplier gains. The results for the old data set (40 heliostats; the new one has 53 heliostats, see § \[sec:analysis\]) are presented in figure \[fig:simulation\]: the effect of these changes is smaller for $\gamma$-ray showers (an extended light source) than for stars (point sources).
![ON$-$OFF illumination from the star 51 UMa (M$_\mathrm{B}=6.16$) in the FOV of Mrk 421 as a function of hour angle for pointing at 11 km: the new simulation with our corrections (red stars) fits the data well (black squares) whereas the old simulation (blue circles) was 50 % too high.[]{data-label="fig:simulation"}](brion_fig1.ps){width="7.0cm"}
This study helped us to define our selection criteria for the data: all data that are too low in currents on the star also have low trigger rates. In order to trigger CELESTE, the heliostats, except the *veto* heliostats (see § \[sec:veto\]), are split into 6 groups. For each of them, the analog sum of signals gives the first level trigger. Then, a logical pattern is defined on the majority of the triggering groups. Thus, in the case of the source Mrk 421, we reject all data with trigger rates under 20 Hz for the old data set and under 16 Hz for the new data set.
We also looked at the proton rates as a standard candle for the detector, which should be stable for good quality nights. These rates are determined with high offline threshold cuts to avoid trigger bias, and are therefore low (typical trigger rates $\sim 22$ Hz). We have shown that they are correlated with the currents on the star 51 UMa for selected data (new data set, figure \[fig:ProtonRate\] (a)). The data with low rates are still rejected for now, but perhaps some of them could be corrected. Indeed, three doubtful zones, A, B and C in the figure, can be distinguished. Zone A can be interpreted as bad nights with thick cloud cover (weak star, little Cherenkov light), zone B as nights with aerosols and above average extinction of Cherenkov and starlight, and zone C as nights with high clouds that stop starlight but not Cherenkov light. The data in this last zone may therefore be used. Figure \[fig:ProtonRate\] (b) shows the correlation between the proton rate and the trigger rate for the same data set. Defining a selection criterion for each type of acquisition, based only on these rates, would be very interesting for sources that don’t have any star nearby for a photometry study.
![(a) Proton rate as a function of ON$-$OFF current for heliostat E03 that sees 51 UMa when pointing Mrk 421. For a typical photomultiplier tube gain of $5.6\times10^{4}$, $2.5$ A corresponds to 0.28 p.e./ns, as seen in figure \[fig:simulation\]. (b) Proton rate as a function of trigger rate for data on Mrk 421.[]{data-label="fig:ProtonRate"}](brion_fig2.eps "fig:"){width="6.0cm"} ![(a) Proton rate as a function of ON$-$OFF current for heliostat E03 that sees 51 UMa when pointing Mrk 421. For a typical photomultiplier tube gain of $5.6\times10^{4}$, $2.5$ A corresponds to 0.28 p.e./ns, as seen in figure \[fig:simulation\]. (b) Proton rate as a function of trigger rate for data on Mrk 421.[]{data-label="fig:ProtonRate"}](brion_fig3.eps "fig:"){width="6.0cm"}
Analysis improvement {#sec:analysis}
====================
Since the 2000 status of the experiment (de Naurois 2002), three main changes were made to the analysis. First, the selection criteria for the data are stricter regarding current and trigger rate stability and, as shown above, regarding the trigger and proton rate values (work in progress). Second, the experiment has been upgraded from 40 to 53 heliostats; we use part of them to broaden our narrow FOV. Finally, we have found a new method to exploit the FADC information to reject the hadronic background (Manseri 2004 a, b). The second and third points are developed hereafter.
The *veto* configuration {#sec:veto}
------------------------
Hadronic showers have a more chaotic and extended development than electromagnetic showers. To measure the extent of the shower, we artificially broaden the FOV: as before, all heliostats aim at 11 km above the ground in the direction of the source, where the maximum of the shower is supposed to occur in our energy range. But 12 heliostats, distributed around the edge of the field, sample a ring of 150 m around that point (figure \[fig:PrincipeVeto\]).
Because of the compactness of electromagnetic showers, the light does not illuminate these 12 heliostats, named [*veto*]{}, contrary to hadronic showers (figure \[fig:NbVetoMC\]). So we require that no *veto* be illuminated.
![Distribution of the number of illuminated *veto* heliostats (simulations).[]{data-label="fig:NbVetoMC"}](brion_fig4.eps){width="\linewidth"}
![Distribution of the number of illuminated *veto* heliostats (simulations).[]{data-label="fig:NbVetoMC"}](brion_fig5.eps){width="\linewidth"}
FADC information
----------------
To detect low energy $\gamma$-rays, we use the sum of the individual digitized signals to increase the signal-to-background ratio. The summation includes a correction for the sphericity of the wavefront, assumed to be centered in the 11 km plane. Assuming a wrong position for this center (impact parameter) broadens the sum: the height-over-width ratio, $(H/W)$, decreases. We compute $(H/W)$ for different assumed positions. The impact parameter is the position for which the $(H/W)$ ratio is maximum, denoted by $(H/W)_{max}$. This is valid for $\gamma$-rays (figure \[fig:Timing\] (a)) but not for protons for which the wavefront is not spherical (figure \[fig:Timing\] (b)). A measurement of the flatness of these 2
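The $(H/W)$ maximization can be illustrated with a toy model (hypothetical numbers and a made-up delay model, not CELESTE code): Gaussian pulses are summed with residual time offsets that grow with the error of the assumed core position, so $(H/W)$ of the summed pulse peaks when the assumed position is correct.

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 2001)
dt = t[1] - t[0]

def summed_hw(assumed_core, true_core=2.0, n_heliostats=10):
    rng = np.random.default_rng(0)
    positions = rng.uniform(-50.0, 50.0, n_heliostats)
    # Residual per-heliostat delays grow with the core-position error.
    delays = 0.05 * np.abs(positions) * (assumed_core - true_core)
    total = sum(np.exp(-0.5 * (t - d) ** 2) for d in delays)
    height = total.max()
    width = total.sum() * dt / height   # equivalent width of the summed pulse
    return height / width

# H/W is maximal when the assumed core matches the true core position:
hw = {c: summed_hw(c) for c in (0.0, 2.0, 4.0)}
```

Scanning `assumed_core` and taking the argument of the maximum mimics the impact-parameter reconstruction described above.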
---
author:
- |
[Austin Hounsel]{}\
Princeton University
- |
[Prateek Mittal]{}\
Princeton University
- |
[Nick Feamster]{}\
Princeton University
bibliography:
- '../common/bibliography.bib'
title: |
**Automatically Generating a Large,\
Culture-Specific Blocklist for China**
---
---
abstract: 'Hysteresis and the commonly observed p-doping of graphene based field effect transistors (FET) have already been discussed in reports over the last few years. However, the interpretations of these experimental works differ, and the mechanism behind the appearance of the hysteresis and the role of charge transfer between graphene and its environment are not clarified yet. We analyze the relation between electrochemical and electronic properties of graphene FETs in a moist environment, extracted from the standard back gate dependence of the graphene resistance. We argue that a graphene based FET on a regular SiO$_2$ substrate exhibits behavior that corresponds to electrochemically induced hysteresis in ambient conditions, which can be caused by a charge trapping mechanism associated with the sensitivity of graphene to the local pH.'
author:
- Alina Veligura
- 'Paul J. Zomer'
- 'Ivan J. Vera-Marun'
- Csaba Józsa
- 'Pavlo I. Gordiichuk'
- 'Bart J. van Wees'
title: Relating Hysteresis and Electrochemistry in Graphene Field Effect Transistors
---
Introduction
============
Graphene, as a single atom thick layer of carbon atoms, has already shown potential for applications in electronics and biosensing[@Ratinac]. However, graphene as a truly 2D system is ultrasensitive[@Schedin2007] to the underlying substrate and surface chemistry, which alters the charge transport properties of pristine graphene. One of the main issues in graphene devices is the hysteretic behavior of the resistance observed in ambient conditions when a gate voltage is swept back and forth. The presence of hysteresis and the commonly observed p-doping of graphene based field effect transistors (FET) were already discussed in recent reports[@Lafkioti; @Wang2010; @Sabri2009; @Levesque]. The interpretations of these experimental works differ, and the mechanism behind the appearance of hysteresis and the role of charge transfer between graphene and its environment are not clarified yet.
In the ideal case of grounded graphene, the charge neutrality point (CNP) is located at zero back gate voltage. However, in ambient conditions most graphene based FETs show initial p-doping (the CNP is positioned at positive Vg) and hysteresis. We point out that these two effects can be related but do not necessarily have the same nature. The doping of graphene can be caused either by adsorbates on top of or underneath the graphene surface[@Schedin2007; @Lafkioti; @Wang2010] or by electrochemical processes involving graphene[@Sabri2009; @Levesque; @Sidorov2011]. Depending on the nature of the dopant or the electrochemical environment, the initial doping can be either p or n, which shifts the graphene CNP to positive or negative gate voltages, respectively. One should keep in mind that even in the absence of a net doping the dynamic response of the graphene resistance, namely the hysteresis, can be different.
Two directions of hysteresis are defined: positive and negative[@Wang2010]. The positive direction corresponds to the CNP shifting towards negative voltages while the gate voltage is swept further into the negative regime. In the case of negative hysteresis the shift of the resistance with respect to the gate voltage is in the opposite direction: the CNP shifts toward more positive values while sweeping the gate into the negative regime. Wang et al.[@Wang2010] proposed that the negative and positive hysteresis directions can be attributed to two competing mechanisms: capacitive coupling and charge trapping from/to graphene, respectively.
Capacitive coupling enhances the local electric field near graphene, inducing more charge carriers and causing a negative direction of hysteresis. An example of a mechanism for capacitive coupling is a dipole layer placed between graphene and the back gate. In moist air and without additional treatment of the silicon oxide substrate (a common insulator for a GFET), this dipole layer exists as adsorbed water molecules at room temperature[@Moser; @Lafkioti] or ordered ice at low temperature[@Wang2010; @Wehling]. The capacitive coupling mechanism is also dominant in electrolyte-gated devices, via ions in the electrical double layer[@Wang2010]. The positive direction of hysteresis is caused by a charge trapping mechanism: accumulated charge in trap centers starts to screen the electric field of the back gate. One example of trap centers is surface states between SiO$_2$ and graphene[@Wang2010; @Romero2008; @Liao; @Shin]. In the case of graphene based FETs, traps in bulk SiO$_2$ or at the SiO$_2$/Si interface were excluded in a recent report by Lee et al.[@Lee2011], who measured time scales too fast for these types of trap centers.
A separate charge transfer mechanism which was observed for the hydrogenated surface of diamond [@Chakrapani], carbon nanotubes [@Aguirre] and graphene based FETs [@Sabri2009; @Levesque; @Sidorov2011], is the dissociation of adsorbed water and oxygen on the carbon surface. Since water in equilibrium with air is slightly acidic (pH=6), the electrochemical potential of the carbon surface is higher than that of the solution, resulting in electron transfer from graphene. Therefore, a graphene FET possesses a net p-doping in moist air. The electron transfer is mediated by oxygen solvated in water and can occur in opposite direction with increasing pH. This redox can therefore influence the dynamic response of graphene devices under an applied back gate and cause a positive hysteresis.
A recent report by Fu et al.[@Fu] opened the discussion whether the pH sensitivity of graphene is caused by charge transfer directly between graphene and the solution[@Ang2008; @Ohno; @Heller] or whether the sensitivity is mediated by a layer on top of or next to graphene (either oxide or polymer residue). This layer can provide terminal hydroxyl groups which can be protonated or deprotonated depending on the proton concentration in the solution (pH), yielding a bound surface charge layer which can electrostatically induce carriers in graphene. Recently it was reported that the application of a gate potential can lead to a local change of pH in a thin water film next to an oxide substrate[@Veenhuis]. We argue that a combination of these two effects can result in a positive hysteresis in graphene, where the residues act as mediators for charge trapping actuated by pH changes induced via the gate electric field. We emphasize that both cases, independent of whether the charge trapping is direct or mediated by residues, would lead to the same direction of hysteresis and would be indistinguishable in transport experiments. Though replacement of the silicon oxide with either a hydrophobic[@Lafkioti; @Shin] or an oxygen free[@Sabri2009] substrate did show suppression of both the initial p-doping and the hysteretic behavior, none of the reports link the chemical redox to the direction of hysteresis.
In this work we analyze the relation between electrochemical and electronic properties of graphene FET in moist environment. We argue that graphene based FET on a regular SiO$_2$ substrate exhibits behavior that corresponds to electrochemically induced hysteresis in ambient conditions, caused by charge trapping mechanisms associated with the sensitivity of graphene to the local pH.
Methods
=======
Samples were obtained by mechanical exfoliation of graphite (Highly Ordered Pyrolytic Graphite or Kish) on an oxidized n$^+$-doped silicon substrate (300 or 500 nm thick oxide layer), which functions as a back gate. The SiO$_2$ wafers are commercially available from Silicon Quest International, where the oxide is prepared by dry oxidation. Single layer graphene flakes were chosen based on their optical contrast and thickness measured by atomic force microscopy. A small number of samples were inspected with Raman spectroscopy to verify the number of layers. Ti/Au (5/40 nm thick) electrodes were prepared using standard electron beam lithography and lift off techniques. For electrical measurements samples are placed in a vacuum can with base pressure of $~5\cdot 10^{-6}$ mbar, using a standard low frequency AC lock-in technique with an excitation current of 100 nA. The carrier density in graphene is varied by applying a DC voltage (Vg) between the back gate electrode and the graphene flake, as depicted in Fig. \[fig:Fig1\](a). The charge carrier mobilities ($\mu$) ranged from 2500 up to 5000 cm$^2$/Vs at a charge carrier density of $n=2\cdot 10^{11}cm^{-2}$.
The sensor properties of the devices were studied in the following way. First, we pumped down the sample can (95 cm$^3$ in volume) to the base pressure. Then a valve connecting the can to a volume, containing liquid water and filled with saturated vapor (H$_2$O or D$_2$O at 32 mbar saturation pressure) at 25 $^\circ$C, was kept open for 1 s (short exposure to the vapor). After measurement, the valve to the sample was fully opened, connecting the sample volume to the water container (flooding with water vapor). In the case of ethanol vapor exposure the procedure was kept the same, but the partial pressure of ethanol in the liquid cavity was 78 mbar. The purity of the heavy water and ethanol was 99.9%. A graphene based FET on a hydrophobic substrate was also prepared by exposure of SiO$_2$ to hexamethyldisilazane (HMDS) vapor prior to graphene deposition. HMDS forms a self-assembled monolayer which protects graphene from the influence of dangling bonds in silicon dioxide and prevents adsorption of water molecules in the vicinity of graphene.
Results and discussion
======================
In ambient conditions the devices appear to be p-doped, with a pronounced positive hysteresis in the dependence of resistivity versus gate voltage (
---
abstract: 'We investigate the space of abelian relations of planar webs admitting infinitesimal automorphisms. As an application we construct $4k - 14$ new algebraic families of global exceptional $k$-webs on the projective plane, for each $k\ge 5$.'
address:
- |
Departament de Matem[à]{}tiques\
Universitat Aut[ò]{}noma de Barcelona\
E-08193 Bellaterra (Barcelona)\
Spain
- |
Instituto de Matem[á]{}tica Pura e Aplicada\
Est. D. Castorina, 110\
22460-320, Rio de Janeiro, RJ, Brasil
- |
IRMAR\
Campus de Beaulieu\
35042 Rennes Cedex, France
author:
- 'D. Mar[í]{}n'
- 'J. V. Pereira'
- 'L. Pirio'
title: On Planar Webs with Infinitesimal Automorphisms
---
[^1]
Introduction and statement of the results
=========================================
Planar Webs
-----------
A germ of regular $k$-web $\mathcal W=\mathcal F_1 \boxtimes \cdots \boxtimes \mathcal F_k$ on $(\mathbb C^2,0)$ is a collection of $k$ germs of smooth foliations $\mathcal F_i$ subjected to the condition that any two of these foliations have distinct tangent spaces at the origin.
One of the most intriguing invariants of a web is its [*space of abelian relations*]{} $\mathcal A(\mathcal W)$. If the foliations $\mathcal F_i$ are induced by $1$-forms $\omega_i$ then by definition $$\begin{aligned}
\mathcal A(\mathcal W) = \Big\lbrace {\big(\eta_i\big)}_{i=1}^k \in
(\Omega^1(\mathbb C^2,0))^k \, \, \Big| \, \,\forall i \, \,
d\eta_i =0 \,, \; \eta_i\wedge \omega_i=0 \, \, \text{ and } \sum_{i=1}^k\eta_i = 0 \Big\rbrace
\, .\end{aligned}$$ The dimension of $\mathcal A(\mathcal W)$ is commonly called the [*rank*]{} of $\mathcal W$ and denoted by $\mathrm{rk}(\mathcal W)$. It is a theorem of Bol that $\mathcal A(\mathcal W)$ is a finite-dimensional $\mathbb C$-vector space and moreover $$\begin{aligned}
\label{bolbound}
\mathrm{rk}( \mathcal W) \le \frac{1}{2}\, (k-1)(k-2)\, .\end{aligned}$$
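Bol's bound is elementary to evaluate; the following minimal sketch (not part of the paper) tabulates it for small $k$.

```python
def bol_bound(k):
    """Bol's upper bound (k-1)(k-2)/2 on the rank of a planar k-web."""
    if k < 3:
        raise ValueError("a planar web needs k >= 3 foliations")
    return (k - 1) * (k - 2) // 2

# A 5-web (the case of Bol's counter-example) has rank at most 6.
print([(k, bol_bound(k)) for k in range(3, 8)])
# -> [(3, 1), (4, 3), (5, 6), (6, 10), (7, 15)]
```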
An interesting chapter of the theory of webs concerns the characterization of webs of [*maximal rank*]{}, [*i.e.*]{} webs for which (\[bolbound\]) is in fact an equality. It follows from Abel’s Addition Theorem that all the webs $\mathcal W_C$ obtained from reduced plane curves $C$ by projective duality are of maximal rank ([*cf.*]{} §\[S:action\] for details). The webs analytically equivalent to some $\mathcal W_C$ are the so called [*algebrizable webs*]{}.
A remarkable result that can be traced back to Lie says that all $4$-webs of maximal rank are in fact algebrizable. In the early 1930’s Blaschke claimed to have extended Lie’s result to $5$-webs of maximal rank. Not much later, Bol came up with a counter-example: a $5$-web of maximal rank that is not algebrizable.
The non-algebrizable webs of maximal rank are nowadays called [*exceptional webs*]{}. For a long time Bol’s web remained as the only example of exceptional planar web in the literature. The following quote illustrates quite well this fact.
> [*(…) we cannot refrain from mentioning what we consider to be the fundamental problem on the subject, which is to determine the maximum rank non-linearizable webs. The strong conditions must imply that there are not many. It may not be unreasonable to compare the situation with the exceptional simple Lie groups.*]{}
>
> Chern and Griffiths in [@Jbr].
A comprehensive account of the current state of the art concerning the exceptional webs is available at [@theseluc Introduction §3.2.1], [@Robert] and [@PT §1.4]. Here we will just mention that before this work no exceptional $k$-web with $k\ge 10$ appeared in the literature.
At first glance, the list of exceptional webs known to date does not reveal common features among them. However, at a second look one sees that many of them (but not all, not even the majority) have one property in common: infinitesimal automorphisms.
Infinitesimal Automorphisms
---------------------------
In [@cartan], [É]{}. Cartan proves that [*a 3-web which admits a 2-dimensional continuous group of transformations is hexagonal*]{}. It is then an exercise to deduce that a $k$-web ($k> 3$) which admits $2$ linearly independent infinitesimal automorphisms is parallelizable and in particular algebrizable.
Cartan’s result naturally leads to the following question:
> [*What can be said about webs which admit one infinitesimal automorphism?*]{}
In fact, Cartan answers this question for 3-webs. In [*loc. cit.*]{} he establishes that such a web is equivalent to those induced by the $1$-forms $dx,dy, dy -u(x+y)dx$, where $u$ is a germ of holomorphic function.
It is very surprising that this story stops here… To our knowledge, there is no other study concerning webs with infinitesimal automorphisms, although they are particularly interesting objects. Indeed, on the one hand their study is considerably simplified by the presence of an infinitesimal automorphism; on the other hand, these webs turn out to be geometrically rich: we will show they are connected to the theory of exceptional webs.
Variation of the Rank
---------------------
Let ${\mathcal W}$ be a regular web in $({\mathbb C}^2,0)$ which admits an infinitesimal automorphism $X$, [*i.e.*]{} $X$ is a germ of vector field whose local flow preserves the foliations of $\mathcal W$. As we will see in §\[S:geral\] the Lie derivative $L_X=i_Xd + di_X$ with respect to $X$ induces a linear operator on $\mathcal A(\mathcal W)$. Most of our results will follow from an analysis of such operator.
In §\[S:liouville\] we use this operator to give a simple description of the abelian relations of $\mathcal W$ and from this we will deduce in §\[S:rank\] what we consider our main result:
\[T:1\] Let ${\mathcal W}$ be a $k$–web which admits a transverse infinitesimal automorphism $X$. Then $$\mathrm{rk}(\mathcal W \boxtimes \mathcal F_X) =\mathrm{rk}(\mathcal W) + (k -1)\, .$$ In particular, ${\mathcal W}$ is of maximal rank if and only if $\mathcal W \boxtimes \mathcal F_{X}$ is of maximal rank.
We will derive from Theorem \[T:1\] the existence of new families of exceptional webs.
New Families of Exceptional Webs
--------------------------------
If we start with a reduced plane curve $C$ invariant under an algebraic ${\mathbb C}^*$-action on ${\mathbb P}^2$ then we obtain a dual algebraic ${\mathbb C}^*$-action on $\check{\mathbb P}^2$ which leaves the algebraic web ${\mathcal W}_C$ invariant ([*cf.*]{} §\[S:action\] for details). Combining this construction with Theorem \[T:1\] we deduce our second main result.
\[T:2\] For every $k \geq 5$ there exists a family of dimension at least $\lfloor k/2 \rfloor -1$ of pairwise non-equivalent exceptional global $k$-webs on ${\mathbb P}^2$.
In fact, for each $k\ge 5$, we obtain $4k - 15$ other families of smaller dimension.
We also give a complete classification of all the exceptional 5-webs of the type $ {\mathcal W}\boxtimes {\mathcal F}_X$ where $X$ is an infinitesimal automorphism of $ {\mathcal W}$ ([*cf.*]{} Corollary \[C:classi\]).
Generalities on webs with infinitesimal automorphisms {#S:geral}
=====================================================
Let $\mathcal F$ be a regular foliation on $(\mathbb C^2,0)$ induced by a (germ of) $1$-form $\omega$. We say that a (germ of) vector field $X$ is an infinitesimal automorphism of $\mathcal F$ if the foliation $\mathcal F$ is preserved by the local flow of $X$. In algebraic terms: $ L_X \omega \wedge \omega = 0 \, .$
When the infinitesimal automorphism $X$ is transverse to $\mathcal F$, [*i.e*]{} when $\omega(X)\neq0$, then a simple computation ([*cf*]{}. [@Percy Corollary 2]) shows that the $1$-form $$\eta = \frac{\omega}{i_X \omega}$$ is closed and satisfies $L_X \eta =0$. By definition, the integral $$u(z
---
abstract: 'Region-based methods have proven necessary for improving segmentation accuracy of neuronal structures in electron microscopy (EM) images. Most region-based segmentation methods use a scoring function to determine region merging. Such functions are usually learned with supervised algorithms that demand considerable ground truth data, which are costly to collect. We propose a semi-supervised approach that reduces this demand. Based on a merge tree structure, we develop a differentiable unsupervised loss term that enforces consistent predictions from the learned function. We then propose a Bayesian model that combines the supervised and the unsupervised information for probabilistic learning. The experimental results on three EM data sets demonstrate that by using a subset of only $3\%$ to $7\%$ of the entire ground truth data, our approach consistently performs close to the state-of-the-art supervised method with the full labeled data set, and significantly outperforms the supervised method with the same labeled subset.'
author:
- Ting Liu
- Miaomiao Zhang
- Mehran Javanmardi
- Nisha Ramesh
- Tolga Tasdizen
bibliography:
- 'refs-arxiv.bib'
subtitle: Supplementary Materials
title:
- 'SSHMT: Semi-supervised Hierarchical Merge Tree for Electron Microscopy Image Segmentation'
---
Introduction
============
Connectomics researchers study structures of nervous systems to understand their function [@sporns2005human]. Electron microscopy (EM) is the only modality capable of imaging substantial tissue volumes at sufficient resolution and has been used for the reconstruction of neural circuitry [@famiglietti1991synaptic; @briggman2011wiring; @helmstaedter2013cellular]. The high resolution leads to image data sets at enormous scale, for which manual analysis is extremely laborious and can take decades to complete [@briggman2006towards]. Therefore, reliable automatic connectome reconstruction from EM images, and as the first step, automatic segmentation of neuronal structures is crucial. However, due to the anisotropic nature, deformation, complex cellular structures and semantic ambiguity of the image data, automatic segmentation still remains challenging after years of active research.
Similar to the boundary detection/region segmentation pipeline for natural image segmentation [@arbelaez2011contour; @ren2013image; @arbelaez2014multiscale; @liu2016image], most recent EM image segmentation methods use a membrane detection/cell segmentation pipeline. First, a membrane detector generates pixel-wise confidence maps of membrane predictions using local image cues [@sommer2011ilastik; @ciresan2012deep; @seyedhosseini2013image]. Next, region-based methods are applied to transforming the membrane confidence maps into cell segments. It has been shown that region-based methods are necessary for improving the segmentation accuracy from membrane detections for EM images [@arganda2015crowdsourcing]. A common approach to region-based segmentation is to transform a membrane confidence map into over-segmenting superpixels and use them as “building blocks” for final segmentation. To correctly combine superpixels, greedy region agglomeration based on certain boundary saliency has been shown to work [@nunez2013machine]. Meanwhile, structures, such as loopy graphs [@kaynig2015large; @krasowski2015improving] or trees [@liu2014modular; @funke2015learning; @uzunbas2016efficient], are more often imposed to represent the region merging hierarchy and help transform the superpixel combination search into graph labeling problems. To this end, local [@liu2014modular; @krasowski2015improving] or structured [@funke2015learning; @uzunbas2016efficient] learning based methods are developed.
Most current region-based segmentation methods use a scoring function to determine how likely two adjacent regions should be combined. Such scoring functions are usually learned in a supervised manner that demands a considerable amount of high-quality ground truth data. Obtaining such ground truth data, however, involves manual labeling of image pixels and is very labor intensive, especially given the large scale and complex structures of EM images. To alleviate this demand, Parag et al. recently propose an active learning framework [@parag2014small; @parag2015efficient] that starts with small sets of labeled samples and constantly measures the disagreement between a supervised classifier and a semi-supervised label propagation algorithm on unlabeled samples. Only the most disagreed-upon samples are pushed to users for interactive labeling. The authors demonstrate that by using $15\%$ to $20\%$ of all labeled samples, the method can perform similarly to the underlying fully supervised method with the full training set. One disadvantage of this framework is that it does not directly explore the unsupervised information while searching for the optimal classification function. Also, retraining is required for the supervised algorithm at each iteration, which can be time consuming, especially when more iterations with fewer samples per iteration are used to maximize the utilization of supervised information and minimize human effort. Moreover, repeated human interactions may lead to extra cost overhead in practice.
In this paper, we propose a semi-supervised learning framework for region-based neuron segmentation that seeks to reduce the demand for labeled data by exploiting the underlying correlation between unsupervised data samples. Based on the merge tree structure [@liu2014modular; @funke2015learning; @uzunbas2016efficient], we redefine the labeling constraint and formulate it into a differentiable loss function that can be effectively used to guide the unsupervised search in the function hypothesis space. We then develop a Bayesian model that incorporates both unsupervised and supervised information for probabilistic learning. The parameters that are essential to balancing the learning can be estimated from the data automatically. Our method works with a very small amount of supervised data and requires no further human interaction. We show that by using only $3\%$ to $7\%$ of the labeled data, our method performs consistently close to the state-of-the-art fully supervised algorithm with the entire supervised data set (Section \[sec:res\]). Also, our method can be conveniently adopted to replace the supervised algorithm in the active learning framework [@parag2014small; @parag2015efficient] and further improve the overall segmentation performance.
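To make the idea of a merge-tree consistency constraint concrete, here is a generic differentiable penalty that rewards agreement between a parent clique's score and its children's scores. This is a hedged illustration only: the function `consistency_loss`, its product form, and the toy scores are hypothetical, and it is not the loss actually derived in the paper.

```python
def consistency_loss(scores, parents, children):
    """Generic differentiable consistency penalty over a merge tree.

    scores[i] in (0, 1) is a learned 'merge' score for clique i;
    children[p] gives the two child cliques of parent clique p.
    The product form below is a hypothetical choice for illustration,
    not the formulation used by the authors.
    """
    loss = 0.0
    for p in parents:
        c1, c2 = children[p]
        # a parent merge decision should agree with the joint
        # evidence of its children; squared error keeps the term
        # differentiable for gradient-based learning
        loss += (scores[p] - scores[c1] * scores[c2]) ** 2
    return loss

toy_scores = {0: 0.9, 1: 0.8, 2: 0.7}
print(consistency_loss(toy_scores, parents=[0], children={0: (1, 2)}))
```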
Hierarchical Merge Tree {#sec:hmt}
=======================
Starting with an initial superpixel segmentation $S_o$ of an image, a merge tree $T=(\mathcal{V},\mathcal{E})$ is a graphical representation of superpixel merging order. Each node $v_i\in\mathcal{V}$ corresponds to an image region $s_i$. Each leaf node aligns with an initial superpixel in $S_o$. A non-leaf node corresponds to an image region combined by multiple superpixels, and the root node represents the whole image as a single region. An edge $e_{i,c}\in\mathcal{E}$ between $v_i$ and one of its child $v_c$ indicates $s_c\subset s_i$. Assuming only two regions are merged each time, we have $T$ as a full binary tree. A clique $p_i=(\{v_i,v_{c_1},v_{c_2}\},\{e_{i,c_1},e_{i,c_2}\})$ represents $s_i=s_{c_1}\cup s_{c_2}$. In this paper, we call clique $p_i$ is at node $v_i$. We call the cliques $p_{c_1}$ and $p_{c_2}$ at $v_{c_1}$ and $v_{c_2}$ the child cliques of $p_i$, and $p_i$ the parent clique of $p_{c_1}$ and $p_{c_2}$. If $v_i$ is a leaf node, $p_i=(\{v_i\},\varnothing)$ is called a leaf clique. We call $p_i$ a non-leaf/root/non-root clique if $v_i$ is a non-leaf/root/non-root node. An example merge tree, as shown in Fig. \[fig:sub:toy\_tree\], represents the merging of superpixels in Fig. \[fig:sub:toy\_segi\]. The red box in Fig. \[fig:sub:toy\_tree\] shows a non-leaf clique $p_7=(\{v_7,v_1,v_2\},\{e_{7,1},e_{7,2}\})$ as the child clique of $p_9=(\{v_9,v_7,v_3\},\{e_{9,7},e_{9,3}\})$. A common approach to building a merge tree is to greedily merge regions based on certain boundary saliency measurement in an iterative fashion [@liu2014modular; @funke2015learning; @uzunbas2016efficient].
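The merge-tree bookkeeping described above can be sketched in a few lines. The `Node` class and the helper `merge_in_order` below are hypothetical illustrations of the structure (leaves as initial superpixels, inner nodes as unions of their children's regions), with the merge order supplied externally, e.g. by the greedy boundary-saliency agglomeration mentioned in the text.

```python
class Node:
    """A merge-tree node: a leaf is an initial superpixel, an inner
    node represents the union of its two children's regions."""
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None and self.right is None

    def region(self):
        """Set of superpixel labels covered by this node."""
        if self.is_leaf():
            return {self.label}
        return self.left.region() | self.right.region()

def merge_in_order(n_superpixels, merge_order):
    """Build a full binary merge tree from an explicit merge order.

    merge_order lists the pairs of node indices merged at each step;
    in practice the order would come from boundary-saliency
    agglomeration.  Hypothetical helper for illustration.
    """
    nodes = [Node(i) for i in range(n_superpixels)]
    for a, b in merge_order:
        nodes.append(Node(len(nodes), nodes[a], nodes[b]))
    return nodes[-1]  # the root represents the whole image

# Merge superpixels 0 and 1 into node 3, then node 3 with 2 (root).
root = merge_in_order(3, [(0, 1), (3, 2)])
print(sorted(root.region()))  # -> [0, 1, 2]
```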
Given the merge tree, the problem of finding a final segmentation is equivalent to finding a complete label assignment $\mathbf{z}=\{z_i\}_{i=1}^{|\mathcal{V}|}$ for every node being a final segment ($z=1$) or not ($z=0$). Let $\rho(i)$ be a query function that returns the index of the parent node of $v_i$. The $k$-th ($k=
---
abstract: 'We study almost periodic orbits of quantum systems and prove that for periodic time-dependent Hamiltonians an orbit is almost periodic if, and only if, it is precompact. In the case of quasiperiodic time-dependence we present an example of a precompact orbit that is not almost periodic. Finally we discuss some simple conditions assuring dynamical stability for nonautonomous quantum system.'
address:
- 'Departamento de Matemática – UFSCar, São Carlos, SP, 13560-970 Brazil'
- 'Departamento de Matemática – UFSCar, São Carlos, SP, 13560-970 Brazil\'
author:
- 'César R. de Oliveira'
- 'Mariza S. Simsen'
title: 'Almost Periodic Orbits and Stability for Quantum Time-Dependent Hamiltonians'
---
[^1]
[^2]
[Keywords]{}: almost periodicity; quantum stability; time-dependent systems; precompact orbits.
Introduction {#IntroductionSection}
============
The time evolution of a quantum mechanical system with time-dependent Hamiltonians $H(t)$ is determined by the Schrödinger equation $$i\frac{d\psi(t)}{dt}=H(t)\psi(t),$$ where $H(t)$ is a family of self-adjoint operators in the Hilbert space $\mathcal{H}$ and $\psi(t)\in\mathcal{H}$ for all $t\in{\ensuremath{{\mathrm{I\!R}}}}$. The initial value problem $\psi(0)=\psi$ has a unique solution $$\psi(t)\doteq U(t,0)\psi,$$ under suitable conditions on $H(t)$ (see [@RS; @K; @K1; @I]) and the propagators, or time evolution operators $U(t,s)$, form a strongly continuous family of unitary operators acting on $\mathcal{H}$, such that $$U(t,r)U(r,s)=U(t,s),\qquad \forall r,s,t,\in{\ensuremath{{\mathrm{I\!R}}}}$$ $$U(t,t)=
{{\mathrm{I_d}}},\qquad \forall t.$$ ${{\mathrm{I_d}}}$ denotes the identity operator. If the Hamiltonian is time-periodic with period $T$, then $U(t+T,r+T)=U(t,r)$ and the Floquet operator at $s$ is defined by $U_F(s)\doteq U(s+T,s)$; $U_F(0)$ is simply called Floquet operator and denoted by $U_F$, and $U_F(s)$ is unitarily equivalent to $U_F(r)$, $\forall
r,s$. Let $$\mathcal{O}(\psi)\doteq\{ U(t,0)\psi:t\in{\ensuremath{{\mathrm{I\!R}}}}\}$$ be the orbit of a vector $\psi\in\mathcal{H}$.
If $H(t)=H$ is independent of $t$, the time evolution operators are $U(t,s)=e^{-iH(t-s)}$. In this case, it is a well-known fact that if $\psi$ is in the point subspace of $H$ then the quantum time evolution of the state $\psi$, $\psi(t)$, is almost periodic, since it can be expanded in terms of the eigenfunctions $\varphi_n$ of $H$, with eigenvalues $E_n$, $$\psi(t)=\sum_nc_ne^{-iE_nt}\varphi_n.$$ Conversely, if $\psi(t)$ is almost periodic, then by the results in [@Kat] (Chapter VI) the orbit $\mathcal{O}(\psi)$ is precompact, and hence $\psi$ is in the point subspace of $H$ (see Theorem \[teo.31\] ahead). In this work, we prove that this fact remains true in the periodic case, that is, $\psi$ is in the point subspace of $U_F$ if, and only if, $\psi(t)$ is almost periodic (see Theorem \[teo.31a\]).
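The eigenfunction expansion above can be checked numerically in a toy finite-dimensional model: with integer eigenvalues the evolution is exactly $2\pi$-periodic (hence almost periodic), and the norm is conserved. The eigenvalues and coefficients below are illustrative assumptions, not from the paper.

```python
import cmath

# Toy check: for H with pure point spectrum,
#   psi(t) = sum_n c_n e^{-i E_n t} phi_n
# is almost periodic; integer eigenvalues make it exactly 2*pi-periodic.
E = [1.0, 2.0, 3.0]                      # eigenvalues E_n (chosen integers)
c = [0.6, 0.6, (1 - 2 * 0.36) ** 0.5]    # normalized coefficients c_n

def psi(t):
    """Coordinates of psi(t) in the eigenbasis of H."""
    return [cn * cmath.exp(-1j * En * t) for cn, En in zip(c, E)]

def norm(v):
    return sum(abs(z) ** 2 for z in v) ** 0.5

t = 0.7
print(round(norm(psi(t)), 12))   # unitary evolution: norm stays 1.0
period = 2 * cmath.pi
print(max(abs(a - b) for a, b in zip(psi(t + period), psi(t))) < 1e-9)  # True
```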
In the studies of time-dependent systems it is common to consider the quasienergy operator, i.e., a self-adjoint operator formally given by $$K=-i\frac{d}{dt}+H(t)$$ acting in some enlarged Hilbert space. The quasienergy operator $K$ was previously defined for periodic Hamiltonians [@Y; @H] and then generalized for general time dependence in [@H1]. In the periodic case it was proved that $$e^{-iKT}\simeq {{\mathrm{I_d}}}\otimes U_F,$$ where $\simeq$ means unitary equivalence.
A natural framework for considering general time-dependent perturbations, which includes both periodic and the random potentials as special cases, is to write $H(t)$ in the form $$H(t)=H(g_t(\theta))=H_0+V(g_t(\theta)),$$ where $g_t:\Omega\rightarrow\Omega$ is an invertible flow on a compact manifold $\Omega$ with a probability ergodic measure $\mu$ and $H_0$ is the Hamiltonian of the isolated system (see [@JL; @BJL]). Again, under suitable conditions on $V$ there exists a unitary time evolution operator $U_{\theta}(t,s)$ and the generalized quasienergy operator is defined [@JL] on $L^2(\Omega,\mathcal{H},d\mu)$ by $$(e^{-i\tilde{K}t}f)_{\theta}=\mathcal{F}_{-t}U_{\theta}(t,0)f_{\theta}=
U_{\theta}(0,-t)\mathcal{F}_{-t}f_{\theta},$$ where $\mathcal{F}_{-t}f_{\theta}=f_{g_{-t}(\theta)};$ we refer to this construction as [*Jauslin-Lebowitz formulation.*]{} The operator $\tilde{K}$ acts as $$(\tilde{K}f)_{\theta}=i\frac{d}{dt}f_{g_{-t}(\theta)}\Big|_{t=0}+H_{\theta}f_{\theta}.$$ In the case of a periodic potential one has $\Omega=S^1\equiv[0,2\pi)$, $g_t(\theta)=\theta+\omega t$ and $d\mu=\frac{d\theta}{2\pi}$.
For quasiperiodic potentials with two incommensurate frequencies $\omega_1/\omega_2\notin{\ensuremath{{\mathrm{Q\hspace{-2.1mm}\rule{0.3mm}{2.6mm}\;}}}}$ the manifold $\Omega$ is $S^1\times S^1$, $g_t(\theta_1,\theta_2)=(\theta_1+\omega_1t,\theta_2+\omega_2t)$ and $d\mu=\frac{d\theta_1}{2\pi}\frac{d\theta_2}{2\pi}$. We denote the two periods by $T_j=\frac{2\pi}{\omega_j}$. In this case the generalized Floquet operator acting on $\mathcal{K}_1\doteq
L^2(S^1,\mathcal{H},\frac{d\theta_1}{2\pi})$ is defined by $$\label{FloquetGenEq}
U_{\mathrm{F}}=\mathcal{T}_{-T_2}u_1,$$ where $u_1(\theta_1)=U_{(\theta_1,0)}(T_2,0)$ ($\doteq$ monodromy operator) and $(\mathcal{T}_{-T_2}\phi)(\theta_1)=\phi(\theta_1-\omega_1T_2)$.
Let $A:{{\mathrm{dom}~}}A\subset\mathcal{H}\rightarrow\mathcal{H}$ be an unbounded positive self-adjoint operator with discrete spectrum which we call a [*probe operator*]{}. Assuming that if $\psi\in{{\mathrm{dom}~}}A$, then $U(t,0)\psi\in{{\mathrm{dom}~}}A$ for all $t\geq0$, a very interesting question is about the behavior of the expectation value of $A$, that is, $$E_{\psi}^{A}(t)\equiv\langle
U(t,0)\psi,AU(t,0)\psi\rangle.$$ We say the system is $A$-dynamically stable if $E_{\psi}^{A}(t)$ is a bounded function of time, and $A$-dynamically unstable otherwise. A particular case is when the Hamiltonian has the form $H(t)=H_0+V(t)$ and $A=H_0$. In this work we discuss some simple conditions assuring dynamical stability, mainly when either the Floquet or quasienergy operator has purely point spectrum; recall that in the periodic case it is known that continuous spectrum of the Floquet operator implies dynamical instability (see Section \[PreliminarSection\]).
Usually it is not a simple task to get results on dynamical (in)stability in the original Hilbert space $\mathcal{H}$ through properties of $K$ or $\tilde{K}$ acting in the corresponding enlarged space. We present some theoretical results about this point in Section \[BoundedSection\]. An important result in the periodic case was proved in [@DSSV], i.e., that the applicability of the KAM method for the quasienergy operator $K$, which
---
abstract: 'Differential resistance measurements are conducted for point contacts (PCs) between a tungsten tip approaching along the $c$ axis direction and the $ab$ plane of a Sr$_{2}$RuO$_{4}$ single crystal. Three key features are found. Firstly, within 0.2 mV there is a dome-like conductance enhancement due to Andreev reflection at the normal-superconducting interface. By pushing the W tip further, the conductance enhancement increases from 3% to more than 20%, much larger than that was previously reported, probably due to the pressure exerted by the tip. Secondly, there are also superconducting-like features at bias higher than 0.2 mV which persist up to 6.2 K, resembling the enhanced superconductivity under uniaxial pressure for bulk Sr$_{2}$RuO$_{4}$ crystals but more pronounced here. Thirdly, the logarithmic background can be fitted with the Altshuler-Aronov theory of tunneling into a quasi two dimensional electron system, consistent with the highly anisotropic electronic system in Sr$_{2}$RuO$_{4}$.'
author:
- 'He Wang (王贺)'
- 'Weijian Lou (娄伟坚)'
- 'Jiawei Luo (骆佳伟)'
- 'Jian Wei (危健)'
- 'Y. Liu'
- 'J.E. Ortmann'
- 'Z.Q. Mao'
title: 'Enhanced superconductivity at the interface of W/Sr$_{2}$RuO$_{4}$ point contact'
---
[UTF8]{}[gbsn]{}
The layered perovskite ruthenate Sr$_{2}$RuO$_{4}$ (SRO) has shown evidence for spin-triplet, odd-parity superconductivity (SC) which may be useful for topological quantum computation. [@Maeno1994nature; @Mackenzie2003rmp; @Maeno2012jpsj] The possible chiral orbital order parameter for the two-dimensional SC is $p_{x}\pm ip_{y}$ as suggested by the time-reversal symmetry breaking experiments. [@Luke1998nature; @Xia2006prl] Such chiral order is expected to generate edge currents, but the expected magnetic field due to edge currents has not been directly observed with local field imaging, [@Kirtley2007prb; @Hicks2010prb; @Curran2014prb] though there is indirect evidence of edge currents revealed by in-plane tunneling spectroscopy [@Kashiwaya2011prl; @Kashiwaya2014pe] and point contact spectroscopy (PCS), [@Laube2000prl]both with assumptions to fit the conductance spectra.
The surface properties of SRO are critical for field imaging with scanning quantum interference devices, as well as for tunneling and point contact spectroscopy. It is known that the SRO surface can undergo reconstruction, so that the intrinsic SC may not be probed, [@Upward2002prb; @Firmo2013prb] and it may even show ferromagnetism (FM) due to lattice distortion. [@Matzdorf2000science] Very careful *in situ* preparation of devices is required for making good tunnel junctions using microfabrication techniques. [@Kashiwaya2011prl] Recently there has also been a theoretical proposal that surface disorder can indeed destroy the spontaneous currents. [@Lederer2014prb]
One way to overcome the surface problem is to use a hard tip for the point contact (PC) measurement. If the tip is hard enough, it may pierce through the surface dead layer and probe the SC underneath. [@Gonnelli2002jpcs] In fact, for this reason a tungsten tip has been used for PCS of heavy fermion superconductors. [@Gloos1996jltp_scaling] A consequence of using a hard tip is that the tip will exert some pressure on the surface, which may affect the SC, [@Daghero2010sst] possibly due to local distortion of the lattice. [@Gloos1995pb; @Miyoshi2005prb] It is known that for SRO a very low uniaxial pressure of 0.2 GPa along the $c$ axis can enhance the superconducting transition temperature ($T_c$) of pure SRO from 1.5 K up to 3.2 K, [@Kittaka2010prb; @Kittaka2009jpsj_b] and recently in-plane strain (0.23%) along the $\langle 100 \rangle$ direction has also been shown to enhance $T_c$ from 1.3 K up to 1.9 K. [@Hicks2014science] The pressure in the above-mentioned measurements was applied to bulk samples, while for PCS the pressure is exerted locally. In the latter case it may be less affected by the inhomogeneity of the applied pressure and the sample is less prone to developing cracks, so locally higher pressure may be reached, though the absolute pressure is not known. Here we report greatly enhanced SC observed at the interface of a point contact junction between a tungsten tip approaching along the $c$ axis direction and the $ab$ plane surface of a SRO single crystal.
SRO single crystals are grown by floating zone methods and are from two different batches; details of sample preparation can be found in previous reports. [@Mao2000mrb] Sample S1 is from the first batch, is easier to cleave, and shows no Ru inclusions. Sample S2 is from the second batch, too hard to cleave, and contains a lot of Ru inclusions (for optical images see Appendix \[appendix\_Ru\_inclusions\]). Only on the cleaved surface of S1 do we observe SC features. Tungsten wire of 0.25 mm diameter is etched to form the tip, and then fixed pointing to the $ab$ plane of the SRO sample. A Si chip with the sample and thermometer glued on top is mounted on an attoCube nanopositioner stack. Since the tip and sample are both fixed to the copper housing, relative displacement between the tip and sample is suppressed, which ensures a stable contact and reproducible PCS. The housing is suspended with springs at the bottom of an insertable probe for a Leiden dilution fridge. With such customization the sample position is not at the field center of the magnet, and the field value is estimated with the tabulated values from the magnet manufacturer. Differential resistance ($dV/dI$) is measured with a standard lock-in technique.
![(Color online) Bias dependence of $dV/dI$ (a, c, e) and magnetoresistance (b, d, f) of three different point contact (PC) resistance at the same location between the W tip and SRO single crystal S1 at 0.35 K. The resistance at zero bias and zero field is 9.3, 4.3, 3.2 $\Omega$ respectively. For clarity, in (a) and (c) the $dV/dI$ curves at 625 Oe (Green) are shifted up by 0.2 $\Omega$. Arrows in (b), (d), (f) show the sweeping direction of the magnetic field. The reproducibility of the measurements is demonstrated by the overlapping of $dV/dI$ curves in (a), (c), (e) with bias ramping in both directions. The discontinuity around $\pm$625 Oe is related to the ramping speed of the field, and can be smaller when the field ramping speed is reduced, while the hysteresis is almost the same.[]{data-label="fig_dVdI_pressure"}](Fig1){width="9cm"}
At the same location, by pushing the tungsten tip towards the SRO surface (more precisely it is the SRO moving towards the tip), the PC resistance is reduced and the pressure is increased. The zero-bias, zero-field resistance ($R_0$) is 9.3, 4.3, and 3.2 $\Omega$, respectively (see Appendix \[appendix\_R\_pc\] for a discussion of PC resistance). The bias dependence of $dV/dI$ is shown in Figs. \[fig\_dVdI\_pressure\]a, \[fig\_dVdI\_pressure\]c, and \[fig\_dVdI\_pressure\]e, at a nominal temperature of 0.35 K. SC is clearly shown by the resistance dip within $\pm$0.2 mV without any applied field. With a 625 Oe magnetic field applied along the $c$ axis (H$_{\perp}$), SC is almost fully suppressed for the 9.3 $\Omega$ PC, as shown by the recovery of the resistance peak at zero bias. However, for the 4.3 $\Omega$ PC there is still a small dip, suggesting that SC is not fully suppressed, *i.e.*, SC is enhanced with increased pressure.
Enhancement of SC is further confirmed by the temperature dependence of $dI/dV$ at zero field as shown in Fig. \[fig\_dVdI\_9ohm\]b and Fig. \[fig\_dVdI\_4ohm\]b, where $T_c$ is increased from the bulk value of 1.5 K to about 2 K and 2.5 K for the 9.3 $\Omega$ and 4.3 $\Omega$ PCs, respectively. This enhanced $T_{c}$ is consistent with previous susceptibility measurements on bulk SRO samples under uniaxial pressure, where the mechanism of $T_c$ enhancement was ascribed to anisotropic lattice distortion, [@Kittaka2009jpsj_b; @Kittaka2010prb; @Taniguchi2012j
---
abstract: 'We have used the GALEX ultraviolet telescope to study stellar populations and star formation morphology in a well-defined sample of more than three dozen nearby optically-selected pre-merger interacting galaxy pairs. We have combined the GALEX NUV and FUV images with broadband optical maps from the Sloan Digital Sky Survey to investigate the ages and extinctions of the tidal features and the disks. We have identified a few new candidate tidal dwarf galaxies in this sample, as well as other interesting morphologies such as accretion tails, ‘beads on a string’, and ‘hinge clumps’. In only a few cases are strong tidal features seen in HI maps but not in GALEX.'
author:
- 'Beverly J. Smith$^1$, Mark L. Giroux$^1$, Curtis Struck$^2$, Mark Hancock$^{3}$, and Sabrina Hurlock$^1$'
title: 'Tidal Dwarf Galaxies, Accretion Tails, and ‘Beads on a String’ in the ‘Spirals, Bridges, and Tails’ Interacting Galaxy Survey'
---
Introduction
============
Tidal disturbances have played an important role in reshaping galaxies and triggering star formation over cosmic time. This is confirmed by H$\alpha$, far-infrared, and mid-infrared studies showing that the mass-normalized star formation rates of pre-merger optically-selected interacting galaxies are enhanced by a factor of two on average compared to normal spirals [@bushouse87; @kennicutt87; @bushouse88; @smith07].
With the advent of the Galaxy Evolution Explorer (GALEX), a new window on star formation in galaxies is now available. The addition of UV helps to break the age$-$extinction degeneracy in population synthesis modeling (e.g., [@smith08]). Furthermore, since the UV traces somewhat older and lower mass stars ($\le$400 Myrs; O to early-B stars) than H$\alpha$ ($\le$10 Myrs; early- to mid-O stars), it provides a measure of star formation over a longer timescale than H$\alpha$ studies. GALEX imaging has shown that some tidal features in interacting galaxies are quite bright in the UV (e.g., [@neff05]). In some cases, tidal features previously thought to be purely gaseous have been detected by GALEX (e.g., [@hancock07]). In other systems, GALEX images have been used to identify new tidal features (e.g., [@boselli05]).
To address these issues, we have used the GALEX telescope to image more than three dozen strongly interacting galaxies in the UV (the ‘Spirals, Bridges, and Tails’ (SB&T) sample). These galaxies were selected from the Arp (1966) Atlas using the following criteria: 1) They are relatively isolated binary systems; we eliminated merger remnants, close triples, and multiple systems in which the galaxies have similar optical brightnesses. 2) They are tidally disturbed. 3) They have radial velocities less than 10,350 km/s. 4) Their total angular size is $>$ 3$'$, to allow for good resolution with GALEX.
Each galaxy was imaged for $\ge$1500 seconds in the FUV and NUV broadband filters of GALEX, which have effective bandpasses of 1350 $-$ 1705Å and 1750 $-$ 2800Å, respectively. Some of the galaxies that fit our selection criteria were previously observed by guaranteed time projects. For these galaxies, we used the archival GALEX images. The circular GALEX field of view has a diameter of 1.2 degrees. The pixel size is 1.5$''$, and the spatial resolution is $\sim$5$''$. About two-thirds of our galaxies have broadband optical images available from the Sloan Digital Sky Survey (SDSS), while three-quarters have broadband Spitzer infrared images available [@smith07]. About half have published 21 cm HI maps.
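The angular selection cut and the GALEX resolution translate into physical scales through the small-angle approximation. A quick illustrative calculation (the 50 Mpc distance below is an assumed example value, not a quantity from the survey):

```python
import math

def physical_scale_pc(theta_arcsec, distance_mpc):
    """Projected physical size (pc) subtended by a small angle at a given distance."""
    theta_rad = theta_arcsec * math.pi / (180 * 3600)   # arcsec -> radians
    return theta_rad * distance_mpc * 1e6               # small-angle approximation

# GALEX's ~5" resolution at an assumed pair distance of 50 Mpc:
print(round(physical_scale_pc(5.0, 50.0)))              # -> 1212 (pc), i.e. ~1.2 kpc
```

So at tens of Mpc the GALEX beam is comparable to the $\sim$1 kpc clump spacings discussed below, which is why the $>3'$ angular-size criterion matters for resolving structure in the tidal features.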
Morphologies
============
The SB&T galaxies have a large range of collisional morphologies, including M51-like systems, wide pairs with long tails and/or bridges, wide pairs with short tails, close pairs with long tails, and close pairs with short tails. In the current paper, we discuss unusual tidal morphologies in a subset of the galaxies. In @giroux10, we present an Atlas of UV images of additional SB&T galaxies. The full survey is described in detail in @smith10. For four of the galaxies in the SB&T sample, we have already published the GALEX images as part of detailed studies of the individual galaxies, and compared with numerical simulations of the interaction [@hancock07; @hancock09; @hancock10; @smith08; @peterson09; @peterson10].
There is a large variety of star formation morphologies within the tidal features in this sample. In many cases, the tidal features are quite bright in the UV. This is illustrated by Arp 72 (Figure 1a), whose eastern tail is very prominent in the GALEX images, and has very blue UV/optical colors. Arp 72 is also a good example of the so-called ‘beads on a string’ morphology, in which regularly-spaced clumps of star formation are seen along spiral arms and tidal features. These clumps are generally spaced about 1 kpc apart, the characteristic scale for gravitational collapse of molecular clouds [@elmegreen96]. Such beads are seen in many other systems in our sample, including the northern tail of the western galaxy in Arp 65 (Figure 1b), Arp 82 [@hancock07], and Arp 285 [@smith08].
In a few systems, we see luminous star forming regions at the base of a tidal feature. We call these features ‘hinge clumps’ [@hancock09]. These lie near the intersection of the spiral density wave in the inner disk and the material wave in the tail. These may form when dense material in the inner disk gets pulled out into a tail. This lowers the shear, which may allow more massive clouds to gravitationally collapse. Hinge clumps are visible at the eastern end of the Arp 72 bridge (Figure 1a) and the base of the northern tail of Arp 65 (Figure 1b). Hinge clumps are also seen in Arp 82 [@hancock07] and Arp 305 [@hancock09].
Our sample also includes some candidate ‘tidal dwarf galaxies’ (TDGs), massive concentrations of young stars near the tips of tidal features. The prototypical TDGs in Arp 244 and Arp 245 [@mirabel92; @duc00] are included in the SB&T sample, along with the bridge TDG in Arp 305 [@hancock09; @hancock10]. Another possible TDG is seen in Arp 202 (Figure 2), an interaction between an edge-on disk galaxy and a smaller irregularly-shaped galaxy to the south. A long clumpy tail is visible to the west of the southern galaxy. The tip of this tail is particularly prominent in the GALEX images, and has very blue UV/optical colors. Our optical spectrum shows that this clump is at the same redshift as Arp 202. This source was not detected in our Spitzer 8 $\mu$m map [@smith07] or in our SARA H$\alpha$ map, suggesting that it is in a post-starburst stage.
Another SB&T system that may have TDGs is Arp 181 (Figure 3). A clump is visible near the end of the western tail in the GALEX and SDSS images, with very blue optical/UV colors. However, no optical spectrum is available, thus it is unclear whether it is at the same redshift as Arp 181. Further west, another galaxy is visible, without any obvious link to the tail. Our optical spectrum shows that it is at the same redshift as Arp 181. In the SDSS image it looks like a spiral galaxy or a disturbed disk with short tidal tails. It is extremely blue in NUV $-$ g, and is detected at 8 $\mu$m [@smith07]. This may be either a pre-existing dwarf galaxy or a recently detached TDG.
The SB&T sample also contains numerous examples of accretion from one galaxy to another. One of the best-studied examples is the northern tail of Arp 285, which was likely produced from material accreted from the southern galaxy [@toomre72; @smith08]. According to our numerical simulations, the material in this tail fell into the gravitational potential of the northern galaxy, overshot that potential, and is now gravitationally collapsing and forming stars [@smith08]. We call such features ‘accretion tails’, to distinguish them from classical tidal features. The inner western tail of Arp 284 was likely produced by the same process [@struck03]. Another system which may have an accretion tail is Arp 105 (Figure 4). The spiral in this system has a long tail extending to the north, previously classified as a TDG [@duc97]. The spiral is connected by a bridge to an elliptical galaxy to the south. South of the elliptical is a bright star formation knot [@stockton72]. Both the northern TDG candidate and the southern knot of star formation are luminous in HI maps [@duc97]. In the GALEX images, the spiral and the northern TDG are quite bright in the UV, but the highest UV surface brightness is found in the knot of star formation south of the elliptical. We suggest, based on analogy to Arp 285 [@smith08] and proximity to the elliptical, that the southern star forming region in Arp 105 is an accretion tail, rather than simply a classical
---
abstract: 'The fermions of the Standard Model are integrated out to obtain the effective Lagrangian in the sector violating $P$ and $CP$ at zero temperature. We confirm that no contributions arise for operators of dimension six or less and show that the leading operators are of dimension eight. To assert this we explicitly compute one such non-vanishing contribution, namely, that with three $Z^0$, two $W^+$ and two $W^-$. Terms involving just gluons and $W$’s are also considered, however, they turn out to vanish in the $P$-odd sector to eighth order. The analogous gluonic term in the $CP$-odd and $P$-even ($C$-odd) sector is non-vanishing and it is also computed. The expressions derived apply directly to Dirac massive neutrinos. All $CP$-violating results display the infrared enhancement already found at dimension six.'
author:
- 'L. L. Salcedo'
title: 'Leading order one-loop $CP$ and $P$ violating effective action in the Standard Model'
---
Introduction {#sec:1}
============
A full understanding of $CP$-violation remains a challenge, and for this reason it is a fruitful field of research both within the Standard Model of particle physics and in extensions thereof [@Xing:2003ez; @Buras:1997fb; @Neubert:1996qg; @Grossman:1997pa; @Winstein:1992sx; @Paschos:1989ur; @Wolfenstein:1987pe; @Donoghue:1987wu; @Charles:2004jd]. $CP$-violation enters in very different phenomena, like the non-vanishing electric dipole moment of elementary particles, baryogenesis [@Sakharov:1967dj], or, assuming $CPT$ invariance, the puzzling $T$-violation. Yet, in the Standard Model, $CP$-violation is rather elusive. There is no trace of it in the QCD sector, while in the electroweak sector it enters through a small parameter in the CKM matrix for quarks [@Kobayashi:1973fv], and possibly also for leptons, in the case of massive neutrinos [@Maki:1962mu]. Even in the electroweak sector, manifestation of $CP$ breaking requires a subtle combination, the Jarlskog determinant $\Delta$, which is of order twelve in the quark (or lepton) masses and would vanish if two up-like or two down-like quarks were degenerate in mass [@Jarlskog:1985ht]. In any case, $CP$ can be broken in the Standard Model only through fermions. The structure of the Standard Model action implies that integration of the fermions results in an effective Lagrangian of the form (we assume the unitary gauge throughout) $$\mathcal{L}^\text{eff}
{}(x) = \sum_\alpha g_\alpha
\left(\frac{v}{\phi(x)}\right)^{d_\alpha-4}\mathcal{O}_\alpha(x) ,$$ where $\mathcal{O}_\alpha(x)$ represents any possible operator, of mass dimension $d_\alpha$, constructed as a Lorentz and gauge invariant product of the gauge fields, their derivatives and derivatives of the Higgs field. $g_\alpha$ is the operator coupling constant, with mass dimension $4-d_\alpha$. $\phi(x)$ denotes the Higgs field and $v$ its vacuum expectation value. The coupling constant (which may vanish for some operators) has two additive contributions, one from the quark loop and another from the lepton loop. In the $CP$-odd sector, $g_\alpha$ must contain the Jarlskog determinant. In terms of the Yukawa coupling this yields a tiny dimensionless number, $\Delta/v^{12}$, of the order of $10^{-24}$. This fact has occasionally been presented as an indication of an intrinsic limitation of the Standard Model to produce enough $CP$-breaking to account for observations, including the baryon asymmetry. While this might be true, qualitative arguments should eventually be supported by a detailed computation. Smit argued in [@Smit:2004kh] that the coupling $g_\alpha$ is just a homogeneous function of the quark (or lepton) masses of the appropriate degree. This implies that $g_\alpha \sim \Delta \times I_\alpha$, where $I_\alpha$ has a large negative degree to compensate that of $\Delta$. Both $\Delta$ and $I_\alpha$ depend only on the fermion masses and do not involve $v$. On the other hand, the various quark masses are very different, and widely different results can be obtained by combining them at random. Actual calculations have been carried out in [@Hernandez:2008db; @GarciaRecio:2009zp] for operators of dimension six, which is the first possible $CP$-violating contribution at one loop.
They show that $g_\alpha \sim J\kappa/m_c^2$, where $m_c$ is the charm quark mass, $J=2.9(2)\times 10^{-5}$ is the Jarlskog invariant [@Nakamura:2010zzi] and $\kappa$ is a dimensionless coefficient of the order of unity. Implications for cold electroweak baryogenesis have been considered in [@Tranberg:2009de; @Tranberg:2010af]. Unfortunately, these two references differ in that [@Hernandez:2008db] finds such a dimension six contribution in the $P$-odd sector, whereas [@GarciaRecio:2009zp] finds a contribution in the $C$-odd sector but none in the $P$-odd one.
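The order of magnitude of the "tiny dimensionless number" $\Delta/v^{12}\sim 10^{-24}$ quoted above can be checked with rough inputs. A sketch using approximate PDG-style quark masses (my illustrative values, not taken from this paper) together with the quoted Jarlskog invariant, and the standard expression of $\Delta$ as $J$ times the product of squared-mass differences:

```python
# Order-of-magnitude check of Delta / v^12 ~ 1e-24.
J = 2.9e-5                       # Jarlskog invariant, as quoted in the text
v = 246.0                        # Higgs vacuum expectation value (GeV)
m_u, m_c, m_t = 0.002, 1.27, 173.0   # rough up-type quark masses (GeV)
m_d, m_s, m_b = 0.005, 0.095, 4.18   # rough down-type quark masses (GeV)

delta = J
for a, b in [(m_t, m_c), (m_t, m_u), (m_c, m_u),
             (m_b, m_s), (m_b, m_d), (m_s, m_d)]:
    delta *= a**2 - b**2         # product of squared-mass differences

print(f"{delta / v**12:.1e}")    # -> 2.3e-24 with these inputs
```

The result indeed lands at the $10^{-24}$ scale, and the sketch also makes the enhancement mechanism visible: dividing instead by powers of the light quark masses (as in $J\kappa/m_c^2$) gives a dramatically larger coupling than the naive $\Delta/v^{12}$ estimate.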
The purpose of this note is fourfold. First, to reduce to the simplest and most transparent terms the calculation of these coupling constants. Second, to confirm that, although dimension six $CP$-odd and $P$-odd operators do exist, their couplings vanish in the Standard Model. Third, to verify that the order six cancellation is accidental, and that non-vanishing contributions in the $CP$-odd and $P$-odd sector appear for the first time at dimension eight. The purely gluonic leading (eighth) order term is also computed, since it is particularly simple. As it turns out, this term breaks $C$ but not $P$. Lastly, to verify that the enhancement (as compared to the naive estimate) found at order six is displayed also at higher orders.
The method {#sec:2}
==========
We will integrate out the fermions in the Standard Model to extract the $CP$ violating contribution of the resulting effective action. This is the one-loop approximation to the effective action with full one-particle irreducible bosonic lines and vertices. We work at zero temperature. Quarks will be explicitly considered. Leptons would not contribute to the $CP$-odd sector if neutrinos are assumed to be exactly massless. For massive Dirac neutrinos the contribution of the leptons will be completely analogous to the one obtained for quarks.
The quark-sector Lagrangian of the Standard Model, in its Euclidean version and in the unitary gauge, can be written as [@Huang:1992bk]: $${\cal L}(x) =
\bar{q}(x){{\bf D}}q(x)
=
(\bar{q}_L,\bar{q}_R) \left(\begin{matrix}
m & {\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_L \\
{\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_R & m
\end{matrix}
\right)
\left(\begin{matrix}
q_R \\ q_L
\end{matrix}
\right).$$ Here $q_{L,R}$ carry Dirac, generation (family), $ud$ and color indices ($ud$ space distinguishes the up-like from down-like quarks in each generation). Expanding further the matrices in $ud$ space: $$\begin{aligned}
m &=&
\left(\begin{matrix}
\frac{\phi}{v} m_u &0
\\
0 & \frac{\phi}{v} m_d
\end{matrix}
\right)
,
\quad
{\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_L =
\left(\begin{matrix}
{\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_u + {\hbox{$ \mathrel{\mathop{Z\!\!\!\!/}}$}} + {\hbox{$ \mathrel{\mathop{G\!\!\!\!/}}$}} & {\mathrel{\mathop{W\!\!\!\!\!/}}}{}^+ C
\\
{\mathrel{\mathop{W\!\!\!\!\!/}}}{}^-C^{-1} & {\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_d - {\hbox{$ \mathrel{\mathop{Z\!\!\!\!/}}$}} + {\hbox{$ \mathrel{\mathop{G\!\!\!\!/}}$}}
\end{matrix}
\right)
,
\quad
{\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_R =
\left(\begin{matrix}
{\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_u + {\hbox{$ \mathrel{\mathop{G\!\!\!\!/}}$}} & 0
\\
0 & {\hbox{$ \mathrel{\mathop{D\!\!\!\!/}}$}}_d + {\hbox{$ \mathrel{\mathop{G\!\!\!\!/}}$}}
\end{matrix}
\right)
.
\label{eq:2.2}\end{aligned}$$ Here $m_{u,d}$ are the diagonal matrices (in generation space) with the up-like and down-like quarks masses, respectively. $G_\mu$ the gluon field, $Z_\mu$ the $Z^
---
abstract: 'Let $M={\operatorname{GL}}_{r_1}\times\cdots\times{\operatorname{GL}}_{r_k}\subseteq{\operatorname{GL}}_r$ be a Levi subgroup of ${\operatorname{GL}}_r$, where $r=r_1+\cdots+r_k$, and ${\widetilde{M}}$ its metaplectic preimage in the $n$-fold metaplectic cover ${\widetilde{\operatorname{GL}}}_r$ of ${\operatorname{GL}}_r$. For automorphic representations $\pi_1,\dots,\pi_k$ of ${\widetilde{\operatorname{GL}}}_{r_1}({\mathbb{A}}),\dots,{\widetilde{\operatorname{GL}}}_{r_k}({\mathbb{A}})$, we construct (under a certain technical assumption, which is always satisfied when $n=2$) an automorphic representation $\pi$ of ${\widetilde{M}}({\mathbb{A}})$ which can be considered as the “tensor product” of the representations $\pi_1,\dots,\pi_k$. This is the global analogue of the metaplectic tensor product defined by P. Mezo in the sense that locally at each place $v$, $\pi_v$ is equivalent to the local metaplectic tensor product of $\pi_{1,v},\dots,\pi_{k,v}$ defined by Mezo. Then we show that if all of $\pi_i$ are cuspidal (resp. square-integrable modulo center), then the metaplectic tensor product is cuspidal (resp. square-integrable modulo center). We also show that (both locally and globally) the metaplectic tensor product behaves in the expected way under the action of a Weyl group element, and show the compatibility with parabolic inductions.'
address: 'Shuichiro Takeda: Mathematics Department, University of Missouri, Columbia, 202 Math Sciences Building, Columbia, MO, 65211'
author:
- Shuichiro Takeda
title: 'Metaplectic tensor products for automorphic representations of ${\widetilde{\operatorname{GL}}}(r)$'
---
**Introduction**
================
Let $F$ be either a local field of characteristic 0 or a number field, and $R$ be $F$ if $F$ is local and the ring of adeles ${\mathbb{A}}$ if $F$ is global. Consider the group ${\operatorname{GL}}_r(R)$. For a partition $r=r_1+\cdots+r_k$ of $r$, one has the Levi subgroup $$M(R):={\operatorname{GL}}_{r_1}(R)\times\cdots\times{\operatorname{GL}}_{r_k}(R)\subseteq{\operatorname{GL}}_r(R).$$ Let $\pi_1,\dots,\pi_k$ be irreducible admissible (resp. automorphic) representations of ${\operatorname{GL}}_{r_1}(R),\dots,{\operatorname{GL}}_{r_k}(R)$ where $F$ is local (resp. $F$ is global). Then it is a trivial construction to obtain the representation $\pi_1\otimes\cdots\otimes\pi_k$, which is an irreducible admissible (resp. automorphic) representation of the Levi $M(R)$. Though highly trivial, this construction is of great importance in the representation theory of ${\operatorname{GL}}_r(R)$.
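The block structure underlying the Levi subgroup can be made concrete numerically: an element $(g_1,\dots,g_k)$ of $M(R)$ sits inside ${\operatorname{GL}}_r(R)$ as a block-diagonal matrix, and multiplication in ${\operatorname{GL}}_r$ restricts componentwise on $M$. A small illustrative sketch (function names are mine, not from the paper):

```python
import numpy as np

def levi_embed(blocks):
    """Embed (g_1, ..., g_k) in GL_{r_1} x ... x GL_{r_k} as a
    block-diagonal element of GL_r, where r = r_1 + ... + r_k."""
    r = sum(b.shape[0] for b in blocks)
    g = np.zeros((r, r))
    i = 0
    for b in blocks:
        n = b.shape[0]
        g[i:i + n, i:i + n] = b
        i += n
    return g

# Multiplication in GL_r restricts to componentwise multiplication on M:
g1, g2 = np.random.rand(2, 2), np.random.rand(3, 3)
h1, h2 = np.random.rand(2, 2), np.random.rand(3, 3)
lhs = levi_embed([g1, g2]) @ levi_embed([h1, h2])
rhs = levi_embed([g1 @ h1, g2 @ h2])
assert np.allclose(lhs, rhs)
```

It is exactly this direct-product structure of $M(R)$ that fails for the metaplectic preimage $\widetilde{M}(R)$, which is why the tensor product construction below is nontrivial.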
Now if one considers the metaplectic $n$-fold cover ${\widetilde{\operatorname{GL}}}_r(R)$ constructed by Kazhdan and Patterson in [@KP], the analogous construction turns out to be far from trivial. Namely, for the metaplectic preimage ${\widetilde{M}}(R)$ of $M(R)$ in ${\widetilde{\operatorname{GL}}}_r(R)$ and representations $\pi_1,\dots,\pi_k$ of the metaplectic $n$-fold covers ${\widetilde{\operatorname{GL}}}_{r_1}(R),\dots,{\widetilde{\operatorname{GL}}}_{r_k}(R)$, one cannot construct a representation of ${\widetilde{M}}(R)$ simply by taking the tensor product $\pi_1\otimes\cdots\otimes\pi_k$. This is because ${\widetilde{M}}(R)$ is not the direct product of ${\widetilde{\operatorname{GL}}}_{r_1}(R),\dots,{\widetilde{\operatorname{GL}}}_{r_k}(R)$, namely $${\widetilde{M}}(R)\ncong{\widetilde{\operatorname{GL}}}_{r_1}(R)\times\dots\times{\widetilde{\operatorname{GL}}}_{r_k}(R),$$ and, even worse, there is no natural map between them.
When $F$ is a local field, for irreducible admissible representations $\pi_1,\dots,\pi_k$ of ${\widetilde{\operatorname{GL}}}_{r_1}(F),\dots,{\widetilde{\operatorname{GL}}}_{r_k}(F)$, P. Mezo ([@Mezo]), whose work, we believe, is based on the work by Kable [@Kable2], constructed an irreducible admissible representation of the Levi ${\widetilde{M}}(F)$, which can be called the “metaplectic tensor product” of $\pi_1,\dots,\pi_k$, and characterized it uniquely up to certain character twists. (His construction will be reviewed and expanded further in Section \[S:Mezo\].)
The theme of the paper is to carry out a construction analogous to Mezo’s when $F$ is a number field, and our main theorem is
Let $M={\operatorname{GL}}_{r_1}\times\cdots\times{\operatorname{GL}}_{r_k}$ be a Levi subgroup of ${\operatorname{GL}}_r$, and let $\pi_1,\dots,\pi_k$ be unitary automorphic subrepresentations of ${\widetilde{\operatorname{GL}}}_{r_1}({\mathbb{A}}),\dots,{\widetilde{\operatorname{GL}}}_{r_k}({\mathbb{A}})$. Assume that $M$ and $n$ are such that Hypothesis ($\ast$) is satisfied, which is always the case if $n=2$. Then there exists an automorphic representation $\pi$ of ${\widetilde{M}}({\mathbb{A}})$ such that $$\pi\cong{\widetilde{\otimes}}'_v\pi_v,$$ where each $\pi_v$ is the local metaplectic tensor product of Mezo. Moreover, if $\pi_1,\dots,\pi_k$ are cuspidal (resp. square-integrable modulo center), then $\pi$ is cuspidal (resp. square-integrable modulo center).
In the above theorem, ${\widetilde{\otimes}}_v'$ indicates the metaplectic restricted tensor product, the meaning of which will be explained later in the paper. The existence and the local-global compatibility in the main theorem are proven in Theorem \[T:main\], and the cuspidality and square-integrability are proven in Theorem \[T:cuspidal\] and Theorem \[T:square\_integrable\], respectively.
Let us note that by unitary, we mean that $\pi_i$ is equipped with a Hermitian structure invariant under the action of the group. Also we require $\pi_i$ be an automorphic subrepresentation, so that it is realized in a subspace of automorphic forms and hence each element in $\pi_i$ is indeed an automorphic form. (Note that usually an automorphic representation is a subquotient.) We need those two conditions for technical reasons, and they are satisfied if $\pi_i$ is in the discrete spectrum, namely either cuspidal or residual.
Also we should emphasize that if $n>2$, we do not know if our construction works unless we impose a technical assumption as in Hypothesis ($\ast$). We will show in Appendix \[A:topology\] that this assumption is always satisfied if $n=2$, and if $n>2$ it is satisfied, for example, if $\gcd(n, r-1+2cr)=1$, where $c$ is the parameter to be explained. We hope that even for $n>2$ it is always satisfied, though at this moment we do not know how to prove it.
As we will see, strictly speaking the metaplectic tensor product of $\pi_1,\dots,\pi_k$ might not be unique even up to equivalence but is dependent on a character $\omega$ on the center $Z_{{\widetilde{\operatorname{GL}}}_r}$ of ${\widetilde{\operatorname{GL}}}_r$. Hence we write $$\pi_\omega:=(\pi_1{\widetilde{\otimes}}\cdots{\widetilde{\otimes}}\pi_k)_\omega$$ for the metaplectic tensor product to emphasize the dependence on $\omega$.\
Also we will establish a couple of important properties of the metaplectic tensor product both locally and globally. The first one is that the metaplectic tensor product behaves in the expected way under the action of the Weyl group. Namely
\
[**Theorem \[T:Weyl\_group\_local\] and \[T:Weyl\_group\_global\].**]{} [*Let $w\in W_M$ be a Weyl group element of ${\operatorname{GL}}_r$ that only permutes the ${\operatorname{GL}}_{r_i}$-factors of $M$. Namely for each $(g_1,\dots,g_k)\in{\operatorname{GL}}_{r_1}\times\cdots\times{\operatorname{GL}}_{r_k}$, we have $w
(g_1,\dots,g_k)w^{-1}=(g_{\sigma(1)},\dots,g_{\sigma(k)})$ for a permutation $\sigma\in S_k$ of $k$ letters. Then both locally and globally, we have $$^{w}(\
---
abstract: '[The aim of this study is to investigate systematic chemical differentiation of molecules in regions of high mass star formation.]{}[We observed five prominent sites of high mass star formation in HCN, HNC, HCO$^+$, their isotopes, C$^{18}$O, C$^{34}$S and some other molecular lines, for some sources both at 3 and 1.3 mm and in continuum at 1.3 mm. Taking into account N$_2$H$^+$ data obtained earlier, we derive molecular abundances and physical parameters of the sources (mass, density, ionization fraction, etc.). The kinetic temperature is estimated from CH$_3$C$_2$H observations. Then we analyze correlations between molecular abundances and physical parameters and discuss chemical models applicable to these species.]{}[The typical physical parameters for the sources in our sample are the following: kinetic temperature in the range $\sim 30-50$ K (it is systematically higher than that obtained from ammonia observations and is rather close to the dust temperature), masses from tens to hundreds of solar masses, gas densities $\sim 10^5$ cm$^{-3}$, ionization fraction $\sim 10^{-7}$. In most cases the ionization fraction increases slightly (by a factor of a few) towards the embedded YSOs. The observed clumps are close to gravitational equilibrium. There are systematic differences in the distributions of various molecules. The abundances of CO, CS and HCN are more or less constant. There is no sign of CO and/or CS depletion as in cold cores. At the same time the abundances of HCO$^+$, HNC and especially N$_2$H$^+$ strongly vary in these objects. They anti-correlate with the ionization fraction and as a result decrease towards the embedded YSOs. For N$_2$H$^+$ this can be explained if dissociative recombination is the dominant destruction process. ]{}[N$_2$H$^+$, HCO$^+$, and HNC are valuable indicators of massive protostars.]{}'
author:
- |
I. Zinchenko$^{1,2,3}$[^1], P. Caselli$^{4}$ and L. Pirogov$^{1}$\
$^1$Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova 46, 603950 Nizhny Novgorod, Russia\
$^2$Nizhny Novgorod University , Gagarin av. 23, 603950 Nizhny Novgorod, Russia\
$^3$Helsinki University Observatory, Tähtitorninmäki, P.O. Box 14, FIN-00014 University of Helsinki, Finland\
$^4$School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, UK
date:
title: 'Chemical differentiation in regions of high mass star formation II. Molecular multiline and dust continuum studies of selected objects'
---
\[firstpage\]
Astrochemistry – Stars: formation – ISM: clouds – ISM: molecules – Radio lines: ISM
Introduction
============
It is now well established that the central parts of dense low mass cloud cores suffer strong depletion of molecules onto dust grains. Best studied is CO, which has been shown to be depleted in, e.g., L1544 [@Caselli99], IC 5146 [@Kramer99], L1498 [@Willacy98], and L1689B [@Jessop01]. Species related to CO, such as HCO$^+$, are also expected to disappear at gas densities above $\sim 10^5$ cm$^{-3}$ [@Caselli02]. Moreover, @Tafalla02 and @Bergin01 have shown that CS also depletes out in the central parts of dense cores, suggesting that CS (so far considered a high density tracer) does not actually probe the central core regions. On the other hand, N$_2$H$^+$ is an excellent tracer of dust continuum emission [@Caselli02], implying that this species does not deplete out (due to the volatility of the parent species N$_2$).
In more massive cores, depletion is probably active in dense regions away from star forming sites where dust temperatures may be low enough ($T < 20$ K) for CO and CS abundances to drop and cause chemical differentiation [@Fontani06].
Several years ago we mapped several tens of dense cores towards water masers in CS(2–1) with the SEST-15m and Onsala 20m radio telescopes [@Zin95; @Zin98]. In 2000 many of them were mapped in N$_2$H$^+$ [@Pirogov03]. The goal was to identify dense clumps as local maxima in N$_2$H$^+$ maps and to further investigate their properties. However, large differences between the N$_2$H$^+$ and CS distributions have been found.
In this situation it is important to understand which species better trace the total gas distribution: is it N$_2$H$^+$ as in low mass cores? What is the reason for this differentiation in warm clouds, where freeze-out is hardly effective? To answer these questions we observed dust continuum emission and several additional molecular lines towards selected sources which show significant differences between the CS and N$_2$H$^+$ maps. The results for the southern sources (where we observed only N$_2$H$^+$ $J=1-0$, CS $J=2-1$ and $J=5-4$ and dust continuum) have been published separately [@Pirogov07 hereafter Paper I]. In that paper we have shown that the differences in the CS and N$_2$H$^+$ maps cannot be explained by molecular excitation and/or line opacity effects but are caused by chemical differentiation of these species. We found that N$_2$H$^+$ abundance in many cases drops significantly towards embedded luminous YSOs. However, the reasons for this behavior were not clear. Two possible explanations were mentioned: an accelerated collapse model suggested by @Lintott05 and dissociative recombination of N$_2$H$^+$.
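The dissociative-recombination explanation can be illustrated with a toy steady-state balance: if N$_2$H$^+$ forms via N$_2$ + H$_3^+$ and is destroyed mainly by recombination with electrons, its equilibrium abundance scales as the inverse of the ionization fraction. All rate coefficients and abundances below are assumed illustrative values, not quantities derived in this paper:

```python
# Toy steady-state balance for N2H+ (illustrative rates only):
#   formation:   N2 + H3+ -> N2H+ + H2   at rate k_f
#   destruction: N2H+ + e- -> products   at rate k_rec (dissociative recombination)
# Balance gives n(N2H+) = k_f n(N2) n(H3+) / (k_rec n_e), i.e. x(N2H+) ~ 1/x_e.
k_f, k_rec = 1.7e-9, 2.8e-7      # cm^3 s^-1, assumed illustrative values
n_H2 = 1e5                        # gas density (cm^-3), typical for these cores
x_N2, x_H3p = 1e-5, 1e-9          # assumed fractional abundances

for x_e in (1e-8, 1e-7):          # ionization fractions bracketing ~1e-7
    n_e = x_e * n_H2
    n_N2Hp = k_f * (x_N2 * n_H2) * (x_H3p * n_H2) / (k_rec * n_e)
    print(f"x_e={x_e:.0e}: x(N2H+)={n_N2Hp / n_H2:.1e}")
```

A tenfold rise in the ionization fraction towards an embedded YSO thus suppresses the N$_2$H$^+$ abundance tenfold in this picture, which is the qualitative anti-correlation examined below.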
Here we present and discuss the results for the northern sample where we observed also HCN, HNC, HCO$^+$, their isotopes, C$^{18}$O and some other molecular lines, for some sources both at 3 and 1.3 mm. These data help to understand better the chemical differentiation in these objects. In addition they give important information on their physical properties.
Observations
============
Sources
-------
The sources for this investigation were selected from the sample of massive cores studied by us earlier in various lines [@Zin98; @Zin00; @Pirogov03]. The main criterion for this selection was the presence of significant differences between the CS and N$_2$H$^+$ maps. The list of the sources is given in Table \[table:sources\]. For S187, W3 and S140, the source coordinates correspond to water masers. For S255, the IRAS position is used. In the DR-21 area we used the coordinates of an ammonia core as given by @Jijina99, which practically coincide (within a few arcsec) with the position of DR 21 (OH). The last column gives the distances to the sources. The data sets obtained for these sources differ.
[llll]{} Source & $\alpha$(2000) & $\delta$(2000) & $D$\
& (${\rm ^h\ ^m\ ^s }$) &($\degr$ $\arcmin$ $\arcsec$) & (kpc)\
S 187 &01 23 15.0 &61 48 47 & 1.0 $^a$\
W3 &02 25 28.2 &62 06 58 & 2.1 $^b$\
S 255 &06 12 53.3 &17 59 22 & 2.5 $^b$\
DR-21 NH$_3$ &20 39 00.4 &42 22 53 & 3.0 $^c$\
S 140 &22 19 18.2 &63 18 49 & 0.9 $^b$\
$^a$@fich, $^b$@blitz, $^c$@harvey1
\[table:sources\]
Instruments and frequencies
---------------------------
The sources were observed with the 20-m Onsala, 12-m NRAO (which now belongs to the Arizona Radio Observatory) and 30-m IRAM radio telescopes in the 3 mm and 1.3 mm wavebands. Several molecular transitions were observed in each waveband. At the IRAM 30-m telescope the dust continuum emission was also mapped, at 1.2 mm. The details of the observations at each instrument are given below. A part of these observations has already been published (some of the CS and C$^{34}$S $J=2-1$ data by @Zin98 and the N$_2$H$^+$ $J=1-0$ data by @Pirogov03). We express the results of the line observations in units of main beam brightness temperature ($T_{\rm mb}$) assuming the main beam efficiencies ($\eta_{\rm mb}$) as provided by the telescope documentation.
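The conversion to main-beam brightness temperature is a one-line scaling of the antenna temperature by the main-beam efficiency; a minimal sketch (the numerical values below are assumed examples, not quoted Onsala, NRAO, or IRAM efficiencies):

```python
def main_beam_temperature(t_a_star, eta_mb):
    """Convert antenna temperature T_A* (K) to main-beam brightness
    temperature T_mb = T_A* / eta_mb."""
    return t_a_star / eta_mb

# e.g. an assumed efficiency of 0.5 doubles the temperature scale:
print(main_beam_temperature(1.2, 0.5))  # -> 2.4 (K)
```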
### Onsala observations
The observations were performed with an SIS receiver in single-sideband (SSB) mode, using either dual beam switching with a beam throw of 115 or frequency switching. As a backend, until the year 2000 we used two filter spectrometers (usually in parallel): a 256 channel filterbank with 250 kHz resolution and a 512 channel filterbank with 1 MHz resolution. Since 2000 we have used mainly the autocorrelator spectrometer tuned to 50 kHz resolution. Pointing was checked periodically by observations of nearby SiO mas
---
abstract: 'In this paper we fill in a fundamental gap in the extremal bootstrap percolation literature, by providing the first proof of the fact that for all $d \geq 1$, the size of the smallest percolating sets in $d$-neighbour bootstrap percolation on $[n]^d$, the $d$-dimensional grid of size $n$, is $n^{d-1}$. Additionally, we prove that such sets percolate in time at most $c_d n^2$, for some constant $c_d >0 $ depending on $d$ only.'
author:
- 'Micha[ł]{} Przykucki[^1] [^2]'
- Thomas Shelton
title: |
Smallest percolating sets\
in bootstrap percolation on grids
---
Introduction {#sec:intro}
============
*Bootstrap percolation*, suggested by Chalupa, Leath, and Reich [@bootstrapbethe], is a simple cellular automaton modelling the spread of an infection on the vertex set of a graph $G$. For some positive integer $r$, given a set of initially infected vertices $A \subseteq V(G)$, in consecutive rounds we infect all vertices with at least $r$ already infected neighbours. *Percolation* occurs if every vertex of $G$ is eventually infected.
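The infection rule just described is easy to simulate directly; the following sketch (our own code, taking $G$ to be a grid for concreteness) runs $r$-neighbour bootstrap percolation until the infected set stabilises:

```python
from itertools import product

def percolates(n, d, r, initial):
    """Run r-neighbour bootstrap percolation on the grid [n]^d starting
    from `initial`; return True iff every vertex is eventually infected."""
    infected = set(initial)
    vertices = set(product(range(1, n + 1), repeat=d))
    changed = True
    while changed:
        changed = False
        for v in vertices - infected:
            # neighbours differ by 1 in exactly one coordinate
            deg = sum(
                1
                for i in range(d)
                for s in (-1, 1)
                if v[:i] + (v[i] + s,) + v[i + 1:] in infected
            )
            if deg >= r:
                infected.add(v)
                changed = True
    return infected == vertices

# The diagonal of the n x n square percolates in 2-neighbour bootstrap.
diagonal = {(i, i) for i in range(1, 6)}
print(percolates(5, 2, 2, diagonal))  # True
```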
The majority of research into bootstrap percolation processes has been focused on the probabilistic properties of the model. More precisely, if we initially infect every vertex independently at random with some probability $p$, how likely is the system to percolate? The monotonicity of the model (i.e., the fact that infected vertices never heal) makes it reasonable to ask about the value of the *critical probability* $p$, above which percolation becomes more likely to occur than not. This quantity has been analysed for many different families of graphs $G$ and for various infection rules, and often very sharp results have been obtained by, e.g., Aizenman and Lebowitz [@metastabilityeffects], Holroyd [@sharpmetastability], and Balogh, Bollob[á]{}s, Duminil-Copin, and Morris [@sharpbootstrapall].
Another family of questions related to bootstrap percolation that have been studied is concerned with the extremal properties of the model. Morris [@largestgridbootstrap] analysed the size of the largest minimal percolating sets in $2$-neighbour bootstrap percolation on the $n \times n$ square. For the same setup, Benevides and Przykucki [@maxtime] determined the maximum time the process can take until it stabilises. However, the first extremal question that attracted attention in bootstrap percolation was about the size of the smallest percolating sets. For grid graphs, this has been studied by Pete [@diseaseprocesses] (the summary of Pete’s results can be found in Balogh and Pete [@randomdisease]). For the hypercube, the size of the smallest percolating sets for all values of the infection threshold was found by Morrison and Noel [@extremalcube]. Feige, Krivelevich, and Reichman [@contagiousGnp] analysed the size of these sets in random graphs, while Coja-Oghlan, Feige, Krivelevich, and Reichman [@contagiousExpanders] studied such sets in expander graphs.
The $d$-neighbour process in $d$ dimensions
-------------------------------------------
Let us introduce some notation. For $n \in {\mathbb{N}}$, let $[n] = \{1,2, \ldots, n\}$. The $d$-dimensional grid graph of size $n$ is the graph with vertex set $[n]^d$, in which $u,v \in [n]^d$ are adjacent if and only if they differ by a value of 1 in exactly one coordinate. For $d,r,n \in {\mathbb{N}}$, let $G_{d,r}(n)$ denote the size of the smallest percolating sets in $r$-neighbour bootstrap percolation on $[n]^d$. For a set $A \subset [n]^d$, let $\langle A \rangle_r$ be the *closure* of $A$ in $r$-neighbour bootstrap percolation, i.e., the set of all vertices that become infected in the process that was started from $A$.
Among the results stated in [@diseaseprocesses] (see also the Perimeter Lemma in the Appendix to [@randomdisease]) is the following theorem.
\[thm:pete\] For all $n, d \in {\mathbb{N}}$, we have $G_{d,d}(n)=n^{d-1}$.
This is obviously trivial for $d=1$, and the case when $d=2$ constitutes a lovely and well-known puzzle. Indeed, finding a percolating set of size $n$ is easy: just take one of the diagonals of the square. To show that there is no percolating set of size strictly less than $n$, we can refer to the famous *perimeter argument*: the perimeter of the infected set (understood as the number of edges between an infected and a healthy vertex, if we naturally embed our square $[n]^2$ in the infinite grid ${\mathbb{Z}}^2$) can never grow. Indeed, whenever a new vertex becomes infected, it is by virtue of at least two perimeter edges. Thus at least two edges are removed from the perimeter of the infected set, and at most two new ones are added, and the aforementioned monotonicity of the perimeter follows. Since the whole $n \times n$ grid has perimeter $4n$, and any initially infected vertex contributes at most $4$ edges to the perimeter, we need at least $n$ initially infected vertices to percolate.
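The perimeter argument can be checked computationally; the sketch below (our own code) tracks the perimeter of the infected set after each round of $2$-neighbour bootstrap percolation and confirms that it never grows:

```python
def perimeter(infected):
    """Number of lattice edges between infected and healthy cells,
    with [n]^2 embedded in the infinite grid Z^2."""
    return sum(
        1
        for (x, y) in infected
        for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if (x + dx, y + dy) not in infected
    )

def bootstrap_perimeters(n, initial):
    """2-neighbour bootstrap on [n]^2; return the perimeter after each round."""
    infected = set(initial)
    history = [perimeter(infected)]
    while True:
        new = {
            (x, y)
            for x in range(1, n + 1)
            for y in range(1, n + 1)
            if (x, y) not in infected
            and sum((x + dx, y + dy) in infected
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))) >= 2
        }
        if not new:
            return history
        infected |= new
        history.append(perimeter(infected))

hist = bootstrap_perimeters(6, {(i, i) for i in range(1, 7)})
print(all(a >= b for a, b in zip(hist, hist[1:])))  # True
```

The diagonal of the $6 \times 6$ square starts with perimeter $4n = 24$, and the perimeter stays at $24$ in every round, matching the argument above.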
Somewhat surprisingly, the perimeter argument carries over immediately to higher dimensions, giving us the appropriate lower bound $G_{d,d}(n) \geq n^{d-1}$ for all $d \in {\mathbb{N}}$. As for the upper bound, there is a natural candidate, sometimes referred to as a “cyclic combination” of the one-dimensional lower set. More precisely, for $d\leq k\leq dn$, let $V_k=\{v=(v_1,...,v_d)\in [n]^d:\sum_{i=1}^{d} v_i=k\}$. It is then natural to believe that the set $$\label{eqn:initialSet}
A = A_d = \bigcup_{i=1}^{d} V_{in}$$ percolates in $d$-neighbour bootstrap percolation on $[n]^d$, and indeed this is the construction that was used to deduce the upper bound in [@diseaseprocesses]. One can imagine how two “neighbouring hyperplanes”, $V_{(i-1)n}$ and $V_{in}$, fill in the space between them with infection until the two growths meet, from which point on the process quickly finishes. The fact that $G_{d,d}(n) = n^{d-1}$ has become “folklore knowledge” in the area of bootstrap percolation, and has sometimes even been referred to as an “observation”. To the best of our knowledge [@PetePrivate], no formal proof of Theorem \[thm:pete\] was provided in [@diseaseprocesses], and no such proof exists in the literature.
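The construction can also be checked computationally in small cases; in the following sketch (our own code), `cyclic_set` builds the union of the hyperplanes $V_n, V_{2n}, \ldots, V_{dn}$, and a direct simulation confirms that it has size $n^{d-1}$ and percolates:

```python
from itertools import product

def cyclic_set(n, d):
    """The union of the hyperplanes V_n, V_2n, ..., V_dn in [n]^d,
    i.e., all vertices whose coordinate sum is a multiple of n."""
    return {v for v in product(range(1, n + 1), repeat=d)
            if sum(v) % n == 0}

def d_neighbour_percolates(n, d, initial):
    """Run d-neighbour bootstrap percolation on [n]^d from `initial`
    and report whether every vertex is eventually infected."""
    infected = set(initial)
    all_v = set(product(range(1, n + 1), repeat=d))
    grew = True
    while grew:
        grew = False
        for v in all_v - infected:
            k = sum(v[:i] + (v[i] + s,) + v[i + 1:] in infected
                    for i in range(d) for s in (-1, 1))
            if k >= d:
                infected.add(v)
                grew = True
    return infected == all_v

A = cyclic_set(4, 2)
print(len(A) == 4 ** (2 - 1), d_neighbour_percolates(4, 2, A))  # True True
```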
However, problems arise quickly when one tries to describe how exactly the space between the two hyperplanes is filled in. Any vertex in $V_{(i-1)n+1}$ with at least one coordinate equal to $1$ has fewer than $d$ infected neighbours in $V_{(i-1)n}$, and consequently does not become infected in step 1. Similarly, after one step, any vertex in $V_{(i-1)n+2}$ with at least one coordinate at most $2$ has fewer than $d$ infected neighbours in $V_{(i-1)n+1}$, and also remains healthy. This problem builds up (analogous constraints can be easily formulated for the layers being infected “from above” by $V_{in}$) and, in fact, the two growths barely meet: two hyperplanes at distance $n+1$ apart would have stayed separated, while hyperplanes at distance $n-1$ would result in some vertices being infected by more than $d$ infected neighbours, and consequently no percolation by the perimeter argument.
What is however even more troublesome, describing the growth from the moment of the meeting onwards is where the real challenges occur. By the perimeter argument, we know that we have no elbow room in this description: no proper subset of $A$ percolates, and even a small perturbation of $A$ would not percolate if any vertex ever became infected by virtue of more than $d$ infected neighbours. In Figure \[fig:G\_[3,3]{}(6)\] we present the growth of the infected set, starting from $A$ as defined in , in $3$-neighbour bootstrap percolation on $[6]^3$. Even though we are in just three dimensions, and the size of the grid is very small, the process already feels quite difficult to describe and lasts as many as 14 steps. Consequently, we believe that Theorem \[thm:pete\] requires a proper, formal proof, which we provide as the main result of this paper in Section \[sec:main\].
![Example showing the spread of infection in $[6]^3$ starting from a set of
---
abstract: 'It has been proved that almost all $n$-bit Boolean functions have [*exact classical query complexity*]{} $n$. However, the situation seemed to be very different when we deal with [*exact quantum query complexity*]{}. In this paper, we prove that almost all $n$-bit Boolean functions can be computed by an exact quantum algorithm with less than $n$ queries. More exactly, we prove that $\mbox{AND}_n$ is the only $n$-bit Boolean function, up to isomorphism, that requires $n$ queries.'
address: |
$^{1}$Faculty of Informatics, Masaryk University, Brno 60200, Czech Republic\
$^2$ Faculty of Computing, University of Latvia,Rīga, LV-1586, Latvia\
$^3$ School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA\
author:
- 'Andris Ambainis $^{2,3}$'
- 'Jozef Gruska$^{1}$'
- 'Shenggen Zheng$^{1,}$'
title: Exact quantum algorithms have advantage for almost all Boolean functions
---
Quantum computing, Quantum query complexity, Boolean function, Symmetric Boolean function, Monotone Boolean function, Read-once Boolean function
Introduction
============
[*Quantum query complexity*]{} is the quantum generalization of classical [*decision tree complexity*]{}. In this complexity model, an algorithm is charged for “queries” to the input bits, while any intermediate computation is considered free (see [@BdW02]). For many functions one can obtain large quantum speed-ups in this model when algorithms are allowed a small constant probability of error (bounded error). As the most famous example, Grover’s algorithm [@Gro96] computes the $n$-bit $\mbox{OR}$ function with $O(\sqrt {n})$ queries in the bounded-error mode, while any classical (and also any exact quantum) algorithm needs $\Omega(n)$ queries. More such cases of polynomial speed-ups are known, see [@Amb07; @Bel12; @DHHM06]. For [*partial functions*]{}, even an exponential speed-up is possible when quantum resources are used, see [@Shor97; @Sim97]. In the bounded-error setting, quantum complexity is now relatively well understood. The model of [*exact quantum query complexity*]{}, where the algorithms must output the correct answer with certainty for every input, seems to be more intriguing. It is much more difficult to come up with exact quantum algorithms that outperform exact classical algorithms in the number of queries.
Though for partial functions exact quantum algorithms with exponential speed-up are known (for instance in [@AmYa11; @BH97; @DJ92; @GQZ14; @ZQ14; @GQZ14b; @Zhg13]), the results for total functions have been much less spectacular: the best known quantum speed-up was just by a factor of 2 for many years [@CEMM98; @FGGS98]. Recently, in a breakthrough result, Ambainis [@Amb13] has presented the first example of a Boolean function $f:\{0,1\}^n\to \{0,1\}$ for which exact quantum algorithms have superlinear advantage over exact classical algorithms.
In exact classical query complexity ([*decision tree complexity*]{}, [*deterministic query complexity*]{}) model, almost all $n$-bit Boolean functions require $n$ queries [@BdW02]. However, the situation seemed very different for the case of exact quantum complexity. Montanaro et al. [@MJM11] proved that $\mbox{AND}_3$ is the only $3$-bit Boolean function, up to isomorphism, that requires 3 queries and using the semidefinite programming approach, they numerically[^1] demonstrated that all $4$-bit Boolean functions, with the exception of functions isomorphic to the $\mbox{AND}_4$ function, have exact quantum query algorithms using at most 3 queries. They also listed their numerical results for all symmetric Boolean functions on 5 and 6 bits, up to isomorphism.
In 1998, Beals et al. [@BBC+98] proved, for any $n$, that $\mbox{AND}_n$ has exact quantum complexity $n$. Since then it has been an open problem whether $\mbox{AND}_n$ is the only $n$-bit Boolean function, up to isomorphism, that has exact quantum complexity $n$. In this paper we prove that this is indeed the case. As a corollary we get that almost all $n$-bit Boolean functions have exact quantum complexity less than $n$.
We prove our main results in four stages. In the first one we give the proof for symmetric Boolean functions, in the second one for monotone Boolean functions and in the third one for read-once Boolean functions. On this basis we prove the general case in the fourth stage. The proofs in the four cases use quite different approaches, and they are expected to be of broader interest since all these special classes of Boolean functions are themselves widely studied.
The paper is organized as follows. In Section 2 we introduce some notation concerning Boolean functions and query complexity. In Section 3 we investigate symmetric Boolean functions. In Section 4 we investigate monotone Boolean functions. In Section 5 we investigate read-once Boolean functions. In Section 6 we prove our main result. Finally, Section 7 contains a conclusion.
Preliminaries
=============
In this section we introduce some basic notation. See also [@Gru99; @NC00] for details on quantum computing, and [@BdW02; @BBC+98; @NS94] for more on query complexity models and [*multilinear polynomials*]{}.
Boolean functions
-----------------
An $n$-bit Boolean function is a function $f:\{0,1\}^n\to \{0,1\}$. We say $f$ is total if $f$ is defined on all inputs. For an input $x\in\{0,1\}^n$, we use $x_i$ to denote its $i$-th bit, so $x=x_1x_2\cdots x_n$. Denote $[n]=\{1,2,\ldots,n\}$. For $i\in[n]$, we write $$f_{x_i=b}(x)=f(x_1,\ldots,x_{i-1},b,x_{i+1},\ldots,x_n),$$ which is an $(n-1)$ bit Boolean function. For any $i\in[n]$, we have $$\label{Eq-df(x)}
f(x)=(1-x_i)f_{x_i=0}(x)+x_if_{x_i=1}(x).$$
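The decomposition above can be checked exhaustively for any concrete $f$; a minimal sketch (our own code):

```python
from itertools import product

def shannon_expand(f, n, i):
    """Return g(x) = (1 - x_i) f_{x_i=0}(x) + x_i f_{x_i=1}(x),
    the expansion of the n-bit function f around bit i (0-indexed)."""
    def g(x):
        x0 = x[:i] + (0,) + x[i + 1:]
        x1 = x[:i] + (1,) + x[i + 1:]
        return (1 - x[i]) * f(x0) + x[i] * f(x1)
    return g

# Check the identity for a sample 3-bit function and every bit i.
f = lambda x: x[0] ^ (x[1] & x[2])
ok = all(
    f(x) == shannon_expand(f, 3, i)(x)
    for i in range(3)
    for x in product((0, 1), repeat=3)
)
print(ok)  # True
```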
We say that two Boolean functions $f$ and $g$ are [*query-isomorphic*]{} (by convenience, isomorphic will mean query-isomorphic in this paper) if they are equal up to negations and permutations of the input variables, and negation of the output variable. This relationship is sometimes known as NPN-equivalence [@MJM11].
We will use the sign $(\neg)$ for a possible negation. For example, $\mbox{AND}((\neg)x_1,x_2)$ can denote $x_1\wedge x_2$ or $\neg x_1\wedge x_2$. We use $|x|$ to denote the Hamming weight of $x$ (its number of 1’s).
[**Definition 1:**]{} We call a Boolean function $f:\{0,1\}^n\to \{0,1\}$ symmetric if $f(x)$ depends only on $|x|$.
An $n$-bit symmetric Boolean function $f$ can be fully described by a vector $(b_0,b_1,\ldots,b_n)\linebreak[0]\in\{0,1\}^{n+1}$, where $f(x)=b_{|x|}$, i.e. $b_k$ is the value of $f(x)$ for $|x|=k$ [@ZGR97].
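As an illustration (our own code), the vector representation makes symmetric functions easy to construct; for example, the $3$-bit majority function corresponds to the vector $(b_0,b_1,b_2,b_3)=(0,0,1,1)$:

```python
from itertools import product

def symmetric_function(b):
    """Build the n-bit symmetric Boolean function described by the
    vector (b_0, ..., b_n), where f(x) = b_{|x|}."""
    return lambda x: b[sum(x)]

# The 3-bit majority function: output 1 iff at least two inputs are 1.
maj3 = symmetric_function((0, 0, 1, 1))
print([maj3(x) for x in product((0, 1), repeat=3)])  # [0, 0, 0, 1, 0, 1, 1, 1]
```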
For $x,y\in\{0,1\}^n$, we will write $x\preceq y$ if $x_i\leq y_i$ for all $i\in[n]$. We will write $x\prec y$ if $x\preceq y$ and $x\neq y$.
[**Definition 2:**]{} We call a Boolean function $f:\{0,1\}^n\to \{0,1\}$ monotone if $f(x)\leq f(y)$ holds whenever $x\preceq y$.
Monotone Boolean functions are precisely those that can be defined by an expression combining the input bits (each of them may appear more than once) using only the operators $\wedge$ and $\vee$ (in particular, $\neg$ is forbidden). Monotone Boolean functions have many nice properties. For example, they have a unique prime conjunctive normal form (CNF) and a unique prime disjunctive normal form (DNF) in which no negation occurs [@EMG08].
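Monotonicity can be tested by checking only pairs $x \prec y$ that differ in a single bit, since the general case then follows by transitivity; a sketch (our own code):

```python
from itertools import product

def is_monotone(f, n):
    """Check f(x) <= f(y) whenever x precedes y coordinate-wise; it
    suffices to test pairs differing in a single bit (transitivity)."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) > f(y):
                    return False
    return True

print(is_monotone(lambda x: x[0] & (x[1] | x[2]), 3))  # True
print(is_monotone(lambda x: x[0] ^ x[1], 2))           # False (XOR needs negation)
```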
Let $f:\{0,1\}^n\to \{0,1\}$ be a monotone Boolean function. Then $f$ has a prime CNF $$f(x)=\bigwedge_{I\in C}\bigvee_{i\in I} x_i,$$ where $C$ is a set of some subsets $I\subseteq[n]$. Similarly, $f$ has a prime DNF $$f(x)=\bigvee_{J\in D}\bigwedge_{j\in J} x_j,$$ where $D$ is a set of some subsets $J\subseteq[n]$.
---
abstract: 'Caching at mobile devices, accompanied by device-to-device (D2D) communications, is one promising technique to accommodate the exponentially increasing mobile data traffic. While most previous works ignored user mobility, there are some recent works taking it into account. However, the duration of user contact times has been ignored, making it difficult to explicitly characterize the effect of mobility. In this paper, we adopt the alternating renewal process to model the duration of both the contact and inter-contact times, and investigate how the caching performance is affected by mobility. The *data offloading ratio*, i.e., the proportion of requested data that can be delivered via D2D links, is taken as the performance metric. We first approximate the distribution of the *communication time* for a given user by beta distribution through moment matching. With this approximation, an accurate expression of the data offloading ratio is derived. For the homogeneous case where the average contact and inter-contact times of different user pairs are identical, we prove that the data offloading ratio increases with the user moving speed, assuming that the transmission rate remains the same. Simulation results are provided to show the accuracy of the approximate result, and also validate the effect of user mobility.'
author:
- '[^1]'
bibliography:
- 'IEEEabrv.bib'
- 'report.bib'
title: Mobility Increases the Data Offloading Ratio in D2D Caching Networks
---
Introduction
============
Mobile data traffic is growing at an exponential rate, and mobile video accounts for more than half of it [@forecast2016cisco]. Caching popular contents at helper nodes or user devices is a promising approach to reduce the data traffic on the backhaul links, as well as to improve the user experience of video streaming applications [@d2d-cache; @jcache]. In comparison with the commonly considered femto-caching system, caching at devices enjoys a unique advantage, i.e., the devices’ aggregate caching capacity grows with the number of devices [@d2d-cache]. Moreover, device caching can promote device-to-device (D2D) communications, where nearby mobile devices communicate directly rather than being forced to communicate through the base station (BS) [@design].
Recently, caching in D2D networks has attracted a lot of attention. In [@scaling], the scaling behavior of the number of D2D collaborating links was identified. Three concentration regimes, classified by the concentration of the file popularity, were investigated. The outage-throughput tradeoff and optimal scaling laws of both the throughput and outage probability were studied in [@tradeoff]. Two coded caching schemes, i.e., centralized and decentralized, were proposed in [@fundamentallimits], where the contents are delivered via broadcasting.
So far, an important characteristic of mobile users, i.e., the user mobility, has been ignored in previous studies of D2D caching networks. There are some works starting to consider the effect of user mobility. Effective methodologies to utilize the user mobility information in caching design were discussed in [@magmobility]. In [@mobilitycodedcaching], the effect of mobility was evaluated in D2D networks with coded caching, with the conclusion that mobility can improve the scaling law of throughput. This result was based on the assumption that the user locations are random and independent in each time slot, which failed to take into account the temporal correlation.
The inter-contact model, which considers the temporal correlation of the user mobility, has been widely applied [@exintercontactmodel], where the timeline for an arbitrary pair of mobile users are divided into *contact times* and *inter-contact times*. Specifically, the *contact times* denote the time intervals when the mobile users are located within the transmission range. Correspondingly, the *inter-contact times* denote the time intervals between contact times [@pocket]. This model has been used to develop device caching schemes to exploit the user mobility pattern in [@mobilitycaching]. The throughput-delay scaling law was developed by characterizing the inter-contact pattern of the random walk model [@scalingmobility]. In these works, it was assumed that a fixed amount of data can be delivered within one contact time, while the duration of the contact times was not considered. However, as the user moving speed will affect the durations of both the contact and inter-contact times, it is critical to account for their effects when investigating the impact of user mobility on caching performance.
In this paper, we shall analytically evaluate the effect of mobility in D2D caching networks, by adopting an alternating renewal process to model the mobility pattern so that both the contact and inter-contact times are accounted for. The *data offloading ratio*, which is defined as the proportion of data that can be obtained via D2D links, is adopted as the performance metric. The main contribution is an approximate expression for the data offloading ratio, for which the main difficulty is to deal with multiple alternating renewal processes. We tackle it by first deriving the expectation and variance of the *communication time* of a given user, and then approximating it by a beta random variable through moment matching. Furthermore, we investigate the effect of mobility in a homogeneous case, where the average contact and inter-contact times for all the user pairs are the same. In the low-to-medium mobility scenario, by assuming that the transmission rate is irrelevant to the user speed, it is proved that the data offloading ratio increases with the user speed for any caching strategy that does not cache the same contents at all devices. Simulation results validate the accuracy of the derived expression, as well as the effect of user mobility.
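The moment-matching step can be sketched as follows (our own code; it assumes the communication time has been normalized to $[0,1]$, and the sample moments are illustrative): given a mean $\mu$ and variance $\sigma^2$, the beta parameters are $a=\mu k$ and $b=(1-\mu)k$ with $k=\mu(1-\mu)/\sigma^2-1$:

```python
def beta_moment_match(mean, var):
    """Match a Beta(a, b) distribution on [0, 1] to a given mean and
    variance: a = mean * k, b = (1 - mean) * k, where
    k = mean * (1 - mean) / var - 1 (requires var < mean * (1 - mean))."""
    k = mean * (1.0 - mean) / var - 1.0
    if k <= 0:
        raise ValueError("variance too large for a beta distribution")
    return mean * k, (1.0 - mean) * k

a, b = beta_moment_match(0.25, 0.03)
# Recover the moments from the fitted parameters as a sanity check.
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1.0))
print(round(mean, 6), round(var, 6))  # 0.25 0.03
```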
System Model and Performance Metric
===================================
In this section, we will first introduce the alternating renewal process to model the user mobility pattern, and discuss the caching and file delivery models. Then, the performance metric, i.e., the data offloading ratio, will be defined.
User Mobility Model
-------------------
![The timeline for an arbitrary pair of mobile users.[]{data-label="intercontact"}](intercontact){width="3in"}
The inter-contact model, which captures the temporal correlation of the user mobility [@exintercontactmodel], is used to model the user mobility pattern. Specifically, the timeline of each pair of users is divided into *contact times*, i.e, the times when the users are in the transmission range, and *inter-contact times*, i.e., the times between consecutive contact times. Considering that contact times and inter-contact times appear alternatively in the timeline of a pair of users, similar to [@renewalmodel], an alternating renewal process is applied to model the pairwise contact pattern, as defined below [@renewalprocess].
Consider a stochastic process with state space $\{A,B\}$, where the successive durations for the system to be in states $A$ and $B$ are denoted as $\xi_k,k=1,2,\cdots$ and $\eta_k,k=1,2,\cdots$, respectively; both sequences are i.i.d. Specifically, the system starts in state $A$ and remains there for $\xi_1$, then switches to state $B$ for $\eta_1$, then returns to state $A$ for $\xi_2$, and so forth. Let $\psi_k=\xi_k+\eta_k$. The counting process of $\psi_k$ is called an *alternating renewal process*.
As shown in Fig. \[intercontact\], if the pair of users is in contact at $t=0$, $\xi_k$ and $\eta_k$ represent the contact times and inter-contact times, respectively; otherwise, $\xi_k$ and $\eta_k$ represent the inter-contact times and contact times, respectively. It was shown in [@exintercontact] that exponential curves fit the distribution of inter-contact times well, while in [@excontact] it was identified that the exponential distribution is a good approximation for the distribution of the contact times. Thus, as in [@renewalmodel], we assume that the contact times and inter-contact times follow independent exponential distributions. For simplicity, the timelines of different user pairs are assumed to be independent. Specifically, we consider $N_u$ users in a network, and the index set of the users is denoted as $\mathcal{S}=\{1,2,\cdots,N_u\}$. The contact times and inter-contact times of users $i \in \mathcal{S}$ and $j \in \mathcal{S} \backslash \{i\}$ follow independent exponential distributions with parameters $\lambda^C_{i,j}$ and $\lambda^I_{i,j}$, respectively.
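A quick simulation of the alternating renewal process (our own code, with illustrative rate parameters) confirms the renewal-reward prediction that the long-run fraction of time a pair spends in contact equals $\frac{1/\lambda^C}{1/\lambda^C + 1/\lambda^I}$:

```python
import random

def contact_fraction(lam_c, lam_i, horizon, seed=0):
    """Simulate an alternating renewal process with exponential contact
    times (rate lam_c) and inter-contact times (rate lam_i); return the
    fraction of [0, horizon] spent in contact."""
    rng = random.Random(seed)
    t, time_in_contact, in_contact = 0.0, 0.0, True
    while t < horizon:
        dur = rng.expovariate(lam_c if in_contact else lam_i)
        dur = min(dur, horizon - t)  # truncate the last interval
        if in_contact:
            time_in_contact += dur
        t += dur
        in_contact = not in_contact
    return time_in_contact / horizon

# Long-run fraction should approach E[contact] / (E[contact] + E[inter])
# = (1/lam_c) / (1/lam_c + 1/lam_i).  With lam_c = 2, lam_i = 0.5 this is 0.2.
frac = contact_fraction(lam_c=2.0, lam_i=0.5, horizon=1e5)
print(abs(frac - 0.2) < 0.02)  # True
```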
Caching and File Transmission Model
-----------------------------------
![A sample network with three mobile users.[]{data-label="model"}](model){width="2.6in"}
There is a library with $N_f$ files, whose index set is denoted as $\mathcal{F}=\{1,2,\cdots,N_f\}$, each with size $C$. Each user device has a limited storage capacity, and each file can be completely cached or not cached at all at each user device. Specifically, the caching placement is denoted as $$x_{j,f}=
\begin{cases}
1, \text{if user $j$ caches file $f$}, \\
0, \text{if user $j$ does not cache file $f$},
\end{cases}$$ where $j \in \mathcal{S}$ and $f \in \mathcal{F}$. User $i \in \mathcal{S}$ is assumed to request file $f \in \mathcal{F}$ with probability $p^r_{i,f}$, where $\sum \limits_{f \in \mathcal{F}} p^r_{i,f}=1$.
---
abstract: 'With the growth of Internet of Things (IoT) and mobile edge computing, billions of smart devices are interconnected to develop applications used in various domains including smart homes, healthcare and smart manufacturing. Deep learning has been extensively utilized in various IoT applications which require huge amounts of data for model training. Due to privacy requirements, smart IoT devices do not release data to a remote third party for its use. To overcome this problem, a collaborative approach to deep learning, also known as Collaborative Deep Learning (CDL), has been largely employed in data-driven applications. This approach enables multiple edge IoT devices to train their models locally on mobile edge devices. In this paper, we address the IoT device training problem in CDL by analyzing the behavior of mobile edge devices using a game-theoretic model, where each mobile edge device aims at maximizing the accuracy of its local model while limiting the overhead of participating in CDL. We analyze the Nash Equilibrium in an *N*-player static game model. We further present a novel cluster-based fair strategy to approximately solve the CDL game and enforce cooperation among mobile edge devices. Our experimental results and evaluation analysis in a real-world smart home deployment show that 80% of the mobile edge devices are ready to cooperate in CDL, while 20% of them do not train their local models collaboratively.'
author:
- 'deepti.gupta@my.utsa.edu, olumide.kayode@utsa.edu, sbhatt@tamusa.edu, mgupta@tntech.edu, tosun@cs.utsa.edu'
bibliography:
- 'references.bib'
title: 'Learner’s Dilemma: IoT Devices Training Strategies in Collaborative Deep Learning'
---
Collaborative deep learning, IoT device, Edge computing, Game Theory.
Introduction {#sec:introduction}
============
In recent years, the Internet of Things (IoT) has grown rapidly, and billions of smart devices are expected to be added over the next few years. These devices generate a tremendous amount of data, from health information [@celik2018soteria] to social networking [@zeng2017end]. Deep learning models use this data for training and for enhancing the intelligence of various data-driven IoT applications. Most IoT devices connect to a central cloud platform to use cloud services, which are crucial for storing the datasets and for model learning. However, using cloud services introduces additional latency in real-time applications. To overcome this issue, edge devices are used for local data training, which also safeguards the privacy of personal data. Unlike constrained IoT devices, such devices have the capability to support Machine Learning (ML) models and have been used in various applications. For example, a video doorbell performs training on its local dataset and identifies the person at the door.
The performance of deep learning models is often tied to the size of the training dataset. Under a reasonable learning mechanism, more training data will enhance the accuracy and performance of a trained model. However, in the era of big data, data is often distributed and cannot be brought together due to personal privacy constraints. Collaborative Deep Learning (CDL) allows multiple IoT devices to train their models without revealing the associated personal data. CDL offers an attractive trade-off between privacy and utility of datasets. Recent research [@jiang2019lightweight; @chen2019communication] has discussed the privacy issues of local training devices and the impact of communication latency between IoT edge devices and the Parameter Server (PS). However, the strategic behavior of rational local training devices has not been discussed in previous research; i.e., the authors have assumed that all IoT devices are altruistic. Altruistic devices are ones which always follow a suggested protocol (what all devices have initially decided to follow), regardless of whether they benefit or lose by following it. However, devices are not altruistic in real life; they are rational. Rational devices are ones which will deviate from the suggested protocol if they think they will benefit more by following a different one. In our proposed system model, we assume that all mobile edge devices are rational.
A mobile edge device which has low-quality data always wants to be a part of CDL to increase the accuracy of its local model. Other mobile edge devices, which have high-quality data, do not want to collaborate with a device holding low-quality data. Therefore, mobile edge devices face a dilemma over whether or not to participate in CDL. In this paper, we address this research problem of the learner’s dilemma by proposing a general system model, a CDL game model, and a novel cluster-based fair strategy which enables each participant to cooperate in CDL, based on the clusters formed, so as to benefit overall from training its local ML model. We also evaluate our CDL game model and novel cluster-based strategy in a smart home deployment using the ARAS dataset [@SmartHome]. The main contributions of this paper are as follows.
1. We identify the problem of unfair cooperation of participants in CDL. A local training device, which has low quality data builds its learning model to take advantage from other device, which has high quality data.
2. We address this research problem by analyzing the behavior of mobile edge devices using a game-theoretic model, where each device aims at maximizing the accuracy of its local model with minimal cost of participating in CDL.
3. We introduce a system model for CDL and propose a solution of above defined problem.
4. We also implement the cluster-based fair algorithm on the ARAS dataset [@SmartHome], and the results show that the proposed solution elicits cooperation in CDL.
The rest of paper is organized as follows. Section \[sec:related\] presents relevant work and related background. System model along with rational assumption is discussed in Section \[sec:system-model\]. Game model and game analysis are explained in Section \[sec:The Collaborative Deep Learning Game\] and Section \[sec:game-analysis\] respectively. Section \[sec:num-anal\] presents implementation of proposed system model along with results. Section \[sec:conc\] concludes the paper with future research directions.
Related Work {#sec:related}
============
In this section, we describe related work on information leakage on deep learning models, privacy-preserving deep learning and game models.
Information leakage on Deep Learning Models
-------------------------------------------
Information leakage of individuals’ private data has become a well-known problem for deep learning models. Data masking techniques, such as pseudonymization and anonymization, are used to prevent this problem. With pseudonymization, data can be traced back to its original state, whereas anonymization makes it impossible to restore the data to its original state. However, indirect re-identification of anonymized data may still be possible. For example, Netflix released a hundred million anonymized film ratings, which were later matched against another dataset, the Internet Movie Database (IMDb).
Cloud platforms such as Google and Amazon offer various “AI Deep Learning” services. Any customer can upload a dataset and pay to build a prediction model, which works as a black-box API. Membership inference attacks on such black-box APIs are discussed in [@shokri2017membership; @yeom2018privacy]: an attacker sends queries to the target model and receives the model’s predictions. Rahman et al. [@rahman2018membership] show that a differentially private deep model can also fail against membership inference attacks. A novel white-box membership inference attack was proposed by Nasr et al. [@nasr2018comprehensive] against deep learning algorithms to measure the membership leakage of their training datasets. Melis et al. [@melis2019exploiting] demonstrate that the updated parameters leak information about participants, and develop passive and active inference attacks to exploit this leakage.
Privacy-Preserving Deep Learning
--------------------------------
Each participant has its own sensitive dataset, which needs to be protected from information leakage. Various privacy mechanisms, such as Secure Multi-party Computation (SMC) [@kerschbaum2009practical], Homomorphic Encryption (HE) [@rivest1978data], and Differential Privacy (DP) [@dwork2014algorithmic], have been proposed to protect the datasets in CDL. SMC helps to protect the intermediate steps of the computation. Mohassel et al. [@mohassel2017secureml] adopt a two-server model for privacy-preserving training, used by previous work on privacy-preserving deep learning via SMC [@nikolaenko2013privacy1].
However, Aono et al. [@aono2018privacy] pointed out that the local data may actually be leaked to an honest-but-curious server. Additively homomorphic encryption techniques fix several of these problems but also have some drawbacks. To obscure an individual’s identity, DP adds mathematical noise to a small sample of the individual’s usage pattern. Prior work [@abadi2016deep; @jiang2019lightweight; @shokri2015privacy; @weng2018deepchain] uses differential privacy in privacy-preserving CDL systems to protect the privacy of the training data. However, Hitaj et al. [@hitaj2017deep] pointed out that the privacy-preserving deep learning approach fails to protect data privacy, and demonstrated that a malicious participant can learn the personal information of other participants through Generative Adversarial Network (GAN) learning.
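A minimal sketch of how DP adds noise (our illustration, not the mechanism of any cited work): the Laplace mechanism releases a count query perturbed by noise of scale sensitivity/epsilon, so smaller epsilon means stronger privacy and noisier answers.

```python
import numpy as np

# Laplace mechanism sketch: a count query with sensitivity 1 (one user
# changes the count by at most 1) is released with Laplace noise of
# scale sensitivity/epsilon.
def laplace_release(true_value, sensitivity, epsilon, rng):
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
true_count = 42
releases = [laplace_release(true_count, 1.0, 0.5, rng) for _ in range(100000)]
print(sum(releases) / len(releases))  # unbiased: close to 42
```

Individual releases can be far from the true count, but the mechanism is unbiased, which is why aggregate utility survives the added noise.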
The most dominant technique for optimizing the loss function is Stochastic Gradient Descent (SGD). SGD is a method to find the optimal parameter configuration for an ML algorithm. SGD is applied in various privacy-preserving deep learning models [@abadi2016deep; @melis2019exploiting; @mohassel2017secureml; @nasr2018comprehensive]. The PS receives the gradients from mobile edge devices by using different approaches like round robin, random order [@
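A generic SGD loop of the kind described above can be sketched as follows (a toy least-squares example of ours, not the paper's training system):

```python
import random

# Toy SGD loop: fit the slope w of y = w*x by least squares, updating
# on one sample at a time.
data = [(float(x), 2.0 * float(x)) for x in range(1, 11)]

def sgd(samples, lr=0.001, epochs=200, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:              # one sample = a tiny "mini-batch"
            grad = 2.0 * (w * x - y) * x  # gradient of (w*x - y)^2 in w
            w -= lr * grad                # SGD parameter update
    return w

w = sgd(list(data))
print(w)  # converges to the true slope 2.0
```

In CDL, each device would run such a loop locally and ship only the gradients (or updated parameters) to the PS rather than the raw data.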
---
abstract: |
The energy of a graph $G$, denoted by $E(G)$, is defined as the sum of the absolute values of all eigenvalues of $G$. Let $n$ be an even number and $\mathbb{U}_{n}$ be the set of all conjugated unicyclic graphs of order $n$ with maximum degree at most $3$. Let $S_n^{\frac{n}{2}}$ be the radialene graph obtained by attaching a pendant edge to each vertex of the cycle $C_{\frac{n}{2}}$. In \[Y. Cao et al., On the minimal energy of unicyclic Hückel molecular graphs possessing Kekulé structures, Discrete Appl. Math. 157 (5) (2009), 913–919\], Cao et al. showed that if $n\geq 8$, $S_n^{\frac{n}{2}}\ncong G\in \mathbb{U}_{n}$ and the girth of $G$ is not divisible by $4$, then $E(G)>E(S_n^{\frac{n}{2}})$. Let $A_n$ be the unicyclic graph obtained by attaching a $4$-cycle to one of the two leaf vertices of the path $P_{\frac{n}{2}-1}$ and a pendant edge to each of the other vertices of $P_{\frac{n}{2}-1}$. In this paper, we prove that $A_n$ is the unique unicyclic graph in $\mathbb{U}_{n}$ with minimal energy.\
[**Keywords:**]{} Minimal energy; Unicyclic graph; Perfect matching; Characteristic polynomial; Degree\
[**AMS Subject Classification 2000:**]{} 15A18; 05C50; 05C90; 92E10
---
\[section\] \[lem\][Theorem]{} \[lem\][Corollary]{} \[lem\][Conjecture]{} \[lem\][Remark]{} \[lem\][Definition]{}
[**On the minimal energy of conjugated unicyclic graphs with maximum degree at most 3**]{}
[ Hongping Ma$^{1}$, Yongqiang Bai$^{1}$[^1], Shengjin Ji$^{2}$\
$^{1}$ School of Mathematics and Statistics, Jiangsu Normal University,\
Xuzhou 221116, China\
$^{2}$ School of Science, Shandong University of Technology,\
Zibo 255049, China\
Email: hpma@163.com, bmbai@163.com, jishengjin2013@163.com ]{}
Introduction
============
Let $G$ be a simple graph with $n$ vertices and $A(G)$ the adjacency matrix of $G$. The eigenvalues $\lambda_{1}, \lambda_{2},\ldots, \lambda_{n}$ of $A(G)$ are said to be the eigenvalues of the graph $G$. The energy of $G$ is defined as $$E=E(G)=\sum_{i=1}^{n}|\lambda_{i}|.$$ This concept has been intensively studied in chemistry, since it can be used to approximate the total $\pi$-electron energy of a molecule. For further details on the mathematical properties and chemical applications of $E(G)$, see the recent book [@LSG], reviews [@G2; @GLZ], and papers [@BB; @DM; @DMG; @GFAHG; @LSWL; @MMZ; @Z]. One of the fundamental questions encountered in the study of graph energy is which graphs (from a given class) have minimal and maximal energies. A large number of papers have been published on such extremal problems, especially for various subclasses of trees and unicyclic graphs, see Chapter 7 in [@LSG]. A conjugated unicyclic graph is a connected graph with one unique cycle that has a perfect matching. The problem of determining the conjugated unicyclic graph with minimal energy has been considered in [@LZZ; @WCL], and Li et al. [@LZZ] proved that the conjugated unicyclic graph of order (even) $n$ with minimal energy is $U_1$ or $U_2$, as shown in Figure \[fig-minimal\]. It has been shown that $E(U_1)<E(U_2)$ by Li and Li [@LL]. Recently, results on the ordering of conjugated unicyclic graphs by minimal energies have been extended in [@W; @Z]. In particular, $U_2$ is the unique conjugated unicyclic graph of order $n$ with second-minimal energy.
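As a quick numerical illustration of the definition (our sketch, not part of the paper), the energy can be computed directly from the adjacency spectrum; for the 4-cycle $C_4$ the eigenvalues are $2,0,0,-2$, so $E(C_4)=4$.

```python
import numpy as np

# Energy of a graph = sum of absolute values of adjacency eigenvalues
# (a direct sketch of the definition; graphs given as edge lists).
def graph_energy(edges, n):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

# C_4 has spectrum {2, 0, 0, -2}, hence E(C_4) = 4; a single edge K_2
# has spectrum {1, -1}, hence E(K_2) = 2.
print(graph_energy([(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # ≈ 4.0
print(graph_energy([(0, 1)], 2))                          # ≈ 2.0
```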
![The conjugated unicyclic graphs with minimal and second-minimal energy: $U_1$ (label $\frac{n}{2}-2$) and $U_2$ (label $\frac{n}{2}-3$).[]{data-label="fig-minimal"}](fig-minimal "fig:")
The degree of a vertex $v$ in a graph $G$ is denoted by $d_G(v)$. Denote by $\Delta$ the maximum degree of a graph. From now on, let $n$ be an even number. Let $\mathbb{U}_{n}$ be the set of all conjugated unicyclic graphs of order $n$ with $\Delta\leq 3$. Let $G\in \mathbb{U}_{n}$, the length of the unique cycle of $G$ is denoted by $g(G)$, or simply $g$, and the unique cycle of $G$ is denoted by $C_g(G)$, or simply $C_g$. Let $S_n^{\frac{n}{2}}$ be the radialene graph obtained by attaching a pendant edge to each vertex of the cycle $C_{\frac{n}{2}}$. Wang et al. [@WCZL] showed the following results: Assume that $n\geq 6$ and $S_n^{\frac{n}{2}}\ncong G\in
\mathbb{U}_{n}$. If one of the following conditions holds: (i) $\frac{n}{2}\equiv g\equiv 1$ (mod $2$) and $g\leq\frac{n}{2}$, (ii) $g\not\equiv \frac{n}{2}\equiv 0$ (mod $4$), (iii) $\frac{n}{2}\equiv g\equiv 2$ (mod $4$) and $g\leq\frac{n}{2}$, then $E(G)>E(S_n^{\frac{n}{2}})$. Y. Cao et al. [@CLLZ] improved the above results by proving the following lemma.
[[@CLLZ]]{.nodecor}\[lem g-non-4-multiple\] If $n\geq 8$, $S_n^{\frac{n}{2}}\ncong G\in \mathbb{U}_{n}$ with $g\not\equiv 0$ [(mod $4$)]{.nodecor}, then $E(G)>E(S_n^{\frac{n}{2}})$.
Let $A_n$, $B_n$, $D_n$ and $E_n$ be the graphs shown in Figure \[fig-minimal-degree-3\].
![Four graphs in $\mathbb{U}_{n}$: $A_n$ (label $\frac{n}{2}-2$), $B_n$ (label $\frac{n}{2}-3$), $D_n$ (label $\frac{n}{2}-3$), and $E_n$ (label $\frac{n}{2}-4$).[]{data-label="fig-minimal-degree-3"}](fig-minimal-degree-3 "fig:")
In this paper, we completely characterize the graph with minimal energy in $\mathbb{U}_{n}$ by showing the following result.
---
abstract: 'Ultra-cold bosons in zig-zag optical lattices present a rich physics due to the interplay between frustration, induced by lattice geometry, two-body interaction and three-body constraint. Unconstrained bosons may develop chiral superfluidity and a Mott-insulator even at vanishingly small interactions. Bosons with a three-body constraint allow for a Haldane-insulator phase in non-polar gases, as well as pair-superfluidity and density wave phases for attractive interactions. These phases may be created and detected within the current state of the art techniques.'
author:
- 'S. Greschner'
- 'L. Santos'
- 'T. Vekua'
title: 'Ultra-cold bosons in zig-zag optical lattices'
---
#### Introduction
Atoms in optical lattices offer extraordinary possibilities for the controlled emulation and analysis of lattice models and quantum magnetism [@Lewenstein2007]. Various lattice geometries are attainable by means of proper laser arrangements, including triangular [@Becker2010] and Kagome [@Jo2012] lattices, opening fascinating possibilities for the study of geometric frustration, which may result in flat bands in which the constrained mobility may largely enhance the role of interactions [@Huber2010]. Moreover, the value and sign of inter-site hopping may be modified by means of shaking techniques [@Eckardt2005; @Zenesini2009], allowing for the study of frustrated antiferromagnets with bosonic lattice gases [@Struck2011].
Interatomic interactions may be controlled basically at will by means of Feshbach resonances [@Chin2010]. In particular, large on-site repulsion may allow for the suppression of double occupancy in bosonic gases at low fillings (hard-core regime). Interestingly, it has been recently suggested that, due to a Zeno-like effect, large three-body loss rates may result in an effective three-body constraint, in which no more than two bosons may occupy a given lattice site [@Daley2009]. This constraint opens exciting novel scenarios, especially concerning stable Bose gases with attractive on-site interactions, including color superfluids in spinor Bose gases [@Titvinidze2011] and pair-superfluid phases [@Daley2009; @Bonnes2011; @Chen2011]. The suppression of three-body occupation has been hinted at in recent experiments [@Mark2012].
Under proper conditions, lattice gases may resemble to a large extent effective spin models, e.g. hard-core bosons may be mapped into a spin-$1/2$ XY Heisenberg model [@Lewenstein2007]. Lattice bosons at unit filling resemble to a large extent spin-$1$ chains [@DallaTorre2006], and in the presence of inter-site interactions, as it is the case of polar gases [@Lahaye2009], have been shown to present a gapped Haldane-like phase [@Haldane1983] (dubbed Haldane-insulator (HI) [@DallaTorre2006; @Berg2008]) characterized by a non-local string-order [@DenNijs1989].
In this Letter we analyze the physics of ultra-cold bosons in zig-zag optical lattices. We show that the interplay of frustration and interactions leads to different physics for unconstrained and constrained (with up to two particles per site) bosons. For unconstrained bosons, geometric frustration induces chiral superfluidity, and allows for a Mott-insulator phase even at vanishingly small interactions. For constrained bosons, we show that a Haldane-insulator phase becomes possible even for non-polar gases. Moreover, pair-superfluid [@Daley2009; @Bonnes2011; @Chen2011] and density-wave phases may occur for attractive on-site interactions. A direct first-order phase transition from Haldane-insulator to pair-superfluid is observed and explained. These phases may be realized and detected with existing state-of-the-art techniques.
![Zig-zag chains formed by an incoherent superposition between a triangular lattice [@Becker2010] $V_1({\vec r}\equiv(x,y))=V_{10}\left [ \sin^2\left ({\vec b}_1\cdot{\vec r}/2\right )+\sin^2\left ({\vec b}_2\cdot {\vec r}/2\right )
+\sin^2\left (({\vec b}_1-{\vec b}_2)\cdot{\vec r}/2\right )\right ]$, with $k$ the laser wavenumber, ${\vec b}_1=\sqrt{3}k {\vec e}_y$ and ${\vec b}_2=\sqrt{3}k(\sqrt{3}{\vec e}_x/2-{\vec e}_y/2)$, and an additional lattice $V_2({\vec r})=V_{20}\sin^2(\sqrt{3}ky/4 - \pi/4)$. In the figure, in which $V_{20}/V_{10}=2$, darker regions mean lower potential. The hopping rate between nearest (next-nearest) neighbors is $t$ ($t'$).](fig1.eps){width="5.0cm"}
\[fig:1\]
#### Zig-zag lattices.
In the following we consider bosons in zig-zag optical lattices. As shown in Fig. \[fig:1\], this particular geometry may result from the incoherent superposition of a triangular lattice with elementary cell vectors $\vec{a}_1=a {\vec e}_x$ and $\vec{a}_2=a(\frac{1}{2}{\vec e_x}+\frac{\sqrt{3}}{2}{\vec e}_y)$ (formed by three laser beams of wavenumber $k=4\pi/3a$ oriented at $120$ degrees from each other, as discussed in Ref. [@Becker2010]) and a superlattice with lattice spacing $\sqrt{3}a$ oriented along $y$. For a sufficiently strong superlattice, zig-zag ladders are formed, and the hopping between ladders may be neglected. We will hence concentrate in the following on the physics of bosons in a single zig-zag ladder, which is to a large extent given by the rates $t$ and $t'$ characterizing the hopping along the two directions ${\vec a}_{1,2}$ (Fig. \[fig:1\]). As shown in Ref. [@Struck2011], a periodic lattice shaking may be employed to control the value of $t$ and $t'$ independently. Interestingly, their sign may be controlled as well. In the following we consider an inverted sign for both hoppings, which result in an anti-ferromagnetic coupling between sites [@Struck2011].
#### Model.
Ordering the sites as indicated in Fig. \[fig:1\], the physics of the system is given by a Bose-Hubbard Hamiltonian with on-site interactions characterized by the coupling constant $U$, nearest-neighbor hopping $t<0$ and next-nearest-neighbor hopping $t'<0$: $$\begin{aligned}
\label{eq:H-BH}
H&=&\sum_i \left [ -\frac{t}{2} b_i^\dag b_{i+1} - \frac{t'}{2} b_i^\dag b_{i+2} + {\rm H.c} \right ]\\
&+&\frac{U}{2}\sum_i n_i(n_i-1)+U_3\sum_i n_i(n_i-1)(n_i-2),\nonumber\end{aligned}$$ where $b_i^\dag,b_i$ are the bosonic creation/annihilation operators of particles at site $i$, $n_i=b_i^\dag b_i$, and we have added the possibility of three-body interactions, characterized by the coupling constant $U_3$. We assume below an average unit filling ${\bar n}=1$.
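As a numerical sanity check (ours, not from the paper), the single-particle sector of the kinetic part of the Hamiltonian on a periodic chain reproduces the band $\epsilon(k)=|t|(\cos k + j\cos 2k)$ with $j=t'/t$ for $t,t'<0$; the chain length $L=12$ is an arbitrary illustrative size.

```python
import numpy as np

# Single-particle hopping matrix of the kinetic term on a ring of L
# sites, with nearest (-t/2) and next-nearest (-t'/2) amplitudes.
t, tp, L = -1.0, -0.5, 12
H = np.zeros((L, L))
for i in range(L):
    H[i, (i + 1) % L] += -t / 2.0   # nearest-neighbor hopping
    H[(i + 1) % L, i] += -t / 2.0
    H[i, (i + 2) % L] += -tp / 2.0  # next-nearest-neighbor hopping
    H[(i + 2) % L, i] += -tp / 2.0

evals = np.sort(np.linalg.eigvalsh(H))
ks = 2.0 * np.pi * np.arange(L) / L
disp = np.sort(abs(t) * (np.cos(ks) + (tp / t) * np.cos(2.0 * ks)))
print(np.max(np.abs(evals - disp)))  # agreement up to rounding
```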
#### Unconstrained bosons.
We discuss first the ground-state properties of unconstrained bosons ($U_3=0$). At $U=0$, the Hamiltonian is diagonalized in quasi-momentum space $H=\sum_k \epsilon(k) b_k^\dag b_k$, with the dispersion $\epsilon(k)= |t| (\cos k +j\cos 2k)$, with $j\equiv t'/t$. Depending on the frustration $j$ we may distinguish two regimes. If $j<1/4$, the dispersion $\epsilon(k)$ presents a single minimum at $k=\pi$, and hence small $U$ will introduce a superfluid (SF) phase, with quasi-condensate at $k=\pi$. If $j>1/4$, $\epsilon(k)$ presents two non-equivalent minima at $k=k_0\equiv\pm \arccos[-1/4j]$. As shown below, interactions favor the predominant population of one of these minima, and the system enters a chiral superfluid (CSF) phase with a non-zero local boson current characterized by a finite chirality $\langle \kappa_i\rangle$, with $\kappa_i=\frac{i}{2}(b_{i}^\dag b_{i+1}-{\rm H.c.})$. At $j=1/4$, the Lifshitz point, the dispersion becomes quartic at the $k=\pi$ minimum, $\epsilon(k)\sim (k-\pi)^4$,
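The two regimes can be located numerically with a short sketch (ours, with $|t|=1$): below $j=1/4$ the grid minimum sits at $k=\pi$, above it the minima move to $\pm\arccos[-1/(4j)]$.

```python
import numpy as np

# Minima of the dispersion eps(k) = cos k + j cos 2k (units of |t| = 1).
def eps(k, j):
    return np.cos(k) + j * np.cos(2 * k)

k = np.linspace(-np.pi, np.pi, 200001)

# j < 1/4: a single minimum at k = pi (identified with -pi on the grid).
k_min = k[np.argmin(eps(k, 0.1))]

# j > 1/4: two non-equivalent minima at k0 = ±arccos(-1/(4 j)).
k0 = np.arccos(-1.0 / (4 * 0.5))  # equals 2*pi/3 for j = 1/2
print(abs(k_min), k0, float(np.min(eps(k, 0.5))), float(eps(k0, 0.5)))
```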
---
abstract: |
In this paper we provide a complete algebraic characterization of elementary equivalence of rings with a finitely generated additive group in the language of pure rings. The rings considered are arbitrary otherwise.\
[**2010 MSC:**]{} 03C60\
[**Keywords:**]{} Ring, Elementary Equivalence, Largest Ring of a Bilinear Map
author:
- 'Alexei Miasnikov, Mahmood Sohrabi[^1]'
title: On elementary equivalence of rings with a finitely generated additive group
---
Introduction
============
This paper continues the authors’ efforts [@MS; @MS2010; @MS2016] in providing a comprehensive and uniform approach to various model-theoretical questions on algebras and nilpotent groups. By a *scalar ring* we mean a commutative associative unitary ring. Assume $A$ is a scalar ring. We say that $R$ is an *$A$-algebra* if $R$ is an abelian group equipped with an $A$-bilinear binary operation. We use the term *ring* for a ${{\mathbb}{Z}}$-algebra where ${{\mathbb}{Z}}$ is the ring of rational integers, reserving the term scalar ring for commutative associative unitary rings. The ring $R$ is said to be a *finite dimensional ${{\mathbb}{Z}}$-algebra* or an *FDZ-algebra* for short if the additive group $R^+$ of the ring $R$ is finitely generated as an abelian group.
The main problem we tackle here is to characterize the elementary equivalence of FDZ-algebras via a complete set of elementary invariants. The invariants will be purely algebraic.
Statements of the main results {#approach:sec}
------------------------------
In this paper the language $L$ denotes the language of pure rings without a constant for multiplicative identity. That is because an arbitrary ring may not have a unit. By $L_1$ we mean the usual language of rings with identity.
For us an $A$-module $M$ is a two-sorted structure $\langle M, A, s \rangle$, where $M$ is an abelian group, $A$ is a scalar ring and $s$ is the predicate describing the action of $A$ on $M$. Denote the language by $L_2$. We often drop $s$ from our notation. Since scalar rings are always assumed to be commutative, we do not specify whether the modules are left or right modules.
Here is our first main result.
\[elemmod:thm\]Let $A$ be an FDZ-scalar ring and let $M$ be a finitely generated $A$-module. Then there exists a sentence $\psi_{M,A}$ of the language $L_2$ such that $\langle M,A\rangle\models \psi_{M,A}$ and for any FDZ-scalar ring $B$ any finitely generated $B$-module $N$, we have $$\langle N,B\rangle\models \psi_{M,A} \Leftrightarrow \langle N,B\rangle
\cong \langle M,A\rangle.$$
The proof of the theorem appears at the end of Section \[Z-interpret:sec\]. Indeed Theorem \[elemmod:thm\] implies the next three statements. The first two state the same result.
\[scalarrings:cor\] For any FDZ-scalar ring $A$ there exists a formula $\psi_A$ of $L_1$ such that $A\models \psi_A$ and for any FDZ-scalar ring $B$ we have $$B\models \psi_A \Leftrightarrow A\cong B.$$
\[scalarrings:cor2\]Let $\mathcal{K}$ be the class of all FDZ-scalar rings. Then any $A$ from $\mathcal{K}$ is finitely axiomatizable inside $\mathcal{K}$.
Let us denote by $L_3$ the first-order language of two-sorted algebras. An algebra $\langle C, A\rangle$ consists of an arbitrary ring $C$ and the scalar ring $A$ (together with a predicate describing the scalar multiplication, which is dropped from the notation). As mentioned, the following theorem is actually a corollary of Theorem \[elemmod:thm\]. We provide a brief proof of it at the beginning of Section \[main:sec\].
\[elem-iso-alg:thm\] Let $\mathcal{A}$ be the class of all two-sorted algebras $\langle C, A\rangle$ where $C$ is finitely generated as an $A$-module and $A^+$ is finitely generated as an abelian group. For each $\langle C, A\rangle\in \mathcal{A}$ there exists a formula $\phi_{C,A}$ of $L_3$ such that $\langle C, A\rangle\models \phi_{C,A}$ and for any $\langle D, B\rangle \in \mathcal{A}$, $$\langle D, B\rangle\models \phi_{C,A} \Leftrightarrow \langle C,A\rangle \cong \langle D,B\rangle$$ as two-sorted algebras.
To state the main result of the paper we need to introduce some more definitions and notation. Consider an arbitrary ring $R$. Define the *two-sided annihilator ideal* of $R$ by $$Ann(R)=\{x\in R: xy=yx=0, \forall y\in R\}.$$ By $R^2$ we denote the ideal of $R$ generated by all products $x\cdot y$ (or $xy$ for short) of elements of $R$.
Consider a scalar ring $A$ and let $R$ be an $A$-algebra. Assume $I$ is an ideal of $R$. Let $$Is_A(I){\stackrel{\text{def}}{=}}\{ x\in R: ax\in I,\text{ for some }a\in A\setminus\{0\}\}.$$ It is easy to show that $Is_A(I)$ is an ideal in $R$. We simply denote $Is_{{\mathbb}{Z}}(I)$ by $Is(I)$. Now assume $R$ is an FDZ-algebra. An *addition* $R_0$ of $R$ is a direct complement of the ideal $\Delta(R){\stackrel{\text{def}}{=}}Is(R^2)\cap Ann(R)$ in $Ann(R)$. Such a complement exists in this situation since $Ann(R)$ is a finitely generated abelian group and $Ann(R)/\Delta(R)$ is free abelian. It is clear that $R_0$ is actually an ideal of $R$. The quotient $R_F{\stackrel{\text{def}}{=}}R/R_0$ is called the *foundation* of $R$ associated to the addition $R_0$.
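A small worked example (our construction, not from the paper) may help fix these notions. Take the rank-two ring $R=\mathbb{Z}x\oplus\mathbb{Z}y$ with multiplication determined by $x\cdot x=y$ and all other products of basis elements zero:

```latex
% R = Zx + Zy,  x*x = y,  all other products of basis elements zero.
\begin{align*}
R^2 &= \mathbb{Z}y, \qquad Ann(R)=\mathbb{Z}y, \qquad Is(R^2)=\mathbb{Z}y,\\
\Delta(R) &= Is(R^2)\cap Ann(R)=\mathbb{Z}y,
  \qquad\text{so the addition is } R_0=0 \text{ and } R_F=R,\\
M(R) &= Is(R^2+Ann(R)) = \mathbb{Z}y = Is(R^2)+Ann(R) = N(R).
\end{align*}
```

Here $Is(R^2)=\mathbb{Z}y$ because $R/\mathbb{Z}y\cong\mathbb{Z}$ is torsion-free; since $M(R)=N(R)$, for such an $R$ elementary equivalence reduces to isomorphism.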
Finally for an FDZ-algebra $R$ set $M(R){\stackrel{\text{def}}{=}}Is(R^2+Ann(R))$ and $N(R){\stackrel{\text{def}}{=}}Is(R^2)+Ann(R)$. Note that $M(R)/N(R)$ is a finite abelian group.
We are now ready to state the main result of this paper.
\[mainnice:thm\] Assume $R$ and $S$ are FDZ-algebras. Then the following are equivalent.
1. $R\equiv S$ as arbitrary rings.
2. Either $M(R)=N(R)$ and $R\cong S$, or there exists a monomorphism $\phi:R\to S$ of rings and additions $R_0$ and $S_0$ of $R$ and $S$ respectively such that
1. $\phi$ induces an isomorphism $R/R_0\cong S/S_0$,
2. $\phi$ induces an isomorphism $\dfrac{M(R)}{N(R)}\cong \dfrac{M(S)}{N(S)}$,
3. $\phi$ restricts to a monomorphism from $R_0$ into $S_0$, where the index $[S_0:\phi(R_0)]$ is finite and prime to the index $[M(R):N(R)]\neq 1$.
The direction $(1.)\Rightarrow (2.)$ is called the *Characterization Theorem* and will be proved in Section \[main:sec\]. The direction $(2.)\Rightarrow (1.)$ called naturally the *converse of the characterization theorem*, stated in somewhat different terms, is given by Theorem \[converse\].
An FDZ-algebra is called *regular* if for some addition (and therefore for any addition) $R_0$ there exists a subring $R_F$ of $R$ containing $R^2$ such that $R\cong R_F \times R_0$. In Lemma \[regular-M=N:lem\] we shall prove that $R$ is a regular FDZ-algebra if and only if $M(R)=N(R)$. So the following statement was actually embedded in Theorem \[mainnice:thm\].
\[regular:cor\] Let $R$ be a regular FDZ-algebra. Then for an FDZ-algebra $S$, $$R\equiv S \Leftrightarrow R\cong S.$$
Finally we call an FDZ-algebra $R$ *tame* if $Ann(R)\leq Is(R^2)$. The following theorem is the generalization of Corollary \[scalarrings:cor
---
abstract: 'Let $A$ be a $K$-subalgebra of the polynomial ring $S=K[x_1,\ldots,x_d]$ of dimension $d$, generated by finitely many monomials of degree $r$. Then the Gauss algebra $\mathbb{G}(A)$ of $A$ is generated by monomials of degree $(r-1)d$ in $S$. We describe the generators and the structure of $\mathbb{G}(A)$, when $A$ is a Borel fixed algebra, a squarefree Veronese algebra, generated in degree $2$, or the edge ring of a bipartite graph with at least one loop. For a bipartite graph $G$ with one loop, the embedding dimension of $\mathbb{G}(A)$ is bounded by the complexity of the graph $G$.'
address:
- 'Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Campus Essen, 45117 Essen, Germany'
- 'Raheleh Jafari, Mosaheb Institute of Mathematics, Kharazmi University, and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.'
- 'Abbas Nasrollah Nejad, Department of Mathematics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan 45137-66731, Iran'
author:
- Jürgen Herzog
- Raheleh Jafari
- Abbas Nasrollah Nejad
title: On the Gauss algebra of toric algebras
---
[^1]
Introduction {#introduction .unnumbered}
============
Let $V\subseteq \PP_K^{n-1}$ be a projective variety of dimension [$d-1$]{} over an algebraically closed field $K$ of characteristic zero. Denote by $V_{\mathrm{sm}}$ the set of non-singular points of $V$ and by $\GG( {d-1},n-1)$ the Grassmannian of [$d-1$]{}-planes in $ {\PP_K^{n-1}}$. The *Gauss map* of $V$ is the morphism $$\gamma: V_{\mathrm{sm}}\longrightarrow \GG( {d-1},n-1),$$ which sends each point $p\in V_{\mathrm{sm}}$ to the embedded tangent space $\mathrm{T}_{p}V $ of $V$ at the point $p$. The closure of the image of $\gamma$ in $\GG( {d-1},n-1)$ is called the *Gauss image* of $V$, or *the variety of tangent planes* and is denoted by $\gamma(V)$. The homogeneous coordinate ring of $\gamma(V)$ in the Plücker embedding of the Grassmannian $\mathbb{G}( {d-1},n-1)$ of $ {(d-1)}$-planes is called the *Gauss algebra* of $V$. The Gauss map is a classical subject in algebraic geometry and has been studied by many authors. For example, it is known that the Gauss map of a smooth projective variety is finite [@GH; @Zak]; in particular, a smooth variety and its Gauss image have the same dimension with the obvious exception of a linear space. Zak [@Zak Corollary 2.8] showed that, provided $V$ is not a linear subvariety of $\PP_K^n$, the dimension of the Gauss image satisfies the inequality $\dim V-\dim \Sing(V)-1\leq \dim \gamma(V)\leq \dim V$, where $\Sing(V)$ denotes the singular locus of $V$. For an algebraic proof of Zak’s inequality, see [@AKB].
We take up the situation where $V\subset \PP_K^{n-1}$ is a unirational variety. To elaborate on the algebraic side of the picture, consider the polynomial ring $S=K[x_1,\ldots,x_d]$. Let $\gg=g_1,\ldots,g_n$ be a sequence of non-constant homogeneous polynomials of the same degree in $S$ generating the $K$-subalgebra $A=K[\gg]\subseteq S$ of dimension $d$. Then the Jacobian matrix $\Theta(\gg)$ of $\gg$ has rank $d$ [@Aron1 Proposition 1.1]. In this situation we define the *Gauss algebra* associated with $\gg$ as the $K$-subalgebra generated by the set of $d\times d$ minors of $\Theta(\gg)$ [@BGS Definition 2.1]. Since the definition does not depend on the choice of the homogeneous generators of $A$, we simply denote the Gauss algebra associated with $\gg$, by $\GG(A)$, and call it the Gauss algebra of $A$. The Gauss algebra $\mathbb{G}(A)$ is isomorphic to the coordinate ring of the Gauss image of the projective variety defined parametrically by $\gg$ in the Plücker embedding of the Grassmannian $\mathbb{G}( {d-1},n-1)$ of [$d-1$]{}-planes. Moreover, there is an injective homomorphism of $K$-algebras $\GG(A){\hookrightarrow}A$ inducing the rational map from $\proj{A}$ to its Gauss image [@BGS Lemma 2.3].
In this paper, we study the Gauss algebra of toric algebras. If $A\subset S$ is a toric algebra with monomial generators $\gg=g_1,\ldots,g_n$ of the same degree, then all minors of $\Theta(\gg)$ are monomials. In particular, the Gauss algebra is a toric algebra. For example, it has been shown that the Gauss algebra of a Veronese algebra is again Veronese [@BGS Proposition 3.2]. Veronese algebras are special cases of a more general class of algebras, namely the class of Borel fixed algebras. As a generalization of the above mentioned result, we show that the Gauss algebra of any Borel fixed algebra is again Borel fixed, see Theorem \[Borel\_fix\]. This approach provides a simple proof for [@BGS Proposition 3.2]. Veronese algebras are actually principal Borel fixed algebras, that is, the Borel set defining the algebra admits precisely one Borel generator. In general the number of Borel generators of the Borel fixed algebra $A$ and that of $\GG(A)$ may be different. However, in Theorem \[principal\] we show that the Gauss algebra of a principal Borel fixed algebra is again principal. This has the nice consequence that the Gauss algebra of a principal Borel fixed algebra is a normal Cohen-Macaulay domain, and its defining ideal is generated by quadrics. Note that in general the property of $A$ being normal does not imply that $\GG(A)$ is normal, and vice versa (Example \[non-nomral\] and Theorem \[2-Veronese\](d)).
The Gauss algebra of a squarefree Veronese algebra is much harder to understand. We can give a full description of $\GG(A)$, when $A$ is a squarefree Veronese algebra generated in degree $2$. In Theorem \[2-Veronese\] we show that $\GG(A)$ is defined by all monomials $u$ of degree $d$ and $|\supp(u)|\geq 3$, provided $d\geq 5$. Algebras of this type may be viewed as the base ring of a polymatroid. In particular, $\GG(A)$ is normal and Cohen–Macaulay. However, $\GG(A)$ is not normal for $d=4$. Yet for any $d$, the Gauss map $\gamma: \proj{A}\dashrightarrow \proj{\GG(A)}$ is birational.
In the last section of this paper we study the Gauss algebra of the edge ring of a finite graph. Let $G$ be a loop-less connected graph with $d$ vertices. It is well-known that the dimension of the edge ring $A=K[G]$ of $G$ is $d$, if $G$ is not bipartite, and is $d-1$ if $G$ is bipartite. In our setting, $\GG(A)$ is defined under the assumption that $\dim A=d$. By using a well-known theorem [@GKS] of graph theory, the generators of $\GG(A)$, when $G$ is not bipartite, correspond to $d$-sets $E$ of edges of $G$, satisfying the property that the subgraph with edges $E$ has an odd cycle in each of its connected components. In the bipartite case we form the graph $G^L$, where $L$ is a non-empty subset of the vertex set of $G$, by adding a loop to $G$ for each vertex in $L$. Then $A=K[G^L]$ has dimension $d$, and there is a bijective map from the set of pairs $(V,T)$ to the set of monomial generators of $\GG(A)$, where $V$ is a non-empty subset of $L$ and $T$ is a set of edges which form a spanning forest $G(T)$ of $G$ with the property that each connected component of $G(T)$ contains exactly one vertex of $V$. From this description it follows that if $|L|=1$, then the embedding dimension of the Gauss algebra is bounded by the complexity of the graph, which, by definition, is the number of spanning trees of the graph. This is an important graph invariant. The number of spanning trees provides a measure for the
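The complexity of a graph can be computed with Kirchhoff's matrix-tree theorem, as in this short sketch (ours; the edge lists are illustrative):

```python
import numpy as np

# Number of spanning trees (the complexity) via Kirchhoff's matrix-tree
# theorem: delete one row and column of the Laplacian L = D - A and
# take the determinant.
def complexity(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return round(np.linalg.det(L[1:, 1:]))

# K_4 has 4^{4-2} = 16 spanning trees (Cayley's formula); C_4 has 4.
print(complexity([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4))  # → 16
print(complexity([(0, 1), (1, 2), (2, 3), (3, 0)], 4))                  # → 4
```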
---
abstract: |
[*Abstract:*]{} We consider a degenerate stochastic differential equation that has a sticky point in the Markov process sense. We prove that weak existence and weak uniqueness hold, but that pathwise uniqueness does not hold nor does a strong solution exist.
*Subject Classification: Primary 60H10; Secondary 60J60, 60J65*
author:
- 'Richard F. Bass'
title: A stochastic differential equation with a sticky point
---
Introduction {#S:intro}
============
The one-dimensional stochastic differential equation $$dX_t=\sigma(X_t)\, dW_t \label{intro-E0}$$ has been the subject of intensive study for well over half a century. What can one say about pathwise uniqueness when $\sigma$ is allowed to be zero at certain points? Of course, a large amount is known, but there are many unanswered questions remaining.
Consider the case where $\sigma(x)=|x|^\al$ for $\al\in (0,1)$. When $\al \ge 1/2$, it is known there is pathwise uniqueness by the Yamada-Watanabe criterion (see, e.g., [@stoch Theorem 24.4]) while if $\al<1/2$, it is known there are at least two solutions, the zero solution and one that can be constructed by a non-trivial time change of Brownian motion. However, that is not the end of the story. In [@xtoal], it was shown that there is in fact pathwise uniqueness when $\al<1/2$ provided one restricts attention to the class of solutions that spend zero time at 0.
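A naive Euler–Maruyama discretization (our illustration, not part of the paper) makes the non-uniqueness for $\al<1/2$ tangible: since $\sigma(0)=0$, the scheme started at $0$ reproduces the trivial zero solution identically, whatever the driving noise, and never finds the time-changed Brownian solution.

```python
import random

# Euler–Maruyama for dX = |X|^alpha dW started at X_0 = 0: because
# sigma(0) = 0, every increment is 0 and the discrete path stays at 0.
def euler_path(alpha, n_steps=1000, dt=1e-3, seed=1):
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        x += abs(x) ** alpha * dw   # sigma(x) = |x|^alpha vanishes at 0
        path.append(x)
    return path

path = euler_path(alpha=0.25)
print(max(abs(v) for v in path))  # → 0.0: the scheme never leaves 0
```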
This can be better understood by using ideas from Markov process theory. The continuous strong Markov processes on the real line that are on natural scale can be characterized by their speed measure. For the example in the preceding paragraph, the speed measure $m$ is given by $$m(dy)=1_{(y\ne 0)} |y|^{-2\al}\, dy+\gamma\delta_0(dy),$$ where $\gamma\in [0,\infty]$ and $\delta_0$ is point mass at 0. When $\gamma=\infty$, we get the 0 solution, or more precisely, the solution that stays at 0 once it hits 0. If we set $\gamma=0$, we get the situation considered in [@xtoal] where the amount of time spent at 0 has Lebesgue measure zero, and pathwise uniqueness holds among such processes.
In this paper we study an even simpler equation: $$dX_t=1_{(X_t\ne 0)}\, dW_t,\qquad X_0=0, \label{intro-E1}$$ where $W$ is a one-dimensional Brownian motion. One solution is $X_t=W_t$, since Brownian motion spends zero time at 0. Another is the identically 0 solution.
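A quick numerical check (again our own illustration, on an assumed discretization grid) is consistent with the fact that Brownian motion spends zero time at 0: the occupation measure of a band $\{|W_t|\le\varepsilon\}$ shrinks with the band width $\varepsilon$, and no grid point of the simulated path sits exactly at 0.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-4
# Discretized Brownian path on [0, 10]
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 100_000))])

def occupation(eps):
    """Grid approximation of the Lebesgue measure of {t <= 10 : |W_t| <= eps}."""
    return dt * np.count_nonzero(np.abs(W) <= eps)

# The occupation of a band scales roughly like its width (2*eps times the
# local time at 0), consistent with {t : W_t = 0} having Lebesgue measure 0.
assert occupation(0.001) < occupation(0.01) < occupation(0.1)
assert np.count_nonzero(W[1:] == 0.0) == 0  # no grid point is exactly 0
```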
We take $\gamma\in (0,\infty)$ and consider the class of solutions to which spend a positive amount of time at 0, with the amount of time parameterized by $\gamma$. We give a precise description of what we mean by this in Section \[S:SMM\].
Representing diffusions on the line as the solutions to stochastic differential equations has a long history, going back to Itô in the 1940’s, and this paper is a small step in that program. For this reason we characterize our solutions in terms of occupation times determined by a speed measure. Other formulations that are purely in terms of stochastic calculus are possible; see the system – below.
We start by proving weak existence of solutions to for each $\gamma\in (0,\infty)$. We in fact consider a much more general situation. We let $m$ be any measure that gives finite positive mass to each open interval and define the notion of continuous local martingales with speed measure $m$.
We prove weak uniqueness, or equivalently, uniqueness in law, among continuous local martingales with speed measure $m$. The fact that we have uniqueness in law not only within the class of strong Markov processes but also within the class of continuous local martingales with a given speed measure may be of independent interest.
We then restrict our attention to and look at the class of continuous martingales that solve and at the same time have speed measure $m$, where now $$m(dy)=1_{(y\ne 0)}\, dy+\gamma\,\delta_0(dy) \label{intro-E3}$$ with $\gamma\in (0,\infty)$.
Even when we fix $\gamma$ and restrict attention to solutions to that have speed measure $m$ given by , pathwise uniqueness does not hold. The proof of this fact is the main result of this paper. The reader familiar with excursions will recognize some ideas from that theory in the proof.
Finally, we prove that for each $\gamma\in (0,\infty)$, no strong solution to among the class of continuous martingales with speed measure $m$ given by exists. Thus, given $W$, one cannot find a continuous martingale $X$ with speed measure $m$ satisfying such that $X$ is adapted to the filtration of $W$. A consequence of this is that certain natural approximations to the solution of do not converge in probability, although they do converge weakly.
Besides increasing the versatility of , one can easily imagine a practical application of sticky points. Suppose a corporation has a takeover offer at \$10. The stock price is then likely to spend a great deal of time precisely at \$10 but is not constrained to stay at \$10. Thus \$10 would be a sticky point for the solution of the stochastic differential equation that describes the stock price.
Regular continuous strong Markov processes on the line which are on natural scale and have speed measure given by are known as sticky Brownian motions. These were first studied by Feller in the 1950’s and Itô and McKean in the 1960’s.
A posthumously published paper by Chitashvili ([@Chitashvili]) in 1997, based on a technical report produced in 1988, considered processes on the non-negative real line that satisfied the stochastic differential equation $$dX_t=1_{(X_t\ne 0)}\, dW_t+\theta\, 1_{(X_t=0)}\, dt, \qquad X_t\ge 0,\quad X_0=x_0, \label{one-sided}$$ with $\theta\in (0,\infty)$. Chitashvili proved weak uniqueness for the pair $(X,W)$ and showed that no strong solution exists.
Warren (see [@Warren1] and also [@Warren2]) further investigated solutions to . The process $X$ is not adapted to the filtration generated by $W$ and has some “extra randomness,” which Warren characterized.
While this paper was under review, we learned of a preprint by Engelbert and Peskir [@Engelbert-Peskir] on the subject of sticky Brownian motions. They considered the system of equations $$\begin{aligned}
dX_t&=1_{(X_t\ne 0)}\, dW_t, \label{EPeq1}\\
1_{(X_t=0)}\, dt&=\frac{1}{\mu}\, d\ell^0_t(X),\label{EPeq2}\end{aligned}$$ where $\mu\in (0,\infty)$ and $\ell^0_t$ is the local time in the semimartingale sense at 0 of $X$. (Local times in the Markov process sense can be different in general.) Engelbert and Peskir proved weak uniqueness of the joint law of $(X,W)$ and proved that no strong solution exists. They also considered a one-sided version of this equation, where $X\ge 0$, and showed that it is equivalent to . Their results thus provide a new proof of those of Chitashvili.
It is interesting to compare the system – investigated by [@Engelbert-Peskir] with the SDE considered in this paper. Both include the equation . In this paper, however, in place of we use a side condition whose origins come from Markov process theory, namely: $$\begin{aligned}
X &\mbox{\rm is a continuous martingale with speed measure }\label{RBeq2}\\
&~~~~~ m(dx)=
dx+\gamma \delta_0(dx),\nn\end{aligned}$$ where $\delta_0$ is point mass at 0 and “continuous martingale with speed measure $m$” is defined in . One can show that a solution to the system studied by [@Engelbert-Peskir] is a solution to the formulation considered in this paper and vice versa, and we sketch the argument in Remark \[comparison\]. However, we did not see a way of proving this without first proving the uniqueness results of this paper and using the uniqueness results of [@Engelbert-Peskir].
Other papers that show no strong solution exists for stochastic differential equations that are closely related include [@Barlow-skew], [@Barlow-LMS], and [@Karatzasetal].
After a short section of preliminaries, Section \[S:prelim\], we define speed measures for local martingales in Section \[S:SMM\] and consider the existence of such local martingales. Section \[S:WU\] proves weak uniqueness, while in Section \[S:SDE\] we prove that continuous martingales with speed measure $m$ given by satisfy . Sections \[S:approx\], \[S:est\], and \[S:PU\] prove that pathwise uniqueness and strong existence fail. The first of these sections considers some approximations to a solution to , the second proves some needed estimates, and the proof is
[**General aspects of heterotic string compactifications**]{}\
[**on stacks and gerbes**]{}
Lara B. Anderson$^1$, Bei Jia$^2$, Ryan Manion$^3$, Burt Ovrut$^4$, Eric Sharpe$^2$
$^1$ Center for the Fundamental Laws of Nature, Jefferson Laboratory, Harvard University, 17 Oxford Street, Cambridge, MA 02138

$^2$ Department of Physics, Robeson Hall, 0435, Virginia Tech, Blacksburg, VA 24061

$^3$ Department of Mathematics, David Rittenhouse Laboratory, 209 South 33rd Street, University of Pennsylvania, Philadelphia, PA 19104-6395

$^4$ Department of Physics, David Rittenhouse Laboratory, 209 South 33rd Street, University of Pennsylvania, Philadelphia, PA 19104-6395
[lara@physics.harvard.edu]{}, [beijia@vt.edu]{}, [rymanion@gmail.com]{}, [ovrut@elcapitan.hep.upenn.edu]{}, [ersharpe@vt.edu]{}
In this paper we work out some basic results concerning heterotic string compactifications on stacks and, in particular, gerbes. A heterotic string compactification on a gerbe can be understood as, simultaneously, both a compactification on a space with a restriction on nonperturbative sectors, and also, a gauge theory in which a subgroup of the gauge group acts trivially on the massless matter. Gerbes admit more bundles than corresponding spaces, which suggests they are potentially a rich playground for heterotic string compactifications. After we give a general characterization of heterotic strings on stacks, we specialize to gerbes, and consider three different classes of ‘building blocks’ of gerbe compactifications. We argue that heterotic string compactifications on one class is equivalent to compactification of the same heterotic string on a disjoint union of spaces, compactification on another class is dual to compactifications of other heterotic strings on spaces, and compactification on the third class is not perturbatively consistent, so that we do not in fact recover a broad array of new heterotic compactifications, just combinations of existing ones. In appendices we explain how to compute massless spectra of heterotic string compactifications on stacks, derive some new necessary conditions for a heterotic string on a stack or orbifold to be well-defined, and also review some basic properties of bundles on gerbes.
July 2013
Introduction
============
The compactification of heterotic superstrings on smooth Calabi-Yau threefolds has led to realistic $N=1$ supersymmetric particle physics in four-dimensions. For the $E_{8} \times E_{8}$ heterotic string, the generic structure of such vacua was presented in [@Donagi:2004qk; @Donagi:2004ia; @Donagi:2004su; @Donagi:2004ub]. Building upon these results, many phenomenologically relevant low-energy theories with MSSM-like matter spectra have been constructed, see for example [@Bouchard:2005ag; @Braun:2005ux; @Braun:2005bw; @Braun:2005zv; @Anderson:2009mh; @Anderson:2011ns; @Anderson:2012yf; @Braun:2011ni] for constructions and related work. However, the limitation of these vacua to equivariant vector bundles over smooth Calabi-Yau manifolds seems overly restrictive, and it is of considerable interest to try to construct heterotic vacua over more general backgrounds.
The purpose of this paper is to outline basic results and general issues in making sense of heterotic string compactifications on stacks, generalized spaces admitting metrics, spinors, and all the other items needed to make sense of a string compactification. This essentially completes a program started many years ago to understand the basics of string compactifications on stacks, see [*e.g.*]{} [@kps; @nr; @msx; @glsm; @summ; @cdhps; @karp1; @karp2; @ps5; @me-tex; @me-qts]. The original hope of this program was to find new SCFT’s, new string compactifications, arising from these generalized spaces. Although that has not proven to be the case, much has been learned about the structure of string compactifications, as we shall review.
One of the most physically interesting kinds of stacks are known as gerbes. The worldsheet theory of a string compactification on a gerbe can be understood in two more or less[^1] equivalent ways:
- as a sigma model on a space, but with a (combinatorial[^2]) restriction on allowed nonperturbative sectors, or
- as a gauge theory in which a (finite) subgroup of the gauge group acts trivially on the massless matter.
Viewed from the first perspective, it is clear that there is a potential problem with cluster decomposition in these theories. For (2,2) SCFT’s, this issue was addressed in [@summ], where it was argued that the SCFT is equivalent to that on a disjoint union of spaces with variable $B$ fields, a result listed there as the ‘decomposition conjecture.’ A sigma model on a disjoint union also violates cluster decomposition, but in an extremely mild fashion, easily understood. This duality has since proven crucial for understanding physics issues in many GLSM’s, see [*e.g.*]{} [@cdhps; @hori2; @ed-nick-me; @ncgw; @hkm; @enstx], and also has been used to make predictions for Gromov-Witten invariants of gerbes, predictions which have been checked in [*e.g.*]{} [@ajt1; @ajt2; @ajt3; @t1; @gt1; @xt1].
Viewed from the second perspective, there are analogous issues concerning whether and how physics can see a trivially-acting finite group. This was addressed in [@nr; @msx; @glsm], and will be reviewed later in this paper. Massless spectra of (2,2) SCFT’s are computed[^3] to contain multiple dimension zero operators, another sign of cluster decomposition issues. These multiple dimension zero operators are (discrete Fourier transforms of) identity operators counting the number of components in the corresponding disjoint union of spaces [@summ].
These ideas have also recently been applied to four-dimensional supergravity theories[^4] [@nati0; @git-sugrav; @banks-seib; @sugrav-g]. For example, gerbes admit line bundles with fractional Chern classes, so the Bagger-Witten [@bw1] quantization condition on cohomology classes of Kähler forms is modified when the supergravity moduli space admits a gerbe structure. More generally, a general introduction to four-dimensional supergravities whose moduli spaces are stacks (generic in Calabi-Yau compactification) is in [@sugrav-g]. Furthermore, it was shown in [@js1]\[appendix B\] that four-dimensional supergravity anomalies have a natural description in terms of stacks. See for example [@bgcmru; @bgcmu] for other applications.
This paper is concerned with heterotic string compactifications on stacks and, in particular, gerbes. As the introduction above alludes, there are many more bundles on gerbes than on corresponding spaces, which naively suggests that there could be a rich new landscape of (0,2) SCFT’s and heterotic string compactifications obtainable from heterotic compactifications on gerbes. Our results break into three fundamental building blocks or classes:
- For heterotic compactifications on gerbes in which the gauge bundle is a pullback from the base (equivalently, when the group that acts trivially on the base, also acts trivially on the bundle), the heterotic string compactification is consistent, and is equivalent to a compactification on a disjoint union of spaces. Compactifications of this form are discussed in section \[sect:het-decomp\].
- For heterotic compactifications on ${\mathbb Z}_2$ gerbes in which the ${\mathbb Z}_2$ acts nontrivially on a rank 8 bundle, these compactifications do not decompose, and (we conjecture) are T-dual to ordinary heterotic compactifications (on spaces) with a different left-moving GSO. In other words, a Spin$(32)/{\mathbb Z}_2$ compactification on such a gerbe is equivalent to an $E_8 \times E_8$ compactification on a space. Compactifications of this form are described in section \[sect:het-gsomods\].
- We conjecture that when the bundle is nontrivial over the gerbe but either is not of rank 8 or the gerbe is not ${\mathbb Z}_2$, a perturbative heterotic string compactification is not consistent. That said, we do provide some seemingly consistent (0,2) SCFT’s defined by gerbes and bundles of this form, but unfortunately they do not seem to be useful for heterotic string compactification. Compactifications of this form are discussed in section \[sect:type3:twisted\].
In addition, it is also possible to build examples displaying combinations of these classes, which are discussed
---
abstract: 'The top quark pair production and decay are considered in the framework of the smeared-mass unstable particles model. The results for total and differential cross sections in the vicinity of the $t\bar{t}$ threshold are in good agreement with previous results in the literature. The strategy for calculating the higher-order corrections in the framework of the model is discussed. The suggested approach significantly simplifies calculations compared to the standard perturbative one and can serve as a convenient tool for fast and precise preliminary analysis of processes involving intermediate time-like top quark exchanges in the near-threshold region.'
address:
- 'Institute of Physics, Southern Federal University, Rostov-on-Don 344090, Russia'
- |
Theoretical High Energy Physics, Department of Astronomy and Theoretical Physics,\
Lund University, SE 223-62 Lund, Sweden
- 'Institute of Physics, Southern Federal University, Rostov-on-Don 344090, Russia'
author:
- 'V. I. KUKSA[^1]'
- 'R. S. PASECHNIK[^2]'
- 'D. E. VLASENKO[^3]'
title: MASS SHELL SMEARING EFFECTS IN TOP PAIR PRODUCTION
---
Introduction
============
The top pair production and decay are the key processes for precision tests of the Standard Model (SM) (see e.g. Ref. \[\] and references therein). They were intensively studied in the framework of Quantum Chromodynamics (QCD) and Electro-Weak (EW) perturbation theory during the last two decades, and various methods and schemes were proposed. The major goal of these investigations is to determine the basic physical parameters of the top quark, such as its mass, width and couplings with other SM particles. In the past, top quark physics was one of the primary research objectives at the Tevatron. Nowadays, the greatest attention is paid to the process of top quark production at the LHC (see e.g. Refs. \[\]). However, the most precise measurements of the top quark properties can be reached at the future Linear Collider (LC), which is expected to operate in a clean experimental environment. The top quark physics is one of the most interesting and challenging targets for future $e^+e^-$ or $\mu^+\mu^-$ LC experiments \[\].
The top pair production is followed by a decay chain with intermediate gauge boson states, i.e. the full process under consideration is $e^+e^-\to t^*\bar{t^*}\to b\bar{b}W^+W^-\to
b\bar{b}4f$. The widths of both the top quark and the $W$-boson are large, and one necessarily needs to take into account corresponding Finite-Width Effects (FWE). In the framework of the standard perturbative approach, these effects are typically described by means of dressed propagators which are regularized by the total decay width. In order to analyze the full process of the top pair production relevant for phenomenological studies, we also have to take into account the background contribution coming from many other topologically different diagrams leading to the same six-fermion final states, which is a rather non-trivial task.
The Born-level cross-sections of the processes $e^+e^-\to
b\bar{b}u\bar{d}\mu^-\bar{\nu}_{\mu}$ and $e^+e^-\to b\bar{b}4q$ were calculated in Refs. \[\] and \[\], respectively. Other exclusive reactions with $b\bar{b}d\bar{u}\mu^+\nu_{\mu},\,b\bar{b}c\bar{s}d\bar{u}$ and $b\bar{b}\mu^+\nu_{\mu}\tau^-\bar{\nu}_{\tau}$ final states were considered in Ref. \[\]. In particular, it was shown that the contribution of the top-pair signal $e^+e^-\to t^*\bar{t}^*\to
b\bar{b}4f$ is dominant, but the background (caused by one-resonant or non-resonant diagrams) can be quite significant too. However, it can be drastically decreased by applying certain kinematical cuts on the appropriate invariant masses.
The QCD corrections for the reaction $e^+e^-\to t\bar{t}$ in the continuum above the threshold were previously obtained in Refs. \[\], and the one-loop EW corrections were calculated in many papers (for the corresponding references, see e.g. the Introduction of Ref. \[\]). Concerning radiative corrections (RC) to the reaction $e^+e^-\to b\bar{b}4f$ with six-fermion final states, the situation is more complicated and less clear \[\]. At the tree level, each of these reactions receives contributions from several hundred diagrams. The calculation of the full $O(\alpha)$ radiative corrections is very complicated, and different approximation schemes are typically applied. The most detailed analysis of the exclusive reactions $e^+e^-\to
b\bar{b}\mu^+\nu_{\mu}\mu^-\bar{\nu}_{\mu}$ and $e^+e^-\to
b\bar{b}d\bar{u}\mu^-\bar{\nu}_{\mu}$ was performed in Ref. \[\]. In this paper, the cross-sections were calculated taking into account the leading radiative corrections, such as the initial state radiation (ISR) and factorizable EW corrections to the on-shell top-pair production, to the decay of the top quark into $bW$ and to the subsequent decays of the $W$-bosons. Usually, such calculations are carried out automatically by Monte Carlo techniques (see Ref. \[\] and references therein).
In this work, we consider reactions like $e^+e^-\to t^*\bar{t}^*\to
b\bar{b}4f$ with arbitrary four-fermion final states $4f$. The analysis is performed in the framework of the smeared-mass unstable particles model (below, SMUP model) \[\]. Due to exact factorization at the intermediate $t,\bar{t}$ and $W^+,W^-$ states, the cross-section can be represented in a simple analytical form which is convenient for analytical and numerical analysis. So far, we have applied the SMUP approach only to unstable gauge boson production and decay (see e.g. Refs. \[\]). As a continuation of our earlier studies, in this work we test the SMUP approach for the case of unstable fermions, specifically, top quarks. In our calculations, we take into account the NLO radiative EW and QCD factorizable corrections which dominate close to the $t\bar{t}$ threshold. We also illustrate the influence of the mass smearing effects and various radiative corrections (RC’s) on the differential cross-sections. The results are compared with those calculated using the standard perturbative methods \[\], where cross-sections were presented for the full $2\to 6$ process and, separately, for the top signal contribution alone. It was shown that in the Born approximation the results coincide with rather high precision, and deviations of the higher-order corrected results from the standard ones are at the percentage level. Thus, the suggested approach can be applied in a fast preliminary analysis of various complicated processes involving intermediate top quark exchanges in the Standard Model and beyond.
Note, here we do not consider the near-threshold effects caused by the generation of the coupled $t\bar{t}$ state, which were considered in detail in many previous studies (see, for instance, Ref. \[\] and references therein). We postpone this issue for a forthcoming study.
The model cross-section of the top-pair production and decay at the tree level
==============================================================================
The process of top-pair production with subsequent decay $e^+e^-\to
t^*\bar{t^*}\to b\bar{b}W^+W^-\to b\bar{b}4f$ is schematically represented in Fig. \[fig1\]. The full process contains two steps with unstable intermediate time-like states, namely, $t,\bar{t}$ and $W^+,W^-$ states. In this case, as was shown in Ref. \[\], the double factorization takes place and can be described in the framework of the SMUP model \[\]. Due to this factorization, the full process can be divided into three stages: $e^+e^-\to t^*\bar{t}^*$, $t^*\bar{t}^*\to b\bar{b}W^+W^-$ and $W^+W^-\to 4f$. Here, the top-quarks and $W$-bosons are treated as unstable particles, and finite-width effects should be taken into account.
![Feynman diagram of the top quark signal process $e^+e^-\to
t^*\bar{t^*}\to b\bar{b}W^+W^-\to b\bar{b}4f$.[]{data-label="fig1"}](tbart1.eps){width=".6\textwidth"}
The SMUP model cross-section of the first reaction $e^+e^-\to
t^*\bar{t}^*$ can be written as \[\] $$\label{2.1}
\sigma(e^+e^-\to
t^*\bar{t}^*)=\int_{m^2_0}^s\int_{m^2_0}^{(\sqrt{s}-m_1)^2}
\sigma(e^+e^
---
bibliography:
- 'sample.bib'
---
[**Detecting Memory and Structure in Human Navigation Patterns Using Markov Chain Models of Varying Order** ]{}\
Philipp Singer$^{1,\ast}$, Denis Helic$^{2}$, Behnam Taraghi$^{3}$, Markus Strohmaier$^{1,4}$\
**1** GESIS - Leibniz Institute for the Social Sciences, Cologne, Germany\
**2** Technical University of Graz, Knowledge Technologies Institute, Graz, Austria\
**3** Technical University of Graz, Institute for Information Systems and Computer Media, Graz, Austria\
**4** University Koblenz-Landau, Institute for Web Science and Technologies, Koblenz, Germany\
$\ast$ E-mail: philipp.singer@gesis.org
Abstract {#abstract .unnumbered}
========
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google’s PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
Introduction {#sec:intro .unnumbered}
============
Navigation represents a fundamental activity for users on the Web. Modeling this activity, i.e., understanding how predictable human navigation is and whether regularities can be detected, has been of interest to researchers for nearly two decades; an early example is the work by Catledge and Pitkow [@catledge]. Another example is [@xing], which focused on understanding preferred user navigation patterns in order to reveal users’ interests or preferences. Not only has our community been interested in gaining deeper insights into human behavior during navigation, but also in understanding how models of human navigation can improve user interfaces or information network structures [@borges1999]. Further work has focused on understanding whether models of human navigation can help to predict user clicks in order to prefetch Web sites (e.g., [@bestavros]) or enhance a site’s interface or structure (e.g., [@perkowitz]). More recently, such models have also been deployed in the field of recommender systems (e.g., [@rendle]).
However, models of human navigation can only be useful to the extent human navigation itself exhibits regularities that can be exploited. An early study on user navigation in the Web by Huberman et al. [@huberman], for example, already identified interesting regularities in the distributions of user page visits on a Web site. More recently, Wang and Huberman [@wang2012] confirmed these observations and Song et al. [@song] argued that the regularities in human activities might be based on the inherent regularities of human behavior in general.
The most prominent model for describing human navigation on the Web is the Markov chain model (e.g., [@pirolli]), where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, the Markov chain model has been memoryless in a wide range of works (e.g., Google’s PageRank [@brin]) indicating that the next state only depends on the current state of a user’s Web trail. Recently, a study [@chierichetti] suggested that human navigation might be better modeled with memory – i.e., the next page depends on a longer history of past clicks. However, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used.
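To fix ideas, the memoryless model is estimated from click data by simple count normalization. The sketch below (our own minimal illustration with hypothetical page names) computes the maximum-likelihood transition matrix of a first-order Markov chain.

```python
import numpy as np

def fit_first_order(sequences, states):
    """Maximum-likelihood transition matrix of a memoryless (first-order)
    Markov chain: P[i, j] = count(i -> j) / count(i -> anything)."""
    idx = {s: k for k, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Two hypothetical click streams over three pages.
clicks = [["home", "news", "sports"], ["home", "news", "news"]]
P = fit_first_order(clicks, ["home", "news", "sports"])
assert P[0, 1] == 1.0   # "home" was always followed by "news"
assert P[1, 1] == 0.5   # "news" -> "news" in 1 of its 2 observed transitions
```

Row $i$ of $P$ is the estimated distribution over next pages given the current page $i$; a zero-order (i.i.d.) model would instead use a single marginal distribution over pages, and a higher-order model would condition on tuples of past pages.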
![**Example of a navigation sequence in the WikiGame dataset.** Bottom row of nodes: A user navigates a series of Wikipedia articles, which can be represented as a sequence of Web pages. Top row of nodes: Each Wikipedia article can be mapped to a corresponding topic through Wikipedia’s system of categories. This results in a sequence of topics.[]{data-label="fig:pathexample"}](paths_categories_cropped){width="85.00000%"}
[**Research questions.**]{} In this paper, we are interested in shedding a deeper light on regularities in human navigation on the World Wide Web by studying memory and structure in human navigation patterns. We start by investigating memory of human navigational paths over Web sites by determining the order of corresponding Markov chains. We are specifically interested in detecting if the benefit of a larger memory (or higher order Markov chain) can compensate for the higher complexity of the model. In order to understand whether and to what extent human navigation exhibits memory on a topical level, we abstract away from specific page transitions and study memory effects on a topical level by representing click streams as sequences of topics[^1] (cf. Figure \[fig:pathexample\]). This enables us to (i) move up from the page to topical level and (ii) significantly reduce the complexity of higher order models and therefore (iii) gain deeper insights into memory and structure of human navigational patterns. Finally, we discuss our findings and demonstrate interesting differences between human navigation in free browsing vs. more goal-oriented settings.
[**Methods and Materials.**]{} We study memory and structure in human navigation patterns on three similarly structured datasets: WikiGame (a navigation dataset with known navigation goals), Wikispeedia (another goal-oriented navigation dataset) and MSNBC (a free navigation dataset). For analyzing memory, we use Markov chains to model human behavior and analyze the appropriate Markov chain order – i.e., we investigate whether human navigation is memoryless or not. For model selection – i.e., the process of finding the most appropriate Markov chain order – we resort to a highly diverse array of methods stemming from distinct statistical schools: (i) likelihood [@stigler2002statistics; @tong1975], (ii) Bayesian [@Strelioff] and (iii) information-theoretic methods [@akaike; @katz; @murphy; @schwarz; @tong1975]. We supplement these with a (iv) cross validation approach for a prediction task [@murphy]. We thoroughly elaborate each method, put them into relation to each other and also highlight strengths and weaknesses of each. Such a detailed derivation of model parameters and model comparison is, for example, missing in previous work [@chierichetti], which prevents us from drawing definite conclusions. We apply these methods to our human navigational data in order to get an exhaustive picture of memory in human navigation. Finally, we identify structural aspects by analyzing the transition matrices produced by our Markov chain analyses.
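As a minimal sketch of the information-theoretic approach (our own toy example with synthetic sequences, not the datasets above), the snippet below computes the maximum-likelihood log-likelihood and parameter count of order-$k$ Markov chains and compares orders via AIC; the penalty term grows with the exponentially increasing number of parameters.

```python
import math
from collections import Counter, defaultdict

def markov_fit(seqs, order, n_states):
    """Log-likelihood of the maximum-likelihood order-k chain, together
    with its number of free parameters, n_states**k * (n_states - 1)."""
    ctx_counts = defaultdict(Counter)
    for seq in seqs:
        for i in range(order, len(seq)):
            ctx_counts[tuple(seq[i - order:i])][seq[i]] += 1
    ll = 0.0
    for nxt in ctx_counts.values():
        total = sum(nxt.values())
        for c in nxt.values():
            ll += c * math.log(c / total)
    return ll, n_states ** order * (n_states - 1)

def aic(ll, k):           # Akaike information criterion; lower is better
    return 2 * k - 2 * ll

# Perfectly alternating toy sequences: order 1 already explains them fully.
seqs = [[0, 1, 0, 1, 0, 1, 0, 1]] * 5
ll1, k1 = markov_fit(seqs, 1, 2)
ll2, k2 = markov_fit(seqs, 2, 2)
assert ll1 == 0.0 and ll2 == 0.0     # both orders fit the data perfectly...
assert aic(ll1, k1) < aic(ll2, k2)   # ...but AIC prefers the smaller model
```

On these toy sequences both orders fit perfectly, so the criterion selects the smaller model; on real click data the trade-off between likelihood gain and parameter growth is exactly what the model selection methods quantify.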
[**Contributions.**]{} The main contributions of this work are three-fold:
- First, we deploy four different, yet complementary, approaches for order selection of Markov chain models (likelihood, Bayesian, information-theoretic and cross validation methods) and elaborate their strengths and weaknesses. Hence, our work extends existing studies that model human navigation on the Web using Markov chain models [@chierichetti]. By applying these methods on navigational Web data, our work presents – to the best of our knowledge – the most comprehensive and systematic evaluation of Markov model orders for human navigational sequences on the Web to date. Furthermore, we make our methods in the form of an open source framework available online[^2] to aid future work [@github].
- Our empirical results confirm what we inferred from theory: It is difficult to make plausible statements about the appropriate Markov chain order having insufficient data but a vast amount of states, which is a common situation for Web page navigational paths. All evaluation approaches would favor a zero or first order because the number of parameters grows exponentially with the chain order and the available data is too sparse for proper parameter inferences. Thus, we show further evidence that the memoryless model seems to be a quite practical and legitimate model for human navigation on a page level.
- By abstracting away from the page level to a topical level, the results are different. By representing all datasets as navigational sequences of topics that describe underlying Web pages (cf. Figure \[fig:pathexample\]), we find evidence that topical navigation of humans is not memoryless at all. On three rather different datasets of navigation – free navigation (MSNBC) and goal-oriented navigation (WikiGame and Wikispeedia) – we find mostly consistent memory regularities on a topical level: In all cases, Markov chain models of order two (respectively three) best explain the observed navigational sequences. We analyze the structure of such navigation, identify strategies and the most salient common sequences of human navig
---
author:
- |
Sho Tanaka[^1]\
Kurodani 33-4, Sakyo-ku, Kyoto 606-8331, Japan
title: 'Holographic Relation in Yang’s Quantized Space-Time Algebra and Area-Entropy Relation in $D_0$ Brane Gas System'
---
In the preceding paper, we derived a kind of kinematical holographic relation (KHR) in the Lorentz-covariant Yang’s quantized space-time algebra (YSTA). It essentially reflects the fundamental nature of the noncommutative geometry of YSTA and its representation, that is, a definite kinematical reduction of spatial degrees of freedom in comparison with the ordinary lattice space. On the basis of the relation and its extension to various spatial dimensions, we derive a new area-entropy relation in a simple $D_0$ brane gas system subject to YSTA, following the idea of M-theory. Furthermore, we make clear its inner relation with the Bekenstein-Hawking area-entropy relation in connection with the Schwarzschild black hole.
Key words: Yang’s quantized space-time algebra(YSTA); kinematical reduction of spatial degrees of freedom; holographic relation in YSTA; area-entropy relation; Schwarzschild black hole; $D_0$ brane gas model.
Introduction
============
In the preceding paper,$^{[1]}$ referred to hereafter as I, we derived a kind of holographic relation in the Lorentz-covariant Yang’s quantized space-time algebra (YSTA),$^{[1],[2],[3]}$ which we called the kinematical holographic relation (KHR). As was emphasized in I, the relation essentially reflects the fundamental nature of the noncommutative geometry of YSTA, that is, a definite kinematical reduction of spatial degrees of freedom in comparison with the ordinary lattice space. As will be shown in the present paper, this relation seems also to give an important clue to resolve the long-pending problem encountered in the Bekenstein-Hawking area-entropy relation$^{[4]}$ or the holographic principle,$^{[5]}$ that is, the apparent gap between the degrees of freedom of any bounded spatial region associated with entropy and of its boundary area.
In addition to this last problem, the arguments on the holographic principle have also addressed the limits of present local field theory, as seen, for instance, in the unified regularization or cutoff of UV/IR divergences. With respect to this problem, as was emphasized in refs. \[1\], \[2\], YSTA, which is intrinsically equipped with short- and long-scale parameters, $\lambda$ and $R$, gives a finite number of spatial degrees of freedom for any finite spatial region and provides a basis for a field theory free from ultraviolet and infrared divergences.
In fact, we found in I, the following form of kinematical holographic relation (KHR) in YSTA, $$\begin{aligned}
\hspace{-3cm} [KHR] \hspace{2cm} n^L_{\rm dof}= {\cal A} / G,
\nonumber\end{aligned}$$ that is, the proportional relation between $n^L_{\rm dof}$ and ${\cal A}$ with proportional constant $G$, where $n^L_{\rm dof}$ and ${\cal A}$, respectively, denote the number of degrees of freedom of any spherical bounded spatial region with radius $L$ in Yang’s quantized space-time and the boundary area in units of $\lambda$.
In this paper, we derive a new area-entropy relation \[AER\] on the basis of the above \[KHR\] and make clear its inner relation with the ordinary Bekenstein-Hawking area-entropy relation in connection with the Schwarzschild black hole. This will be done through a simple $D_0$ brane gas model$^{[6]}$ on Yang’s quantized space-time according to the idea of M-theory,$^{[7]}$ with the aid of a kind of Gedanken-experiment on the present static toy model.
The present paper is organized as follows. In Sec. 2, we briefly recapitulate Yang’s quantized space-time algebra (YSTA) and its representations. Sec. 3 is devoted to the recapitulation of the kinematical holographic relation (KHR) and to its extension to the lower-dimensional bounded regions, $V_d^L$. In section 4, we introduce a simple $D_0$ brane (D-particle) gas model on $V_d^L$ and find a new area-entropy relation in the system in connection with Schwarzschild black hole. In the final section, we discuss the inner relation between our area-entropy relation based on KHR in YSTA and the ordinary Bekenstein-Hawking area-entropy relation and point out our future task beyond the present simple $D_0$ brane gas model.
Yang’s Quantized Space-Time Algebra (YSTA) and Its Representations
==================================================================
Yang’s Quantized Space-Time Algebra (YSTA)
-------------------------------------------
Let us first recapitulate briefly the Lorentz-covariant Yang’s quantized space-time algebra (YSTA). $D$-dimensional Yang’s quantized space-time algebra is introduced$^{[1],[2]}$ as the result of the so-called Inonu-Wigner’s contraction procedure with two contraction parameters, $R$ and $\lambda$, from $SO(D+1,1)$ algebra with generators $\hat{\Sigma}_{MN}$; $$\begin{aligned}
\hat{\Sigma}_{MN} \equiv i (q_M \partial /{\partial{q_N}}-q_N\partial/{\partial{q_M}}),\end{aligned}$$ which work on $(D+2)$-dimensional parameter space $q_M$ ($M= \mu,a,b)$ satisfying $$\begin{aligned}
- q_0^2 + q_1^2 + \cdots + q_{D-1}^2 + q_a^2 + q_b^2 = R^2.\end{aligned}$$
Here, $q_0 =-i q_D$ and $M = a, b$ denote two extra dimensions with space-like metric signature.
$D$-dimensional space-time and momentum operators, $\hat{X}_\mu$ and $\hat{P}_\mu$, with $\mu =1,2,\cdots,D,$ are defined in parallel by $$\begin{aligned}
&&\hat{X}_\mu \equiv \lambda\ \hat{\Sigma}_{\mu a}
\\
&&\hat{P}_\mu \equiv \hbar /R \ \hat{\Sigma}_{\mu b}, \end{aligned}$$ together with $D$-dimensional angular momentum operator $\hat{M}_{\mu \nu}$ $$\begin{aligned}
\hat{M}_{\mu \nu} \equiv \hbar \hat{\Sigma}_{\mu \nu}\end{aligned}$$ and the so-called reciprocity operator $$\begin{aligned}
\hat{N}\equiv \lambda /R\ \hat{\Sigma}_{ab}.\end{aligned}$$ Operators $( \hat{X}_\mu, \hat{P}_\mu, \hat{M}_{\mu \nu}, \hat{N} )$ defined above satisfy the so-called contracted algebra of the original $SO(D+1,1)$, or Yang’s space-time algebra (YSTA): $$\begin{aligned}
&&[ \hat{X}_\mu, \hat{X}_\nu ] = - i \lambda^2/\hbar \hat{M}_{\mu \nu}
\\
&&[\hat{P}_\mu,\hat{P}_\nu ] = - i\hbar / R^2\ \hat{M}_{\mu \nu}
\\
&&[\hat{X}_\mu, \hat{P}_\nu ] = - i \hbar \hat{N} \delta_{\mu \nu}
\\
&&[ \hat{N}, \hat{X}_\mu ] = - i \lambda^2 /\hbar \hat{P}_\mu
\\
&&[ \hat{N}, \hat{P}_\mu ] = i \hbar/ R^2\ \hat{X}_\mu,\end{aligned}$$ with familiar relations among ${\hat M}_{\mu \nu}$’s omitted.
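Since these commutation relations follow purely from the differential-operator realization of $\hat{\Sigma}_{MN}$ in Eq. (2.1), they can be checked mechanically. The following sympy sketch (an illustration, not part of the paper) verifies $[\hat{X}_1,\hat{P}_1]=-i\hbar\hat{N}$ for a single spatial index, where the metric signature does not enter.

```python
import sympy as sp

# One spatial coordinate q1 plus the two extra dimensions qa, qb.
q1, qa, qb, lam, R, hbar = sp.symbols('q1 qa qb lam R hbar', positive=True)
f = sp.Function('f')(q1, qa, qb)  # generic test function

def Sigma(qM, qN, expr):
    """Sigma_{MN} = i (q_M d/dq_N - q_N d/dq_M) applied to expr, cf. Eq. (2.1)."""
    return sp.I * (qM * sp.diff(expr, qN) - qN * sp.diff(expr, qM))

X = lambda e: lam * Sigma(q1, qa, e)         # X_1 = lambda Sigma_{1a}
P = lambda e: (hbar / R) * Sigma(q1, qb, e)  # P_1 = (hbar/R) Sigma_{1b}
N = lambda e: (lam / R) * Sigma(qa, qb, e)   # N   = (lambda/R) Sigma_{ab}

lhs = sp.expand(X(P(f)) - P(X(f)))           # [X_1, P_1] f
rhs = sp.expand(-sp.I * hbar * N(f))         # -i hbar N f
print(sp.simplify(lhs - rhs))                # 0
```

The same computation with distinct spatial indices $\mu\neq\nu$ confirms the $\delta_{\mu\nu}$ on the right-hand side; for the time-like component the signs track the metric of Eq. (2.2).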
Quasi-Regular Representation of YSTA
------------------------------------
Let us further recapitulate briefly the representation$^{[1],[2]}$ of YSTA for the subsequent consideration in section 4. First, it is important to notice the following elementary fact that ${\hat\Sigma}_{MN}$ defined in Eq.(2.1) with $M, N$ being the same metric signature have discrete eigenvalues, i.e., $0,\pm 1 ,
\pm 2,\cdots$, and those with $M, N$ being opposite metric signature have continuous eigenvalues, $\footnote{The corresponding eigenfunctions are explicitly given in ref. [9].}$ consistently with covariant commutation relations of YSTA. This fact was first emphasized by Yang$^{[3]}$ in connection with the preceding Snyder’s quantized space-time.$^{[8]}$ This conspicuous aspect is well understood by means of the familiar example of the three-dimensional angular momentum in quantum mechanics, where individual components, which are noncommutative among themselves, are able to have discrete eigenvalues, consistently with the three-dimensional rotation-invariance.
This fact implies that Yang’s space-time algebra (YSTA) pres
---
abstract: 'The PVLAS collaboration has recently reported the observation of a rotation of the polarization plane of light propagating through a transverse static magnetic field. Such an effect can arise from the production of a light, $m_A\sim$ meV, pseudoscalar coupled to two photons with coupling strength $g_{A\gamma}\sim 5\times 10^{-6}$ GeV$^{-1}$. Here, we review these experimental findings, discuss how astrophysical and helioscope bounds on this coupling can be evaded, and emphasize some experimental proposals to test the scenario.'
address: 'Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, D–22607 Hamburg, Germany'
author:
- Andreas Ringwald
title: '[-1cmDESY 05-229]{} Axion interpretation of the PVLAS data?[^1]'
---
There are various proposals in the literature in favour of the existence of light pseudoscalar particles beyond the Standard Model which have, so far, remained undetected, due to their weak coupling to ordinary matter. Such light particles would arise if there was a global continuous symmetry in the theory that is spontaneously broken in the vacuum. A well known example is the axion [@Weinberg:1978ma], which arises from a natural solution to the strong $CP$ problem. It appears as a pseudo Nambu-Goldstone boson of a spontaneously broken Peccei-Quinn symmetry [@Peccei:1977hh], whose scale $f_A$ determines its mass, ${m_A} = [z^{1/2}/(1+z)]\,
m_\pi f_\pi/ f_A= { 0.6\, {\rm meV}}
\times
(
10^{10}\, {\rm GeV}/{ f_A}
)
$ in terms of the mass $m_\pi$ and decay constant $f_\pi$ of the pion and the current quark mass ratio $z=m_u/m_d$. Only invisible axion models [@Kim:1979if; @Zhitnitsky:1980tq], where $f_A\gg 247$ GeV, are viable experimentally [@Eidelman:2004wy].
Clearly, it is of great interest to set stringent constraints on the properties of such light pseudoscalars. The interactions of axions and similar light pseudoscalars with Standard Model particles are model dependent, i.e. not a function of $1/f_A$ only. The most stringent constraints to date come from their coupling to photons, $g_{A\gamma}$, which arises via the axial anomaly [@Bardeen:1977bd], $$\label{eq:ax_ph}
{\mathcal L}_{\rm int} =
-\frac{1}{4}\,{ g_{A\gamma}}\,A\ F_{\mu\nu} \tilde{F}^{\mu\nu}
=
{ g_{A\gamma}}\,A\ {\mathbf E}\cdot {\mathbf B}\, ;
\hspace{5ex}
{ g_{A\gamma}} = -\frac{\alpha}{2\pi { f_A}}
\left( { \frac{E}{N}} - \frac{2}{3}\,\frac{4+z}{1+z}\right)
\,,$$ where $A$ is the pseudoscalar field, $F_{\mu\nu}$ ($\tilde{F}^{\mu\nu}$) the (dual) electromagnetic field strength tensor, $\alpha$ the fine-structure constant, and $E/N$ the ratio of electromagnetic over color anomalies. As illustrated in Fig. \[fig:ax\_ph\], two quite distinct invisible axion models, namely the KSVZ [@Kim:1979if] (or hadronic) and the DFSZ [@Zhitnitsky:1980tq] (or grand unified) one, lead to quite similar $g_{A\gamma}$. The strongest constraints currently involve cosmological and astrophysical considerations. Only the laser experiments in Fig. \[fig:ax\_ph\] aim also at the production of axions in the laboratory.
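For orientation, the scales involved are easy to evaluate numerically. The sketch below (illustrative only) computes $m_A$ and $|g_{A\gamma}|$ from the formulas above for $f_A = 10^{10}$ GeV, using $z\simeq 0.56$ and the standard anomaly ratios $E/N=0$ (KSVZ) and $E/N=8/3$ (DFSZ).

```python
import math

alpha = 1 / 137.036   # fine-structure constant
z = 0.56              # current quark mass ratio m_u/m_d (illustrative value)

def axion_mass_meV(f_A_GeV):
    """m_A = 0.6 meV x (10^10 GeV / f_A)."""
    return 0.6 * (1e10 / f_A_GeV)

def g_A_gamma(f_A_GeV, E_over_N):
    """Axion-photon coupling in GeV^-1 from the anomaly formula above."""
    return -(alpha / (2 * math.pi * f_A_GeV)) * (
        E_over_N - (2 / 3) * (4 + z) / (1 + z))

f_A = 1e10  # GeV
print(axion_mass_meV(f_A))         # 0.6 (meV)
print(abs(g_A_gamma(f_A, 0)))      # KSVZ, E/N = 0:   ~2e-13 GeV^-1
print(abs(g_A_gamma(f_A, 8 / 3)))  # DFSZ, E/N = 8/3: ~8e-14 GeV^-1
```

The resulting couplings of order $10^{-13}$ GeV$^{-1}$ sit many orders of magnitude below the $g_{A\gamma}\sim 5\times 10^{-6}$ GeV$^{-1}$ suggested by the PVLAS signal, which is why the axion interpretation discussed here requires evading the standard bounds.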
![Exclusion region in mass $m_A$ vs. axion-photon coupling $g_{A\gamma}$ for various current and future experiments. The laser experiments [@Cameron:mr; @Zavattini:2005tm; @Ringwald:2001cp; @Ringwald:2003ns] aim at axion production and detection in the laboratory. The galactic dark matter experiments [@Eidelman:2004wy] exploit microwave cavities to detect axions under the assumption that axions are the dominant constituents of our galactic halo, and the solar experiments search for axions from the sun [@Andriamonje:2004hi]. The constraint from horizontal branch (HB) stars [@Eidelman:2004wy; @Raffelt:1999tx] arises from a consideration of stellar energy losses through axion production. \[fig:ax\_ph\]](ax_ph_lim_taup05.eps){width="15.5cm"}
Let us discuss such laser experiments in some detail. The most straightforward ones exploit photon regeneration. They are based on the idea [@Sikivie:ip] of sending a polarized laser beam, with average power $\langle P\rangle$ and frequency $\omega$, along a superconducting dipole magnet of length $\ell$, such that the laser polarization is parallel to the magnetic field. In the latter, the photons may convert into axions via a Primakoff process. If another identical dipole magnet is set up in line with the first magnet, with a sufficiently thick wall between them to absorb the incident laser photons, then photons may be regenerated from the pure axion beam in the second magnet and detected with an efficiency $\eta$. The expected counting rate of such an experiment is given by $$\label{eq:ax_counting_rate}
\frac{{\rm d}N_\gamma}{{\rm d}t} =
{\frac{\langle P\rangle }{\omega}}\
\frac{N_r+2}{2}\,
\frac{1}{16} \left( g_{A\gamma}\,{B}\,\ell\right)^4
\sin^2
\left( \frac{m_A^2\,\ell }{4\,\omega }
\right)
\left( \frac{m_A^2\,\ell }{4\,\omega }
\right)^2
\approx { \frac{\langle P\rangle }{\omega}}\
\frac{N_r+2}{2}\,
\frac{1}{16} \left( g_{A\gamma}\,{B}\,\ell\right)^4
\eta
\,,$$ if one makes use of the possibility of putting the first magnet into an optical cavity with a total number $N_r$ of reflections. For $m_A \ll \sqrt{2\,\pi\,\omega/\ell} = 4\times 10^{-4}\, {\rm eV}
\sqrt{({ \omega}/1\, {\rm eV}) (10\, {\rm m}/\ell ) }$, the approximate sign in (\[eq:ax\_counting\_rate\]) applies and the expected counting rate for a photon regeneration experiment is independent of the axion mass. A pilot photon regeneration experiment was performed by the Brookhaven-Fermilab-Rutherford-Trieste (BFRT) collaboration [@Cameron:mr]. It employed an optical laser of wavelength $\lambda =2\pi/\omega = 514$ nm and power $\langle P\rangle = 3$ W for $t=220$ minutes in an optical cavity with $N_r=200$, and used two superconducting dipole magnets with $B = 3.7$ T and $\ell = 4.4$ m. No signal of photon regeneration was found, which leads, taking into account a detection efficiency of $\eta =0.055$, to a $2\,\sigma$ upper limit of $g_{A\gamma}<6.7\times 10^{-7}$ GeV$^{-1}$ for axion-like pseudoscalars with mass $m_A< 10^{-3}$ eV.
Another possibility to probe $g_{A\gamma}$ is to measure changes in the polarization state when photons have traversed a transverse magnetic field [@Maiani:1986md]. In particular, the real production of axions leads to a rotation of the polarization plane of an initially linearly polarized laser beam by an angle $$\begin{aligned}
\label{ax_rot}
\epsilon &=& N_r\,\frac{g_{A\gamma}^2\,B^2\,\omega^2}{m_A^4}\,
\sin^2\left( \frac{m_A^2\,\ell}{4\,\omega}\right)\,
\sin 2\,\theta
\approx \frac{N_r}{16} \left( g_{A\gamma}\,B\,\ell \right)^2\,\sin 2\,\theta
\,,\end{aligned}$$ where $\theta$ is the angle between the light polarization direction and the magnetic field component normal to the light propagation vector. The BFRT collaboration has also performed a pilot polarization experiment along these lines, with the same laser and magnets described before. For $\ell = 8.8$ m, $B=2$ T, and $N_r=254$, an upper limit on the rotation angle $\epsilon< 3.5\times 10^{-10}$ rad was set, leading to a limit $g_{A\gamma}< 3.6\times
---
abstract: |
The energies and wave functions of stationary many-body states are analyzed to look for the signatures of quantum chaos. Shell model calculations with the Wildenthal interaction are performed in the $J-T$ scheme for 12 particles in the $sd$-shell. The local level statistics are in perfect agreement with the GOE predictions. The analysis of the amplitudes of the eigenvectors in the shell model basis with the aid of the informational entropy and moments of the distribution function shows evidence for local chaos with a localization length reaching 90% of the total dimension in the middle of the spectrum. The degree of chaoticity is sensitive to the strength of the residual interaction as compared to the single particle energy spacing.
[**[PACS numbers:]{}**]{} 24.60.-k, 24.60.Lz, 21.10.-k, 21.60.Cs
---
**CHAOS AND ORDER IN THE SHELL MODEL EIGENVECTORS\
Vladimir Zelevinsky$^{1,2}$, Mihai Horoi$^{1,3}$ and B. Alex Brown$^{1}$\
**
[*$^{1}$National Superconducting Cyclotron Laboratory, East Lansing, MI 48824\
$^{2}$Budker Institute of Nuclear Physics, Novosibirsk 630090, Russia\
$^{3}$Institute of Atomic Physics, Bucharest, Romania\
*]{}
Quantum chaos in many-body systems was studied mostly from the viewpoint of level statistics which displays a clear relation to the notion of classical chaos [@Haake]. Presumably much more information could be obtained from an analysis of the wave functions and transition amplitudes. Here one expects to encounter the transition from the simple picture of almost independent elementary excitations to extremely mixed compound states which would display new specific features as, for example, so called dynamic enhancement of weak interactions [@SF].
To perform such an analysis and to check various hypotheses concerning complicated quantum dynamics, one needs a rich set of data which would allow one to make statistically reliable conclusions. Realistic nuclear shell model calculations are one of the most promising candidates for studying this largely unknown structure of quantum chaotic states.
We studied the behavior of the basis-state amplitudes of the shell model eigenvectors produced in the $J-T$ scheme for 12 particles in the $sd$ shell. Our model hamiltonian describing a many-body system of valence particles within a major shell contains a one-body part, which is due to an existing core (e.g. $^{16}$O for the $sd$ shell) and a two-body antisymmetrized interaction of the valence particles $$H = \sum \epsilon_{\mu} a^{\dagger}_{\mu} a_{\mu} +
\frac{1}{4} \sum V_{\mu \nu \lambda \rho}
a^{\dagger}_{\mu} a^{\dagger}_{\nu}
a_{\lambda} a_{\rho}\ .
\label{eq:ham}$$
In our calculations the Wildenthal interaction along with the well known procedure to project out of the $m$-scheme the states with correct values of the total angular momentum $J$ and isospin $T$ were utilized [@Wild; @OXBA]. The $J-T$ projected states $\mid k\rangle$ are used to build the matrix of the many-body hamiltonian, $H_{k k'} = \langle J T;\ k \mid H \mid J T;\ k' \rangle$, which is eventually diagonalized producing the eigenvalues $E_{\alpha}$ and the eigenvectors $$\mid J T ;\ \alpha \rangle = \sum_{k} C_{k}^{\alpha} \mid J T ;\ k\rangle .
\label{eq:eigv}$$ They represent the object of our investigation.
The matrix dimension for the $J^{\pi}T = 2^{+}0$ states is 3273. The density of states steeply increases with excitation energy, reaches its maximum and then decreases again at the highest energies. This high-energy behavior, as well as the approximate symmetry with respect to the middle of the spectrum, are artificial features of models with a finite Hilbert space, in contrast to actual many-body systems. For the analysis of the level statistics we used levels 200–3000.
Fig. 1 shows the standard quantities which define the chaoticity of a quantum system [@Haake; @Brody], the unfolded distribution of the nearest neighbor spacings $P(s)$ and the spectral rigidity $\Delta_{3}$, for this class of states. The solid lines in both parts of the figure describe the random matrix results for the Gaussian Orthogonal Ensemble (GOE). The dashed line on the right corresponds to the Poisson level distribution which is characteristic of an ordered system. The closeness of $\Delta_{3}$ to the random matrix results even for very large values of $L$ is remarkable. Previous to this study the largest value of $L$ considered was 80 [@Arve]. Thus, the level statistics manifest generic chaotic behavior.
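The GOE reference curves used above can be reproduced with a few lines of numerical linear algebra. The sketch below (not the shell-model calculation itself) draws one GOE matrix, crudely unfolds the central part of its spectrum block-wise, and exhibits the suppression of small spacings (level repulsion) that distinguishes the Wigner from the Poisson distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
A = rng.standard_normal((N, N))
E = np.linalg.eigvalsh((A + A.T) / 2)   # GOE spectrum

# Keep the central half of the spectrum and unfold block-wise:
# within each short block the mean level spacing is nearly constant.
s = np.diff(E[N // 4: 3 * N // 4])
blocks = np.array_split(s, 20)
s_unfolded = np.concatenate([b / b.mean() for b in blocks])

print(round(s_unfolded.mean(), 3))   # 1.0 by construction
print((s_unfolded < 0.1).mean())     # ~0.008: level repulsion
# An uncorrelated (Poisson) spectrum would put ~0.095 of the
# spacings below s = 0.1; the Wigner surmise gives ~0.008.
```

A histogram of `s_unfolded` against the Wigner surmise $P(s)=(\pi/2)\,s\,e^{-\pi s^2/4}$ reproduces the solid curve of Fig. 1.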
We next look to the structure of the wave functions which could reveal in more detail how close to chaoticity we are. The appropriate quantities to measure the degree of complexity of a given eigenstate $|\alpha\rangle$, eq.(2), with respect to the original shell model basis are, for instance, the informational entropy [@Izr; @Reichl], $$S^{\alpha} = - \sum_{k}\mid C^{\alpha}_{k} \mid^{2}
\ln \mid C^{\alpha}_{k} \mid^{2},
\label{eq:end}$$ or the moments of the distribution of amplitudes $|C^{\alpha}_{k}|^{2}$. The second moment determines the number of principal components ($NPC$) of an eigenvector $|\alpha \rangle$, $$(NPC)^{\alpha} = \left(\sum_{k}\mid C^{\alpha}_{k} \mid^{4}
\right)^{-1}.
\label{eq:npc}$$ In the GOE all basis states are completely mixed so that the resulting eigenvectors are totally delocalized and cover uniformly the $N$-dimensional sphere of radius 1 [@Brody; @Perc; @Berry]. Gaussian fluctuations with zero mean, $\overline{C^{\alpha}_{k}}=0$, and width $\overline{|C^{\alpha}_{k}|^{2}} = 1/N$ lead to the values $\ln (0.48N)$ and $N/3$ for the quantities (3) and (4) respectively. Here $N$ is the total dimension of the model space. In reality, the incomplete mixing of basis states determined by specific properties of the hamiltonian can coexist with the GOE-type level correlations.
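Both GOE reference values, $\exp(S)\simeq 0.48N$ and $NPC\simeq N/3$, can be checked directly on the eigenvectors of a random GOE matrix. The following is an illustrative sketch, independent of the shell-model computation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
A = rng.standard_normal((N, N))
_, vecs = np.linalg.eigh((A + A.T) / 2)   # columns are eigenvectors

C2 = vecs**2                              # |C_k^alpha|^2
S = -(C2 * np.log(C2)).sum(axis=0)        # information entropy, Eq. (3)
npc = 1.0 / (C2**2).sum(axis=0)           # principal components, Eq. (4)

print(np.exp(S).mean() / (0.48 * N))      # ~1: GOE limit exp(S) = 0.48 N
print(npc.mean() / (N / 3))               # ~1: GOE limit NPC = N/3
```

For realistic shell-model eigenvectors, the deviation of these ratios from unity is precisely the incomplete mixing discussed in the text.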
The left upper part of Fig. 2 presents the $\exp (S^{\alpha})$ quantity for the $2^{+}0$ states. On the $x$-axis are the eigenstates numbered in order of their energies. This simple “numbered” scale is equivalent to the “unfolding” procedure described for example by Brody [*et al*]{} [@Brody]. “Unfolding” is introduced to separate local correlations and fluctuations from the global spectral properties. The solid line represents the GOE result ($0.48 N$). One observes a semicircle-type behavior and a 12% deviation from the GOE even for the maximum entropy in the middle of the spectrum.
It is interesting to study the role of single-particle energies (see eq. (1)) for the chaotic behavior of the amplitudes. The upper right part of Fig. 2 shows the $\exp(S^{\alpha})$ quantity for the $2^{+}0$ states for the same hamiltonian but with all single particle energies, $\epsilon_{\mu}$ in eq. (1), set to zero. In this degenerate case the GOE limit (solid line) is attained and the chaotic regime extends over a larger part of the spectrum.
To further quantify these effects, we look at the distribution $P(l_{S})$ of $l_{S}^{\alpha} = \exp (S^{\alpha}) / 0.48 N$. One can interpret $l_{S}^{\alpha}$ as a delocalization length $N^{\alpha}/N$. It is expected to be concentrated around $l_S = 1$ in the chaotic limit. The lower left panel of Fig. 3 presents the results for the values of $l_{S}$ calculated for the normal hamiltonian and shown on the upper left panel. Here the limit of $l_S = 1$ is not reached. On the other hand, for degenerate single-particle orbitals (upper and lower panels on the right side), the distribution of localization lengths is narrower and the full chaotic limit is reached. This is related to the fact that the mean field in general tends to smooth out the chaotic aspects of many-body dynamics [@Zel93].
The number of principal components (4) behaves in a very similar way, gradually increasing from the edges of the spectrum to the middle, Fig. 3 (left). Even the most complicated states are shifted down from the GOE limit of complete mixing. However, for the ratio $\exp S^{\alpha}/(NPC)^{\alpha}$ one obtains the results in the right part of Fig. 3. For a Gaussian distribution of amplitudes $C^{\alpha}_{k}$ of a given eigenvector $|\alpha\rangle$, this ratio would be given by the universal ($N$-independent) random matrix result equal to 1.44 (solid line). The flattened region indicates that the chaotic dynamics, even if not complete, extends far beyond the region near the maximum of the informational entropy. Again we use the “unfolded” numbered scale rather than the energy scale. The “unfolding” reveals the presence of “local” chaos: in a given small energy range, the eigenstates are
---
abstract: 'While Jeffreys priors usually are well-defined for the parameters of mixtures of distributions, they are not available in closed form. Furthermore, they often are improper priors. Hence, they have never been used to draw inference on the mixture parameters. We study in this paper the implementation and the properties of Jeffreys priors in several mixture settings, show that the associated posterior distributions most often are improper, and then propose a noninformative alternative for the analysis of mixtures.'
author:
- 'Clara Grazian[^1]'
- 'Christian P. Robert[^2]'
title: 'Jeffreys priors for mixture estimation: properties and alternatives'
---
Introduction {#intro}
============
Bayesian inference in mixtures of distributions has been studied quite extensively in the literature. See, e.g., [@maclachlan:peel:2000] and [@fruhwirth:2006] for book-long references and [@lee:marin:mengersen:robert:2008] for one among many surveys. From a Bayesian perspective, one of the several difficulties with this type of distribution, $$\label{eq:theMix}
\sum_{i=1}^k p_i\,f(x|\theta_i)\,,\quad \sum_{i=1}^k p_i=1\,,$$ is that its ill-defined nature (non-identifiability, multimodality, unbounded likelihood, etc.) leads to restrictive prior modelling, since most improper priors are not acceptable. This is due in particular to the feature that a sample from such a mixture may contain no subset from one of the $k$ components $f(\cdot|\theta_i)$ (see, e.g., [@titterington:smith:makov:1985]). Although the probability of such an event decreases quickly to zero as the sample size grows, it nonetheless prevents the use of independent improper priors, unless such events are prohibited [@diebolt:robert:1994]. Similarly, the exchangeable nature of the components often induces both multimodality in the posterior distribution and convergence difficulties, as exemplified by the [*label switching*]{} phenomenon that is now quite well-documented [@celeux:hurn:robert:2000; @stephens:2000b; @jasra:holmes:stephens:2005; @fruhwirth:2006; @geweke:2007; @puolamaki:kaski:2009]. This feature is characterized by a lack of symmetry in the outcome of a Markov chain Monte Carlo (MCMC) algorithm, in that the posterior density is exchangeable in the components of the mixture but the MCMC sample does not exhibit this symmetry. In addition, most MCMC samplers do not concentrate around a single mode of the posterior density, partly exploring several modes, which makes the construction of Bayes estimators of the components much harder.
When specifying a prior over the parameters of such a mixture, it is therefore quite delicate to produce a manageable and sensible non-informative version, and some have argued against using non-informative priors in this setting (for example, [@maclachlan:peel:2000] argue that it is impossible to obtain a proper posterior distribution from fully noninformative priors), on the basis that mixture models were ill-defined objects that required informative priors to give a meaning to the notion of a component of the mixture. For instance, the distance between two components needs to be bounded from below to avoid repeating the same component over and over again. Alternatively, the components all need to be informed by the data, as exemplified by [@diebolt:robert:1994], who imposed a completion scheme (i.e., a joint model on both parameters and latent variables) such that all components were allocated at least two observations, thereby ensuring that the (truncated) posterior was well-defined. [@wasserman:2000] proved ten years later that this truncation led to consistent estimators and moreover that only this type of prior could produce consistency. While the constraint on the allocations is not fully compatible with the i.i.d. representation of a mixture model, it naturally expresses a modelling requirement that all components have a meaning in terms of the data, namely that all components genuinely contributed to generating a part of the data. This translates as a form of weak prior information on how much one trusts the model and how meaningful each component is on its own (as opposed to the possibility of adding meaningless artificial extra-components with almost zero weights or almost identical parameters).
While we do not seek Jeffreys priors as the ultimate prior modelling for non-informative settings, being altogether convinced of the lack of unique reference priors [@robert:2001; @robert:chopin:rousseau:2009], we think it is nonetheless worthwhile to study the performances of those priors in the setting of mixtures in order to determine if indeed they can provide a form of reference priors and if they are at least well-defined in such settings. We will show that only in very specific situations does the Jeffreys prior provide reasonable inference.
In Section \[sec:jeffreys\] we provide a formal characterisation of properness of the posterior distribution for the parameters of a mixture model, in particular with Gaussian components, when a Jeffreys prior is used for them. In Section \[sec:prosper\] we will analyze the properness of the Jeffreys prior and of the related posterior distribution: only when the weights of the components (which are defined in a compact space) are the only unknown parameters it turns out that the Jeffreys prior (and so the relative posterior) is proper; on the other hand, when the other parameters are unknown, the Jeffreys prior will be proved to be improper and in only one situation it provides a proper posterior distribution. In Section \[sec:alternative\] we propose a way to realize a noninformative analysis of mixture models and introduce improper priors for at least some parameters. Section \[sec:concl\] concludes the paper.
Jeffreys priors for mixture models {#sec:jeffreys}
==================================
We recall that the Jeffreys prior was introduced by [@jeffreys:1939] as a default prior based on the Fisher information matrix $$\pi^\text{J}(\theta) \propto |I(\theta)|^{{\nicefrac{1}{2}}}\,,$$ whenever the latter is well-defined; $I(\cdot)$ stands for the expected Fisher information matrix and the symbol $|\cdot|$ denotes the determinant. Although the prior is endowed with some frequentist properties like matching and asymptotic minimal information [@robert:2001 Chapter 3], it does not constitute the ultimate answer to the selection of prior distributions in non-informative settings and there exist many alternatives, such as reference priors [@berger:bernardo:sun:2009], maximum entropy priors [@rissanen:2012], matching priors [@ghosh:carlin:srivastava:1995], and other proposals [@kass:wasserman:1996]. In most settings Jeffreys priors are improper, which may explain their conspicuous absence in the domain of mixture estimation, since the latter prohibits the use of most improper priors by allowing any subset of components to go “empty” with positive probability. That is, the likelihood of a mixture model can always be decomposed as a sum over all possible partitions of the data into $k$ groups at most, where $k$ is the number of components of the mixture. This means that there are terms in this sum where no observation from the sample brings any amount of information about the parameters of a specific component.
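As a simple illustration of the case where the weight is the only unknown parameter, the Fisher information of $p$ in a two-component Gaussian mixture can be approximated by Monte Carlo, giving the Jeffreys prior $\pi^{\text{J}}(p)\propto\sqrt{I(p)}$ up to normalization; the component means and sample size below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def dnorm(x, mu):
    """Standard-deviation-one normal density."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def fisher_info(p, mu1=-1.0, mu2=2.0, n=200_000):
    """Monte Carlo estimate of I(p) for f(x|p) = p N(mu1,1) + (1-p) N(mu2,1)."""
    comp = rng.random(n) < p
    x = np.where(comp, rng.normal(mu1, 1.0, n), rng.normal(mu2, 1.0, n))
    f1, f2 = dnorm(x, mu1), dnorm(x, mu2)
    score = (f1 - f2) / (p * f1 + (1 - p) * f2)  # d/dp log f(x|p)
    return np.mean(score ** 2)

# Jeffreys prior on the weight, up to normalization: sqrt(I(p)).
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, np.sqrt(fisher_info(p)))
```

Since $p$ lives in the compact set $[0,1]$, the resulting prior is proper here, consistent with the weight-only case singled out in the introduction; for well-separated components it approaches the arcsine shape $1/\sqrt{p(1-p)}$.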
Approximations of the Jeffreys prior in the setting of mixtures can be found, e.g., in [@figueiredo:jain:2002], where the authors revert to independent Jeffreys priors on the components of the mixture. This induces the same negative side-effect as with other independent priors, namely an impossibility to handle improper priors.
[@rubio:steel:2014] provide a closed-form expression for the Jeffreys prior for a location-scale mixture with two components. The family of distributions they consider is $$\dfrac{2\epsilon}{\sigma_1}f\left(\frac{x-\mu}{\sigma_1}\right)\mathbb{I}_{x<\mu}+
\dfrac{2(1-\epsilon)}{\sigma_2}f\left(\frac{x-\mu}{\sigma_2}\right) \mathbb{I}_{x>\mu}$$ (which thus hardly qualifies as a mixture, due to the orthogonality in the supports of both components, which allows one to identify which component each observation comes from). The factor $2$ in the fraction is due to the assumption of symmetry around zero for the density $f$. For this specific model, if we impose that the weight $\epsilon$ is a function of the variance parameters, $
\epsilon=\nicefrac{\sigma_1}{\sigma_1+\sigma_2},
$ the Jeffreys prior is given by $
\pi(\mu,\sigma_1,\sigma_2) \propto \nicefrac{1}{\sigma_1\sigma_2\{\sigma_1+\sigma_2\}}.
$ However, in this setting, [@rubio:steel:2014] demonstrate that the posterior associated with the (regular) Jeffreys prior is improper, hence not relevant for conducting inference. (One may wonder at the pertinence of a Fisher information in this model, given that the likelihood is not differentiable in $\mu$.) [@rubio:steel:2014] also consider alternatives to the genuine Jeffreys prior, either by reducing the range or even the number of parameters, or by building a product of conditional priors. They further consider so
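One way to see why this particular choice of weight is natural: with $\epsilon=\nicefrac{\sigma_1}{\sigma_1+\sigma_2}$, both branch coefficients reduce to $\nicefrac{2}{\sigma_1+\sigma_2}$, so the density is continuous at $\mu$. A sketch, assuming a standard normal kernel $f$ for illustration (the family allows any symmetric $f$):

```python
import math

def f_std_normal(u):
    # standard normal density, used here as the symmetric kernel f
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def two_piece_density(x, mu, s1, s2, f=f_std_normal):
    """Density of the two-piece location-scale family above, with the
    constraint eps = s1 / (s1 + s2); both coefficients then equal
    2 / (s1 + s2), making the density continuous at mu."""
    eps = s1 / (s1 + s2)
    if x < mu:
        return 2.0 * eps / s1 * f((x - mu) / s1)
    return 2.0 * (1.0 - eps) / s2 * f((x - mu) / s2)
```

Evaluating just left and just right of $\mu$ returns (numerically) the same value, which would not hold for an arbitrary fixed $\epsilon$.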
---
bibliography:
- 'short.bib'
title: Mathematical Formulae in Wikimedia Projects 2020
---
[Jancar’s formal system for deciding bisimulation of first-order grammars\
and its non-soundness.]{}\
by Géraud Sénizergues\
LaBRI and Université de Bordeaux I [^1]
#### Abstract
: We construct an example of proof within the main formal system from [@Jan10], which is intended to capture the bisimulation equivalence for non-deterministic first-order grammars, and show that its conclusion is semantically false. We then locate and analyze the flawed argument in the soundness (meta)-proof of [@Jan10].\
The grammar
===========
We consider the alphabet of actions ${\cal A}$, an intermediate alphabet of labels ${\cal T}$ and a map ${{\rm LAB}_{\cal A}}: {\cal T} \rightarrow {\cal A}$ defined by: $${\cal T} := \{ x,y,z,\ell_1\},\;\;{\cal A} := \{a,b,\ell_1\},\;\;\mbox{ and }$$ $${{\rm LAB}_{\cal A}}: x \mapsto a,\;\;y \mapsto a,\;\; z \mapsto b,\;\; \ell_1 \mapsto \ell_1.$$ (these intermediate objects ${\cal T}$, ${{\rm LAB}_{\cal A}}$ will ease the definition of ${{\rm ACT}}$ below). We define a first-order grammar ${\cal G} = ({\cal N},{\cal A},{\cal R})$ by: $${\cal N} := \{A, A', A'', B, B', B'', C, D, E, L_1\}$$ and the set of rules ${\cal R}$ consists of the following: $$\begin{aligned}
A(v) &{\stackrel{y}{\longrightarrow_{}}} & C(v)\\ A(v) &{\stackrel{x}{\longrightarrow_{}}} & A'(v)\\B(v) &{\stackrel{x}{\longrightarrow_{}}} & C(v)\\B(v) &{\stackrel{y}{\longrightarrow_{}}} & B'(v)\\C(v) &{\stackrel{x}{\longrightarrow_{}}} & D(v)\\C(v) &{\stackrel{y}{\longrightarrow_{}}} & E(v)\\A'(v) &{\stackrel{x}{\longrightarrow_{}}} & A''(v)\\B'(v) &{\stackrel{x}{\longrightarrow_{}}} & B''(v)\\A''(v) &{\stackrel{x}{\longrightarrow_{}}} & D(v)\\B''(v) &{\stackrel{x}{\longrightarrow_{}}} & E(v)\\D(v) &{\stackrel{x}{\longrightarrow_{}}} & v \label{ruleD}\\E(v) &{\stackrel{x}{\longrightarrow_{}}} & v \label{ruleE1}\\E(v) &{\stackrel{z}{\longrightarrow_{}}} & v \label{ruleE2}\\L_1 &{\stackrel{\ell_1}{\longrightarrow_{}}} & \bot \label{ruleL1}$$ Let us name rule $r_i$ (for $1 \leq i \leq 14$), the rule appearing in order $i$ in the above list. We define a map ${{\rm LAB}_{\cal T}}: {\cal R} \rightarrow {\cal T}$ by: ${{\rm LAB}_{\cal T}}(r_i)$ is the terminal letter used by the given rule $r_i$. Subsequently we define ${{\rm ACT}}(r_i):= {{\rm LAB}_{\cal A}}({{\rm LAB}_{\cal T}}(r_i))$. Namely, ${{\rm ACT}}$ maps all the rules $r_1, \ldots , r_{12}$ onto $a$, $r_{13}$ on $b$ and $r_{14}$ on $\ell_1$.
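For concreteness, the rules $r_1,\ldots,r_{14}$ and the maps ${{\rm LAB}_{\cal T}}$, ${{\rm LAB}_{\cal A}}$ and ${{\rm ACT}}$ can be encoded mechanically; the following is a hypothetical Python encoding (the string forms of the right-hand sides, and the spellings `ell1`/`bot` for $\ell_1$/$\bot$, are representational choices, not from the source):

```python
# LAB_A : T -> A, as defined above
LAB_A = {'x': 'a', 'y': 'a', 'z': 'b', 'ell1': 'ell1'}

# rule index i -> (left-hand side, terminal letter in T, right-hand side)
RULES = {
    1:  ('A',   'y', 'C(v)'),
    2:  ('A',   'x', "A'(v)"),
    3:  ('B',   'x', 'C(v)'),
    4:  ('B',   'y', "B'(v)"),
    5:  ('C',   'x', 'D(v)'),
    6:  ('C',   'y', 'E(v)'),
    7:  ("A'",  'x', "A''(v)"),
    8:  ("B'",  'x', "B''(v)"),
    9:  ("A''", 'x', 'D(v)'),
    10: ("B''", 'x', 'E(v)'),
    11: ('D',   'x', 'v'),
    12: ('E',   'x', 'v'),
    13: ('E',   'z', 'v'),
    14: ('L1',  'ell1', 'bot'),
}

def LAB_T(i):
    # the terminal letter used by rule r_i
    return RULES[i][1]

def ACT(i):
    # ACT(r_i) = LAB_A(LAB_T(r_i))
    return LAB_A[LAB_T(i)]
```

Running `ACT` over all rule indices recovers the stated action labelling: $r_1,\ldots,r_{12}$ map to $a$, $r_{13}$ to $b$, and $r_{14}$ to $\ell_1$.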
The formal system
=================
We consider the formal systems ${\cal J}(T_0,T'_0,S_0,{\cal B})$ defined on page 22 of [@Jan10], which are intended to be sound and complete for the bisimulation-problem for non-deterministic first-order grammars. Let us denote by $\TERMS$ the set of all terms over the ranked alphabet ${\cal N} \cup \{L_i\mid i \in \N\} \cup\{\bot\}$ (here the symbols $L_i$ have arity $0$).
Prefixes of strategies
----------------------
The notion of [*finite prefix of a D-strategy*]{} is mentioned on p. 23, line 11. We assume it has the following meaning:
Let $T,T' \in \TERMS$. A finite prefix of a D-strategy w.r.t. $(T,T')$ is a subset $S \subseteq ({\cal R}\times{\cal R})^*$ of the form $$S = S'\cap ({\cal R}\times{\cal R})^{\leq n}$$ for some $n \in \N$ and some D-strategy $S'$ w.r.t. $(T,T')$. \[def-PDstrategy\]
In order to make clear that the above notion is effective, we consider the following notion of D-q-strategy (Defender’s quasi-strategy).
Let $T,T' \in \TERMS$. A [*D-q-strategy*]{} w.r.t. $(T,T')$ is a subset $S \subseteq ({\cal R}\times{\cal R})^*$ such that:\
DQ1: $(\varepsilon,\varepsilon) \in S$\
DQ2: $S$ is prefix-closed\
DQ3: $S\subseteq {{\rm PLAYS}}(T,T')$\
DQ4: $\forall \alpha \in S$,\
either $\alpha \backslash S=\{(\varepsilon,\varepsilon)\}$\
or ${{\rm NEXT}}((T,T'),\alpha) \notin \sim_1$\
or \[${{\rm NEXT}}((T,T'),\alpha) \in \sim_1$ and the set $\{ (\pi,\pi') \in {\cal R}\times{\cal R} \mid \alpha\cdot (\pi,\pi')\in S\}$ is full for ${{\rm NEXT}}((T,T'),\alpha)$\]. \[def-Dqstrategy\]
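Conditions DQ1 and DQ2 are purely set-theoretic and can be checked mechanically on any finite $S$; DQ3 and DQ4 additionally require the transition semantics (${{\rm PLAYS}}$, ${{\rm NEXT}}$, $\sim_1$), which we do not model here. A minimal sketch, representing plays as tuples of $(\pi,\pi')$ rule pairs with the empty tuple standing for $(\varepsilon,\varepsilon)$:

```python
def is_prefix_closed(S):
    # DQ2: every prefix of a word of S is again in S
    return all(w[:i] in S for w in S for i in range(len(w)))

def satisfies_dq1_dq2(S):
    # DQ1: the empty play (eps, eps) belongs to S; DQ2: prefix-closedness
    return () in S and is_prefix_closed(S)
```

For instance, $\{(\varepsilon,\varepsilon),\,(r_1,r_1),\,(r_1,r_1)(r_2,r_2)\}$ passes both checks, while dropping the middle element violates prefix-closedness.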
Note that a D[*-strategy*]{} is a D-q-strategy where, condition DQ4 is replaced by:\
DQ’4: $\forall \alpha \in S$,\
${{\rm NEXT}}((T,T'),\alpha) \notin \sim_1$\
or \[${{\rm NEXT}}((T,T'),\alpha) \in \sim_1$ and the set $\{ (\pi,\pi') \in {\cal R}\times{\cal R} \mid \alpha\cdot (\pi,\pi')\in S\}$ is full for ${{\rm NEXT}}((T,T'),\alpha)$\].\
A [*winning*]{} D-strategy, is a D-q-strategy where condition DQ4 is replaced by:\
DQ”4: $\forall \alpha \in S$,\
${{\rm NEXT}}((T,T'),\alpha) \in \sim_1$ and the set $\{ (\pi,\pi') \in {\cal R}\times{\cal R} \mid \alpha\cdot (\pi,\pi')\in S\}$ is full for ${{\rm NEXT}}((T,T'),\alpha)$.\
Every finite prefix of a D-strategy is a D-q-strategy. \[L-PD\_implies\_DQ\]
Let $S'$ be a D-strategy w.r.t. $(T,T')$ and $$S= S'\cap ({\cal R}\times{\cal R})^{\leq n}$$ for some $n \in \N$.\
DQ1: Since $S'$ is non-empty and prefix-closed, $(\varepsilon,\varepsilon) \in S'$, hence $(\varepsilon,\varepsilon) \in S'\cap ({\cal R}\times{\cal R})^{\leq n} = S$.\
DQ2: $S'$ and $({\cal R}\times{\cal R})^{\leq n}$ are both prefix-closed, hence their intersection is also prefix-closed.\
DQ3: $S'\subseteq {{\rm PLAYS}}(T,T')$ and $S \subseteq S'$, hence $S\subseteq {{\rm PLAYS}}(T,T')$\
DQ4: $\forall \alpha \in S$,\
${{\rm NEXT}}((T,T'),\alpha) \notin \sim_1$\
or \[${{\rm NEXT}}((T,T'),\alpha) \in \sim_1$ and the set $\{ (\pi,\pi') \in {\cal R}\times{\cal R} \mid \alpha\cdot (\pi,\pi')\in S'\}$ is full for ${{\rm NEXT}}((T,T'),\alpha)$\]. If $|\alpha| < n$, the above property holds in $S$.\
If $|\alpha| = n$, the property $\alpha \backslash S=\{(\varepsilon,\varepsilon)\}$ holds. In all cases DQ4 is fulfilled.\
We define the [*extension*]{} ordering over ${\cal P}(({\cal R}\times{\cal R})^*)$ as follows: for every $S_1,S_2 \in {\cal P}(({\cal R}\times{\cal R})^*)$, $S_1 \sqsubseteq S_2$ iff the two conditions below hold:\
E1- $S_1 \subseteq S_2$\
---
abstract: 'The transport and magnetic properties of correlated La$_{0.53}$Sr$_{0.47}$MnO$_{3}$ ultrathin films, grown epitaxially on SrTiO$_{3}$, show a sharp cusp at the structural transition temperature of the substrate. Using a combination of experiment and theory we show that the cusp is a result of resonant coupling between the charge carriers in the film and a soft phonon mode in the SrTiO$_{3}$, mediated through oxygen octahedra in the film. The amplitude of the mode diverges towards the transition temperature, and phonons are launched into the first few atomic layers of the film affecting its electronic state.'
author:
- 'Y. Segal'
- 'K. F. Garrity'
- 'C. A. F. Vaz'
- 'J. D. Hoffman'
- 'F. J. Walker'
- 'S. Ismail-Beigi'
- 'C. H. Ahn'
bibliography:
- 'cusp.bib'
title: 'Resonant phonon coupling across the La$_{1-x}$Sr$_{x}$MnO$_{3}$/SrTiO$_{3}$ interface'
---
The coupling of phonons to charge carriers is a process of key importance for a broad set of phenomena, ranging from carrier mobility in semiconductors to Cooper pairing. In recent times, phonon effects at interfaces have emerged as a topic of great importance in the understanding and design of nano-structured materials [@interfacialphonons]. Coupling between charge, structure and magnetic ordering is particularly strong in the Mn oxides [@RefWorks:462], which are used as a component in heterostructure multiferroics [@CarlosPRLPaper]. In these materials, localized spins and mobile carriers reside on the Mn sites, each surrounded by an oxygen octahedron. Intersite hopping occurs through orbital overlap of the Mn with neighbouring oxygens, making it highly sensitive to the static orientation of the octahedra and to phonons that alter the octahedra’s orientation [@RefWorks:463]. This interplay between structure and properties has been exploited to control the electronic phase of CMR films via strain, and also via coherent photoexcitation of a specific octahedral vibration mode [@RefWorks:454].\
In this Letter, we use a specially designed thin film device to isolate and characterize phonon-carrier coupling within a few atomic layers of an interface between the perovskite SrTiO$_{3}$ (STO) and the CMR oxide La$_{0.53}$Sr$_{0.47}$MnO$_{3}$ (LSMO). A soft octahedral rotation phonon with a divergent amplitude in the STO couples to the corresponding mode of the film. This coupling results in a marked change in the electronic and magnetic properties, including a sharp cusp in the resistivity and a dip in the magnetic moment. The sensitivity of LSMO to octahedra orientation allows us to experimentally probe the microscopic character of this interfacial phonon coupling, and compare it to theory. The thin film devices consist of La$_{0.53}$Sr$_{0.47}$MnO$_{3}$ films grown by molecular beam epitaxy on TiO$_{2}$-terminated STO (001) substrates and overlaid by Pb(Zr$_{0.2}$Ti$_{0.8}$)O$_{3}$ (PZT), which is used to provide ferroelectric field effect modulation of the number and distribution of carriers in the film. Details concerning fabrication and structural characterization are described elsewhere [@CarlosGrowthPaper]. In the bulk LSMO phase diagram, the $x=0.5$ composition separates the ferromagnetic metallic phase from an insulating antiferromagnetic phase [@RefWorks:476]. When grown commensurate to the STO, the substrate induces tensile strain in the film, which is known to stabilize an A-type antiferromagnetic metallic phase (AF-M) [@RefWorks:414]. Using X-ray diffraction, we verified that our films are under tensile strain, with $c/a=0.975$, in agreement with previous studies [@RefWorks:414].
Transport measurements of an 11 unit cell (uc) LSMO film are shown in Fig.\[fig:Transport\]a. The broad peak in resistivity at 250K corresponds to a metal-insulator transition, typical of this material. In addition, a unique feature is observed in our films: a large and sharp resistance peak centered around 108K, which corresponds to the temperature of the STO soft phonon peak. We observe further that the magnitude of the resistivity cusp decreases when the thickness of the film increases by a few unit cells. Indeed, in previous studies of films $\approx$80uc thick, only a trace of this feature was observed [@RefWorks:477]. This film thickness dependence implies that the strength of the mechanism creating the cusp decays quickly away from the STO/LSMO interface. We can verify this by switching the polarization state of the PZT. When the PZT is switched to the “depletion” state, holes are removed from the top layer of the LSMO, pushing the conducting region closer to the substrate. The opposite occurs in the “accumulation” state [@CarlosPRLPaper]. We find that the PZT has a pronounced effect on the cusp (Fig.\[fig:Transport\]b), making it much larger in the depletion state, in agreement with the notion of a rapid decay into the film. We note, however, that the presence of PZT is not required to observe the effect: the same features are observed on uncapped LSMO films. We also observe a striking dip in the magnetic moment centered around the STO transition temperature (Fig.\[fig:Transport\]a). While the majority of the LSMO is in an antiferromagnetic-metallic state, a small ferromagnetic component remains [@RefWorks:477]. The dip in magnetic moment corresponds to a decrease in magnetic order within the ferromagnetic phase.\
![\[fig:Transport\]Enhanced carrier-phonon scattering. (a) Left axes: Resistivity of an 11uc La$_{0.53}$Sr$_{0.47}$MnO$_{3}$ film showing a strong cusp at 108K. The PZT overlayer is in the depletion state. Right axis: Magnetic moment of a 15uc La$_{0.55}$Sr$_{0.45}$MnO$_{3}$ film. The moment is measured along the [\[]{}100[\]]{} direction under an applied magnetic field of 1kOe. A dip in the moment is observed, overlapping the temperature range of the resistivity cusp (emphasized by grey box). The dashed line is a linear interpolation between the edges of the dip region. b) The resistivity of the 11uc film for the two polarization states of the PZT. c) Energy of the $\Gamma_{25}$ phonon mode in STO, showing the softening around the STO transition temperature (after Ref.[@RefWorks:458]). Lines are a guide to the eye. Below the structural phase transition the mode splits due to the breaking of cubic symmetry.](fig1){width="8.5cm"}
We attribute the transport and magnetism anomaly to a coupling between the LSMO and the phonon softening that occurs in STO around the 108K structural transition. The $\Gamma_{25}$ $(111)$ zone edge phonon [@RefWorks:460; @RefWorks:458] becomes lower in energy as the transition is approached from both temperature directions. Fig.\[fig:Transport\]c, reproduced from Ref.[@RefWorks:458], shows the $\Gamma_{25}$ phonon energy as a function of temperature. The softening leads to a divergent increase in mode occupation or amplitude. The motion associated with this mode is a rotation of the TiO$_{6}$ octahedra. Below the transition temperature, the octahedra stabilize into a rotated antiferrodistortive (AFD) configuration accompanied by a tetragonal distortion of the unit cell. Since the film is mechanically constrained to the substrate at the atomic level, motions of the TiO$_{6}$ octahedra couple to the MnO$_{6}$ ones, inducing both static and dynamic changes in their configuration.
![Side view of STO-LSMO interface geometry. The plot shows calculated ground-state atomic positions. Away from the interface, the STO is fixed to have bulk-like octahedral rotations around the $x$ axis (into the page). The LSMO geometry at the interface is modified by the STO; however, the LSMO relaxes to its bulk-like octahedral rotations around both in-plane axes within 2-3 unit cells. \[fig:interface\]](fig2)
We examine two mechanisms whereby the resistance of the LSMO layer might increase: $(i)$ static changes of the LSMO structure causing a change of electronic band parameters; $(ii)$ decreased carrier relaxation times due to enhanced phonon scattering, i.e. a dynamic effect. The static and dynamic contributions are reflected in the expression for the conductivity in the relaxation time approximation $\sigma_{ij}\propto\tau m_{ij}^{-1}$, where $\tau$ is the relaxation time and $m_{ij}^{-1}$ is the reciprocal effective mass tensor [@ashcroft].
To treat the temperature-dependent character of the coupling phenomena, we perform finite temperature simulations by building a classical model of the energetics of the system as a function of oxygen displacements. Our model includes harmonic coupling between oxygens, 4$^{th}$ order on-site anharmonic terms to stabilize the symmetry breaking, and lowest order coupling between oxygen displacements and stress, thus capturing the STO phase transition [@sto1]. Model parameters are obtained via density functional theory calculations using the spin-polarized PBE GGA functional [@GGA] and ultrasoft pseudopot
---
abstract: 'We introduce the positive intersection product in Arakelov geometry and prove that the arithmetic volume function is continuously differentiable. As applications, we compute the distribution function of the asymptotic measure of a Hermitian line bundle and several other arithmetic invariants.'
address: 'Université Paris Diderot — Paris 7, Institut de mathématiques de Jussieu, case 247, 4 place Jussieu, 75252 Paris Cedex'
author:
- Huayi Chen
bibliography:
- 'chen.bib'
title: Differentiability of the arithmetic volume function
---
We introduce the positive intersection product in Arakelov geometry and prove that the arithmetic volume function is continuously differentiable. As applications, we compute the distribution function of the asymptotic probability measure of a Hermitian invertible bundle as well as several other arithmetic invariants.
Introduction
============
Let $K$ be a number field, $\mathcal O_K$ be its integer ring and $\pi:X\rightarrow\operatorname{Spec}\mathcal O_K$ be an arithmetic variety of relative dimension $d$. Recall that the [*arithmetic volume*]{} of a continuous Hermitian line bundle $\overline L$ on $X$ is by definition $$\label{Equ:arithmetic volume}\widehat{\mathrm{vol}}(\overline
L):=\limsup_{n\rightarrow\infty}
\frac{\widehat{h}^0(X,\overline L^{\otimes n
})}{n^{d+1}/(d+1)!},$$ where $$\widehat{h}^0(X,\overline L^{\otimes n})
=\log\#\{s\in\pi_*(L^{\otimes n})\mid
\forall\,\sigma:K\rightarrow\mathbb C,\;
\|s\|_{\sigma,\sup}\leqslant 1\}.$$ The properties of the arithmetic volume $\widehat{\mathrm{vol}}$ (see [@Moriwaki07; @Moriwaki08; @Yuan07; @Yuan08; @Chen_bigness; @Chen_Fujita]) are quite similar to the corresponding properties of the classical volume function in algebraic geometry. Recall that if $Y$ is a projective variety defined over a field $k$ and if $L$ is a line bundle on $Y$, then the volume of $L$ is defined as $$\mathrm{vol}(L):=\limsup_{n\rightarrow\infty}
\frac{\operatorname{rk}_k H^0(Y,L^{\otimes n})}{n^{\dim Y}/(\dim Y
)!}.$$
In [@Bou_Fav_Mat06], Boucksom, Favre and Jonsson studied the regularity of the geometric volume function. They proved that the function $\mathrm{vol}(L)$ is continuously differentiable on the big cone. The same result was independently obtained by Lazarsfeld and Musţatǎ [@Lazarsfeld_Mustata08], using Okounkov bodies. Note that the geometric volume function is in general not twice differentiable, as shown by the blow-up of $\mathbb P^2$ at a closed point; see [@LazarsfeldI 2.2.46] for details. The differential of $\mathrm{vol}$ involves the positive intersection product, initially defined in [@Boucksom_Demailly_Paun_Peternell] in the analytic-geometric framework, and redefined algebraically in [@Bou_Fav_Mat06].
Inspired by [@Bou_Fav_Mat06], we introduce an analogue of the positive intersection product in Arakelov geometry and prove that the arithmetic volume function $\widehat{\mathrm{vol}}$ is continuously differentiable on $\widehat{\mathrm{Pic}}(X)$. We shall establish the following theorem:
\[Thm:main theorem\] Let $\overline L$ and $\overline M$ be two continuous Hermitian line bundles on $X$. Assume that $\overline
L$ is big. Then $$D_{\overline L}\widehat{\mathrm{vol}}(\overline M):=\lim_{n\rightarrow+\infty}\frac{\widehat{\mathrm{vol}}
(\overline L^{\otimes n}\otimes\overline
M)-\widehat{\mathrm{vol}}(\overline L^{\otimes
n})}{n^d}$$ exists in $\mathbb R$, and the function $D_{\overline L}\widehat{\mathrm{vol}}$ is additive on $\widehat{\mathrm{Pic}}(X)$. Furthermore, one has $$D_{\overline L}\widehat{\mathrm{vol}}(\overline M)=
(d+1)\big\langle\widehat{c}_1(\overline L
)^d\big\rangle\cdot\widehat{c}_1(\overline M).$$
Here the [*positive intersection product*]{} $\big\langle\widehat{c}_1(\overline L )^d\big\rangle$ is defined as the least upper bound of self intersections of ample Hermitian line bundles dominated by $\overline L$ (see §\[SubSec:Posit\] [*infra*]{}). In particular, one has $\big\langle\widehat{c}_1(\overline L
)^d\big\rangle\cdot\widehat{c}_1(\overline
L)=\big\langle\widehat{c}_1(\overline L
)^{d+1}\big\rangle=\widehat{\mathrm{vol}}(\overline
L)$, which shows that the arithmetic Fujita approximation is asymptotically orthogonal.
As an application, we calculate explicitly the distribution function of the asymptotic measure (see [@Chen08; @Chen_bigness]) of a generically big Hermitian line bundle in terms of positive intersection numbers. Let $\overline L$ be a Hermitian line bundle on $X$ such that $L_K$ is big. The asymptotic measure $\nu_{\overline L}$ is the vague limit (when $n$ goes to infinity) of Borel probability measures whose distribution functions are determined by the filtration of $H^0(X_K,L_K^{\otimes n})$ by successive minima (see ). Several asymptotic invariants can be obtained by integration with respect to $\nu_{\overline L}$. Therefore, it is interesting to determine completely the distribution of $\nu_{\overline L}$, which will be given in Proposition \[Pro:distribution function\] by using the positive intersection product.
The article is organized as follows. In the second section, we recall some positivity conditions for Hermitian line bundles and discuss their properties. In the third section, we define the positive intersection product in Arakelov geometry. It is in the fourth section that we establish the differentiability of the arithmetic volume function. Finally in the fifth section, we present applications on the asymptotic measure and we compare our result to some known results on the differentiability of arithmetic invariants.
[**Acknowledgement:**]{} I would like to thank R. Berman, D. Bertrand, J.-B. Bost, S. Boucksom, C. Favre and V. Maillot for interesting and helpful discussions. I am also grateful to M. Jonsson for remarks.
Notation and preliminaries
==========================
In this article, we fix a number field $K$ and denote by $\mathcal O_K$ its integer ring. Let $\overline K$ be an algebraic closure of $K$. Let $\pi:X\rightarrow\operatorname{Spec}\mathcal O_K$ be a projective and flat morphism and $d$ be the relative dimension of $\pi$. Denote by $\widehat{\mathrm{Pic}}(X)$ the group of isomorphism classes of (continuous) Hermitian line bundles on $X$. If $\overline L$ is a Hermitian line bundle on $X$, we denote by $\pi_*(\overline L)$ the $\mathcal O_K$-module $\pi_*(L)$ equipped with sup norms.
In the following, we recall several notions about Hermitian line bundles. The references are [@Gillet-Soule; @Zhang95; @BGS94; @Moriwaki00].
Assume that $x\in X(\overline K)$ is an algebraic point of $X$. Denote by $K_x$ the field of definition of $x$ and by $\mathcal O_x$ its integer ring. The morphism $x:\operatorname{Spec}\overline K\rightarrow X$ gives rise to a point $P_x$ of $X$ valued in $\mathcal O_x$. The pull-back of $\overline L$ by $P_x$ is a Hermitian line bundle on $\operatorname{Spec}\mathcal O_x$. We denote by $h_{\overline L}(x)$ its normalized Arakelov degree, called the [*height*]{} of $x$. Note that the height function is additive with respect to $\overline L$.
Let $\overline L$ be a Hermitian line bundle on $X$. We say that a section $s\in\pi_*(L)$ is [*effective*]{} (resp. [*strictly effective*]{}) if for any $\sigma:K\rightarrow\mathbb C$, one has $\|s\|_{\sigma,\sup}\leqslant 1$ (resp. $\|s\|_{\sigma,\sup}< 1$). We say that the Hermitian line bundle $\overline L$ is [*effective*]{} if it admits a non-zero effective section.
Let $\overline L_1$ and $\overline L_2$ be two Hermitian line bundles on $X$. We say that $\overline
L_1$ is [*smaller*]{} than $\overline L_2$ and we denote by $\overline L_1\leqslant\overline L_2$ if the Hermitian line
---
author:
- Shauna Revay
- Matthew Teschke
bibliography:
- 'sample.bib'
title: Multiclass Language Identification using Deep Learning on Spectral Images of Audio Signals
---
Introduction
============
Recently, voice assistants have become a staple in the flagship products of many big technology companies such as Google, Apple, Amazon, and Microsoft. One challenge for voice assistant products is that the language that a speaker is using needs to be preset. To improve user experience on this and similar tasks such as automated speech detection or speech to text transcription, automatic language detection is a necessary first step.
The technique described in this paper, language identification for audio spectrograms (LIFAS), uses spectrograms of raw audio signals as input to a convolutional neural network (CNN) to be used for language identification. One benefit of this process is that it requires minimal pre-processing. In fact, only the raw audio signals are input into the neural network, with the spectrograms generated as each batch is input to the network during training. Another benefit is that the technique can utilize short audio segments (approximately 4 seconds) for effective classification, necessary for voice assistants that need to identify language as soon as a speaker begins to talk.
LIFAS binary language classification had an accuracy of 97%, and multi-class classification with six languages had an accuracy of 89%.
Background
==========
Finding a dataset of audio clips in various languages sufficiently large for training a network was an initial challenge for this task. Many datasets of this type are not open sourced [@mozilla]. VoxForge [@voxforge], an open-source corpus that consists of user-submitted audio clips in various languages, is the source of data used in this paper.
Previous work in this area used deep networks as feature extractors, but did not use the networks themselves to classify the languages [@conference; @unified]. LIFAS removes any feature extraction performed outside of the network. The network is fed a raw audio signal, and the spectrogram of the data is passed to the neural network during training. The last layer of the network outputs a vector of probabilities with one prediction per language. Thus, the whole process from raw audio signal to prediction of language is performed automatically by the neural network.
In [@lstmpaper], a CNN was combined with a long short-term memory (LSTM) network to classify language using spectrograms generated from audio. The network presented in [@lstmpaper] classified 4 languages using 10-second audio clips for training [@blog], while LIFAS achieves similar performance for 6 languages using 4-second audio clips. This demonstrates the robustness of the architecture and its improvement upon earlier techniques.
Residual and Convolutional Neural Networks
------------------------------------------
CNNs have been shown to give state of the art results for image classification and a variety of other tasks. As neural networks using back propagation were constructed to be deeper, with more layers, they ran into the problem of vanishing gradient [@gradient]. A network updates its weights based on the partial derivatives of the error function from the previous layers. Many times, the derivatives can become very small and the weight updates become insignificant. This can lead to a degradation in performance.
One way to mitigate this problem is the use of Residual Neural Networks (ResNets [@resnet]). ResNets utilize skip connections, which connect two non-adjacent layers. ResNets have shown state-of-the-art performance on image recognition tasks, which makes them a natural choice for a network architecture for this task [@imageresidual].
Spectrogram Generation
----------------------
A spectrogram is an image representation of the frequencies present in a signal over time. The frequency spectrum of a signal can be generated from a time series signal using a Fourier Transform.
In practice, the Fast Fourier Transform (FFT) can be applied to a section of the time series data to calculate the magnitude of the frequency spectrum for a fixed moment in time. This will correspond to a line in the spectrogram. The time series data is then windowed, usually in overlapping chunks, and the FFT data is strung together to form the spectrogram image which allows us to see how the frequencies change over time.
Since we were generating spectrograms on audio data, the data was converted to the mel scale, generating “melspectrograms”. These images will be referred to as simply “spectrograms” for the duration of this paper. The conversion from $f$ hertz to $m$ mels that we use is given by,
$$m = 2595 \log_{10} \left( 1 + \frac{f}{700} \right).$$
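The conversion is a one-line function; as a sanity check, the mel scale is anchored so that 1,000Hz maps to approximately 1,000 mels:

```python
import math

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)
```

Applied to the frequency range used in this paper, the 20Hz–8,000Hz band maps to roughly 31–2,840 mels, which is then divided into 40 equal mel bins.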
An example of a spectrogram generated by an English data transmission is shown in figure \[spec\].
![Spectrogram generated from an English audio file.[]{data-label="spec"}](spec.png){width="\textwidth"}
Data Preparation
================
Audio data was collected from VoxForge [@voxforge]. Each audio signal was sampled at a rate of 16kHz and cut down to be 60,000 samples long. In this context, a sample refers to the number of data points in the audio clip. This equates to 3.75 seconds of audio. The audio files were saved as WAV files and loaded into Python using the librosa library and a sample rate of 16kHz.
Each audio file of 60,000 samples was saved separately and is referred to as a clip. The training set consisted of 5,000 clips per language, and the validation set consisted of 2,000 clips per language.
Audio clips were gathered in English, Spanish, French, German, Russian, and Italian. Speakers had various accents and were of different genders. The same speakers may be speaking in more than one clip, but there was no cross contamination in the training and validation sets.
Spectrograms were generated using parameters similar to the process discussed in [@audioblog] which used a frequency spectrum of 20Hz to 8,000Hz and 40 frequency bins. Each FFT was computed on a window of 1024 samples. No other pre-processing was done on the audio files. Spectrograms were generated on-the-fly on a per-batch basis while the network was running (i.e. spectrograms were not saved to disk).
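The spectrogram pipeline above can be sketched with plain NumPy. The 16kHz sample rate, 60,000-sample clip length, and 1,024-sample FFT window come from the text; the Hann window and 512-sample hop are illustrative assumptions (the actual pipeline also applies the mel-scale mapping, omitted here):

```python
import numpy as np

def spectrogram(signal, n_fft=1024, hop=512):
    """Magnitude spectrogram: windowed, overlapping FFTs strung together.
    Returns an array of shape (freq_bins, time_frames)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

sr = 16000
t = np.arange(60000) / sr            # a 3.75 s clip, as in the paper
clip = np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone
S = spectrogram(clip)
```

For the pure 440Hz tone, the energy in each frame concentrates in the FFT bin nearest 440Hz (bin spacing is $16{,}000/1{,}024 \approx 15.6$Hz).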
Network
=======
We utilized the fast.ai [@fastai] deep learning library built on PyTorch [@pytorch]. The network used was a pretrained Resnet50. The spectrograms were generated on a per-batch basis, with a batch size of 64 images. Each image was $432 \times 288$ pixels in size.
During training, the 1-cycle-policy described in [@leslie] was used. In this process, the learning rate is gradually increased and then decreased in a linear fashion during one cycle [@onecycleblog]. The learning rate finder within the fast.ai library was first used to determine the maximum learning rate to be used in the 1-cycle training of the network. The maximum learning rate was then set to be $1 \times 10^{-2}$. The learning rate increases until it hits the maximum learning rate, and then it gradually decreases again. The length of the cycle was set to be 8 epochs, meaning that throughout the cycle 8 epochs are evaluated.
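The linear up-then-down learning rate schedule can be sketched as follows; only the maximum learning rate of $1 \times 10^{-2}$ is taken from the text, while the base rate and the equal-length ramps are illustrative assumptions (the fast.ai implementation also cycles momentum):

```python
def one_cycle_lr(step, total_steps, max_lr=1e-2, base_lr=1e-3):
    """Piecewise-linear 1-cycle schedule: ramp from base_lr up to max_lr
    over the first half of the cycle, then back down over the second half."""
    mid = total_steps // 2
    if step <= mid:
        return base_lr + (max_lr - base_lr) * step / mid
    return max_lr - (max_lr - base_lr) * (step - mid) / (total_steps - mid)
```

Over a cycle of, say, 100 steps, the rate peaks at `max_lr` at step 50 and returns to `base_lr` at step 100.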
Experiments
===========
Binary Classification with Varying Number of Samples
----------------------------------------------------
Binary classification was performed on two languages using clips of 60,000 samples. English and Russian were chosen to use for training and validation. To test the impact of the number of samples on classification while keeping the sample rate constant, binary classification was also performed on clips of 100,000 samples.
Multiple Language Classification
--------------------------------
For each language (English, Spanish, German, French, Russian, and Italian), 5,000 clips were placed in the training set. Each clip was 60,000 samples in length. 2,000 clips per language were placed in the validation set, and no speakers or clips appeared in both the training and validation sets.
Results
=======
Accuracy was calculated for both binary classification and multiclass classification as: $$Accuracy = \frac{Number \; of \; Correct \; Predictions}{Total \;Number \;of \; Predictions}.$$ LIFAS binary classification accuracy for Russian and English clips of length 60,000 samples was 94%. In comparison, LIFAS binary classification accuracy on the clips of 100,000 samples was 97%. The accuracy totals given in the experiments above are calculated on the total number of clips in the validation set. The accuracy can also be broken up into accuracy for English clips, or accuracy for Russian clips, where there was essentially no difference in the accuracy for English clips and the accuracy for Russian clips.
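This accuracy is the sum of the diagonal of a confusion matrix (correct predictions) over the total count, computable in a couple of lines (the example matrix below is made up for illustration):

```python
import numpy as np

def accuracy(confusion):
    # rows: true class, columns: predicted class;
    # trace counts the correct predictions
    c = np.asarray(confusion, dtype=float)
    return c.trace() / c.sum()
```

For example, `accuracy([[9, 1], [2, 8]])` gives 17 correct out of 20 predictions, i.e. 0.85.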
To confirm that the network performance was not dependent on English and Russian language data, binary classification was tested on other languages with little to no impact on validation accuracy.
LIFAS accuracy for the multi-class network with six languages was 89%. These results were based on clips of 60,000 samples, since a sufficient number of longer clips was unavailable. Results from the 100,000-sample clips in the binary classification model suggest that performance could be improved in the multi-class setting with longer clips.
The confusion matrix for the multi-class classification is shown in figure \[confusion\].
![The confusion matrix for the multiclass language identification problem.[]{data-label="confusion"}](confusion.png){width="80.00000%"}
Discussion and Limitations
==========================
Notably, the highest rate of false negative classifications came when Spanish clips were classified as Russian, and when Russian clips were classified as Spanish. Additionally,
---
abstract: 'We have assembled a sample of high spatial resolution far-UV (Hubble Space Telescope Advanced Camera for Surveys Solar Blind Channel) and H$\alpha$ (Maryland-Magellan Tunable Filter) imaging for 15 cool core galaxy clusters. These data provide a detailed view of the thin, extended filaments in the cores of these clusters. Based on the ratio of the far-UV to H$\alpha$ luminosity, the UV spectral energy distribution, and the far-UV and H$\alpha$ morphology, we conclude that the warm, ionized gas in the cluster cores is photoionized by massive, young stars in all but a few (Abell 1991, Abell 2052, Abell 2580) systems. We show that the extended filaments, when considered separately, appear to be star-forming in the majority of cases, while the nuclei tend to have slightly lower far-UV luminosity for a given H$\alpha$ luminosity, suggesting a harder ionization source or higher extinction. We observe a slight offset in the UV/H$\alpha$ ratio from the expected value for continuous star formation which can be modeled by assuming intrinsic extinction by modest amounts of dust (E(B-V) $\sim$ 0.2), or a top-heavy IMF in the extended filaments. The measured star formation rates vary from $\sim$ 0.05 M$_{\odot}$ yr$^{-1}$ in the nuclei of non-cooling systems, consistent with passive, red ellipticals, to $\sim$ 5 M$_{\odot}$ yr$^{-1}$ in systems with complex, extended, optical filaments. Comparing the estimates of the star formation rate based on UV, H$\alpha$ and infrared luminosities to the spectroscopically-determined X-ray cooling rate suggests a star formation efficiency of 14$^{+18}_{-8}$%. This value represents the time-averaged fraction, by mass, of gas cooling out of the intracluster medium which turns into stars, and agrees well with the global fraction of baryons in stars required by simulations to reproduce the stellar mass function for galaxies. This result provides a new constraint on the efficiency of star formation in accreting systems.'
author:
- 'Michael McDonald, Sylvain Veilleux, David S. N. Rupke, Richard Mushotzky, and Christopher Reynolds'
title: Star Formation Efficiency in the Cool Cores of Galaxy Clusters
---
Introduction
============
The high densities and low temperatures of the intracluster medium (hereafter ICM) in the cores of some galaxy clusters suggest that massive amounts (100–1000 M$_{\odot}$ yr$^{-1}$) of cool gas should be deposited onto the central galaxy. The fact that this gas reservoir is not observed has been used as prime evidence for feedback-regulated cooling (see review by Fabian 1994). By invoking feedback, either by active galactic nuclei (hereafter AGN) (e.g., Guo [et al. ]{}2008; Rafferty [et al. ]{}2008; Conroy [et al. ]{}2008), mergers (e.g., Gómez [et al. ]{}2002; ZuHone 2010), conduction (e.g., Fabian [et al. ]{}2002; Voigt [et al. ]{}2004), or some other mechanism, theoretical models can greatly reduce the efficiency of ICM cooling, producing a better match with what is observed in high resolution X-ray grating spectra of cool cores (0–100 M$_{\odot}$ yr$^{-1}$, Peterson [et al. ]{}2003). However, these modest cooling flows had remained unaccounted for at low temperatures until only recently.
The presence of warm, ionized gas in the form of H$\alpha$ emitting filaments has been observed in the cores of several cooling flow clusters to date (e.g., Hu [et al. ]{}1985, Heckman [et al. ]{}1989, Crawford [et al. ]{}1999, Jaffe [et al. ]{}2005, Hatch [et al. ]{}2007). More recently, it has been shown by McDonald [et al. ]{}(2010, 2011; hereafter M+10 and M+11, respectively) that this emission is intimately linked to the cooling ICM and may be the result of cooling instabilities. However, while it is possible that the warm gas may be a byproduct of ICM cooling, the source of ionization in this gas remains a mystery. A wide variety of ionization mechanisms are viable in the cores of clusters (see Crawford [et al. ]{}2005 for a review), the least exotic of which may be photoionization by massive, young stars.
------------ ------------- -------------- -------- -------- ------ ----------- --------------
Name RA Dec z E(B-V) M F$_{1.4}$ Proposal No.
(1) (2) (3) (4) (5) (6) (7) (8)
Abell 0970 10h17m25.7s -10d41m20.3s 0.0587 0.055 – $<$ 2.5 11980
Abell 1644 12h57m11.6s -17d24m33.9s 0.0475 0.069 3.2 98.4 11980
Abell 1650 12h58m41.5s -01d45m41.1s 0.0846 0.017 0.0 $<$ 2.5 11980
Abell 1795 13h48m52.5s +26d35m33.9s 0.0625 0.013 7.8 924.5 11980, 11681
Abell 1837 14h01m36.4s -11d07m43.2s 0.0691 0.058 0.0 4.8 11980
Abell 1991 14h54m31.5s +18d38m32.4s 0.0587 0.025 14.6 39.0 11980
Abell 2029 15h10m56.1s +05d44m41.8s 0.0773 0.040 3.4 527.8 11980
Abell 2052 15h16m44.5s +07d01m18.2s 0.0345 0.037 2.6 5499.3 11980
Abell 2142 15h58m20.0s +27d14m00.4s 0.0904 0.044 1.2 $<$ 2.5 11980
Abell 2151 16h04m35.8s +17d43m17.8s 0.0352 0.043 8.4 2.4 11980
Abell 2580 23h21m26.3s -23d12m27.8s 0.0890 0.024 – 46.4 11980
Abell 2597 23h25m19.7s -12d07m27.1s 0.0830 0.030 9.5 1874.6 11131
Abell 4059 23h57m00.7s -34d45m32.7s 0.0475 0.015 0.7 1284.7 11980
Ophiuchus 17h12m27.7s -23d22m10.4s 0.0285 0.588 0.0 28.8 11980
WBL 360-03 11h49m35.4s -03d29m17.0s 0.0274 0.028 – $<$ 2.5 11980
------------ ------------- -------------- -------- -------- ------ ----------- --------------
(1): Cluster name, (2–4): NED RA, Dec, redshift of BCG (<http://nedwww.ipac.caltech.edu>), (5): Reddening due to Galactic extinction from Schlegel [et al. ]{}(1998), (6): Spectroscopically-determined X-ray cooling rates (M$_{\odot}$ yr$^{-1}$) from McDonald [et al. ]{}(2010), (7): 1.4 GHz radio flux (mJy) from NVSS (<http://www.cv.nrao.edu/nvss/>), (8): HST proposal number for FUV data. Proposal PIs are W. Jaffe (\#11131), W. Sparks (\#11681), S. Veilleux (\#11980).\
$^a$: No available *Chandra* data. \[sample\]
The identification of star-forming regions in cool core clusters has a rich history in the literature. Early on, it was noted by several groups that brightest cluster galaxies (hereafter BCGs) in cool core clusters have higher star formation rates than non-cool core BCGs (Johnstone [et al. ]{}1987; Romanishin [et al. ]{}1987; McNamara and O’Connell 1989; Allen 1995; Cardiel [
---
abstract: 'The aim of this work is to show how Einstein’s quantum hypothesis leads immediately and necessarily to a departure from classical mechanics. First we note that the classical description and predictions are in terms of idealized measurements that are exact, instantaneous, non-perturbative, independent of each other and process agnostic. If we assume we cannot arbitrarily reduce the strength of a signal, measurements are ultimately perturbative to some degree. We show how a physical description in which the best measurement conceivable, i.e. the ideal measurement, perturbs the system leads to all the concepts present in quantum mechanics including conjugate variables, probabilistic predictions and measurements connected to symmetries.'
author:
- Gabriele Carcassi
date: 'February 15, 2009'
title: 'How Einstein’s quantum hypothesis requires a departure from classical mechanics'
---
Introduction
============
It is unfortunate that, after more than half a century in which quantum mechanics has been a core part of our scientific understanding, it is still surrounded by a cloud of mystery and perceived as strange and nonintuitive. It is true that it predicts behavior that is odd and counter to our intuition, but does that have to mean we are bound to feel that something escapes us?
Quantum mechanics is not the only 20th century theory that has strange consequences: the concept of spacetime, time dilation, length contraction, equivalence of mass and energy, curved space and black holes are some of the landmarks of special and general relativity, yet both are usually presented as natural, in fact necessary. We believe that the main difference is that they are presented as coming from a simple physical idea, the invariance of the speed of light in the first case and the equivalence principle in the second, which help us make sense of all the other physical consequences.
Quantum mechanics, with its uncertainty principle, interference and probabilistic predictions, is usually presented as a set of mathematical postulates[@sudbery], usually prefaced by a historical perspective[@liboff] or by a heuristic account that gives some sort of justification for them[@feynman; @griffiths; @shankar; @sakurai]. We are not told why a Hilbert space must be used as the phase space, or why observables are associated with operators: that is the starting point. The mathematical results derived from the postulates need to be subsequently interpreted physically, with nothing else to connect them together but the mathematical framework. Should the physics not come first? Should the math not be derived from the physics?
We are left to wonder whether we are missing something: what is the “big physical idea” that requires us to abandon the classical description? Maybe if we were to present quantum mechanics derived from it, it would increase our sense of understanding: what is understanding if not being able to identify, in the midst of all that is confusing and misleading, that simple truth from which all others descend?
We are convinced that this idea is something that is already present in all quantum textbooks: Einstein’s quantum hypothesis. This states that light consists of and propagates in discrete packets of energy. The aim of this paper is to convince the reader that this assumption, by itself, is sufficient to require departure from the classical description. The language and the level of math used are appropriate for an introductory class, where typically more importance is given to thought experiments and concepts. In fact, the arguments are designed so that they could be used “as is” during the first lecture of an introductory class in quantum mechanics.
In section II we will show how the classical description is in terms of idealized measurements that are exact, instantaneous, non-perturbative, independent of each other and process agnostic. In section III we will show such an idealized measurement is in principle possible using the classical electromagnetic field. In section IV we will show how the introduction of the quantum hypothesis requires all measurements to be perturbative and that this leads to many of the features present in quantum mechanics. In section V we show that the concepts we developed fit extremely well in the mathematical framework. In section VI we extend the arguments to show how the introduction of the equivalence principle and gravitation necessarily leads to a departure from quantum mechanics, as measurements are no longer independent of each other and cannot be regarded as exact or instantaneous. In section VII we go through some common reactions to these arguments, comparing to other types of works.
Ideal measurements in classical mechanics
=========================================
Predictions and measurements are fundamental in physics: devising experiments, performing them and comparing their results to our predictions are in essence the activities of a physicist. In this section we are going to review some aspects of these basic concepts in the context of classical mechanics.
In classical mechanics we describe a system by a set of quantities that vary in time. For example, we write $x=x(t)$ for position or $p=p(t)$ for momentum. At each moment in time, we have a prediction for each quantity: if we were to measure, and our description were correct, we would obtain that value. There are a few details, though, that we have to keep in mind.
First we note that while an actual experiment will only measure a quantity within a certain accuracy, the prediction is at least in principle exact. If we increase the accuracy of our measurement, the result will have to be closer to our prediction for the prediction to be correct. The prediction is really for an idealized measurement: one for which the uncertainty is so small that it can be neglected. [^1]
Secondly, an actual experiment will measure a quantity within a finite interval of time while the prediction is given for an instant. We can imagine that we improve our measurement so that the interval is smaller and smaller. The prediction is again in terms of an idealized measurement, one in which the time interval is so small that it can be neglected.
The third thing that we notice is that the classical description does not require us to say what quantity we are measuring and when: the evolution does not depend on it. In general, an actual experiment will modify the evolution of the system. We can, once again, imagine that we improve our technique so that it affects less and less the future evolution. The prediction is really in terms of an idealized measurement, one in which the perturbation caused by the experiment is so small that it can be neglected.
The fourth point: if we had two or more instances of the same experiment conducted at the same time, or within an interval so small that it could be neglected, the prediction would be the same. While in practice this may be difficult to achieve, ideal measurements do not interfere with each other, so our description does not depend on how many idealized measurements are performed at a particular time: we can have several observers or just one and we obtain the same result.
Fifth and last point: the prediction does not depend on which particular physical process or measurement technique we use for our measurement.
So, when we write $x(t)$ there are quite a few assumptions that go with it. It assumes that, at least in principle, we can imagine an ideal measurement that is exact, instantaneous, non-perturbative, independent of other measurements (or “inter-independent”) and process agnostic. *This is the best measurement we can conceive.* Our description is given by the outcomes of such idealized measurements at every moment in time. Even when we do describe a “real” measurement, we do so by describing the entire process in terms of these idealized quantities. In other words: we describe the world using the best measurement we can conceive, and from this determine predictions for our real, less perfect, measurement.
It should be noted that if any of those conditions cannot be satisfied conceptually, our description of position as $x=x(t)$ would not make much sense. Also note that since the ideal measurement is non-perturbative, inter-independent and process agnostic, our description does not depend on in what way, how many times, what or even if we measure. *This is what allows us to imagine the outcomes as properties of the system we are studying*: they are going to be the same no matter what we do. But we always have to keep in the back of our mind that even these quantities are the outcome of a process, however idealized it may be.
An ideal process for an ideal measurement
=========================================
We have seen how the classical description is in terms of ideal measurements. In this section we identify a physical process that we could conceptually use in classical mechanics to perform such an ideal measurement. Given that we assume that the results do not depend on what process we use, we can choose the process we prefer. For reasons that will become obvious in the next section, we will focus on an ideal position measurement through the use of the electromagnetic field.
In the simplest case, we can think of sending an electromagnetic signal toward our target. The electromagnetic signal will interact with the target, leaving it affected in general. The signal itself will also be changed and when it is received by the detector it will give us some information about the target. In measuring position, we can imagine the signal bounces off the target, changing its momentum a bit, and by knowing the initial and final position and angles of the electromagnetic signal, we can infer the position of our target at the time of impact. Can this process satisfy, under ideal conditions, the requirements of an ideal measurement?
To measure the position with greater accuracy, we can reduce the width of the electromagnetic signal and, ideally, we would make the packet so small that its length can be disregarded. We can also make the duration of the signal as short as we desire, and ideally it will be so short that it can be disregarded. We can make the measurement non-perturbative by lowering the intensity of the signal enough so that the effect on our target can be neglected. And regarding
---
abstract: |
Space-based microlens parallax measurements are a powerful tool for understanding planet populations, especially their distribution throughout the Galaxy. However, if space-based observations of the microlensing events must be specifically targeted, it is crucial that microlensing events enter the parallax sample without reference to the known presence or absence of planets. Hence, it is vital to define objective criteria for selecting events where possible and to carefully consider and minimize the selection biases where not possible so that the final sample represents a controlled experiment.
We present objective criteria for initiating observations and determining their cadence for a subset of events, and we define procedures for isolating subjective decision making from information about detected planets for the remainder of events. We also define procedures to resolve conflicts between subjective and objective selections. These procedures maximize planet sensitivity of the sample as a whole by allowing for planet detections even if they occur before satellite observations for objectively-selected events and by helping to trigger fruitful follow-up observations for subjectively-chosen events. This paper represents our public commitment to these procedures, which is a necessary component of enforcing objectivity on the experimental protocol.
author:
- 'Jennifer C. Yee, Andrew Gould, Charles Beichman, Sebastiano Calchi Novati, Sean Carey, B. Scott Gaudi, Calen Henderson, David Nataf, Matthew Penny, Yossi Shvartzvald, Wei Zhu'
title: 'Criteria for Sample Selection to Maximize Planet Sensitivity and Yield from Space-Based Microlens Parallax Surveys'
---
[Introduction]{} \[sec:intro\]
==============================
Measuring the Distances to Microlensing Planets
-----------------------------------------------
While more than 6000 planets (and strong planetary candidates) have been found within about 1 kpc of the Sun (the great majority discovered via the transit and radial velocity techniques), there are only a handful of confirmed planets with known distances that are greater than 4 kpc and only one confirmed planet in the Galactic bulge [@mb11293B]. All of these distant planets were found using gravitational microlensing, and in most cases the distances were determined using the “microlens parallax” technique [@gould92]. Microlensing would therefore appear to be the most natural method to measure the Galactic distribution of planets, i.e., to determine planet frequency as a function of Galactic environment. Such a measurement would provide important constraints on planet formation theories. For example, @thompson13 has suggested that gas-giant formation may have been inhibited in the Galactic bulge due to the high intensity of ambient radiation during the main epoch of star formation.
However, while roughly half of the $\sim30$ published microlensing planets have measured distances, this sample is heavily biased toward nearby systems. The reasons for this are well understood and are closely related to the general biases in astronomy toward nearby objects. First, nearby lenses have larger lens-source trigonometric parallaxes, $\pi_\rel = \au(D_L^{-1}-D_S^{-1})$, which gives rise to larger microlens parallaxes $$\bpi_\e \equiv {\pi_\rel\over\theta_\e}\,{\bmu\over\mu};
\qquad
\theta_\e^2 = \kappa M\pi_\rel,
\qquad \kappa \equiv {4 GM\over c^2\au}\simeq 8.1\,{{\rm mas}\over M_\odot},
\label{eqn:pie}$$ where $\bmu$ is the lens-source relative proper motion (in either the heliocentric or geocentric frame), $\theta_\e$ is the angular Einstein radius, and $M$ is the lens mass. As explained in some detail by @gouldhorne, the magnitude of $\bpi_\e$ quantifies the amplitude of the parallax distortion on the microlens light curve, so that all other things being equal, larger $\pi_\e$ implies easier detection. The most common method for measuring microlens parallax has been to observe the effect of Earth’s acceleration on the light curve (so-called orbital parallax). However, for typical Einstein timescales $t_\e\sim 20\,$day, this effect is quite modest. This means that in addition to nearby lenses and low mass lenses, one is biased toward abnormally long duration events. It is difficult (though probably not impossible) to quantify these biases, but the main problem is that due to these biases, there are simply no microlens planets in the Galactic bulge with measured microlens parallaxes. Indeed, the one confirmed Bulge planet had its distance measured by other means.
This brings us to the other method of measuring lens distances: direct detection of the lens. The main difficulty is that the lens is superposed on a (usually) substantially brighter source, and remains so for typically a decade or more after the event. If the lens is sufficiently bright, then it is possible to directly detect it by measuring the combined source and lens light using high-resolution imaging (adaptive optics or [*Hubble Space Telescope (HST)*]{}) and subtracting out the source contribution, which is known from the light curve model. This, in fact, is how the distance to the only planet known to be in the Galactic bulge was measured [MOA-2011-BLG-293Lb; @mb11293B]. At the present time, this method is primarily limited to lenses that are at least 15% as bright as the source: otherwise the excess light due to the lens cannot be reliably detected. Hence, it is biased toward luminous (i.e., massive) and nearby lenses.
The alternative is to wait until the source and lens separate due to their relative proper motions (typically a few mas yr$^{-1}$) and can be individually resolved. Again, this method is more easily applied to brighter lenses and with current facilities, one must wait $\sim10$ yr for the source and lens to separate sufficiently. When the next generation of 30 m telescopes are available, it will be applicable to much fainter lenses because these will separate sufficiently from the sources to be resolved within a few years due to their relative proper motions [@alcock01; @gould14; @ob05169a; @ob05169b; @henderson15].
Therefore, the only path at present to routinely measure the distances to lenses (especially faint lenses), and hence to measure the Galactic distribution of planets, is via space-based microlens parallaxes. In this approach, one observes a microlensing event simultaneously from Earth and from a satellite in solar orbit, and derives $\bpi_\e$ from the difference in the two light curves [@refsdal66]. There are some challenges to this method (over and above the problem of gaining routine access to such a satellite). First, the results are subject to a four-fold degeneracy in $\bpi_\e$, including a two-fold degeneracy in $\pi_\e$. However, @21event showed that it is possible in practice to break this degeneracy in the great majority of cases. Second, $\pi_\e$ does not by itself yield distances and masses. Rather this requires knowledge of $\theta_\e$, $$\pi_\rel = \theta_\e \pi_\e,
\qquad
M = {\theta_\e\over \kappa \pi_\e},
\label{eqn:massidst}$$ and of the source parallax $\pi_S$ ($\pi_L = \pi_\rel + \pi_S$), although the latter is usually known quite adequately. Fortunately, $\theta_\e$ is usually measured for planetary events because the normalized source size $\rho\equiv\theta_*/\theta_\e$ can usually be measured from the source crossing of the planetary caustic, while the angular source size $\theta_*$ is almost always known from its color and magnitude. Moreover, even for non-planetary (and non-binary) events, which generally lack such crossings, the lens distance (and so mass) can usually be estimated quite well from the measured $\bpi_\e$ and kinematic arguments [@21event]. Finally, for the case that the source proper motion can be measured, this estimate becomes even more accurate [@ob140939].
Hence, as shown by @21event, one can obtain an accurate estimate of the cumulative distribution of lens distances from a given sample, and can in principle compare this to the cumulative distance distribution of detected planets.
[[*Spitzer*]{}]{} and the Galactic Distribution of Planets
----------------------------------------------------------
To determine the Galactic distribution of planets, however, the detected planets must be compared to the underlying distribution of planet sensitivities, not simply of events. @21event did not attempt to do this because there was only one planet in their sample [@ob140124], making a meaningful comparison impossible. The small number of planet detections was rooted in the nature of the observing campaign, which was a 100-hr “pilot project” to determine the feasibility of making such microlens parallax measurements using [*Spitzer*]{}. Thus, the [*Spitzer*]{} observations were limited to the subset of events judged most likely to yield $\bpi_\e$, and no special effort was made to find planets within these events via, for example, intensive follow-up observations.
@21event argued, nevertheless, that it would be possible to estimate the cumulative distribution of sensitivities, simply by measuring the sensitivity of each event in the standard fashion [@rhie00; @gaudisackett; @gaudi02] and multiplying these sensitivities by the distance distributions in their Figure 3, even though the selection function of the events was unknown (and probably unknowable). This argument rested critically on the fact that the events were monitored from the ground and chosen for [*Sp