---
abstract: 'Assume $G$ is a finite abstract simplicial complex with $f$-vector $(v_0,v_1, \dots)$ and generating function $f(x) = \sum_{k=1}^{\infty} v_{k-1} x^k=v_0 x + v_1 x^2+ v_2 x^3 + \cdots$. The Euler characteristic of $G$ can then be written as $\chi(G)=f(0)-f(-1)$. We study here the functional $f_1'(0)-f_1'(-1)$, where $f_1'$ is the derivative of the generating function $f_1$ of $G_1$. The Barycentric refinement $G_1$ of $G$ is the Whitney complex of the finite simple graph for which the faces of $G$ are the vertices and where two faces are connected if one is a subset of the other. Let $L$ be the connection Laplacian of $G$, which is $L=1+A$, where $A$ is the adjacency matrix of the connection graph $G'$, which has the same vertex set as $G_1$ but where two faces are connected if they intersect. We have $f_1'(0)={\rm tr}(L)$ and, for the Green function $g=L^{-1}$, also $f_1'(-1)={\rm tr}(g)$, so that $\eta_1(G) = f_1'(0)-f_1'(-1)$ is equal to $\eta(G)={\rm tr}(L-L^{-1})$. The established formula ${\rm tr}(g)=f_1'(-1)$ for the generating function of $G_1$ complements the determinant expression ${\rm det}(L)={\rm det}(g)=\zeta(-1)$ for the Bowen-Lanford zeta function $\zeta(z)=1/{\rm det}(1-z A)$ of the connection graph $G'$ of $G$. We also establish a Gauss-Bonnet formula $\eta_1(G) = \sum_{x \in V(G_1)} \chi(S(x))$, where $S(x)$ is the unit sphere of $x$, the graph generated by all vertices in $G_1$ directly connected to $x$. Finally, we point out that the functional $\eta_0(G) = \sum_{x \in V(G)} \chi(S(x))$ on graphs takes arbitrarily small and arbitrarily large values on every homotopy type of graphs.'
address: |
Department of Mathematics\
Harvard University\
Cambridge, MA, 02138
author:
- Oliver Knill
date: 'May 29, 2017'
title: 'On a Dehn-Sommerville functional for simplicial complexes'
---
Setup
=====
####
A [**finite abstract simplicial complex**]{} $G$ is a finite set of non-empty sets with the property that any non-empty subset of a set in $G$ is in $G$. The elements of $G$ are called [**faces**]{} or [**simplices**]{}. Every such complex defines two finite simple graphs $G_1$ and $G'$, which both have the same vertex set $V(G_1)=V(G')=G$. For the graph $G_1$, two vertices are connected if one is a subset of the other; in the graph $G'$, two faces are connected if they intersect. The graph $G_1$ is called the [**Barycentric refinement**]{} of $G$; the graph $G'$ is the [**connection graph**]{} of $G$. The graph $G_1$ is a subgraph of $G'$ which shares the topological features of $G$. The connection graph, on the other hand, is fatter and can be of different topological type: already the Euler characteristics $\chi(G)$ and $\chi(G')$ can differ. Both graphs $G_1$ and $G'$ are interesting on their own, but they are linked in various ways, as we hope to illustrate here. Terminology in this area of combinatorics is rich. One could for example stay within simplicial complexes and deal with “flag complexes”, complexes which are the Whitney complexes of their $1$-skeleton graphs. The complexes $G_1$ and $G'$ are by definition of this type. We prefer in that case to use the terminology of graph theory.
####
Let $A$ be the adjacency matrix of the connection graph $G'$. Its Fredholm matrix $L=1+A$ is called the [**connection Laplacian**]{} of $G$. We know that $L$ is unimodular [@Unimodularity], so that the [**Green function operator**]{} $g=L^{-1}$ has integer entries. This is the [**unimodularity theorem**]{} [@Helmholtz]. The Bowen-Lanford zeta function of the graph $G'$ is defined as $\zeta(s) = {\rm det}((1-sA)^{-1})$. As $\zeta(-1)$ is either $1$ or $-1$, we can see the determinant of $L$ as the value of the zeta function at $s=-1$. We could call $H=L-L^{-1}$ the [**hydrogen operator**]{} of $G$. The reason is that classically, if $L=-\Delta$ is the Laplacian in $R^3$, then $L^{-1}$ is an integral operator with kernel $g(x,y) = 1/|x-y|$. Now, $H \psi(y) = (L \psi)(y) - \psi(y)/|x-y|$ is the Hamiltonian of a hydrogen atom located at $x$, so that $H$ is the sum of a kinetic and a potential part, where the potential is determined by the inverse of $L$. When the multiplication operation is replaced with a convolution, $L^{-1}$ takes the role of the potential energy. In any case, we will see that the trace of $H$ defines an interesting variational problem.
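To make these objects concrete, here is a minimal Python sketch (our illustration, not part of the paper; the toy complex and all variable names are assumptions) that builds the connection graph of the boundary of a triangle, forms $L=1+A$, and evaluates ${\rm tr}(L-L^{-1})$:

```python
import numpy as np
from itertools import combinations

# Toy complex: the boundary of a triangle, i.e. three vertices and three edges.
G = [frozenset(s) for k in (1, 2) for s in combinations((1, 2, 3), k)]

n = len(G)
A = np.zeros((n, n), dtype=int)
for i, x in enumerate(G):
    for j, y in enumerate(G):
        if i != j and x & y:            # two faces are connected if they intersect
            A[i, j] = 1

L = np.eye(n, dtype=int) + A            # connection Laplacian L = 1 + A
g = np.linalg.inv(L)                    # Green function g = L^{-1}

print(round(np.linalg.det(L)))          # unimodularity: det(L) is 1 or -1
print(round(np.trace(L) - np.trace(g))) # eta(G) = tr(L - L^{-1})
```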
####
There are various variational problems in combinatorial topology or in graph theory. For the latter, see [@BollobasExtremal]. An example in polyhedral combinatorics is the upper bound theorem, which characterizes the maxima of the discrete volume among all convex polytopes of a given dimension and number of vertices [@Stanley1996]. Another example is to maximize the Betti number $b(G)=\sum_{i \geq 0} b_i$, which is bounded below by $\chi(G)=\sum_{i \geq 0} (-1)^i b_i$, which we know can grow exponentially in the number of elements in $G$ and for which upper bounds are known too [@Adamaszek]. We have looked at various variational problems in [@KnillFunctional] and at higher order Euler characteristics in [@valuation]. Besides extremizing functionals on geometries, one can also define functionals on the set of unit vectors of the Hilbert space $H^n$ generated by the geometry. An example is the free energy $(\psi,L\psi) - T S(|\psi|^2)$, which also involves an entropy $S$ and a temperature variable $T$ [@Helmholtz].
####
Especially interesting are functionals which characterize geometries. An example is a necessary and sufficient condition for a vector to be the $f$-vector of a simplicial convex polytope, conjectured in 1971 and proven in 1980 [@BilleraLee; @Stanley1980]. Are there variational conditions which filter out discrete manifolds? By a discrete manifold we mean a connected finite abstract simplicial complex $G$ for which every unit sphere $S(x)$ in $G_1$ is a sphere. The notion of sphere has been defined combinatorially in discrete Morse approaches using critical points [@forman95] or discrete homotopy [@I94a]. A $2$-complex, for example, is a discrete $2$-dimensional surface: in a $2$-complex, we ask that every unit sphere in $G_1$ is a circular graph of length larger than $3$. For a $2$-complex, the $f$-vector of $G_1$ obviously satisfies $2 v_1-3v_2=0$: since every edge lies in exactly two triangles, adding up $3$ times the number of triangles counts every edge twice. The relation $2v_1-3v_2=0$ is one of the simplest Dehn-Sommerville relations. It can also be seen as a zero curvature condition for $3$-graphs [@cherngaussbonnet], or related to eigenvectors of the Barycentric refinement operation [@valuation; @KnillBarycentric2]. Dehn-Sommerville relations can be seen as zero curvature conditions for Dehn-Sommerville invariants in a higher dimensional complex.
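As a quick sanity check of the relation $2v_1-3v_2=0$, the following sketch (again our own toy computation, not taken from the paper) computes the $f$-vector of the Barycentric refinement of the octahedron, a $2$-complex whose unit spheres are all circular graphs:

```python
from itertools import combinations

# The octahedron as a 2-complex: all non-empty subsets of its eight triangles.
triangles = [(1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 5, 2),
             (6, 2, 3), (6, 3, 4), (6, 4, 5), (6, 5, 2)]
faces = sorted({frozenset(s) for t in triangles
                for k in (1, 2, 3) for s in combinations(t, k)}, key=len)

# In the Barycentric refinement G_1, the k-simplices are chains of faces
# ordered by proper inclusion.
def is_chain(c):
    return all(c[i] < c[i + 1] for i in range(len(c) - 1))

v = [sum(1 for c in combinations(faces, k + 1) if is_chain(c)) for k in range(3)]
print(v)                    # f-vector (v0, v1, v2) of G_1; here (26, 72, 48)
print(2 * v[1] - 3 * v[2])  # Dehn-Sommerville relation: 0
```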
####
One can wonder for example whether a condition like $\eta(G) = 2v_1-3v_2=0$ for the $f$-vector $(v_0,v_1,v_2)$ of the Barycentric refinement $G_1$ of a general $2$-dimensional abstract finite simplicial complex $G$ forces all unit spheres in $G_1$ to be finite unions of circular graphs. For this particular functional, this is not the case. There are examples of discretizations of varieties with a $1$-dimensional set of singular points for which $2v_1-3v_2$ is negative. An example is $C_n \times F_8$, the Cartesian product of a circular graph with
---
abstract: 'Along with increasingly popular virtual reality applications, the three-dimensional (3D) point cloud has become a fundamental data structure to characterize 3D objects and surroundings. To process 3D point clouds efficiently, a suitable model for the underlying structure and outlier noise is always critical. In this work, we propose a new hypergraph-based point cloud model that is amenable to efficient analysis and processing. We introduce tensor-based methods to estimate hypergraph spectrum components and frequency coefficients of point clouds in both ideal and noisy settings. We establish an analytical connection between hypergraph frequencies and structural features. We further evaluate the efficacy of hypergraph spectrum estimation in two common point cloud applications, sampling and denoising, for which we also elaborate on specific hypergraph filter designs and spectral properties. The empirical performance demonstrates the strength of hypergraph signal processing as a tool for 3D point clouds and their underlying properties.'
author:
- 'Songyang Zhang, Shuguang Cui, and Zhi Ding'
title: Hypergraph Spectral Analysis and Processing in 3D Point Cloud
---
3D point clouds, hypergraph signal processing, hypergraph construction, denoising, sampling.
Introduction {#intro}
============
Recent developments in depth sensors and software make it easier to capture the features of an object and its surroundings and to create a three-dimensional (3D) model [@c1]. In particular, with low-cost scanners such as light detection and ranging (LIDAR) and Kinect, a new data structure known as the point cloud has achieved significant success in many areas, including virtual reality, geographic information systems, reconstruction of art documents, and high-precision 3D maps for self-driving cars [@c2]. A point cloud consists of 3D coordinates with attributes such as color, temperature, texture, and depth [@c3]. Owing to the easy access to scanning sensors and the strong need to describe 3D features, point clouds have attracted significant attention in computer vision, virtual reality, and medical science. How to process point clouds efficiently has become an important research topic in many 3D imaging and vision systems.
To analyze the features of a point cloud, the first step is to construct an analytical model to represent the 3D structure. The literature provides several different models. In [@c4], the 3D space is partitioned into boxes or voxels, and the point clouds are then discretized therein. One disadvantage of voxels is that a dense grid is required to achieve fine resolution, leading to spatial inefficiency [@c3]. A spatially efficient alternative [@c5; @c6] is the octree representation of point clouds. An octree is a tree data structure in which each node has exactly eight children. It can partition a 3D space recursively and represent the point cloud with the partitioned boxes. Although efficient, octrees suffer from discretization errors [@c3]. The bd-tree is another spatial decomposition technique and is robust for highly cluttered point cloud datasets. However, compared to octree structures, bd-trees are more difficult to update.
Recently, graphs and graph signal processing (GSP) have found applications in modeling point clouds. For example, the authors of [@c3] construct a graph based on pairwise point distances. Some other works, such as [@c8], construct graphs based on the $k$-nearest neighbors, where each vertex (point) has an edge to each of its $k$ nearest neighbors. There are several clear connections between graph features and point cloud characteristics. For example, smoothness over a graph can describe the flatness of surfaces in a point cloud. GSP-based tools such as filters and graph learning methods can process point clouds and have shown great success because of the graph model’s ability to capture the underlying geometric structure. However, graph-based methods still face challenges such as limited orders and measurement inefficiency. In a traditional graph, each edge can only connect two nodes, constraining graph-based models to describe only pairwise relationships. However, a multilateral relationship among multiple nodes is often far more informative in a point cloud model. For example, the points (nodes) on the same surface of a point cloud exhibit a strong multilateral relationship, which cannot be easily captured by an edge of a traditional graph. In fact, constructing an efficient graph for a given dataset remains an open question. Thus, studies on point clouds can benefit from more general and efficient models.
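For concreteness, a minimal sketch of such a $k$-nearest-neighbor graph construction is shown below; the brute-force distance computation, the symmetrization, and the choice $k=5$ are illustrative assumptions rather than the exact construction used in [@c8]:

```python
import numpy as np

def knn_graph(points, k=5):
    """Adjacency matrix of a k-nearest-neighbor graph over an (N, 3) point cloud."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:   # skip the point itself
            A[i, j] = A[j, i] = 1.0              # symmetrize the connections
    return A

pts = np.random.rand(200, 3)   # toy stand-in for a scanned point cloud
A = knn_graph(pts, k=5)
```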
To develop an efficient model for point clouds, we explore a high-dimensional graph model known as the hypergraph [@c9]. A hypergraph can be a useful model in processing 3D point clouds. A hypergraph $\mathcal{H}=\{\mathcal{V},\mathcal{E}\}$ consists of a set of nodes $\mathcal{V}=\{\mathbf{v}_1, \dots,\mathbf{v}_K\}$ and a set of hyperedges $\mathcal{E}=\{\mathbf{e}_1, \dots,\mathbf{e}_K\}$. Each hyperedge in a hypergraph can connect more than two nodes. For example, a 3D shape together with its hypergraph model is shown in Fig. \[hyper\]. Obviously, a normal graph is a special case of a hypergraph, where each hyperedge degenerates to connect exactly two nodes. The hyperedge in a hypergraph can characterize the multilateral relationship among several related nodes (e.g., on a surface), thereby making the hypergraph a natural and intuitive model for point clouds. Moreover, advances in hypergraph signal processing (HGSP) [@c9] are providing more hypergraph tools, such as HGSP-based filters and spectrum analysis, for effective point cloud processing.
However, processing point clouds based on hypergraphs still poses several challenges. Similar to GSP, the first problem lies in the construction of a hypergraph for point clouds. Traditional hypergraph construction methods for a general dataset rely on the data structure. For example, in [@c11], a hypergraph model is constructed according to the sentence structure in natural language processing. The $k$-nearest neighbor model is another method to construct a hypergraph. In [@c9], a hypergraph is formed from feature distances for an animal dataset to achieve clustering. However, such distance-based or structure-based models may be rather lossy in information preservation. For example, the structure-based method may not preserve the correlation of some irregular structures, whereas the $k$-nearest neighbor method may narrowly emphasize the distance information. In addition to hypergraph construction, another issue in analyzing point clouds with hypergraph tools is the computational complexity of obtaining the spectrum space. In the HGSP framework, spectrum-based analysis plays an important role but requires computing the spectrum space. Usually, the computation of the hypergraph spectrum is based on orthogonal CP decomposition, which incurs high complexity when there are many nodes. Another challenge in point cloud processing is the effect of noise and outliers. Since a hypergraph model is constructed from observed data, noise can distort the hypergraph and degrade the performance of HGSP. Thus, mitigating noise effects and robustly estimating the hypergraph model for point clouds pose a significant challenge.
This work addresses the aforementioned problems. We propose novel spectrum-based hypergraph construction methods for both clean and noisy point clouds. For clean point clouds, we first estimate their spectrum components based on the hypergraph stationary process and optimally determine their frequency coefficients based on smoothness to recover the original hypergraph structure. For noisy point clouds, we introduce a method for joint hypergraph structure estimation and data denoising. We illustrate the effectiveness of the proposed hypergraph construction and spectrum estimation in two point cloud applications: sampling and denoising. Our experimental results clearly establish a connection between hypergraph frequencies and point cloud features. The performance improvement in both applications demonstrates the strength of hypergraphs in point cloud processing and the practical value of our estimation methods.
We organize the rest of the paper as follows. In Section \[pre\], we lay the foundation with the preliminaries and notations of point clouds, tensor basics, and hypergraph signal processing. Next, we propose methods for estimating the hypergraph spectrum of basic point clouds in Section \[h1\] and further develop methods for hypergraph structure estimation of noisy point clouds in Section \[h2\]. With the proposed estimation methods, we study two important application scenarios and establish the effectiveness of hypergraph signal processing in Section \[appli\]. Finally, we present conclusions and future directions in Section \[con\].
Preliminaries and Notations {#pre}
===========================
In this section, we cover basic background with respect to point cloud, tensor basics and hypergraph signal processing.
Point Clouds
------------
A point cloud is a set of 3D points obtained from sensors, where each point is attributed with coordinates and other features, like colors [@c10]. Since the coordinates are the basic features of a point cloud, in this work we mainly focus on gray-scale point clouds, where each node is characterized by its coordinates. We consider a matrix representation of the gray-scale point cloud, where a point cloud with $N$ nodes is denoted by a location matrix $$\mathbf{s}=[\mathbf{X}_1\quad \mathbf{X}_2\quad \mathbf{X}_3]=
\begin{bmatrix}
\mathbf{s}_1^T\\
\mathbf{s}_2^T\\
\vdots\\
\mathbf{s}_N^T
\end{bmatrix}\in\mathbb{R}^{N\times 3},$$ where $\mathbf{X}_i$ denotes the vector of the $i$th coordinates of all the points, and $\mathbf{s}_i$ contains the three coordinates of the $i$th point.
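A short illustration of this notation (with random data as a purely hypothetical stand-in for a real scan):

```python
import numpy as np

# Location matrix: N points s_i stacked as rows; the columns X_1, X_2, X_3
# collect the first, second, and third coordinates of all points.
s = np.random.rand(1000, 3)          # shape (N, 3)
X1, X2, X3 = s[:, 0], s[:, 1], s[:, 2]
s_1 = s[0]                           # the three coordinates of the first point
```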
---
bibliography:
- 'main.bib'
---
Matthew W. Muterspaugh, Maciej Konacki, Benjamin F. Lane, and Eric Pfahl
Why Focus Planet Searches on Binary Stars?
==========================================
Searches for planets in close binary systems explore the degree to which stellar multiplicity inhibits or promotes planet formation. There is a degeneracy between planet formation models when only systems with single stars are studied—several mechanisms appear to be able to produce such a final result. This degeneracy is lifted by searching for planets in binary systems; the resulting detections (or evidence of non-existence) of planets in binaries isolate which models may contribute to how planets form in nature. Studying relatively close pairs of stars, where dynamical perturbations are the strongest, provides the most restrictive constraints of this type [see, for example, @thebault2004; @Pfahl2005; @PfahlMute2006].
In this chapter, we consider observational efforts to detect planetary companions to binary stars in two types of hierarchical planet-binary configurations: first, “S-type” planets, which orbit just one of the stars, with the binary period being much longer than the planet’s; second, “P-type” or circumbinary planets, where the planet simultaneously orbits both stars, and the planetary orbital period is much longer than that of the binary [@Dvorak1982]. Simulations show that each of these configurations has a large range of stable orbits.
S-Type Planets
==============
S-type planets orbit just one of the stars in a binary, and the binary separation is much larger than that between the star and the planet. Some of the binaries are so widely separated (projected semimajor axis $a_b \gtrsim 1$ arcsecond) that they can be spatially resolved by ground-based telescopes without active image correction; for these, traditional planet-finding techniques can be used. In fact, astrometric methods often perform best in this regime, as the secondary star serves as a convenient reference for the primary, and vice versa. Here, astrometric and radial velocity (RV) programs are considered the most versatile search methods. (While transit searches might also be possible, these typically have very limited spatial resolution, and the second star can act as a photometric “contaminant.”) When the binaries are not spatially resolved with simple imaging, modifications must be made to meet the measurement precisions required for detecting extrasolar planets.
Wide Binaries
-------------
From an observational standpoint, “wide” binaries are considered to be those that can be resolved by traditional (uncorrected) imaging techniques. Due to atmospheric seeing, this sets the projected sky separation at larger than roughly one arcsecond.
### Dualstar Astrometry
Interferometric narrow-angle astrometry [@shao92; @col94] promises astrometric performance at the 10-100 micro-arcsecond level for pairs of stars separated by 1-60 arcseconds. The lower limit of the allowable binary separation for this technique is set by the requirement that the binary be resolved by the individual telescopes in the interferometer; the upper limit is set by the scale over which the effects of atmospheric turbulence are correlated. This technique was first demonstrated with the Mark III interferometer for short integrations [@col94], and was later extended to longer integrations and shown to work at the 100 micro-arcsecond level at the Palomar Testbed Interferometer [PTI, @l00].
However, achieving such performance requires simultaneous measurement of the interferometric fringe positions of both stars, greatly complicating the instrument (two beam combiners and metrology throughout the entire array are required). In addition, the instrumental baseline vector $\overrightarrow{B}$ connecting the unit telescopes must be known to high precision ($\approx 100$ microns).
In an optical interferometer light is collected at two or more apertures and brought to a central location where the beams are combined and a fringe pattern produced on a detector. For a broadband source of central wavelength $\lambda$ and optical bandwidth $\Delta\lambda$ the fringe pattern is limited in extent and appears only when the optical paths through the arms of the interferometer are equalized to within a coherence length ($\Lambda =
\lambda^2/\Delta\lambda$). For a two-aperture interferometer, neglecting chromatic dispersion by unequal air paths, the intensity measured at one of the combined beams is given by $$\label{double_fringe}
I(x) = I_0 \left [ 1 + V \frac{\sin\left(\pi x/ \Lambda\right)}
{\pi x/ \Lambda} \sin \left(2\pi x/\lambda \right ) \right ]$$ where $V$ is the fringe contrast or “visibility”, which can be related to the morphology of the source, and $x$ is the optical path difference between arms of the interferometer; see Fig. \[fig:fringes\]. More detailed analysis of the operation of optical interferometers can be found in [*Principles of Long Baseline Stellar Interferometry*]{} [@Lawson2000].
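A small numerical sketch of this fringe pattern may help; the wavelength, bandwidth, and visibility values below are illustrative assumptions only:

```python
import numpy as np

lam, dlam = 2.2e-6, 0.4e-6          # illustrative wavelength and bandwidth (meters)
V, I0 = 0.8, 1.0                    # illustrative visibility and mean intensity
Lam = lam ** 2 / dlam               # coherence length

def fringe(x):
    """Combined-beam intensity versus optical path difference x (Eq. above)."""
    envelope = np.sinc(x / Lam)     # np.sinc(u) = sin(pi u) / (pi u)
    return I0 * (1.0 + V * envelope * np.sin(2.0 * np.pi * x / lam))

x = np.linspace(-5 * Lam, 5 * Lam, 4001)
I = fringe(x)                       # fringe packet of width ~ Lam around x = 0
```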
![\[fig:fringes\] The response of an interferometer. The top two curves have been offset by 2 and 4 for clarity. The widths of the fringe packets are determined by the bandpass of the instrument, and the wavelength of fringes by an averaged wavelength of starlight. The top curve shows the intensity pattern obtained by observing two stars separated by a small angle on the sky—the observable is the distance between the fringe packets.](fringes.eps){height="3.5in"}
The location of the resulting interference fringes is related to the position of the target star and the observing geometry via $$\label{delayEquation}
d = \overrightarrow{B} \cdot \overrightarrow{S} +
\delta_a\left(\overrightarrow{S}, t\right) + c$$ where $d$ is the optical path-length one must introduce between the two arms of the interferometer to find fringes (often called the “delay”), $\overrightarrow{S}$ is the unit vector in the source direction, and $c$ is a constant additional scalar delay introduced by the instrument. The term $\delta_a\left(\overrightarrow{S}, t\right)$ is related to the differential amount of path introduced by the atmosphere over each telescope due to variations in refractive index.
If the other quantities are known or small, measurement of the instrumental path length $d$ required to observe fringes determines the position of the star $\overrightarrow{S}$. For a 100-m baseline interferometer, an astrometric precision of 10 $\mu$as corresponds to knowing $d$ to 5 nm, a difficult but not impossible proposition for all terms except that related to the atmospheric delay. Atmospheric turbulence, which changes over distances of tens of centimeters and on millisecond timescales, forces one to use very short exposures to maintain fringe contrast, and hence limits the sensitivity of the instrument. It also severely limits the astrometric accuracy of a simple interferometer, at least over large sky-angles.
However, in narrow-angle astrometry one is concerned with a close pair of stars, and the observable is a differential astrometric measurement, i.e., one is interested in knowing the angle between the two stars ($\overrightarrow{\Delta_s} = \overrightarrow{s_2} - \overrightarrow{s_1} $). The atmospheric turbulence is correlated over small angles. If the measurements of the two stars are simultaneous, or nearly so, the atmospheric term subtracts out, making high-precision “narrow-angle” astrometry possible.
The requirement that the target and reference stars be observed simultaneously results in significant instrumental complexity, i.e., essentially two complete interferometers are required to share the same set of apertures (see Fig. \[fig:dsm\]). The splitting of light from the stars into two separate sets of delay lines, beam transport systems, and beam combiners is done in a “dual-star module” located just after the apertures, with the split generally being accomplished using a beam-splitter. Considerable care must be taken in designing the system in order to avoid small pathlength measurement errors.
![\[fig:dsm\] Schematic of splitting the light in a dualstar interferometer. ](fig03-dualstar.eps){height="8.0cm"}
The exact level of astrometric precision that can be achieved depends on many factors, including the separation of the target/reference pair, the size of the interferometric baseline, and the levels and distribution of atmospheric turbulence. For a typical Mauna Kea seeing profile the astrometric precision is $$\sigma_a \simeq 300\frac{\theta}{\sqrt{t}B^{2/3}}~{\rm arcsec}$$ where $B$ is the baseline length in meters, $\theta$ is the target/reference separation in radians, and $t$ is the integration time in seconds. For a typical baseline of $\sim 100$ m and an angular separation of $\sim 30$ arcseconds, this implies an astrometric precision of 30 $\microas$ in an hour (see Fig. \[fig:naa\]).
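As a quick check of this scaling (our own arithmetic with the numbers quoted above):

```python
import numpy as np

# sigma_a ~ 300 * theta / (sqrt(t) * B^(2/3)) arcsec, theta in radians,
# t in seconds, B in meters; the numbers below are the ones quoted in the text.
B, t = 100.0, 3600.0                      # 100 m baseline, one hour of integration
theta = 30.0 / 206265.0                   # 30 arcsec converted to radians
sigma_arcsec = 300.0 * theta / (np.sqrt(t) * B ** (2.0 / 3.0))
print(sigma_arcsec * 1e6)                 # roughly 34 micro-arcseconds
```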
The magnitude of the astrometric signal of the star’s motion about the center of mass (CM) between it and its planet is given by: $$\label{ast_reflex}
\Delta a_{CM} = 2 \frac{M_p}{M_s}a_p =
\frac{M_p/M_\Jupiter}{M_s/\Msun}\frac{a_p}{524},$$ where $M_p$, $M_s$, $M_\Jupiter$, $\Msun$ are, respectively, the masses
---
author:
- 'E. Congiu [^1]'
- 'M. Berton[^2]'
- 'M. Giroletti'
- 'R. Antonucci'
- 'A. Caccianiga'
- 'P. Kharb'
- 'M. L. Lister'
- 'L. Foschini'
- 'S. Ciroi'
- 'V. Cracco'
- 'M. Frezzato'
- 'E. Järvelä'
- 'G. La Mura'
- 'J. L. Richards'
- 'P. Rafanelli'
bibliography:
- './biblio.bib'
title: |
Kiloparsec-scale emission in the\
narrow-line Seyfert 1 galaxy Mrk 783
---
Introduction
============
Narrow-line Seyfert 1 galaxies (NLS1s) are a puzzling class of active galactic nuclei (AGN), first classified by @Osterbrock85 according to the full width at half maximum (FWHM) of their ${\rm H\beta}$ line, ${\rm FWHM(H\beta)} < 2000\,{\rm km\,s^{-1}}$. However, despite the narrowness of ${\rm H\beta}$, their ratio \[O III\]$\lambda5007/{\rm H\beta} < 3$ and the presence of strong multiplets in the optical and UV spectrum indicate that these objects are type 1 AGN.
Radio-quiet[^3] NLS1s (RQNLS1s) constitute $93\%$ of the total population up to redshift $0.8$ [@Komossa06] and $96.5\%$ at $\rm z<0.35$ [@Cracco16]. Radio-loud NLS1s (RLNLS1s) are relatively uncommon. They can be divided into two different classes according to their radio spectrum in the cm range. Flat-spectrum RLNLS1s (F-NLS1s) probably have a relativistic jet pointed toward Earth and can produce $\gamma$-rays [@Abdo09a; @Abdo09b], while steep-spectrum RLNLS1s (S-NLS1s) often show an extended radio morphology and are likely misaligned F-NLS1s.
One of the most interesting possibilities concerning the nature of NLS1s is that they are young and evolving objects [@Mathur00]. In particular, this appears to be true for RLNLS1s: F-NLS1s might be young flat-spectrum radio quasars (FSRQs) with a small black hole mass and S-NLS1s young radio galaxies [@Foschini15; @Berton16c]. However, a preference for low inclination might also play a role [e.g., @Shen14; @Peterson11]. Thus NLS1s are a somewhat heterogeneous group.
S-NLS1s have often been associated with compact steep-spectrum objects [CSS; @Oshlack01; @Komossa06; @Gallo06a; @Yuan08; @Caccianiga14; @Gu15; @Schulz15; @Berton16c; @Caccianiga17], which are usually believed to be young and growing radio galaxies [@Fanti95]. Only a handful of S-NLS1s were investigated in radio [@Whalen06; @Anton08; @Doi12; @Richards15; @Doi15; @Gu15; @Caccianiga17]. RLNLS1s indeed have a lower observed jet power than FSRQs [@Foschini15] because of their low black hole mass [@Heinz03; @Foschini14]. Therefore, while F-NLS1s are relatively easy to find because their luminosity is enhanced by relativistic beaming, S-NLS1s are not as easily detectable.
To study the radio properties of NLS1s, we carried out a survey with the Karl G. Jansky Very Large Array (JVLA) at $5$ GHz in A configuration. Our sample consists of 60 sources drawn from the papers by @Foschini15 and @Berton15a, and it contains radio-quiet (but not radio-silent) NLS1s, F-NLS1s, and S-NLS1s. In this paper we report the detection of extended emission in one S-NLS1, Mrk 783. This source is one of the few NLS1s showing such extended emission at $\rm z<0.1$. In Sect.\[sec:mrk783\] we describe the source according to results published in the literature, in Sect.\[sec:datared\] we describe the data reduction we performed, in Sect.\[sec:results\] we present our results, in Sect.\[sec:discussion\] we discuss them, and, finally, in Sect.\[sec:summary\] we provide a brief summary. Throughout this work, we adopt a standard $\rm \Lambda CDM$ cosmology, with a Hubble constant $H_0 = 70\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_\Lambda = 0.73$ [@Komatsu11]. Spectral indices are specified with flux density $S_{\nu} \propto \nu^{-\alpha}$ at frequency $\nu$.
Mrk 783 {#sec:mrk783}
=======
Mrk 783 (R.A. = $13$h $02$m $58.8$s, Dec. = $+16$d $24$m $27$s) is an NLS1 galaxy first classified by @Osterbrock85, at $z = 0.0672$ [@Hewitt91], with a bolometric AGN luminosity $L_{bol} = 3.3\times10^{44}\,{\rm erg\,s^{-1}}$ [@Berton15a]. Its host galaxy was classified as a lenticular galaxy [@Petrosian07], but the SDSS image clearly shows the presence of a tidal tail, or a spiral arm, extending in the eastern direction.
The mass of the central black hole inferred from the width of the broad ${\rm H\beta}$ component is about $4.3\times10^7$ M$_{\odot}$ [@Berton15a]. ${\rm H\beta}$ shows a prominent red wing in the broad component, indicating a receding outflow with a velocity of $\sim500\,{\rm km\,s^{-1}}$. This broad component is clearly visible in all the permitted lines of the optical spectrum. Conversely, the narrow lines, and particularly \[O III\]$\lambda5007$, do not show any outflowing component and are well reproduced by a single Gaussian profile [@Berton16b].
Mrk783 is a strong X-ray emitter that has been detected by ROSAT [@Schwope00], INTEGRAL [@Krivonos07], and Swift/XRT [@Panessa11]. @Panessa11 reported a luminosity of $9.33\times10^{43}\,{\rm erg\,s^{-1}}$ between $20$ and $100$ keV and a photon index of $1.7\pm0.2$ between $0.3$ and $100$ keV. This is consistent with nonsaturated comptonization, which occurs in the accretion disk corona and not in relativistic jets.
In the last 30 years, the galaxy was observed several times in several radio bands, for example, the WSRT at $1.4$ GHz [@Meurs81], VLA at $5$ GHz [@Ulvestad84; @Ulvestad95], and Green Bank telescope at $1.4$ GHz [@Bicay95]. However, no extended emission was found. Recently, @Doi13 observed the galaxy nucleus with the Very Long Baseline Array (VLBA) looking for extended emission near the core of the AGN. The image only shows a compact core, but the flux density recovered by the authors at $1.7$ GHz is only $4\%$ of the NRAO VLA Sky Survey (NVSS) flux density at $1.4\,$GHz [$S_{\nu}=33.2\,$mJy; @Condon98]. This discrepancy means that the vast majority of the flux emitted by the galaxy is distributed in structures with relatively low brightness temperature, which could not be seen by the instrument. Another hint of the extended emission can be found in the FIRST image of the galaxy [@Becker95]. The source is elongated along position angle (PA) $\ang{131}$ and shows a peak and a total flux density of $18.5$ mJy and $28.72$ mJy, respectively. At low frequencies, the TIFR Giant Metrewave Radio Telescope Sky Survey [TGSS; @Intema17] at $147$ MHz reports a flux density of $89.2\pm10.9$ mJy.
Mrk 783 was classified as moderately radio-loud [@Berton15a] or radio-quiet [@Doi13]. The R parameter is indeed close to $10$. Therefore, a different estimate of the optical magnitude, or optical variability in the source, could have led to two different classifications. This is not uncommon, as has been clearly shown by @Ho01 and @Kharb14. However, the radio emission does not appear to be dominant over the optical emission as in classical radio galaxies.
Data reduction {#sec:datared}
==============
The galaxy was observed on 2015 September 6 with the JVLA at $5$ GHz in A configuration with a bandwidth of $2$ GHz, for a
The recent formation of ultracold bosonic molecules from a Fermi gas of atoms[@regal] by a Feshbach resonance allows for an experimental check of theoretical calculations for physical quantities within the BCS-BEC crossover. In particular, in Ref. the relation between the composite-boson scattering length $a_B$ and the fermionic scattering length $a_F$ was calculated in the strong-coupling limit of the BCS-BEC crossover. The summation therein of all bosonic T-matrix diagrams has led to the result $a_B=0.75 a_F$. This result corrects the value $a_B= 2 a_F$ obtained within the Born approximation for the effective residual bosonic interaction[@Haussmann; @PS-96; @epjb]. The result $a_B=0.75 a_F$ could be tested experimentally in the near future, by measuring at the same time the molecule-molecule scattering length and the fermion-fermion scattering length while scanning the magnetic field through the Feshbach resonance.
In this manuscript, we provide a condensed version of the material published in Ref. , focusing specifically on the calculation of the bosonic scattering length. We hope that this short summary of our previous work could be useful to the scientific community at the present time.
Building blocks of the diagrammatic structure for composite bosons
==================================================================
In this section, we discuss the diagrammatic structure that describes generically the composite bosons in terms of the constituent fermions. Our theory rests on a judicious choice of the fermionic interaction, which (without loss of generality) greatly reduces the number and considerably simplifies the expressions of the Feynman diagrams to be taken into account.
Regularization of the fermionic interaction
-------------------------------------------
We begin by considering the following Hamiltonian for interacting fermions (we set Planck $\hbar$ and Boltzmann $k_{B}$ constants equal to unity throughout): $$\begin{aligned}
& & H = \sum_{\sigma} \int d{\bf r} \, \psi_{\sigma}^{\dagger}({\bf r}) \left(
- \frac{\nabla^2}{2m} - \mu \right) \psi_{\sigma}({\bf r}) \nonumber \\
& & + \frac{1}{2} \sum_{\sigma, \sigma'} \int d{\bf r} \, d{\bf r'}
\psi_{\sigma}^{\dagger}({\bf r}) \psi_{\sigma'}^{\dagger}({\bf r'})
V_{{\mathrm eff}}({\bf r}-{\bf r'})
\psi_{\sigma'}({\bf r'}) \psi_{\sigma}({\bf r}) .
\label{Eq:Hamiltonian}\end{aligned}$$ Here, $\psi_{\sigma}({\bf r})$ is the fermionic field operator with spin projection $\sigma = (\uparrow, \downarrow)$, $m$ the fermionic mass, $\mu$ the fermionic chemical potential, and $V_{{\mathrm eff}}({\bf r}-{\bf r'})$ the [*effective potential*]{} that provides the *attraction* between fermions. For the application to atomic gases, the two spin states correspond to two different hyperfine states of the fermionic atoms.
To simplify the ensuing many-body diagrammatic structure (yet preserving the physical effects we are after), we adopt for $V_{{\mathrm eff}}$ the form of a “contact” potential [@footnote-contact] $$V_{{\mathrm eff}}({\bf r}-{\bf r'}) = v_{0} \,\, \delta ({\bf r}-{\bf r'})
\label{Eq:deltafunc}$$ where $v_{0}$ is a negative constant. With this choice, the interaction affects only fermions with opposite spins in the Hamiltonian (\[Eq:Hamiltonian\]) owing to Pauli principle. A suitable *regularization* of the potential (\[Eq:deltafunc\]) is, however, required to get accurate control of the many-body diagrammatic structure. In particular, the equation (in the center-of-mass frame) $$\frac{m}{4 \pi a_{F}} \, = \, \frac{1}{v_{0}} \, + \, \int \! \frac{d{\bf k}}
{(2 \pi)^{3}} \frac{m}{{\bf k}^{2}} \label{ferm-scatt-ampl}$$ for the *fermionic scattering length* $a_{F}$ associated with the potential (\[Eq:deltafunc\]) is ill-defined, since the integral over the three-dimensional wave vector ${\bf k}$ is ultraviolet divergent. The delta-function potential (\[Eq:deltafunc\]) is then regularized, by introducing an ultraviolet cutoff $k_{0}$ in the integral of Eq. (\[ferm-scatt-ampl\]) and letting $v_{0} \, \rightarrow \, 0$ as $k_{0} \, \rightarrow \, \infty$, in order to keep $a_{F}$ fixed at a *finite* value. The required relation between $v_{0}$ and $k_{0}$ is obtained directly from Eq. (\[ferm-scatt-ampl\]). One finds: $$v_{0} \, = \, - \, \frac{2 \pi^{2}}{m k_{0}} \, - \, \frac{\pi^{3}}{m a_{F} k_{0}^{2}}
\label{vo}$$ when $k_{0} |a_{F}| \gg 1$.
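For the reader’s convenience, the intermediate step behind Eq. (\[vo\]) is the following (our own expansion of Eq. (\[ferm-scatt-ampl\]) with a sharp cutoff $k_0$, valid when $k_{0} |a_{F}| \gg 1$): $$\frac{1}{v_{0}} \, = \, \frac{m}{4 \pi a_{F}} \, - \, \int^{k_{0}} \! \frac{d{\bf k}}{(2 \pi)^{3}} \, \frac{m}{{\bf k}^{2}} \, = \, \frac{m}{4 \pi a_{F}} \, - \, \frac{m k_{0}}{2 \pi^{2}} \, , \qquad
v_{0} \, \simeq \, - \, \frac{2 \pi^{2}}{m k_{0}} \left( 1 \, + \, \frac{\pi}{2 a_{F} k_{0}} \right) \, = \, - \, \frac{2 \pi^{2}}{m k_{0}} \, - \, \frac{\pi^{3}}{m a_{F} k_{0}^{2}} \, .$$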
With the regularization (\[vo\]) for the potential, the classification of the many-body diagrams gets considerably simplified, since only specific sub-structures of these diagrams survive when the limit $k_{0} \, \rightarrow \, \infty$ is eventually taken. In particular, in order to obtain a finite result for a given Feynman diagram, the vanishing strength $v_0$ of the potential should be compensated by an ultraviolet divergence in some internal wave-vector integration. For the particle-particle ladder of Fig. \[fig:pplad\], the internal wave-vector integration associated with each rung diverges in the limit $k_0\to\infty$ and compensates the vanishing of $v_0$, yielding the finite result: $$\begin{aligned}
& &\Gamma_{0}(q) = - \left\{ \frac{m}{4 \pi a_{F}} + \right.
\int \! \frac{d{\bf k}}{(2 \pi)^{3}}\nonumber\\
& &\times\left. \left[\frac{\tanh(\beta \xi({\bf k})/2)
+\tanh(\beta \xi({\bf k-q})/2)}{2(\xi({\bf k})+\xi({\bf k-q})-i
\Omega_{\nu})}
- \frac{m}{{\bf k}^{2}} \right] \right\}^{-1}\; .
\label{p-p ladder}\end{aligned}$$ Here, $\xi({\bf k}) = {\bf k}^{2} /(2m) - \mu$ and $q=({\bf q},\Omega_{\nu})$ is a four-momentum, with wave vector ${\bf q}$ and Matsubara frequency $\Omega_{\nu}$ ($\nu$ integer). In an analogous way, one can show that in the particle-particle channel the contributions of the vertex corrections and of the two-particle effective interactions other than the rung vanish for our choice of the potential.
It is thus evident from these considerations that, with our choice of the fermionic interaction, in the strong-coupling limit the *skeleton structure* of the diagrammatic theory can be constructed only with the particle-particle ladder (\[p-p ladder\]) plus an infinite number of interaction vertices. A careful diagrammatic analysis considered in detail in Ref. then shows that: (a) Bare composite bosons correspond to fermionic particle-particle ladders; (b) The interaction among bare composite bosons is described by 4-point, 6-point vertices, and so on, which correspond to the product of $4,6,\ldots$, fermionic bare Green’s functions (with one internal four-momentum integration). The correspondence rules for the bosonic Green’s function and the 4-point vertex are shown in Fig. 2.
In particular, in the strong-coupling limit (whereby $\beta |\mu| \gg 1$),[@footnote-mu] the particle-particle ladder $\Gamma_0(q)$ has the following *polar structure*:[@Haussmann] $$\Gamma_0(q)\approx-\frac{4 \pi}{m^{2} a_{F}}\frac{ 1 +
\sqrt{1 + \left(-i\Omega_{\nu} +
\frac{{\bf q}^{2}}{4 m} - \mu_{B}\right)\epsilon_{0}^{-1}}}
{i\Omega_{\nu} - \left(\frac{{\bf q}^{2}}{4m} - \mu_{B}\right)}
\label{pp-sc}$$ where we have used the definition $\mu_{B} = 2\mu + \epsilon_{0}$ for the bosonic chemical potential ($\epsilon_0=1/(m a_F^2)$ being the bound-state energy of the fermionic two-body problem). Note that (apart from the residue being different from unity) the expression (\[pp-sc\]) resembles a “free” boson propagator with mass $2 m$. The (four-point) *effective two-boson interaction* reads instead $$\begin{aligned}
& &\tilde{u}_{2}(q_{1} \dots q_{4}) \, = \, \delta_{q_1+q_2
---
abstract: 'We measure the angular power spectrum of the WMAP first-year temperature anisotropy maps. We use SpICE (Spatially Inhomogeneous Correlation Estimator) to estimate $C_\ell$’s for multipoles $\ell=2-900$ from all possible cross-correlation channels. Except for the map-making stage, our measurements provide an independent analysis of that by [@HinshawEtal2003a]. Despite the different methods used, there is virtually no difference between the two measurements for $\ell \simlt 700$; the highest $\ell$’s are still compatible within $1-\sigma$ errors. We use a novel [*intra-bin variance*]{} method to constrain $C_\ell$ errors in a model independent way. Simulations show that our implementation of the technique is unbiased within 1% for $\ell \simgt 100$. When applied to WMAP data, the intra-bin variance estimator yields diagonal errors $\sim 10\%$ larger than those reported by the WMAP team for $100 < \ell < 450$. This translates into a 2.4 $\sigma$ detection of systematics, since no difference is expected between the SpICE and the WMAP team estimator window functions in this multipole range. With our measurement of the $C_{\ell}$’s and errors, we get $\chi^2/d.o.f. = 1.042$ for the best-fit model, which has a 14% probability, whereas the WMAP team [@SpergelEtal2003] obtained $\chi^2/d.o.f. = 1.066$, which has a 5% probability. We assess the impact of our results on cosmological parameters using Markov Chain Monte Carlo simulations. From WMAP data alone, assuming spatially flat power law models, we obtain the reionization optical depth $\tau = 0.145 \pm 0.067$, spectral index $n_s = 0.99 \pm 0.04$, Hubble constant $h = 0.67 \pm 0.05$, baryon density $\Omega_b h^2 = 0.0218 \pm 0.0014$, cold dark matter density $\Omega_{cdm} h^2 = 0.122 \pm 0.018$, and $\sigma_8 = 0.92 \pm 0.12$, consistent with a reionization redshift $z_{re} = 16 \pm 5$ (68 $\%$ CL).'
author:
- 'Pablo Fosalba, István Szapudi'
title: 'The Angular Power Spectrum of the First-Year WMAP Data Reanalysed'
---
Introduction
============
The [*Wilkinson Microwave Anisotropy Probe*]{} satellite (WMAP) has provided the clearest view of the primordial universe to date. Its unprecedented sensitivity and spatial resolution resulted in a unique set of cosmic microwave background (CMB) radiation maps with close to full sky coverage and uniformly high quality. As a result, fundamental cosmological parameters can be constrained to the highest precision ever. Thorough analysis of this dataset [@BennettEtal2003a] yielded a cosmic variance limited measurement of the angular power spectrum, $C_\ell$’s, of the CMB temperature anisotropy for multipoles $\ell \simlt 350$ ([@HinshawEtal2003a]; hereafter H03). This confirmed and improved measurements from previous experiments ([@MillerEtal1999; @deBernardisEtal2000; @HananyEtal2000; @Halverson2002; @MasonEtal2003; @ScottEtal2003; @BenoitEtal2003]). The acoustic peak structure revealed by the WMAP temperature and polarization power spectra provided strong observational support for inflation and constrained viable cosmological scenarios to the domain of flat models and their close variants.
Considering the importance of these results, our principal aim is to estimate the angular power spectrum in a completely independent way in the full range of multipoles probed by WMAP, $2 \le \ell \le 900$, and to systematically compare the results to H03. Our $C_{\ell}$ estimation pipeline is based on SpICE [^1] [Spatially Inhomogeneous Correlation Estimator; @SzapudiEtal2001a; @SzapudiEtal2001b], a quadratic estimator based on correlation functions. SpICE performs edge corrections and heuristic minimum variance weighting in pixel [^2] space to produce nearly optimal results. Our fast HEALPix [^3] implementation of SpICE scales as ${\rm {\cal O} (N^{3/2})}$ (${\rm N}$ is the number of pixels).
Power Spectrum Estimation {#sec:ps}
=========================
Our estimation methodology closely follows that of H03, but adapted to our technique:
[*Step 1:*]{} We use the [*foreground cleaned intensity maps*]{} for the 3 highest frequency bands Q, V & W downloaded from the LAMBDA website [^4]. Strong diffuse Galactic emission and resolved point sources are masked out using the Kp0 and Kp2 masks, which leave $76.8\%$ and $85.0\%$ of the sky useful for cosmological analyses, respectively. The monopole $\ell=0$ and dipole $\ell=1$ terms are also removed from the non-masked pixels.
[*Step 2:*]{} Power-spectrum estimation is performed via SpICE: we compute the cross-correlations from 28 different pairs of channels constructed from the 8 “differencing assemblies” (DAs) Q1 through W4. Noise correlation among different channels is negligible, therefore our cross-power estimator is unbiased with respect to the noise (see H03). Like H03, we implement a heuristic $\ell$-dependent pixel noise weighting scheme that minimizes errors: we use flat weights (mask weight only) for $\ell < 200$, inverse pixel noise variance for $\ell > 450$, and a transitional inverse rms noise weight in the intermediate range $200 < \ell < 450$.
[*Step 3:*]{} A model for the power spectrum for unresolved extragalactic radio sources is subtracted from the cross-power spectrum of each channel. We implement the model given in §3.1 of H03.
[*Step 4:*]{} $C_\ell$’s from different channels are optimally combined using an inverse noise weighting, with DA sensitivities as described on the LAMBDA website (see the illustrative sketch after [*Step 5*]{}). All channels are included, except for those in the Q band, which are only used in the intermediate $\ell$-range. This helps minimize Galactic contamination at low $\ell$ and the effect of the window function cut-off at the highest multipoles.
[*Step 5:*]{} Our quadratic estimator is defined in pixel space, where mask effects can be easily corrected for [cf. @SzapudiEtal2001a]. The two point correlation function is then transformed into harmonic space via Gauss-Legendre quadrature to obtain the $C_{\ell}$’s deconvolved from the window function of the experiment. Symmetrized non-Gaussian beam transfer profiles [@PageEtal2003] and pixel window functions are corrected for in $\ell$-space.
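To illustrate the inverse-noise combination in [*Step 4*]{}, here is a minimal sketch; the per-DA noise levels, pair weights, and toy spectra are hypothetical stand-ins rather than the actual WMAP sensitivities:

```python
import numpy as np

def combine_cross_spectra(cls, sigma):
    """Inverse-noise-weighted combination of cross-power spectra (cf. Step 4)."""
    pairs = list(cls)
    w = np.array([1.0 / (sigma[i] ** 2 * sigma[j] ** 2) for i, j in pairs])
    w /= w.sum()
    return sum(wk * cls[p] for wk, p in zip(w, pairs))

# Hypothetical stand-ins: 4 differencing assemblies, multipoles up to 900.
rng = np.random.default_rng(0)
sigma = {i: 1.0 + 0.1 * i for i in range(4)}            # per-DA noise levels
cls = {(i, j): rng.normal(1000.0, 50.0, 901)            # toy cross-spectra
       for i in range(4) for j in range(i + 1, 4)}
cl_comb = combine_cross_spectra(cls, sigma)
```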
Principal Results {#sec:res}
=================
Figure \[fig:cls\] shows the angular power spectrum of WMAP, $\Delta T^2_{\ell} \equiv \ell(\ell+1)C_{\ell}/2 \pi$, in $\mu$K$^2$ units, measured with SpICE. The upper panel shows the power spectrum for individual multipoles, using the Kp2 sky cut. Our measurement (red line) is in excellent agreement with H03 (black line), multipole by multipole. In particular, for the quadrupole and octopole we find $\Delta T^2_{2} \sim 135 \mu$K$^2$ and $\Delta T^2_{3} \sim 591 \mu$K$^2$, respectively (H03 get $\sim 123\mu$K$^2$ and $\sim 612 \mu$K$^2$). For the highest ${\ell}$’s we find slightly different amplitudes from H03, but consistent at the 1-$\sigma$ level.
For the most part, we observe no systematic dependence of the measured $C_{\ell}$’s on the sky cut (see the difference between the red and blue lines in the bottom panel of Figure \[fig:cls\]). However, using Kp0 instead of Kp2 yields a $15\%$ lower amplitude of the octopole $\ell=3$ and $15-20\%$ smaller amplitudes for the 3 highest band-powers centered at $\ell_{\rm eff} \sim 660, 750, 850$. This effect might be due to imperfect foreground removal and/or the intrinsic estimator variance due to finite volume and edge effects. We estimated the dispersion in a set of WMAP simulations with Kp0 & Kp2 sky cuts to be of the same order as the measured differences in the $C_{\ell}$’s of the data. On the other hand, the cross-correlation amplitude between the clean WMAP maps and the best fit foreground templates is at the 5% and 10% level of the WMAP $C_{\ell}$’s for the lowest and highest $\ell$’s, respectively. We thus conclude that sample variance due to sky coverage can account for most of the observed difference in the $C_\ell$’s, while residual foreground contamination is always subdominant. The low level of systematics in Kp2, and the increased statistical errors due to the decreased sky fraction left by Kp0, motivate us to adopt Kp2 (as in H03) for the best estimate of the $C_{\ell}$’s.
Error Estimation {#sec:errors}
================
In order to estimate the covariance of our $C_\ell$’s, we generated MC simulations of the CMB sky and instrument noise
---
abstract: 'We propose a model of soft CP violation in which the CP-violating mechanism naturally lies only in the charged Higgs sector. The charged Higgs mechanism not only accounts for the measured value of the CP-violating parameter $\epsilon$ but also accommodates the current limits on $\epsilon'/\epsilon$. Our model naturally prevents tree-level Flavor-Changing Neutral Currents (FCNCs) of any kind. Unlike the Weinberg-Branco Three-Higgs Doublet Model, the deviation from the Standard Model rate for $b\to s\gamma$ is small. Furthermore, leading contributions to the electron (neutron) electric dipole moment are non-zero beginning at the three (two) loop level. Surprisingly similar to the Standard Kobayashi-Maskawa Model, our model is of milliweak character but with seemingly superweak phenomenology.'
---
**A Simple Charged Higgs Model of Soft CP Violation**
**without Flavor Changing Neutral Currents**
David Bowser-Chao$^{(1)}$, Darwin Chang$^{(2,3)}$, and Wai-Yee Keung$^{(1)}$
*$^{(1)}$Physics Department, University of Illinois at Chicago, IL 60607-7059, USA\
$^{(2)}$Physics Department, National Tsing-Hua University, Hsinchu 30043, Taiwan, R.O.C.\
$^{(3)}$Institute of Physics, Academia Sinica, Taipei, R.O.C.\
*
Submitted to [*Physical Review Letters*]{}
PACS numbers: 11.30.Er, 14.80.Er
Introduction {#introduction .unnumbered}
============
Three decades after its surprising discovery in the kaon system[@ccft], CP violation has remained mysterious. A desire for deeper insight into its origin is the driving force behind many ongoing experiments and even the construction of new machines such as the two B Factories. While a profound understanding may yet be lacking, several mechanisms have been suggested to explain the observed CP violation (i.e., $\epsilon \ne 0$) within a gauge field theory. Kobayashi and Maskawa (KM)[@km] proposed a third generation of fermions, so that CP violation arises from the mixing of the three quark generations and is manifested by a single phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix. Since then, many other mechanisms have been put forth, including new gauge interactions[@gauge], neutral Higgs exchange[@neutral], supersymmetric partners[@susy], and charged Higgs exchange[@weinberg; @branco]. However, the KM model has the distinguishing feature that its mechanism is of milliweak strength, though its phenomenology is manifestly superweak[@superweak], consistent with current CP-related data. Such an intricate character has also been the driving force behind the desire to find non-superweak CP violation in the $B$ systems.
The leading model for the charged Higgs mechanism of CP violation has long been the Weinberg Three-Doublet Model of CP violation[@weinberg], which became even more intriguing after Branco[@branco] proposed a version in which CP violation is softly or spontaneously broken. This scheme naturally avoids tree-level flavor changing neutral currents. Without hard CP violation the CKM matrix is purely real (the KM mechanism is inoperative); CP violation in the kaon system instead results from charged Higgs exchange. Many weaknesses of the Weinberg-Branco Model, however, have since been identified. Sanda and Deshpande pointed out[@sanda] that short distance contributions to $\epsilon$, if dominant, would lead to a larger $\epsilon'/\epsilon$ than experimentally allowed, although it was subsequently demonstrated that long distance contributions to $\epsilon$ could be large enough to avoid this difficulty[@chang]. More recently, however, it has become clear that this model has other problems. A charged Higgs light enough to account for the observed $\epsilon$ has already been excluded by the LEP experiments[@edm]. The large neutron electric dipole moment[@edm] (EDM) and substantial rate for $b \rightarrow s
\gamma$[@bsg] predicted are also contradicted by data, leading several authors[@edm; @bsg] to rule out this model.
As an illustrative model for charged Higgs CP violation, the Weinberg-Branco Model also has the shortcoming that its neutral Higgs sector naturally also contains CP violation, which is usually ignored in the literature to simplify analysis and highlight the charged Higgs mechanism. However, for flavor conserving CP odd observables (e.g., the neutron EDM), the neutral Higgs contribution generically can be competitive with that from charged Higgs exchange.
In this letter we propose an alternative model that may serve as a generic example in which the charged Higgs mechanism of CP violation naturally dominates completely over other mechanisms. CP is broken softly or spontaneously so that the KM mechanism is inoperative. Tree-level flavor changing neutral currents are automatically absent, and the neutral Higgs sector is CP conserving at tree level. As in the KM Model, the quark and electron EDMs are severely suppressed. The electron EDM vanishes at the two-loop level, while the first non-zero contribution to the quark EDMs is at two loops. In contrast to the Weinberg-Branco model, our model easily satisfies other experimental CP violation constraints as well as the rate for $b\to s \gamma$. Finally, the parameter $\theta_{\rm{QCD}}$ vanishes at tree-level, since we disallow hard CP breaking; we shall see that radiative corrections are mild and consistent with the limit on a non-zero $\theta_{\rm{QCD}}$.
For most of this letter, we shall assume that CP is broken softly. One can also modify our model to break CP spontaneously by introducing at least one additional CP odd scalar boson, as discussed toward the end of this work, with the bulk of the phenomenology unchanged.
General Formalism {#general-formalism .unnumbered}
=================
The Weinberg-Branco Model augments the Standard Model (SM) with additional Higgs $SU(2)_L$ doublets, which are responsible for kaon system CP violation; in this model, then, since the charged Higgs sector must break CP, so also must the neutral Higgs sector. To mandate charged Higgs exchange as the dominant CP violation mechanism we instead introduce only additional $SU(2)_L$ singlets of quarks and scalars to the theory. The simplest model for our purposes requires two additional charged Higgs singlets, $h_\alpha (\alpha=1,2)$ and a vectorial pair of heavy quark fields, $Q_{L,R}$, of electromagnetic charge $-{4\over3}$. This vector quark charge assignment avoids fractionally charged hadrons. Relevant new terms in the Lagrangian are: $${\cal L}_{h_i} =
\left[
(g \lambda_{i\alpha} \bar Q_L d_{iR} h_\alpha
+ M_Q \bar Q_L Q_R) + \hbox{h.c.} \right]
- (m^2)_{\alpha\beta} {h_\alpha}^{\dag} h_\beta
- \kappa_{\alpha\beta}
(\phi^{\dag} \phi-|\langle\phi\rangle|^2) \,h_\alpha^{\dag}
h_\beta \;\;\label{eq:lagrangian}$$ where $\phi$ is the Standard Model Higgs doublet, and $i$ is summed over the down quark flavors ($i=d,s,b$). The vector quark has purely vectorial coupling to the photon and $Z$ boson, with respective charges $(Q_Q, -Q_Q \sin^2\theta_W)$, while the charged Higgs couples with charges $(Q_h, -Q_h \sin^2\theta_W)$ and $Q_Q = Q_d + Q_h$. The neutral Higgs sector is identical to that in the Standard Model, with neither flavor changing couplings nor CP violation. The matrices $m^2$ and $\kappa$ are hermitian. Except for the discussion at the end, we assume that CP is broken softly in this Lagrangian, implying a special basis where all the Yukawa ($\lambda, \kappa$) and the SM couplings are real. We also require (see below) that dim-3 couplings, namely $M_Q$, are also real. This leaves, as in the KM model, only a single CP violating parameter: Im$(m^2)_{12}$. We can diagonalize $(m^2)_{\alpha\beta}$ by a unitary matrix $U_{\alpha i}$ which in general is complex: $h_\alpha =
U_{\alpha i} H_i$, with $H_i$ the mass eigenstates. The quark-Higgs interaction in the mass eigenstate basis is $${\cal L}_{QqH}=g\sum_{q=d,s,b}\xi_{qj}
(\bar Q_L q_R)H^-_j \ +\ \hbox{h.c.}
\ ,
\label{eq:QqH}$$ with $\xi_{qj} \equiv \lambda_{q\alpha} U_{\alpha j}$. The CP-violating transit propagators[@weinberg] can be expressed as $
\langle h_\alpha^{\dag} h_\beta \rangle
= \sum_{i,j=1,2} U_{i \alpha }^{\dag} U_{\beta j}
\langle H_i^{\dag} H_j
---
abstract: |
In the heavy quark limit and with the hierarchy approximation $\Lambda_{QCD}\ll m_D\ll m_B$, we analyze the $B\to D^0\overline D^0$ and $B_s\to D^0\overline D^0$ decays, which occur purely via annihilation type diagrams. As a rough estimate, we calculate their branching ratios and CP asymmetries in the perturbative QCD approach. The branching ratio of $B\to D^0\overline D^0$ is about $3.8\times10^{-5}$, which is just below the latest experimental upper limit. The branching ratio of $B_s\to D^0\overline D^0$ is about $6.8\times10^{-4}$, which could be measured in LHC-b. From the calculation, we find that this branching ratio is not sensitive to the weak phase angle $\gamma$. In these two decay modes, there exist CP asymmetries due to the interference between the weak and strong interactions. However, these asymmetries are too small to be measured easily.
author:
- |
Ying Li[^1] and Juan Hua\
[*Physics Department, Yantai University, Yantai 264005, China*]{}
title: 'Study of Pure Annihilation Decays $B_{d,s} \to D^{0} \overline D^{0}$ '
---
Introduction {#sc:intro}
============
In the Standard Model (SM), CP violation (CPV) arises from a complex phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix, and the angles of the unitarity triangle are defined as [@Yao:2006px]: $$\begin{aligned}
\beta=\arg \Bigl[-\frac{V_{cb}^*V_{cd}}{V_{tb}^*V_{td}}\Bigl],~~~~
\alpha=\arg \Bigl[-\frac{V_{tb}^*V_{td}}{V_{ub}^*V_{ud}}\Bigl],~~~~
\gamma=\arg
\Bigl[-\frac{V_{ub}^*V_{ud}}{V_{cb}^*V_{cd}}\Bigl].\label{ckmangle}\end{aligned}$$ In order to test the SM and search for new physics, many measurements of CP-violating observables can be used to constrain the above angles. It is well known that $\beta$ is measured precisely using the golden decay mode $B\to J/\psi K_s$; the angle $\alpha$ can be determined with the decay $B\to\pi\pi$, and $\gamma$ could be measured precisely at the Large Hadron Collider (LHC) with the decay mode $B_s\to D_sK$.
![The quark-level Feynman diagrams for the $B_d \to D^{0} \bar D^{0}$ process []{data-label="fig0"}](fig1.eps)
Besides the channels mentioned above, many other channels are used to cross-check the measurements. Among these, the $B\to DD$ decays can be used to test the $\beta$ measurement. For $B\to DD$ decays, analyses based on $SU(3)$ symmetry [@Savage:1989ub], isospin symmetry [@Xing:1999yx] and the factorization approach [@Xing:1998ca] have been performed in the last several years. However, the calculation of the decay $B^0 \to D^0\overline D^0$ is difficult. It is a pure annihilation decay, also called a $W$-exchange decay, which is power suppressed in the factorization language. The quark diagrams of this decay are shown in Figure \[fig0\]. Theoretically, the QCD factorization approach (QCDF) [@Beneke:1999br] and soft-collinear effective theory (SCET) [@Bauer:2001yt] cannot effectively deal with decays into two heavy charmed mesons. In Refs. [@Keum:2003js; @Lu:2003xc], perturbative QCD (PQCD) has been applied to $B$ meson decays with one charmed meson in the final state, and the results agree well with experimental data. In particular, pure annihilation type $B$ decays with charmed mesons were studied in Ref. [@Lu:2003xc].
In the standard model picture, the $W$ boson exchange induces $\bar{b}d \to \bar{c}c$, and the $\bar{u}u$ pair is produced from a gluon. This gluon attaches to any one of the quarks participating in the $W$ boson exchange. In the decay $B \to D^{0}\overline D^{0}$, the momentum of the final state $D$ meson is $\frac{1}{2} m_B (1-2 r^2)$, with $r=m_D/m_B$. If we consider the heavy quark limit and the hierarchy approximation $\Lambda_{QCD}\ll m_D\ll m_B$, the $D$ meson momentum is nearly $m_B/2$. According to the distribution amplitude used in Ref. [@Keum:2003js], the light quark in the $D$ meson carries nearly $40\%$ of the $D$ meson momentum. Thus, this light quark is still a collinear quark with about 1 $\mathrm{GeV}$ energy, as in $B\to DM$ [@Keum:2003js; @Lu:2003xc] and $B\to K(\pi)\pi$ [@kls; @luy] decays. The gluon can be viewed approximately as a hard gluon, so we can treat the process perturbatively, with the four-quark operator exchanging a hard gluon with the $u \bar u$ quark pair. Of course, we can also calculate the diagrams in which the charm and up quarks are exchanged. As a rough estimate, we give the branching ratios and CP violation of $B_{d,s} \to D^{0} \overline D^{0}$.
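For orientation, the kinematic numbers quoted above can be reproduced with a few lines of code; the sketch below assumes approximate PDG-like meson masses, which are illustrative inputs and not values taken from this paper.

```python
# A minimal numerical sketch of the kinematics quoted above, assuming
# illustrative meson masses m_B ~ 5.28 GeV and m_D ~ 1.87 GeV (approximate
# PDG-like values, not parameters of this paper).

m_B = 5.28   # GeV, B meson mass (assumed)
m_D = 1.87   # GeV, D meson mass (assumed)

r = m_D / m_B                          # mass ratio r = m_D / m_B
p_D = 0.5 * m_B * (1.0 - 2.0 * r**2)   # final-state D momentum, (1/2) m_B (1 - 2 r^2)

# fraction of the D meson momentum carried by the light quark (~40%,
# as quoted above from the distribution amplitude of Ref. [Keum:2003js])
x_light = 0.40
E_light = x_light * p_D

print(f"r = {r:.3f}")
print(f"|p_D|   = {p_D:.2f} GeV  (compare with m_B/2 = {0.5*m_B:.2f} GeV)")
print(f"E_light ~ {E_light:.2f} GeV  (the collinear light quark with ~1 GeV energy)")
```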
In this article, the analytic formulas for the decay amplitudes are given in the next section. In section \[sc:result\] we present the numerical results, and we summarize in section \[sc:summary\].
Analytic formulas {#sc:analy}
=================
For simplicity, we set the $B$ meson at rest in our calculation. In light-cone coordinates, the momenta of the $B$, $D^0$ and $\overline D^0$ mesons are: $$\begin{aligned}
P_B=\frac{M_B}{\sqrt{2}}(1,1,\vec{0});
P_2=\frac{M_B}{\sqrt{2}}(1-r^2,r^2,\vec{0});
P_3=\frac{M_B}{\sqrt{2}}(r^2,1-r^2,\vec{0}).\end{aligned}$$ We define the light (anti-)quark momenta in the $B$, $D^0$ and $\overline D^0$ mesons as $k_1$, $k_2$, and $k_3$: $$k_1 =
(x_1P_1^+,0,{\bf k}_{1T}),\ \ k_2 = (x_2 P_2^+,0,{\bf k}_{2T}),\ \
k_3 = (0, x_3 P_3^-,{\bf k}_{3T}). \label{eq:momentun2}$$
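As a quick consistency check of these light-cone momenta, the following sketch (with illustrative masses, not values fixed by this paper) verifies momentum conservation and the invariant masses implied by the parametrization above.

```python
# A small numerical check of the light-cone kinematics written above,
# assuming illustrative masses m_B ~ 5.28 GeV, m_D ~ 1.87 GeV (not from
# this paper).  Light-cone vectors are stored as (P^+, P^-, P_T) and the
# invariant mass squared is P^2 = 2 P^+ P^- - P_T^2.

import numpy as np

m_B, m_D = 5.28, 1.87          # GeV (assumed)
r = m_D / m_B

P_B = np.array([m_B / np.sqrt(2), m_B / np.sqrt(2), 0.0])
P_2 = np.array([m_B / np.sqrt(2) * (1 - r**2), m_B / np.sqrt(2) * r**2, 0.0])
P_3 = np.array([m_B / np.sqrt(2) * r**2, m_B / np.sqrt(2) * (1 - r**2), 0.0])

def mass2(P):
    """Invariant mass squared of a light-cone vector (P^+, P^-, P_T)."""
    return 2.0 * P[0] * P[1] - P[2] ** 2

print("P_2 + P_3 == P_B :", np.allclose(P_2 + P_3, P_B))   # momentum conservation
print("P_B^2 =", mass2(P_B), " vs m_B^2 =", m_B**2)
print("P_2^2 =", mass2(P_2), " vs m_D^2 (1 - r^2) =", m_D**2 * (1 - r**2))
```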
In PQCD, we factorize the decay amplitude into soft($\Phi$), hard($H$), and harder ($C$) dynamics characterized by different scales, [@kls; @luy] $$\begin{gathered}
\mathcal{A}\sim
\int\!\! d x_1 d x_2 d x_3 b_1 d b_1 b_2 d b_2 b_3
d b_3 \mathrm{Tr} \Bigl[ C(t) \Phi_B(x_1,b_1) \Phi_{D}(x_2,b_2)
\Phi_D(x_3, b_3) H(x_i, b_i,t) S_t(x_i)\, e^{-S(t)} \Bigr].
\label{eq:convolution2}\end{gathered}$$ In the above equation, $b_i$ is the conjugate space coordinate of the transverse momentum ${\bf k}_{iT}$, and $t$ is the largest energy scale. $C$ is the Wilson coefficient, and $\Phi$ is the wave function. The last term, $e^{-S(t)}$, contains two kinds of contributions: one is due to the resummation of the large double logarithms $\ln tb$ from the renormalization of the ultraviolet divergence, the other is the resummation of the double logarithm $\ln^2 b$ arising from the overlap of collinear and soft gluon corrections, which is called the Sudakov form factor. The hard part $H$ can be calculated perturbatively, and it is channel dependent. More explanation of the above formula and reviews of PQCD can be found in many references, such as [@kls; @luy; @Ali:2007ff].
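To make the structure of Eq. (\[eq:convolution2\]) concrete, the purely schematic sketch below performs the nested convolution over the momentum fractions $x_i$ and impact parameters $b_i$ with placeholder inputs; the functions `toy_wavefunction`, `toy_hard_kernel`, `toy_sudakov` and the constant `C_wilson` are hypothetical stand-ins and are not the actual PQCD wave functions, hard kernel, or Wilson coefficients.

```python
# Schematic sketch of the factorization structure of Eq. (eq:convolution2).
# All functions below are hypothetical placeholders; only the nesting of the
# convolution over x_i in [0,1] and b_i in [0, b_max] is illustrated.

import numpy as np

rng = np.random.default_rng(0)

def toy_wavefunction(x, b):          # placeholder for Phi_B or Phi_D
    return 6.0 * x * (1.0 - x) * np.exp(-b)

def toy_hard_kernel(x1, x2, x3, b1, b2, b3):   # placeholder for H(x_i, b_i, t)
    return 1.0 / (x1 * x2 + x2 * x3 + 0.1)

def toy_sudakov(b1, b2, b3):         # placeholder for the exp(-S(t)) suppression
    return np.exp(-(b1 + b2 + b3))

C_wilson = 1.0                        # placeholder Wilson coefficient C(t)

def amplitude(n_samples=200_000, b_max=5.0):
    """Monte Carlo estimate of the toy convolution integral."""
    x = rng.uniform(0.0, 1.0, size=(n_samples, 3))
    b = rng.uniform(0.0, b_max, size=(n_samples, 3))
    integrand = (C_wilson
                 * toy_wavefunction(x[:, 0], b[:, 0])
                 * toy_wavefunction(x[:, 1], b[:, 1])
                 * toy_wavefunction(x[:, 2], b[:, 2])
                 * toy_hard_kernel(*x.T, *b.T)
                 * toy_sudakov(*b.T))
    volume = b_max ** 3               # integration volume of the b_i block
    return volume * integrand.mean()

print("toy amplitude ~", amplitude())
```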
The $B$ meson wave function is not well defined, nor is that of the $D$ meson. In the heavy quark limit, we take them as: $$\Phi_{B}(x,b) = \frac{i}{\sqrt{6}}
\left[ \not \! P + M_B \right] \gamma_5\phi_B(x,b),$$ $$\Phi_{D}(x,b) = \frac{i
---
abstract: 'We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the “Asimov data set”, which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.'
---
[Asymptotic formulae for likelihood-based tests of new physics]{}
Glen Cowan$^1$, Kyle Cranmer$^2$, Eilam Gross$^3$, Ofer Vitells$^3$
$^1$ Physics Department, Royal Holloway, University of London, Egham, TW20 0EX, U.K.\
$^2$ Physics Department, New York University, New York, NY 10003, U.S.A.\
$^3$ Weizmann Institute of Science, Rehovot 76100, Israel
Keywords: systematic uncertainties, profile likelihood, hypothesis test, confidence interval, frequentist methods, asymptotic methods
Introduction {#sec:intro}
============
In particle physics experiments one often searches for processes that have been predicted but not yet seen, such as production of a Higgs boson. The statistical significance of an observed signal can be quantified by means of a $p$-value or its equivalent Gaussian significance (discussed below). It is useful to characterize the sensitivity of an experiment by reporting the expected (e.g., mean or median) significance that one would obtain for a variety of signal hypotheses.
Finding both the significance for a specific data set and the expected significance can involve Monte Carlo calculations that are computationally expensive. In this paper we investigate approximate methods based on results due to Wilks [@Wilks] and Wald [@Wald] by which one can obtain both the significance for given data as well as the full sampling distribution of the significance under the hypothesis of different signal models, all without recourse to Monte Carlo. In this way one can find, for example, the median significance and also a measure of how much one would expect this to vary as a result of statistical fluctuations in the data.
A useful element of the method involves estimation of the median significance by replacing the ensemble of simulated data sets by a single representative one, referred to here as the “Asimov” data set.[^1] In the past, this method has been used and justified intuitively (e.g., [@quast; @CSC]). Here we provide a formal mathematical justification for the method, explore its limitations, and point out several additional aspects of its use.
The present paper extends what was shown in Ref. [@CSC] by giving more accurate formulas for exclusion significance and also by providing a quantitative measure of the statistical fluctuations in discovery significance and exclusion limits. For completeness some of the background material from [@CSC] is summarized here.
In Sec. \[sec:formalism\] the formalism of a search as a statistical test is outlined and the concepts of statistical significance and sensitivity are given precise definitions. Several test statistics based on the profile likelihood ratio are defined.
In Sec. \[sec:qdist\], we use the approximations due to Wilks and Wald to find the sampling distributions of the test statistics and from these find $p$-values and related quantities for a given data sample. In Sec. \[sec:sensitivity\] we discuss how to determine the median significance that one would obtain for an assumed signal strength. Several example applications are shown in Sec. \[sec:examples\], and numerical implementation of the methods in the RooStats package is described in Sec. \[sec:roostats\]. Conclusions are given in Sec. \[sec:conclusions\].
Formalism of a search as a statistical test {#sec:formalism}
===========================================
In this section we outline the general procedure used to search for a new phenomenon in the context of a frequentist statistical test. For purposes of discovering a new signal process, one defines the null hypothesis, $H_0$, as describing only known processes, here designated as background. This is to be tested against the alternative $H_1$, which includes both background as well as the sought after signal. When setting limits, the model with signal plus background plays the role of $H_0$, which is tested against the background-only hypothesis, $H_1$.
To summarize the outcome of such a search one quantifies the level of agreement of the observed data with a given hypothesis $H$ by computing a $p$-value, i.e., a probability, under assumption of $H$, of finding data of equal or greater incompatibility with the predictions of $H$. The measure of incompatibility can be based, for example, on the number of events found in designated regions of certain distributions or on the corresponding likelihood ratio for signal and background. One can regard the hypothesis as excluded if its $p$-value is observed below a specified threshold.
In particle physics one usually converts the $p$-value into an equivalent significance, $Z$, defined such that a Gaussian distributed variable found $Z$ standard deviations above[^2] its mean has an upper-tail probability equal to $p$. That is,
$$\label{eq:significance}
Z = \Phi^{-1}(1-p) \,,$$
where $\Phi^{-1}$ is the quantile (inverse of the cumulative distribution) of the standard Gaussian. For a signal process such as the Higgs boson, the particle physics community has tended to regard rejection of the background hypothesis with a significance of at least $Z=5$ as an appropriate level to constitute a discovery. This corresponds to $p = 2.87 \times 10^{-7}$. For purposes of excluding a signal hypothesis, a threshold $p$-value of 0.05 (i.e., 95% confidence level) is often used, which corresponds to $Z = 1.64$. It should be emphasized that in an actual scientific context, rejecting the background-only hypothesis in a statistical sense is only part of discovering a new phenomenon. One’s degree of belief that a new process is present will depend in general on other factors as well, such as the plausibility of the new signal hypothesis and the degree to which it can describe the data. Here, however, we only consider the task of determining the $p$-value of the background-only hypothesis; if it is found below a specified threshold, we regard this as “discovery”.
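For reference, the conversion in Eq. (\[eq:significance\]) is straightforward to evaluate numerically; the minimal sketch below, using the standard normal quantile from `scipy`, reproduces the two thresholds quoted above.

```python
# A quick numerical check of Eq. (eq:significance), Z = Phi^{-1}(1 - p).

from scipy.stats import norm

def p_to_Z(p):
    """Convert a p-value to the equivalent Gaussian significance Z."""
    return norm.isf(p)          # isf(p) = Phi^{-1}(1 - p)

def Z_to_p(Z):
    """Convert a significance Z back to the upper-tail p-value."""
    return norm.sf(Z)           # sf(Z) = 1 - Phi(Z)

print(Z_to_p(5.0))    # ~2.87e-7, the Z = 5 discovery threshold
print(p_to_Z(0.05))   # ~1.64, the 95% CL exclusion threshold
```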
It is often useful to quantify the sensitivity of an experiment by reporting the expected significance one would obtain with a given measurement under the assumption of various hypotheses. For example, the sensitivity to discovery of a given signal process $H_1$ could be characterized by the expectation value, under the assumption of $H_1$, of the value of $Z$ obtained from a test of $H_0$. This would not be the same as the $Z$ obtained using Eq. (\[eq:significance\]) with the expectation of the $p$-value, however, because the relation between $Z$ and $p$ is nonlinear. The median $Z$ and $p$ will, however, satisfy Eq. (\[eq:significance\]) because this is a monotonic relation. Therefore in the following we will take the term ‘expected significance’ always to refer to the median.
A widely used procedure to establish discovery (or exclusion) in particle physics is based on a frequentist significance test using a likelihood ratio as a test statistic. In addition to parameters of interest such as the rate (cross section) of the signal process, the signal and background models will contain in general [*nuisance parameters*]{} whose values are not taken as known [*a priori*]{} but rather must be fitted from the data.
It is assumed that the parametric model is sufficiently flexible so that for some value of the parameters it can be regarded as true. The additional flexibility introduced to parametrize systematic effects results, as it should, in a loss in sensitivity. To the degree that the model is not able to reflect the truth accurately, an additional systematic uncertainty will be present that is not quantified by the statistical method presented here.
To illustrate the use of the profile likelihood ratio, consider an experiment where for each selected event one measures the values of certain kinematic variables, and thus the resulting data can be represented as one or more histograms. Using the method in an unbinned analysis is a straightforward extension.
Suppose for each event in the signal sample one measures a variable $x$ and uses these values to construct a histogram $\vec{n} = (n_1,
\ldots, n_N)$. The expectation value of $n_i$ can be written
$$\label{eq:eni}
E[n_i] = \mu s_i + b_i \;,$$
where the mean number of entries in the $i$th bin from signal and background are
$$\begin{aligned}
\label{eq:si}
s_i = s_{\rm tot} \int_{{\rm bin} \, i} f_{s}(x; \vec{\theta}_{s}) \, dx \,,
\\*[0.3 cm]
\label{eq:bi}
b_i = b_{\rm tot} \int_{{\rm bin} \, i} f_{b}(x; \vec{\theta}_{b}) \, dx \,.\end{aligned}$$
Here the parameter $\mu$ determines the strength of the signal process, with $\mu=0$ corresponding to the background-only hypothesis and $\mu=1$ being the nominal signal hypothesis. The functions $f
---
abstract: 'Rank aggregation systems collect ordinal preferences from individuals to produce a global ranking that represents the social preference. Rank-breaking is a common practice to reduce the computational complexity of learning the global ranking. The individual preferences are broken into pairwise comparisons and applied to efficient algorithms tailored for independent paired comparisons. However, due to the ignored dependencies in the data, naive rank-breaking approaches can result in inconsistent estimates. The key idea to produce accurate and consistent estimates is to treat the pairwise comparisons unequally, depending on the topology of the collected data. In this paper, we provide the optimal rank-breaking estimator, which not only achieves consistency but also achieves the best error bound. This allows us to characterize the fundamental tradeoff between accuracy and complexity. Further, the analysis identifies how the accuracy depends on the spectral gap of a corresponding comparison graph.'
author:
- |
[Ashish Khetan and Sewoong Oh ]{}\
[Department of ISE, University of Illinois at Urbana-Champaign]{}\
[Email: $\{$khetan2,swoh$\}$@illinois.edu]{}
bibliography:
- '\_ranking.bib'
title: 'Data-driven Rank Breaking for Efficient Rank Aggregation'
---
Introduction {#sec:intro}
============
In several applications such as electing officials, choosing policies, or making recommendations, we are given partial preferences from individuals over a set of alternatives, with the goal of producing a global ranking that represents the collective preference of the population or the society. This process is referred to as [*rank aggregation*]{}. One popular approach is [*learning to rank*]{}. Economists have modeled each individual as a rational being maximizing his/her perceived utility. Parametric probabilistic models, known collectively as Random Utility Models (RUMs), have been proposed to model such individual choices and preferences [@McF80]. This allows one to infer the global ranking by learning the inherent utility from individuals’ revealed preferences, which are noisy manifestations of the underlying true utility of the alternatives.
Traditionally, learning to rank has been studied under the following data collection scenarios: pairwise comparisons, best-out-of-$k$ comparisons, and $k$-way comparisons. [*Pairwise comparisons*]{} are commonly studied in the classical context of sports matches as well as more recent applications in crowdsourcing, where each worker is presented with a pair of choices and asked to choose the more favorable one. [*Best-out-of-$k$ comparisons*]{} data sets are commonly available from purchase history of customers. Typically, a set of $k$ alternatives are offered among which one is chosen or purchased by each customer. This has been widely studied in operations research in the context of modeling customer choices for revenue management and assortment optimization. The [*$k$-way comparisons*]{} are assumed in traditional rank aggregation scenarios, where each person reveals his/her preference as a ranked list over a set of $k$ items. In some real-world elections, voters provide ranked preferences over the whole set of candidates [@Lun07]. We refer to these three types of ordinal data collection scenarios as ‘traditional’ throughout this paper.
For such traditional data sets, there are several computationally efficient inference algorithms for finding the Maximum Likelihood (ML) estimates that provably achieve the minimax optimal performance [@NOS12; @SBB15; @HOX14]. However, modern data sets can be unstructured. Individual’s revealed ordinal preferences can be implicit, such as movie ratings, time spent on the news articles, and whether the user finished watching the movie or not. In crowdsourcing, it has also been observed that humans are more efficient at performing batch comparisons [@NIPS2011_4187], as opposed to providing the full ranking or choosing the top item. This calls for more flexible approaches for rank aggregation that can take such diverse forms of ordinal data into account. For such non-traditional data sets, finding the ML estimate can become significantly more challenging, requiring run-time exponential in the problem parameters.
To avoid such a computational bottleneck, a common heuristic is to resort to [*rank-breaking*]{}. The collected ordinal data is first transformed into a bag of pairwise comparisons, ignoring the dependencies that were present in the original data. This is then processed via existing inference algorithms tailored for [*independent*]{} pairwise comparisons, hoping that the dependency present in the input data does not lead to inconsistency in estimation. This idea is one of the main motivations for numerous approaches specializing in learning to rank from pairwise comparisons, e.g., [@Ford57; @NOS14; @ACPX13]. However, such a heuristic of full rank-breaking defined explicitly in , where all pairwise comparisons are weighted and treated equally ignoring their dependencies, has been recently shown to introduce inconsistency [@APX14a].
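To fix ideas, the sketch below implements the naive full rank-breaking step described above for a single ranked list; the function name is ours and the example is purely illustrative.

```python
# A minimal sketch of 'full rank-breaking': a k-way ranking is decomposed
# into all of its implied pairwise comparisons, each treated equally.

from itertools import combinations

def full_rank_break(ranking):
    """Given a ranking (most to least preferred), return every implied
    pairwise comparison as a (winner, loser) tuple."""
    return [(ranking[i], ranking[j])
            for i, j in combinations(range(len(ranking)), 2)]

# Example: one user's 4-way ranking a > c > b > d breaks into 6 pairs.
print(full_rank_break(["a", "c", "b", "d"]))
# [('a', 'c'), ('a', 'b'), ('a', 'd'), ('c', 'b'), ('c', 'd'), ('b', 'd')]
```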
The key idea to produce accurate and consistent estimates is to treat the pairwise comparisons unequally, depending on the topology of the collected data. A fundamental question of interest to practitioners is how to choose the weight of each pairwise comparison in order to achieve not only consistency but also the best accuracy, among those consistent estimators using rank-breaking. We study how the accuracy of resulting estimate depends on the topology of the data and the weights on the pairwise comparisons. This provides a guideline for the optimal choice of the weights, driven by the topology of the data, that leads to accurate estimates.
[**Problem formulation.**]{} The premise in the current race to collect more data on user activities is that, a hidden true preference manifests in the user’s activities and choices. Such data can be explicit, as in ratings, ranked lists, pairwise comparisons, and like/dislike buttons. Others are more implicit, such as purchase history and viewing times. While more data in general allows for a more accurate inference, the heterogeneity of user activities makes it difficult to infer the underlying preferences directly. Further, each user reveals her preference on only a few contents.
Traditional collaborative filtering fails to capture the diversity of modern data sets. The sparsity and heterogeneity of the data renders typical similarity measures ineffective in the nearest-neighbor methods. Consequently, simple measures of similarity prevail in practice, as in Amazon’s “people who bought ... also bought ...” scheme. Score-based methods require translating heterogeneous data into numeric scores, which is a priori a difficult task. Even if explicit ratings are observed, those are often unreliable and the scale of such ratings vary from user to user.
We propose aggregating ordinal data based on users’ revealed preferences that are expressed in the form of [*partial orderings*]{} (notice that our use of the term is slightly different from its original use in revealed preference theory). We interpret user activities as manifestation of the hidden preferences according to discrete choice models (in particular the Plackett-Luce model defined in ). This provides a more reliable, scale-free, and widely applicable representation of the heterogeneous data as partial orderings, as well as a probabilistic interpretation of how preferences manifest. In full generality, the data collected from each individual can be represented by a [*partially ordered set (poset)*]{}. Assuming consistency in a user’s revealed preferences, any ordered relations can be seamlessly translated into a poset, represented as a Hasse diagram by a directed acyclic graph (DAG). The DAG below represents ordered relations $a>\{b,d\}$, $b>c$, $\{c,d\}>e$, and $e>f$. For example, this could have been translated from two sources: a five star rating on $a$ and a three star ratings on $b,c,d$, a two star rating on $e$, and a one star rating on $f$; and the item $b$ being purchased after reviewing $c$ as well.
![ A DAG representation of consistent partial ordering of a user $j$, also called a Hasse diagram (left). A set of rank-breaking graphs extracted from the Hasse diagram for the separator item $a$ and $e$, respectively (right).[]{data-label="fig:hasse"}](hasse "fig:"){width=".2\textwidth"} (-50,60)[${\cal G}_j$]{} ![ A DAG representation of consistent partial ordering of a user $j$, also called a Hasse diagram (left). A set of rank-breaking graphs extracted from the Hasse diagram for the separator item $a$ and $e$, respectively (right).[]{data-label="fig:hasse"}](rbgraph "fig:"){width=".3\textwidth"} (-115,70)[$G_{j,1}$]{} (-40,70)[$G_{j,2}$]{}
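The partial order in this example is easy to manipulate programmatically; the following sketch, with helper names of our own choosing, encodes the Hasse diagram above as a DAG and recovers the items dominated by a given alternative.

```python
# A small sketch of the Hasse-diagram (DAG) representation of the partial
# order quoted above: a > {b, d}, b > c, {c, d} > e, e > f.  Edges point
# from the preferred item to the less preferred one.

from collections import defaultdict

edges = [("a", "b"), ("a", "d"), ("b", "c"), ("c", "e"), ("d", "e"), ("e", "f")]

dag = defaultdict(set)
for better, worse in edges:
    dag[better].add(worse)

def dominated(item, graph):
    """All items ranked below `item` in the partial order (transitive closure)."""
    seen, stack = set(), [item]
    while stack:
        for child in graph[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(sorted(dominated("a", dag)))   # ['b', 'c', 'd', 'e', 'f']
print(sorted(dominated("d", dag)))   # ['e', 'f']
```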
There are $n$ users or agents, and each agent $j$ provides his/her ordinal evaluation on a subset $S_j$ of $d$ items or alternatives. We refer to $S_j\subset\{1,2,\ldots,d\}$ as [*offerings*]{} provided to $j$, and use $\kappa_j=|S_j|$ to denote the size of the offerings. We assume that the partial ordering over the offerings is a manifestation of her preferences as per a popular choice model known as Plackett-Luce (PL) model. As we explain in detail below, the PL model produces total orderings (rather than partial ones). The data collector queries each user for a partial ranking in the form of a poset over $S_j$. For example, the data collector can ask for the top item, unordered subset of three next preferred items, the fifth item, and the least preferred item. In this case, an example of such poset could be $a < \{b,c,d\} < e < f$, which could have been generated from a total ordering produced by the PL model and taking the corresponding partial ordering from the total ordering. Notice that we fix the topology of the DAG first and ask the user to fill in the node
[**About neutral mesons and particle oscillations in the light of field-theoretical prescriptions of Weinberg**]{}
L.M. Slad[^1]\
[*Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991, Russia*]{}
The postulated universality of Weinberg's prescriptions on the diagonalization of the mass term of the Lagrangian without increasing the total number of entities leads to the following conclusions: the set of neutral $K$-mesons consists of two elements, $K_{S}^{0}$ and $K_{L}^{0}$; the states $K^{0}$ and $\bar{K}^{0}$ do not exist as physical objects (in the form of particles or “particle mixtures”); the absence of the states $K^{0}$ and $\bar{K}^{0}$ removes the grounds for introducing the notion of their oscillations. The conclusions concerning the neutral $K$-mesons are also applicable to the neutral $D$-, $B$- and $B_{s}$-mesons. A theoretical and experimental vulnerability of the neutrino oscillation concept is noted.
The initial judgments about the family of four neutral $K$-mesons still remain almost unchanged and, furthermore, extend to the families of neutral $D$- and $B$-mesons. The concept of mutual transitions of $K^{0}$- and $\bar{K}^{0}$-mesons in vacuum, originated long ago and retained up to the present day, has served initially [@1] and continues to serve now [@2] as the only theoretical argument in favor of the hypothesis of neutrino oscillations, by analogy with the latter.
In the present paper, we propose to put the status of neutral $K$-mesons in full accordance with the field-theoretical prescriptions of Weinberg [@3], which have led to the prodigious gauge theory of electroweak interactions by making use of the diagonalization of the mass term in the Lagrangian without increasing the total number of entities. The specified prescriptions are an exact realization of the centuries-old scientific principle that “entities must not be multiplied beyond necessity”, which, once accepted as a universal rule of field theory and particle physics, inevitably eliminates any reason for meson oscillations.
For further comparisons, we focus on particular steps of Weinberg’s procedure [@3] for eliminating the term $cA_{\mu}^{3}(x)B^{\mu}(x)$ ($c$ is a constant) in the Lagrangian mass term, which is nondiagonal in the initial gauge fields and appears due to the Higgs mechanism of spontaneous breaking of the original symmetry. This term could serve as the reason for the judgment on the possibility of a transition of one field into another in vacuum. First step: on the basis of two suitable superpositions of the classical fields $A_{\mu}^{3}(x)$ and $B_{\mu}(x)$, Weinberg introduces orthonormal classical fields with definite masses, $Z_{\mu}(x)$ and $A_{\mu}(x)$. Second step: Weinberg expresses the fields $A_{\mu}^{3}(x)$ and $B_{\mu}(x)$, and then all terms of the gauge Lagrangian, through the fields $Z_{\mu}(x)$ and $A_{\mu}(x)$. Third step: Weinberg gives the status of quantized fields to $Z_{\mu}(x)$ and $A_{\mu}(x)$ and identifies them with the intermediate boson $Z$ and the photon. First note: at no stage of constructing the gauge model does Weinberg connect the fields $A_{\mu}^{3}(x)$ and $B_{\mu}(x)$ with any quanta. Second note: the original gauge fields $A_{\mu}^{3}(x)$ and $B_{\mu}(x)$ that serve as the cornerstone of Weinberg’s construction disappear in the final model of electroweak interactions. Third note: the initial fields possess well-defined values of the weak isospin and its third projection, but the final fields $Z_{\mu}(x)$ and $A_{\mu}(x)$ do not have such definite values, and, therefore, the corresponding terms of the electroweak interaction Lagrangian violate the isospin symmetry.
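For concreteness, the first of these steps is the familiar diagonalization of the neutral gauge-boson mass matrix; the small numerical sketch below uses illustrative values of the couplings and the vacuum expectation value (they are not taken from [@3]) purely to make the separation into a massless photon and a massive $Z$ explicit.

```python
# Numerical sketch of the first of Weinberg's steps recalled above:
# diagonalizing the neutral gauge-boson mass matrix in the (A^3_mu, B_mu)
# basis to obtain the massive Z_mu and the massless photon A_mu.
# The values of g, g' and the vev v are illustrative only.

import numpy as np

g, gp, v = 0.65, 0.35, 246.0        # assumed SU(2) and U(1) couplings, vev in GeV

# mass matrix (v^2/4) [[ g^2, -g g'], [-g g', g'^2]] from the Higgs mechanism
M2 = (v**2 / 4.0) * np.array([[g**2, -g * gp],
                              [-g * gp, gp**2]])

eigvals, eigvecs = np.linalg.eigh(M2)

print("mass^2 eigenvalues:", eigvals)              # one vanishing (photon), one m_Z^2
print("m_Z expected      :", (v / 2.0) * np.hypot(g, gp))
print("mixing matrix (columns = photon, Z):")
print(eigvecs)   # photon column ~ (sin th_W, cos th_W), Z column ~ (cos th_W, -sin th_W)
```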
We now turn to a number of current opinions about the neutral $K$-mesons. They mainly reproduce part of the judgments stated in the works by Gell-Mann [@4] and Gell-Mann and Pais [@5] with adding some corrections for the results of the subsequent experiments concerning the violation of $CP$-symmetry.
Starting from the assumption of strict conservation of the isotopic spin in strong interactions, Gell-Mann [@4] concludes that, of the two long-lived neutral particles produced in the collision of a $\pi^{-}$-meson with a proton, one particle ($K^{0}$) is a boson with isospin 1/2 and projection -1/2, and that there exists an antiparticle ($\bar{K}^{0}$) with isospin projection +1/2 which corresponds to the boson $K^{0}$ and is different from it. This assumption of Gell-Mann is the key element in the subsequently formed structure of the family of neutral $K$-mesons.
Gell-Mann and Pais [@5] consider that, if the decay $K^{0} \rightarrow \pi ^{+}+\pi^{-}$ exists, then there should exist the charge-conjugate process $\bar{K}^{0} \rightarrow \pi ^{+}+\pi^{-}$, and thus the weak interaction causes the virtual transition $K^{0} \rightleftharpoons \pi^{+}+\pi^{-} \rightleftharpoons \bar{K}^{0}$. The last judgment and the aspiration to preserve $C$-parity conservation in weak decays lead to the introduction of the quanta $K_{1}^{0}$ and $K_{2}^{0}$, whose fields are expressed as the normalized sum and difference of the fields $K^{0}$ and $\bar{K}^{0}$, respectively. According to Gell-Mann and Pais, $K^{0}$ and $\bar{K}^{0}$ are the primary objects in production phenomena, whereas the decay process is best described in terms of $K_{1}^{0}$ and $K_{2}^{0}$. Each of the latter can be assigned a definite lifetime, which is not true for $K^{0}$ and $\bar{K}^{0}$.
Note that a long-lived neutral boson produced, for example, in the collision of a $\pi^{-}$-meson with a proton cannot produce any experimental manifestation between the moment of production and the moment of decay and, consequently, does not allow experimental identification in this time interval. The opinion stated in [@4] and [@5] that such a boson must have well-defined values of the isospin and its third projection remains nothing more than an assumption. The fact that the introduced hadrons $K_{1}^{0}$ and $K_{2}^{0}$ do not possess definite values of the third projection of the isospin does not induce Gell-Mann and Pais to reconsider their assumption of strict isotopic spin conservation in strong interactions, although this essentially prohibits the participation of these hadrons in strong interactions, which is at least surprising.
Essential is the position of the authors of [@5] that the word “particle” should be reserved for an object with a definite lifetime, that the quanta $K_{1}^{0}$ and $K_{2}^{0}$ are the true “particles”, and that $K^{0}$ and $\bar{K}^{0}$ must, strictly speaking, be considered as “particle mixtures”. The mathematical definition of “particle mixtures” led Pais and Piccioni [@6] to introduce and describe the concept of oscillations, i.e. mutual transitions of the $K^{0}$- and $\bar{K}^{0}$-mesons in vacuum.
Let us now present a view of the neutral $K$-mesons obtained on the basis of exactly applying to them the field-theoretical prescriptions of Weinberg on the diagonalization of the mass term in the Lagrangian without increasing the total number of entities.
Following Weinberg, we shall deal initially not with particles but with suitable classical fields and assume that the initial Lagrangian of strong interactions is invariant under transformations of the isospin group $SU(2)$ (or the internal symmetry group $SU(3)$).
We introduce two neutral fields $\Phi_{+1 \frac{1}{2} -\frac{1}{2}}(x)$ and $\Phi_{-1 \frac{1}{2} +\frac{1}{2}}(x)$ that are pseudoscalar under the orthochronous Lorentz group and possess strangenesses $\pm 1$, isospin $1/2$ and its projections $\mp 1/2$ (these fields can also be considered as the components of vectors in the octet representation space of the group $SU(3)$).
We assume at the stage of preliminary analysis that, in the absence of weak interactions, these fields could describe the lower bound states of quark-antiquark systems, respectively, $d\bar{s}$ and $s\bar{d}$. In the presence of weak interactions, these two systems would virtually pass into one another due to double exchange of $W$-bosons with changing the third projection of isospin and strangeness. (Feynman diagrams corresponding to such an exchange can be found in the monograph [@7]). This means that the mass term in the Lagrangian of fields $\Phi_{\pm 1 \frac{1}{2} \mp \frac{1}{2}}(x)$, in view of the influence of weak interactions, should be presented in the following form $${\
---
abstract: |
The effect of narrow dibaryon resonances on nuclear matter and the structure of neutron stars is investigated in the mean-field theory (MFT) and in the relativistic Hartree approximation (RHA). The existence of massive neutron stars imposes constraints on the coupling constants of the $\omega $- and $\sigma $-mesons with dibaryons. We conclude that the experimental dibaryon candidates d$_1$(1920) and d’(2060), if they exist, form in nuclear matter a Bose condensate stable against compression. This proves the stability of the ground state of nuclear matter with a Bose condensate of the light dibaryons.
author:
- |
Amand Faessler$^1$, A. J. Buchmann$^1$, M. I. Krivoruchenko$^{1,2}$\
[$^1$[*Institut für Theoretische Physik, Universität Tübingen, Auf der Morgenstelle 14* ]{}]{}\
\
[$^2$[*Institute for Theoretical and Experimental Physics, B.Cheremushkinskaya 25*]{}]{}\
title: 'Constraints to Coupling Constants of the $\omega $- and $\sigma $-Mesons with Dibaryons'
---
The prospect of observing the long-lived H-particle predicted in 1977 by R. Jaffe [@Jaf] stimulated considerable activity in experimental searches for dibaryons. It was proposed to examine H-particle production in different reactions [@San]. The experiments [@Aok] did not give a positive signal for the H-particle; however, its existence remains an open question which must eventually be settled by experiment. The non-strange dibaryons with exotic quantum numbers, which have a small width due to zero coupling to the $NN$-channel, are promising candidates for experimental searches [@Mul]. The data on pion double charge exchange (DCE) reactions on nuclei [@Bil] exhibit a peculiar energy dependence, which can be interpreted [@Mar] as evidence for the existence of a narrow d’ dibaryon with quantum numbers $T=0$, $J^p=0^{-}$ and a total resonance energy of $2063$ MeV. Recent experiments at TRIUMF (Vancouver) and CELSIUS (Uppsala) seem to support the existence of the d’ dibaryon [@Mey]. A method for searching for narrow, exotic dibaryon resonances in the double proton-proton bremsstrahlung reaction is discussed in Ref. [@Ger]. Recently, some indications for a d$_1$(1920) dibaryon in this reaction have been found [@Khr].
When the density of nuclear matter is increased beyond a critical value, the production of dibaryons becomes energetically favorable. Dibaryons are Bose particles, so they condense in the ground state and form a Bose condensate [@Bal; @Kri]. An exactly solvable model for a one-dimensional system of fermions interacting through a potential that leads to a resonance in the two-fermion channel is analyzed in Ref. [@Buc]. The behavior of the system with increasing density can be interpreted in terms of a Bose condensation of two-fermion resonances. The effect of narrow dibaryon resonances on nuclear matter in the mean field theory (MFT) is analyzed in Refs. [@Fae; @Faes]. In the limit of vanishing decay width, a dibaryon can be approximately described as an elementary field.
Although the dibaryon Bose condensate does not exist in ordinary nuclei, dibaryons affect the properties of nuclear matter and of ordinary nuclei through a Casimir effect. The presence of the background $\sigma $-meson mean field inside nuclei modifies the nucleon and dibaryon masses and in turn modifies the zero-point vacuum fluctuations of the nucleon and dibaryon fields. This effect contributes to the energy density and pressure. It can be evaluated within the relativistic Hartree approximation (RHA). For nucleons, this effect is well known [@Wal]. In the loop expansion of quantum hadrodynamics (QHD), MFT corresponds to the lowest approximation (no loops), while RHA corresponds to the one-loop approximation in a calculation of the equation of state for nuclear matter.
At zero temperature, a uniformly distributed system of bosons with an attractive potential is energetically unstable against compression and collapses [@Abr]. In such a case, the long-wavelength excitations (sound in the medium) have an imaginary dispersion law: the square of the sound velocity is negative, $a_s^2<0$. The amplitude of these excitations increases with time, making the system unstable. It is necessary to analyze the dispersion laws of the other elementary excitations as well. We shall see, however, that in MFT and RHA only sound waves can generate an instability. The ground state of nuclear matter with a Bose condensate of dibaryons is stable or unstable against small perturbations depending on whether the repulsive $\omega $-meson exchange or the attractive $\sigma $-meson exchange is dominant between dibaryons.
In this paper, we investigate the hypothesis that dibaryon matter is unstable against compression. In such a case, the formation of dibaryons in nuclear matter can be treated as a possible mechanism for a phase transition into quark matter. If the central density of a massive neutron star exceeds the critical value for the formation of dibaryons, the neutron star should convert into a quark star, a strange star, or a black hole. Some of the observed pulsars are identified quite reliably with ordinary neutron stars [@Sha]. From the requirement that dibaryon formation is not energetically favorable at densities lower than the central density of neutron stars with a mass of $1.3M_{\odot }$, we derive constraints on the coupling constants of the mesons with the dibaryons d$_1$(1920) and d$^{\prime }$(2060) and conclude that narrow dibaryons in this mass range can only form a Bose condensate that is stable against perturbations. The effect of dibaryons on the stability and structure of neutron stars in different phenomenological models is analyzed in Refs. [@Kri; @Tam]. Constraints on the binding energy of strange matter [@Wit] from the existence of massive neutron stars are discussed in Ref. [@KrMa].
The dibaryonic extension of the Walecka model [@Wal] is obtained by adding dibaryons to the Lagrangian density [@Fae; @Faes] $$\label{I}
\begin{array}{c}
{\cal L}=\bar \Psi (i\partial _\mu \gamma _\mu -m_N-g_\sigma \sigma
-g_\omega \omega _\mu \gamma _\mu )\Psi +\frac 12(\partial _\mu \sigma
)^2-\frac 12m_\sigma ^2\sigma ^2 \\ -\frac 14F_{\mu \nu }^2+\frac 12m_\omega
^2\omega _\mu ^2+(\partial _\mu -ih_\omega \omega _\mu )\varphi
^{*}(\partial _\mu +ih_\omega \omega _\mu )\varphi -(m_D+h_\sigma \sigma
)^2\varphi ^{*}\varphi .
\end{array}$$ Here, $\Psi $ is the nucleon field, $\omega _\mu $ and $\sigma $ are fields of the $\omega $- and $\sigma $-mesons, $F_{\mu \nu }=\partial _\nu \omega
_\mu -\partial _\mu \omega _\nu $, $\varphi $ is the dibaryon isoscalar-scalar (or isoscalar-pseudoscalar) field. The values $m_\omega \ $ and $m_\sigma $ are the $\omega $- and $\sigma $-meson masses and the values $g_\omega $, $g_\sigma $, $h_\omega $, $h_\sigma $ are coupling constants of the $\omega $- and $\sigma $-mesons with nucleons ($g$) and dibaryons ($h$).
The $\sigma $-meson mean field $\sigma _c$ determines the effective nucleon and dibaryon masses in the medium $$\label{II}m_N^{*}=m_N+g_\sigma \sigma _c,$$ $$\label{III}m_D^{*}=m_D+h_\sigma \sigma _c.$$
The nucleon scalar density in the RHA is defined by the expression [@Wal]
$$\label{IV}\rho _{NS}=<\bar \Psi (0)\Psi (0)>=\gamma \int \frac{d{\bf p}}{%
(2\pi )^3}\frac{m_N^{*}}{E^{*}({\bf p})}\theta (p_F-|{\bf p}|)-4m_N^3\zeta
(m_N^{*}/m_N)$$
where
$$4\pi ^2\zeta (x)=x^3\ln x+1-x-\frac 52(1-x)^2+\frac{11}2(1-x)^3.$$ The last term in Eq.(4) arises from the renormalization of the scalar density. Here, $\gamma =2$ for neutron matter and $\gamma =4$ for nuclear matter.
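Equation (4) and the renormalization function $\zeta(x)$ are simple to evaluate numerically; the sketch below transcribes them directly, working in units of the free nucleon mass and using illustrative values of $p_F$ and $m_N^{*}$ that are not fitted parameters of this paper.

```python
# Direct numerical transcription of Eq. (IV) for the nucleon scalar density
# in the RHA, with the renormalization function zeta(x) defined above.
# Everything is expressed in units of the free nucleon mass m_N, so rho_NS
# comes out in units of m_N^3; the values of p_F and m_N^* are illustrative.

import numpy as np
from scipy.integrate import quad

def zeta(x):
    """4 pi^2 zeta(x) = x^3 ln x + 1 - x - (5/2)(1-x)^2 + (11/2)(1-x)^3."""
    return (x**3 * np.log(x) + 1.0 - x
            - 2.5 * (1.0 - x)**2 + 5.5 * (1.0 - x)**3) / (4.0 * np.pi**2)

def rho_NS(p_F, m_star, gamma=4):
    """Scalar density (units of m_N^3): Fermi-sphere integral minus vacuum term."""
    integrand = lambda p: p**2 * m_star / np.sqrt(p**2 + m_star**2)
    fermi_part = gamma / (2.0 * np.pi**2) * quad(integrand, 0.0, p_F)[0]
    return fermi_part - 4.0 * zeta(m_star)     # m_N = 1 in these units

# Illustrative point: p_F ~ 0.28 m_N (roughly nuclear saturation), m_N^* ~ 0.8 m_N.
print(rho_NS(p_F=0.28, m_star=0.8, gamma=4))
```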
We investigate here the properties of nuclear matter below the critical density for the formation of dibaryons, so the dibaryon condensate is zero, $<|\varphi (0)|>=0$. The vacuum contribution to the scalar density of dibaryons can be found to be $$\label{V}2m_D^{*}\rho _{DS}=2m_D^{*}<\varphi (0)^{*}\varphi (0)>=m_D
---
abstract: '[Despite the great progress of current cosmological measurements, the nature of the dominant component of the universe, coined [*dark energy*]{}, is still an open question. [*Early Dark Energy*]{} is a possible candidate which may also alleviate some fine-tuning issues of the standard paradigm. Using the latest available cosmological data, we find that the 95% CL upper bound on the early dark energy density parameter is ${\Omega_{\textrm{eDE}}}\,<\,0.009$. On the other hand, the dark energy component may be a stressed and inhomogeneous fluid. If this is the case, the effective sound speed and the viscosity parameters are unconstrained by current data. Future omniscope-like $21$ cm surveys, combined with present CMB data, could be able to distinguish standard quintessence scenarios from other possible models with $2\sigma$ significance, assuming a non-negligible early dark energy contribution. The precision achieved on the ${\Omega_{\textrm{eDE}}}$ parameter from these $21$ cm probes could be below $\mathcal{O} (10\%)$.]{}'
author:
- Maria Archidiacono
- 'Laura Lopez-Honorez'
- Olga Mena
title: |
Current constraints on early and stressed dark energy models\
and future 21 cm perspectives
---
Introduction
============
The nature of the mysterious dark energy component that currently dominates the energy content of the universe reveals new physics missing from our universe’s picture, and constitutes the fundamental key to understand the fate of the universe. The most economical explanation of the dark energy component attributes this energy density to the one of the vacuum, i.e., a cosmological constant scenario. Together with cold dark matter (CDM), the so-called $\Lambda$CDM scenario can account for present data with a flat universe made up of roughly $30\%$ dark matter and $70\%$ dark energy. In this minimal model, the dark energy equation of state, $w$, which corresponds to the ratio of the dark energy pressure to the dark energy density, is constant and equal to $-1$. However, this simple picture suffers from severe fine tuning theoretical issues (see Ref. [@Copeland:2006wr] and references therein) as well as from problems with observations related to the matter power spectrum on scales of a few Mpc and below [@Moore:1999gc; @Bode:2000gq; @Penarrubia:2012bb; @BoylanKolchin:2011dk; @Ferrero:2011au; @Weinberg:2013aya]. Possible alternatives to alleviate them have been extensively explored. Perfect dark energy fluids, characterised either by a constant ($w\neq-1$) or by a time varying dark energy equation of state $w(a(t))$, or scalar field models, are the most popular options considered in the cosmological data analyses, as their parameterizations require few extra parameters (two at most) to be added to the usual $\Lambda$CDM scenario.
There exists also alternative scenarios, in which the gravitational sector is modified, leading to a modification of Einstein’s equations of gravity on large scales. Modifications of gravity (see e.g. [@DeFelice:2010aj] and references therein) incorporate models with extra spatial dimensions or an action which is non-linear in the Ricci scalar. There are also non-perfect fluid models, as Chaplygin gas cosmologies [@Bento:2002ps], which involve more parameters than just one equation of state $w$. Of particular interest here is the [*Early Dark Energy*]{} (hereafter EDE) case, as it arises as a natural hypothesis of dark energy [@Wetterich:2004pv; @Doran:2006kp; @odea; @Calabrese:2010uf]. EDE differs from the cosmological constant because it is not negligible in the early universe and the contribution depends on the initial density parameter ${\Omega_{\textrm{eDE}}}$. Furthermore, the EDE model considered here is based on a generic dark energy fluid which is inhomogeneous. Density and pressure are time varying, therefore the equation of state is not constant in time. The phenomenological analyses of these inhomogeneous dark energy models usually require additional dark energy clustering parameters, i.e. the dark energy effective sound speed and the dark energy anisotropic stress. The sound speed ${c^2_{\textrm{eff}}}$ [@Hu:1998kj; @Hu:1998tk; @ceff] is defined as the ratio between the dark energy pressure perturbation and the dark energy density contrast in the rest frame of the fluid, ${c^2_{\textrm{eff}}}\equiv(\delta P/\delta \rho)_{\rm rest}$. In the simplest quintessence models, ${c^2_{\textrm{eff}}}=1$, while the anisotropic stress is zero. The effective sound speed determines the clustering properties of dark energy and consequently it affects the growth of matter density fluctuations. Therefore, in principle, its presence could be revealed in large scale structure observations. The growth of perturbations can also be affected by the anisotropic stress contributions [@Hu:1998kj; @Hu:1998tk; @cvis] which lead to a damping in the velocity perturbations. In the parametrization used here, the damping effect is driven by the viscosity parameter ${c^2_{\textrm{vis}}}$ which links the anisotropic stress to the velocity perturbation and the metric shear.
Despite the precision achieved by the combination of Cosmic Microwave Background (CMB) measurements from the Planck satellite [@planck], Baryon Acoustic Oscillation (BAO) data from a number of galaxy surveys [@Anderson:2013zyy; @Beutler:2011hx; @Busca:2012bu; @Kirkby:2013fh; @Slosar:2013fi] and Supernovae Ia luminosity distance measurements [@Suzuki:2011hu] in the extraction of the dark energy equation of state parameter, $w=-1.06\pm 0.06$ at $68\%$ CL [@Anderson:2013zyy], the nature of the dark energy component remains unknown. Therefore, it is mandatory to carefully study other possibilities including the one of an EDE component, as well as the clustering properties of the dark energy fluid. In this paper we shall address both issues, relaxing the perfect fluid assumption and considering current cosmological data, in addition to the recent BICEP2 measurements of the B-modes power spectrum [@Ade:2014xna] .
We also explore the possibility of constraining an EDE component and/or a stressed dark energy fluid with future $21$ cm surveys. The next generation of radio experiments, which will image the neutral intergalactic medium (IGM) in $21$ cm emission/absorption, will provide a unique probe of the universe at higher redshifts ($z>6$) which lie out of the reach of galaxy surveys and CMB experiments. The $21$ cm line signal presents several advantages compared to traditional cosmic and astrophysical probes, see e.g. [@Loeb:2003ya], and it could be used to test the nature of dark energy [@Wyithe:2007rq]. The future generation of radio interferometers testing the $21$ cm signal, including the Square Kilometre Array (SKA) [@Mellema:2012ht] and omniscopes [@Tegmark:2008au; @Zheng:2013tpz], may provide extra constraints on the cosmological parameters probing the Epoch of Reionisation (EoR) or the high redshift window, see e.g. [@Mao:2008ug; @Clesse:2012th]. In addition, the $21$ cm signal can also be used at low redshifts ($z<5$), offering a competitive cosmological probe for unraveling the nature of the component responsible for the present universe’s accelerated expansion [@Chang:2007xk; @Hall:2012wd].
The structure of the paper is as follows. Sections \[sec:ede\] and \[sec:stress\] describe the early and stressed dark energy models evolution in terms of the background and perturbation variables. In Sec. \[sec:methodanddata\] we present the method and data followed in the numerical analyses presented in Sec. \[sec:current\]. Section \[sec:future\] addresses the future perspective and constraints from $21$ cm surveys by means of a Fisher matrix forecast analysis. Finally, we draw our conclusions in Sec. \[sec:conclusions\].
Early Dark energy models {#sec:ede}
========================
The concept of EDE cosmology was introduced in [@Wetterich:2004pv] and studied in several subsequent works following different possible effective parametrizations of the evolution of the dark energy fluid, see e.g. [@Doran:2006kp; @Calabrese:2010uf; @Pettorino:2013ia; @planck]. Here we follow Ref. [@Doran:2006kp] to describe the evolution of the background dark energy density from the high redshift, constant value ${\Omega_{\textrm{eDE}}}$ until its present-day value $\Omega_{\rm DE}^0$ (assuming a flat universe with $\Omega_{\rm DE}^0+\Omega_{\rm m}^0=1$): $$\Omega_{\rm DE}(a) =\frac{\Omega_{\rm DE}^0 - {\Omega_{\textrm{eDE}}}\left(1- a^{-3 w_0}\right) }{\Omega_{\rm DE}^0 + \Omega_{m}^{0} a^{3w_0}} + {\Omega_{\textrm{eDE}}}\left (1- a^{-3 w_0}\right).
\label{eq:odea}$$ The evolution of $w(a)$ in this EDE parametrization reads $$w(a) = -\
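A minimal numerical evaluation of Eq. (\[eq:odea\]) illustrates the interpolation between ${\Omega_{\textrm{eDE}}}$ at early times and $\Omega_{\rm DE}^0$ today; the parameter values in the sketch below are placeholders chosen only for illustration.

```python
# Minimal numerical sketch of the EDE parametrization in Eq. (eq:odea),
# assuming illustrative values Omega_DE^0 = 0.7, Omega_eDE = 0.009, w_0 = -1
# (placeholders, not fitted values).

def Omega_DE(a, Omega_DE0=0.7, Omega_eDE=0.009, w0=-1.0):
    """Background dark energy density parameter as a function of the scale factor a."""
    Omega_m0 = 1.0 - Omega_DE0                     # flat universe assumption
    early = Omega_eDE * (1.0 - a**(-3.0 * w0))
    return (Omega_DE0 - early) / (Omega_DE0 + Omega_m0 * a**(3.0 * w0)) + early

for a in [1e-3, 1e-2, 0.1, 0.5, 1.0]:
    print(f"a = {a:6.3f}   Omega_DE(a) = {Omega_DE(a):.4f}")
# Omega_DE -> Omega_eDE at early times (a << 1) and -> Omega_DE^0 today (a = 1).
```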
**Double commutants of multiplication operators on $C(K).$**
**A. K. Kitover**
Department of Mathematics, CCP, Philadelphia, PA 19130, USA
**Abstract.** Let $C(K)$ be the space of all real or complex valued continuous functions on a compact Hausdorff space $K$. We are interested in the following property of $K$: for any real valued $f \in C(K)$ the double commutant of the corresponding multiplication operator $F$ coincides with the norm closed algebra generated by $F$ and $I$. In this case we say that $K \in \mathcal{DCP}$. It was proved in [@Ki] that any locally connected metrizable continuum is in $\mathcal{DCP}$. In this paper we indicate a class of arc connected but not locally connected continua that are in $\mathcal{DCP}$. We also construct an example of a continuum that is not arc connected but is in $\mathcal{DCP}$.
Introduction
============
Von Neumann’s famous double commutant theorem [@N] can be stated in the following way. Let $(X, \Sigma, \mu)$ be a measure space and $f$ be a real-valued element of $L^\infty(X, \Sigma, \mu)$. Let $F$ be the corresponding multiplication operator in $L^2(X, \Sigma, \mu)$, i.e. $(Fx)(t) = f(t)x(t)$ for $x \in L^2(X, \Sigma, \mu)$ and $t$ from a subset of full measure in $X$. Then $$\{F\}^{cc} = \mathcal{A}_F$$ where $\{F\}^{cc}$ is the double commutant (or bicommutant) of $F$, i.e. $\{F\}^{cc}$ consists of all bounded linear operators on $L^2(X, \Sigma, \mu)$ that commute with every operator commuting with $F$, and $\mathcal{A}_F$ is the closure in the weak (or strong) operator topology of the algebra generated by $F$ and the identity operator $I$.
The generalization on the case of complex multiplication operators (or normal operators on a Hilbert space) is then immediate. Quite naturally arises the question of obtaining similar results for multiplication operators on other Banach spaces of functions. De Pagter and Ricker proved in [@PaR] that von Neumann’s result remains true for spaces $L^p(0,1), 1 \leq p < \infty$, and more generally for any Banach ideal $X$ in the space of all measurable functions such that $X$ has order continuous norm and $L^\infty(0,1) \subset X \subseteq L^1(0,1)$. But they also proved that the double commutant of the operator $T$, $(Tx)(t) = tx(t), x \in L^\infty, t \in [0,1]$, is considerably larger than the algebra $\mathcal{A}_T$ and consists of all operators of multiplication by Riemann integrable functions on $[0,1]$. The last result gives rise to the following question: let $C(K)$ be the space of all continuous real-valued functions on a Hausdorff compact space $K$. When is it true that for every multiplication operator $F$ on $C(K)$ its double commutant coincides with the algebra $\mathcal{A}_F$? This property is obviously a topological invariant of $K$ and we will denote the class of compact Hausdorff spaces that have it as $\mathcal{DCP}$ (short for double commutant property).
Continua with the $\mathcal{DCP}$ property
========================================
In [@Ki] the author proved that if $K$ is a compact metrizable space without isolated points then the following implications hold.
1. If $K$ is connected and locally connected then $ K \in \mathcal{DCP}$.
2. If $K \in \mathcal{DCP}$ then $K$ is connected.
In the presence of isolated points the analogues of the above statements become more complicated (see [@Ki Theorem 1.15]). To avoid these minor complications and keep closer to the essence of the problem we will assume that the compact spaces we consider have no isolated points.
A simple example (see [@Ki Example 1.16]) shows that the condition that $K$ is connected is not sufficient for $K \in \mathcal{DCP}$.
\[e1\] Let $K$ be the closure in $\mathds{R}^2$ of the set $\{(x, \sin{1/x}) : x \in (0,1]\}$. Let $f(x,y) =x, (x,y) \in K$, and let $F$ be the corresponding multiplication operator. Then it is easy to see (see details in [@Ki Example 1.16]) that the double commutant $\{F\}^{cc}$ consists of all operators of multiplication by functions from $C(K)$, but $\mathcal{A}_F$ consists of operators of multiplication by functions from $C(K)$ that are constant on the set $\{(0,y): y \in [-1,1]\}$.
Therefore the next question is whether the condition that $K$ is connected and locally connected is necessary for $K \in \mathcal{DCP}$. Below we provide a negative answer to this question. In order to consider the corresponding example let us recall the following two simple facts.
\[p1\] Let $K$ be a compact Hausdorff space and $f \in C(K)$. Let $F$ be the corresponding multiplication operator. Then
1. The double commutant $\{F\}^{cc}$ consists of multiplication operators.
2. The algebra $\mathcal{A}_F$ coincides with the closure of the algebra generated by $F$ and $I$ in the operator norm.
$(1)$. Let $T \in \{F\}^{cc}$. Let $\mathbf{1}$ be the function in $C(K)$ identically equal to 1. Clearly for every $a \in C(K)$ the operator $F$ commutes with the multiplication operator $A$ where $Ax=ax, x \in C(K)$. Therefore for any $a \in C(K)$ $T$ commutes with $A$ and $T(a) = T(a\mathbf{1})= TA\mathbf{1} = AT\mathbf{1} = aT\mathbf{1} = (T\mathbf{1})a$. Hence if we take $g = T\mathbf{1}$ then $T$ coincides with the multiplication operator $G$ generated by the function $g$.
$(2)$ If $T \in \{F\}^{cc}$ then by part $(1)$ of the proof $T=G$, where $G$ is the operator of multiplication by a function $g \in C(K)$. It remains to notice that $\|G\| = \|G \mathbf{1}\|_{C(K)}$ and therefore on $\{F\}^{cc}$ convergence in the strong operator topology implies convergence in the operator norm.
\[c1\] Let $f \in C(K)$ and $F$ be the corresponding multiplication operator. The following two statements are equivalent.
$(1)$ $\{F\}^{cc} = \mathcal{A}_F$.
$(2)$ For any $G \in \{F\}^{cc}$ and for any $s, t \in K$ the implication holds $$f(s) = f(t) \Rightarrow g(s) = g(t),$$ where $g \in C(K)$ is the function corresponding to the operator $G$.
In what follows our main tool will be the following lemma which was actually proved though not stated explicitly in [@Ki] (see [@Ki Proof of Theorem 1.14]).
\[l1\] Let $K$ be a compact metrizable space, $f \in C(K)$, and $F$ be the corresponding multiplication operator. Let $G \in \{F\}^{cc}$ and $g$ be the corresponding function from $C(K)$. Let $u, v \in K$ be such that
- $f(u) = f(v)$.
- The points $u$ and $v$ have open and locally connected neighborhoods in $K$.
- For any open connected neighborhood $U$ of $u$ there is an open interval $I_U$ in $\mathds{R}$ such that $f(u) \in I_U \subset f(U)$.
Then $g(u) = g(v)$.
We will also need a simple lemma proved in [@Ki Lemma 1.13].
\[l2\] Let $K$ be a compact Hausdorff space, $F, G$ multiplication operators on $C(K)$ by functions $f$ and $g$, respectively, and $G \in \{F\}^{cc}$. Let $k \in K$ be such that $Int f^{-1}(\{f(k)\}) \neq \emptyset$. Then $g$ is constant on $f^{-1}(\{f(k)\})$.
Now we are ready to give an example of a metrizable connected compact space $K$ such that $K$ is not locally connected but $K \in \mathcal{DCP}$. Let $B$ be the well-known **“broom”**. $$
---
abstract: |
[*Abstract*]{}. – Using the Quantum Inverse Scattering Method we construct an integrable Heisenberg-XXZ-model, or equivalently a model for spinless fermions with nearest-neighbour interaction, with defects. Each defect involves three sites with a fine tuning between nearest-neighbour and next-nearest-neighbour terms. We investigate the finite size corrections to the ground state energy and its dependence on an external flux as a function of a parameter $\nu$, characterizing the strength of the defects. For intermediate values of $\nu$, both quantities become very small, although the ground state wavefunction remains extended.
$(^*)$ Electronic mail: Peter.Schmitteckert@physik.uni-augsburg.de
author:
- |
[P. Schmitteckert$(^*)$, P. Schwab]{} and [U. Eckern]{}\
[*Institut für Physik, Universität Augsburg*]{}\
[*D-86135 Augsburg, Germany*]{}
date: '18.5.1995'
title: 'Quantum Coherence in an Exactly Solvable One-dimensional Model with Defects'
---
\
------- ---------- ---------------------------------------
PACS: 71.27.+a Strongly correlated electron systems.
71.30.+h Metal-insulator transitions.
75.10.Jm Quantized spin models.
------- ---------- ---------------------------------------
\
[*Introduction*]{}. – Three recent experiments have demonstrated that persistent currents, periodic in the magnetic flux, exist in mesoscopic metal [@Levy] and semiconductor [@Benoit93] rings at very low temperatures. Surprisingly, though the current is found to be small, of the order of $\sim
ev_F/L$ for single rings ($v_F$ is the Fermi velocity, and $L$ the circumference), it is still two orders of magnitude larger than expected theoretically, at least for the metal rings studied in [@Levy]. In the latter, the electron motion is diffusive, i.e. the elastic mean free path is much smaller than the circumference. While it is well established that the Coulomb interaction gives an important contribution to the current for a measurement on an ensemble of rings [@Ambegaokar91], the interaction effect in single rings is far from being understood theoretically.
In this article, we consider a one-dimensional, interacting model in the presence of a magnetic flux, or equivalently, with twisted boundary conditions. We introduce very special “defects” into the model describing spinless fermions with nearest-neighbour interaction. Despite this inhomogeneity, the model remains integrable and we present exact results for the finite size corrections to the ground state energy, and its dependence on the magnetic flux, as a function of a parameter $\nu$ characterizing the strength of the defects. Clearly, our investigation does not provide an answer to the questions raised by the experiments (there, the number of transverse channels is much larger than one). Instead, our work is closely related to, and an extension of, various recent theoretical studies \[4–9\] of quantum coherence in strongly interacting electron systems.
[*Construction of the model*]{}. – Using the Quantum Inverse Scattering Method (QISM), we construct our model from the ${\cal R}$ and ${\cal L}$ matrices of the Heisenberg-XXZ-model on an inhomogeneous lattice as, for example, described in [@Korepin93]. The central equation of the QISM is the Yang-Baxter equation, which guarantees that a scattering process factorizes in two-particle scattering processes and does not depend on the order of these. In order to construct a model with defects, we allow that the local ${\cal
L}_n$ matrix depends, in addition to the spectral parameter $\lambda$, on a parameter $\nu_n$, ${\cal L}_n(\lambda) = {\cal L}(\lambda+\nu_n)$. The transfer matrix is given by $T(\lambda)= \mbox{Tr} \prod_{n=1}^{M} {\cal
L}(\lambda + \nu_n)$, where $M$ denotes the number of lattice sites. To include twisted boundary conditions, we multiply the ${\cal L}_M$ matrix of the Heisenberg-XXZ-ring with $\exp{\left( {i}\phi\,\hat{\sigma}^z /2\right)}$.
The Hamiltonian is then given as the logarithmic derivative of the transfer matrix with respect to $\lambda$, at a specific value [@Korepin93]. In particular, in the special case in which all $\nu_n=0$, we obtain the usual XXZ-model, which can be transformed to a spinless fermion model by a Jordan-Wigner transformation. For a general set of parameters, $\{\nu_n\}$, it is difficult to determine the Hamiltonian explicitly, with one exception, namely where there are no defects on neighbouring sites, i.e. $\nu_n\nu_{n+1}=0$ for all $n$. This is the situation we study in the following. As an illustration, consider a vanishing nearest-neighbour interaction, and a single defect at the site $n_1$ characterized by the parameter $\nu$. The resulting Hamiltonian is given by $$\begin{aligned}
{\cal H} &=& {\cal H}^0 \;+\;{\cal H}^I_{n_1}(\nu) \;=\;
- \sum_{n=1}^M \Big( c^{+}_n c^{}_{n+1} \;+\; c^{+}_{n+1}
c^{}_{n}\Big)\;+\; {\cal H}^{I}_{n_1}(\nu)\\
{\cal H}^I_{n_1}(\nu) &=& \Big( 1 - \frac{1}{\cosh \nu}\Big)\,\Big(
c^{+}_{n_1-1} c^{}_{n_1} \;+\; c^{+}_{n_1} c^{}_{n_1+1} \Big)
\;-\; e^{{i}\pi/2}\,\tanh(\nu)\, c^{+}_{n_1-1} c^{}_{n_1+1}\;+\;
\mbox{h.c.} \label{def:HI}\end{aligned}$$ where the $\{c^{+}_n\}$ and $\{c^{}_n\}$ are the standard fermion creation and annihilation operators. The generalization to $r$ defects is straightforward (assuming $\nu_n\nu_{n+1}=0$), $${\cal H}={\cal H}^0+\sum_{\ell=1}^{r} {\cal H}^I_{n_\ell}(\nu_{n_\ell})$$ where $n_\ell$ denotes the location of a defect with strength $\nu_{n_\ell}$. An illustration is given in Fig. \[fig:Defect\]. The expression for the Hamiltonian in the presence of a finite nearest-neighbour interaction is more lengthy but similar in structure, i.e. a defect located at $n_\ell$ affects the lattice sites $n_\ell-1$, $n_\ell$, and $n_\ell+1$ only [@Schmitteckert].
[*Single defect, no interaction*]{}. – As is apparent from Eq. (\[def:HI\]), for $\nu=0$, the Hamiltonian reduces to ${\cal H}^0$, i.e. the standard single-band tight-binding model (the hopping matrix element is chosen to be unity). In the opposite limit, $\nu=\infty$, the lattice site $n_1$ is cut out of the ring. As a result, the model represents free fermions on a ring of $M-1$ sites, however, with an additional phase factor $e^{{i}\delta_1}$, $\delta_1=\pi/2$, for the hopping between $n_1-1$ and $n_1+1$, plus one uncoupled site. We emphasize that the parameters $\cosh^{-1}(\nu)$ and $\tanh{(\nu)}$, as well as the phase factor $\delta_1=\pi/2$, are fine-tuned in the following sense: a generic impurity breaks translational invariance and lifts the degeneracies of the single-particle spectrum, which are found at $\phi=0,\pm\pi$. While our defects also break translational invariance, this symmetry is replaced by another one whose physical origin is less clear. As a result we find that even when changing $\nu$, no degeneracies are lifted — they only occur at different, $\nu$-dependent values of $\phi$. The corresponding $\nu$-dependent symmetry operators can be constructed [@Korepin93].
The localization of electronic states is another well-established phenomenon in one-dimensional disordered systems. In Fig. \[plot:WF\], we plot the squared modulus of the wavefunction for the single-particle level lowest in energy. Clearly, for the integrable case, the wavefunction is extended though reduced at the defect. Allowing, however, the phase $\delta_1$ to be different from $\pi/2$, which corresponds to the non-integrable case, we find a drastically different behaviour with a clear localization of the wavefunction near the defect.
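The non-interacting single-defect case lends itself to a direct numerical check by exact diagonalization of the single-particle hopping matrix. The following sketch is our own illustration (not part of the original work): it encodes the defect terms of Eq. (\[def:HI\]) on a ring of $M$ sites with an adjustable next-nearest-neighbour phase $\delta_1$, so that $\delta_1=\pi/2$ corresponds to the integrable defect while other values mimic a generic impurity; the system size, defect position and defect strength are arbitrary choices.

```python
import numpy as np

def single_particle_h(M=64, n1=32, nu=1.0, delta1=np.pi / 2, phi=0.0):
    """Single-particle matrix of H^0 + H^I for one defect at site n1.

    The bare hopping amplitude is 1; the flux phi is attached to the boundary
    bond; delta1 = pi/2 reproduces the fine-tuned (integrable) defect.
    """
    h = np.zeros((M, M), dtype=complex)
    for n in range(M):
        h[n, (n + 1) % M] = -1.0                     # nearest-neighbour hopping
    h[M - 1, 0] *= np.exp(1j * phi)                  # twisted boundary condition
    # the two bonds adjacent to the defect are weakened to -1/cosh(nu)
    h[n1 - 1, n1] += 1.0 - 1.0 / np.cosh(nu)
    h[n1, (n1 + 1) % M] += 1.0 - 1.0 / np.cosh(nu)
    # next-nearest-neighbour hop across the defect, with phase delta1
    h[n1 - 1, (n1 + 1) % M] += -np.exp(1j * delta1) * np.tanh(nu)
    return h + h.conj().T                            # add the hermitian conjugate

for d1 in (np.pi / 2, np.pi / 3):                    # integrable vs. generic phase
    e, v = np.linalg.eigh(single_particle_h(delta1=d1))
    psi2 = np.abs(v[:, 0]) ** 2                      # lowest single-particle level
    print(f"delta1 = {d1:.3f}:  E0 = {e[0]:+.4f},  max|psi|^2 = {psi2.max():.3f}")
```

Comparing the maximum of $|\psi|^2$ for the two phases gives a quick, if crude, indication of whether the lowest level stays extended or localizes near the defect.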
[*Several defects, finite interaction*]{}. – The results for a finite nearest-neighbour interaction, and in the presence of several defects, are obtained from the Bethe equations, which we derive from the algebraic Bethe ansatz, with the result $$\label{eq:BE}
\left[\frac{ \cosh(\lambda_j-{i}\eta)}{\cosh(\lambda_j+{i}\eta)}
\right]^{M-r}
\prod^{r}_{\ell=1}
\frac{\cosh(\
---
abstract: 'The Richardson-Lucy unfolding approach is reviewed. It is extremely simple and performs excellently. It efficiently suppresses artificial high-frequency contributions and permits the introduction of known features of the true distribution. An algorithm to optimize the number of iterations has been developed and tested with five different types of distributions. The corresponding unfolding results were very satisfactory, independent of the number of events, the number of bins in the observed and the unfolded distribution, and the experimental resolution.'
address: 'Universität Siegen, D-57068 Siegen, Germany'
author:
- 'G. Zech'
title: 'Iterative unfolding with the Richardson-Lucy algorithm'
---
unfolding; Richardson-Lucy; iterative unfolding
Introduction
============
In many experiments the measurements are deformed by limited acceptance, sensitivity or resolution of the detectors. To be able to compare and combine results from different experiments and to compare the published data to a theory, the detector effects have to be unfolded. While acceptance losses can be corrected for, unfolding resolution effects is quite involved. Naive methods produce oscillations in the unfolded distribution that have to be suppressed by regularization schemes.
Various unfolding methods have been proposed in particle physics [@any91; @cernworkshop; @cowan]. The data are usually treated in the form of histograms. This is also the case in the Richardson-Lucy (R-L) method [@rich72; @lucy74], which is especially simple, reliable, independent of the dimension of the histogram and independent of the underlying metric.
Iterative unfolding with the R-L algorithm was initially used for picture restoration. Shepp and Vardi [@shepp82; @vardi85], and independently Kondor [@kondor83], introduced it into physics. It corresponds to a gradual unfolding. Starting with a first guess of the smooth true distribution, this distribution is modified in steps such that the difference between its smeared version and the observed distribution is reduced. With increasing number of steps, the iterative procedure develops oscillations. These are avoided by stopping the iterations as soon as the unfolded distribution, when folded again, is compatible with the observed data within the uncertainties. We will discuss the details below. The R-L algorithm was originally derived using Bayesian arguments [@rich72] but it can also be interpreted in a purely mathematical way [@muelthei86; @muelthei2005]. It finally became popular in particle physics after it had been promoted by D’Agostini [@dago] with the label Bayesian unfolding. In Ref. [@lindemann] it was adapted to unbinned unfolding. In Ref. [@na38] the R-L algorithm was applied to a 4-dimensional distribution.
The present situation in particle physics is unsatisfactory for two reasons: i) There is a lack of comparative systematic studies of the different unfolding methods and ii) the way to fix the degree of smoothing, the regularization strength, is usually only vaguely defined.
In the following section we introduce the notation and formulate the mathematical relations. In Section 3 we discuss regularization and the problem of assigning errors to the unfolded distribution. In Section 4 the R-L iterative approach is described. A criterion is developed to fix the number of iterations that have to be applied and which determine the degree of regularization. Section 5 contains examples. We conclude with a summary and recommendations.
Definitions and basic relations
===============================
An event sample with variables $\{x_{1},\ldots,x_{n}\}$, the *input sample*, is produced according to a statistical distribution $f(x)$. It is observed in a detector. The *observed sample* $\{x_{1}^{\prime},\ldots,x_{n^{\prime}}^{\prime}\}$ is distorted due to the finite resolution of the detector and reduced because of acceptance losses. We distinguish between four different histograms: The *true histogram* with content $\theta_{j}$, $j=1,\ldots,N$ of bin $j$. $\theta_{j}\propto \int_{bin\text{ }j}f(x)dx$ corresponds to $f(x)$. The *input histogram* contains the input sample. The content of its bin $j$ is drawn from a Poisson distribution with mean value $\theta_{j}$. The *observed histogram* contains the observed sample with $d_{i}$ events in bin $i$, $i=1,\ldots,M$. The expected number of events $t_{i}$ in bin $i$ is given by $t_{i}\propto \int_{bin\text{ }i}f^{\prime}(x^{\prime})dx^{\prime}$ where the functions $f^{\prime}$ and $f$ are related through $f^{\prime}(x^{\prime})=\int g(x^{\prime},x)f(x)dx$ with the response function $g(x^{\prime},x)$. We choose $M>N$ to constrain the problem. The result of the unfolding procedure is again a histogram, the *unfolded histogram*, with bin content $\hat{\theta}_{j}$. We are confronted with a standard inference problem where the wanted parameters are the bin contents $\theta_{j}$ of the true histogram. It is to be solved by a least squares (LS) or a maximum likelihood (ML) fit. We discuss only one-dimensional histograms but the corresponding array may represent a multi-dimensional histogram with arbitrarily numbered cells as well.
The numbers $t_{i}$ and $\theta_{j}$ are related by the linear relation $$t_{i}=\sum_{j=1}^{N}A_{ij}\theta_{j} \label{transfer}$$ with the response matrix $A_{ij}$ $$A_{ij}=\frac{\int_{bin\text{ }i}f^{\prime}(x^{\prime})dx^{\prime}}{\int_{bin\text{ }j}f(x)dx}\;.$$
$A_{ij}$ is the probability that an event belonging to the true bin $j$ is observed in bin $i$. We calculate $A_{ij}$ by a Monte Carlo simulation, but as we do not know $f(x)$, we have to use a first guess of it. If the size of the bins is smaller than the experimental resolution, the elements of the response matrix show little dependence on the distribution that is used to generate the events.
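As a concrete illustration of this prescription (our own sketch, not taken from the paper), the response matrix can be filled by generating events from a first guess of $f(x)$, smearing them with a Gaussian resolution, and counting how events migrate between true and observed bins; the bin numbers, resolution and guess distribution below are arbitrary choices.

```python
import numpy as np

def response_matrix(sample_guess, sigma, edges_true, edges_obs, n_mc=200_000, seed=0):
    """Monte Carlo estimate of A[i, j]: probability that an event from true bin j
    is observed in observed bin i (events smeared out of range count as lost)."""
    rng = np.random.default_rng(seed)
    x = sample_guess(n_mc, rng)                      # events from the guessed f(x)
    xp = x + rng.normal(0.0, sigma, size=n_mc)       # Gaussian detector smearing
    j = np.digitize(x, edges_true) - 1               # true bin indices
    i = np.digitize(xp, edges_obs) - 1               # observed bin indices
    N, M = len(edges_true) - 1, len(edges_obs) - 1
    A = np.zeros((M, N))
    ok = (j >= 0) & (j < N) & (i >= 0) & (i < M)
    np.add.at(A, (i[ok], j[ok]), 1.0)
    per_true_bin = np.bincount(j[(j >= 0) & (j < N)], minlength=N)
    return A / np.maximum(per_true_bin, 1)           # normalize each true-bin column

# example: exponential guess on [0, 2] with N = 20 true and M = 40 observed bins
edges_true, edges_obs = np.linspace(0, 2, 21), np.linspace(0, 2, 41)
A = response_matrix(lambda n, r: r.exponential(1.0, n), 0.08, edges_true, edges_obs)
```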
We assume that the observed values $d_{i}$ fluctuate according to the Poisson distribution with the expectation $t_{i}$ and the variance $\delta_{i}^{2}=t_{i}$.
The representation of the unfolded distribution by a histogram is a first smoothing step. We call it *implicit regularization*. With wide enough bins, strong oscillations in the unfolded histogram are avoided. LS or ML fits will produce the parameter estimates $\hat{\theta}_{j}$ together with reliable error estimates. With the prediction $t_{i}$ for $d_{i}$ we can define $\chi^{2}$,
$$\chi^{2}=\sum_{i=1}^{M}\frac{\left[ d_{i}-t_{i}\right]^{2}}{t_{i}}\;,$$
and the log-likelihood $\ln L$ derived from the Poisson distribution, $$\ln L=\sum_{i=1}^{M}\left[ d_{i}\ln t_{i}-t_{i}\right] \;.\label{likstat}$$ Minimizing $\chi^{2}$ or maximizing $\ln L$ determines the estimates of the parameters $\hat{\theta}_{j}$. The ML fit is also applicable with small event numbers $d_{i}$ and suppresses negative estimates of the parameter values. Negative values can occur in rare cases.
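For concreteness, a minimal sketch of the standard R-L update, stopped once the folded prediction is compatible with the data in the sense of the $\chi^{2}$ just defined, is given below. This is our own illustration; the update rule is the usual Richardson-Lucy one, and the stopping threshold used here is an arbitrary placeholder for the criterion developed in Section 4.

```python
import numpy as np

def rl_unfold(d, A, theta0=None, max_iter=1000, chi2_per_bin=1.0):
    """Richardson-Lucy iteration
        theta_j <- theta_j * (sum_i A_ij d_i / t_i) / (sum_i A_ij),
    stopped once chi2 / M drops below the chosen threshold."""
    M, N = A.shape
    acceptance = np.maximum(A.sum(axis=0), 1e-12)          # sum_i A_ij per true bin
    theta = np.full(N, d.sum() / N) if theta0 is None else np.asarray(theta0, float).copy()
    it, chi2 = 0, np.inf
    for it in range(max_iter):
        t = A @ theta                                      # folded prediction
        chi2 = np.sum((d - t) ** 2 / np.maximum(t, 1e-12))
        if chi2 / M <= chi2_per_bin:
            break
        theta *= (A.T @ (d / np.maximum(t, 1e-12))) / acceptance
    return theta, it, chi2

# usage with a response matrix A and an observed histogram d (Poisson counts):
# theta_hat, n_iter, chi2 = rl_unfold(d, A)
```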
The regularization and the error assignment
===========================================
In particle physics the data are often distorted by resolution effects. This means that without regularization the numbers of events in neighboring bins of the unfolded histogram are negatively correlated and, as a consequence, local fluctuations are observed. More precisely, the fitted parameters $\hat{\theta}_{j},\hat{\theta}_{j^{\prime}}$ in two true bins $j,j^{\prime}$ are anti-correlated if their events have sizable probabilities $A_{ij},A_{ij^{\prime}}$ to fall into the same observed bin $i$. These specific correlations are taken into account in most unfolding methods. An exception is entropy regularization [@nara86; @sch94; @maga98] which also penalizes fluctuations between distant bins.
The $\chi^{2}$ surface of the unregularized fit near its minimum $\chi_{0}^{2}$ is rather shallow, and large correlated parameter changes produce only small changes $\Delta\chi^{2}$ of the fit’s $\chi^{2}$. The location of the true parameter point in the parameter space is badly known, but the surfaces of $\chi_{0}^{2}+\Delta\chi^{2}$ for not too small values of $\Delta\chi^{2}$ are well defined and fix the error intervals, which should not be affected by the regularization. We are allowed to move the point estimate but the error intervals should not be shifted. The regularization should lead only to a small increase of $\chi^{2}$. The increase $\Delta\chi^{2}=\chi^{2}-\chi_{0}^{2}$ defines an $N$-dimensional error interval around the fitted point in the parameter space. It can be converted to a $p$-value $$p=\int_{\Delta\chi^{2}}^{\infty}u_{N}(z)dz \label{pvalue}$$ where $u_{N}$ is the $\chi^{2}$
---
abstract: 'A *Bagnera-de Franchis variety* $X = A/G$ is the quotient of an abelian variety $A$ by a free action of a finite cyclic group $G \subset \operatorname{Bihol}(A)$, which does not contain only translations. Constructing explicit polarizations and using a method introduced by F. Catanese, we classify split Bagnera-de Franchis varieties up to complex conjugation in dimensions $\leq 4$.'
address: |
Lehrstuhl Mathematik VIII\
Mathematisches Institut der Universität Bayreuth, NW II\
Universitätsstr. 30\
D-95447 Bayreuth\
Germany
author:
- Andreas Demleitner
title: 'Classification of Bagnera-de Franchis Varieties in Small Dimensions'
---
Introduction
============
This work studies free group actions of finite groups $G$ on abelian varieties $A$ and the corresponding quotients. Here, the group $G$ is a group of affine transformations of $A$, but not a subgroup of the group of translations (else, the quotient would be again an abelian variety). A quotient of an abelian variety by such a group $G$ is called a *generalized hyperelliptic variety*. More generally, one defines a *generalized hyperelliptic manifold* to be the quotient of a complex torus by a group $G$ as above.\
The study of these varieties dates back to the beginning of the 20th century, when Bagnera and de Franchis as well as Enriques and Severi published their seminal works [@BdF] and [@Enr-Sev], respectively. In the surface case, the classification result of Bagnera and de Franchis shows that there are no non-projective hyperelliptic manifolds. Since then, several authors have studied hyperelliptic manifolds, as well as related areas that contributed a lot to today’s understanding of this topic. To name only a few works: [@Uchida-Yoshihara], [@Fujiki], [@BGL], [@Catanese-Ciliberto]. In 2001, Lange ([@Lange]) gave a method to classify BdF-varieties up to dimension $4$, relying heavily on the tables of linear automorphisms of abelian surfaces and threefolds (loc. cit.), although he omitted some calculations in his work. It does not seem that this method can be used for the classification in dimension $> 4$ (because tables of linear automorphisms are - as far as we know - currently only known up to dimension $3$). Instead, Catanese [@Fabrizio] introduced a method for the classification based on elementary linear algebra and number theory, which will be explained and used in this paper for the classification in higher dimensions.
Let us explain how this work is organized. The first chapter mainly recalls some basic facts we will need and establishes some elementary results concerning combinatorics of automorphisms of complex tori. In section \[bdf-chapter\] we introduce Bagnera-de Franchis varieties as quotients of an abelian variety $A$ by a free action of a finite cyclic group $G$ which is not a subgroup of the group of translations and state a characterization for them: a BdF-variety $X=A/G$ splits as $A = (B_1 \times B_2)/(G \times T_r)$, where $T_r$ is a finite group of translations, such that suitable properties are satisfied (cf. Theorem \[charac\]). Here, $G$ acts on $B_1$ by translation and linearly on $B_2$.\
In Chapter 2 we follow [@Fabrizio] and introduce the *Hodge type* of a $G$-Hodge decomposition, an invariant attached to a faithful representation $G \to \operatorname{GL}({\Lambda})$, where ${\Lambda}$ is a free abelian group of even rank.\
Catanese’s method (loc. cit.) for the classification of BdF-varieties will be discussed in Chapter 3. We will assume here that the lattice ${\Lambda}_2$ of $B_2$ is a module over a direct sum of cyclotomic rings (in this case we call $X$ *split*). This yields a decomposition of our abelian variety $B_2$ into $G$-invariant abelian subvarieties $B_{2,k}$, on which $G$ acts with eigenvalues of order $k$. We go on classifying complex tori which admit a linear automorphism acting only with eigenvalues of order $k$ to be able to list all possible decompositions for $B_2$. In Chapter 4 and Chapter 5, we put all pieces together (such that the conditions in the characterization of BdF-varieties are satisfied) and obtain the following classification result:
\[class-result\] The following classification results hold.
1. There are no BdF-curves.
2. Families of split BdF-varieties $X$ of dimension $\leq 4$ are fully classified up to complex conjugation in Tables 5-7.
3. Families of complex tori of dimension $\leq 5$, which admit a linear automorphism of order $m := |G|$ whose eigenvalues are only primitive $m$-th roots of unity are fully classified up to complex conjugation in sections \[surface-case\] to \[5dim\].
4. Each family of complex tori of dimension $\leq 5$ as in iii) contains an abelian variety.
Moreover, except possibly for the cases listed in Table 7, every family listed in iii) contains a principally polarized abelian variety.
The one-dimensional case i) is an easy consequence of the Riemann-Hurwitz formula, while the classification result for two-dimensional BdF-varieties is exactly the classification result of Bagnera-de Franchis, Enriques-Severi ([@BdF], [@Enr-Sev]). The threefold case was treated by Lange ([@Lange]). However, the result ii) in $\dim(X) = 4$ is new, and so are iii) and iv) (as far as we know).\
The problem we face during the classification of BdF-varieties is that we do not know whether the classified families of complex tori in iii) really contain abelian varieties. This question will be dealt with in the last chapter: we find explicit polarizations for these, which turn out to be principal in most cases. We also investigate the problem of projectivity from another point of view, explaining how the following result by T. Ekedahl (for a detailed proof, see [@Demleitner]) applies to our situation.
\[Ekedahl\] Let $({\ensuremath{T}},G)$ be a rigid group action of a finite group $G$ on a complex torus ${\ensuremath{T}}$. Then ${\ensuremath{T}}/G$ is projective (or equivalently, $T = A$ is an abelian variety).
**Notation**: We fix the following notation throughout the whole work. We will work over the field ${\ensuremath{\mathbb{C}}}$ of complex numbers. By an *abelian variety*, we will therefore mean a complex abelian variety. The notion of a *ring* will always mean a commutative ring with unit element. The set of natural numbers ${\ensuremath{\mathbb{N}}}$ will denote the set of all non-negative integers. The dual space of a vector space $V$ is denoted $V^\vee$.
[^1]
Preliminaries
=============
In this section, we recall some basic facts which we will need in the sequel. Let ${\ensuremath{T}}= V/{\Lambda}$ be a complex torus. It is well-known (cf. for instance [@Cpl-Ab-Var]) that ${\ensuremath{T}}$ is an abelian variety if and only if there is an alternating ${\ensuremath{\mathbb{Z}}}$-bilinear form ${\ensuremath{E}}$ on ${\Lambda}$ such that the associated ${\ensuremath{\mathbb{R}}}$-bilinear form $H \colon V \times V \to {\ensuremath{\mathbb{C}}}$ given by $H(v,w) = {\ensuremath{E}}({\ensuremath{\iota}}v,w)+ {\ensuremath{\iota}}{\ensuremath{E}}(v,w)$ is Hermitian and positive definite. These conditions are commonly known as the two *Riemann Bilinear Relations*. The Riemann Bilinear Relations can also be expressed in the following way. The form ${\ensuremath{E}}$ extends ${\ensuremath{\mathbb{C}}}$-linearly to a form ${\ensuremath{E}}$ on ${\Lambda}\otimes_{\ensuremath{\mathbb{Z}}}{\ensuremath{\mathbb{C}}}= V \oplus \overline{V}$. We have $$\begin{aligned}
{\ensuremath{E}}\in (V^\vee \otimes V^\vee) \oplus (V^\vee \otimes \overline{V}^\vee) \oplus (\overline{V}^\vee \otimes V^\vee) \oplus (\overline{V}^\vee \otimes \overline{V}^\vee).
\end{aligned}$$ Hence, ${\ensuremath{E}}$ splits as a sum ${\ensuremath{E}}= {\ensuremath{E}}_1 + H_1 + H_2 + {\ensuremath{E}}_2$ (where ${\ensuremath{E}}_1$ is in the first direct summand, $H_1$ is in the second one, and so on). Now we have (cf. [@Griffiths-Harris p. 327]):
The following
---
abstract: 'We present the results from a monitoring campaign of the Narrow-Line Seyfert 1 galaxy PG 1211+143. The object was monitored with ground-based facilities (UBVRI photometry; from February to July, 2007) and with [[*Swift*]{}]{} (X-ray photometry/spectroscopy and UV/Optical photometry; between March and May, 2007). We found PG 1211+143 in a historical low X-ray flux state at the beginning of the [[*Swift*]{}]{} monitoring campaign in March 2007. It is seen from the light curves that while violently variable in X-rays, the quasar shows little variation in the optical/UV bands. The X-ray spectrum in the low state is similar to that of other Narrow-Line Seyfert 1 galaxies during their low states and can be explained by a strong partial covering absorber or by X-ray reflection onto the disk. With the current data set, however, it is not possible to distinguish between the two scenarios. The interband cross-correlation functions indicate a possible reprocessing of the X-rays into the longer wavelengths, consistent with the idea of a thin accretion disk powering the quasar. The time lags between the X-ray and the optical/UV light curves, ranging from $\sim$2 to $\sim$18 days for the different wavebands, scale approximately as $\sim \lambda^{4/3}$, but appear to be somewhat larger than expected for this object, taking into account its accretion disk parameters. Possible implications for the location of the X-ray irradiating source are discussed.'
title: 'Studying X-ray reprocessing and continuum variability in quasars: PG 1211+143'
---
\[firstpage\]
quasars: individual: PG 1211+143; quasars: general; galaxies:active, photometry; accretion, accretion disks
Introduction
============
Powered by accretion, supposedly onto a supermassive black hole, quasars (Active Galactic Nuclei, AGN) have long been known mostly as highly energetic, exotic objects in the hearts of galaxies. Only recently was their key role in galaxy evolution realized, revealed mostly as a strong correlation between the properties of the central black hole and those of the host galaxy (Magorrian et al. 1998, Ferrarese & Merritt 2000). Studying quasars, therefore, is not only important for understanding the underlying physics; it can also help to shed some light on the strange interplay between the accreting matter from the host and outflows from the center, which ultimately shape both the black hole and the galaxy.
Although a general picture of the structure of a typical quasar seems to be widely accepted (e.g. Elvis 2000; see also Krolik 1999), there are still many details in this picture that are not fully understood. Many of the problems to be solved concern AGN continuum variability – a rather common property, observed in practically all energy bands. Its universality perhaps indicates that variability is an intrinsic property of the processes responsible for continuum generation. The optical/UV to X-ray part of the continuum spectrum is typically assumed to originate from an accretion disk around the central supermassive black hole.
Generally, X-ray variability can be caused by several factors: a change in the accretion rate; variable absorption (e.g. Abrassart & Czerny 2000); variable reflection (e.g. through a change of the height of the irradiating source, Miniutti & Fabian 2004; see also Gallo 2006, Done & Nayakshin 2007); some combination of reflection and absorption (e.g. Chevalier et al. 2006; Turner & Miller 2009); hot spots orbiting the central black hole (Turner et al. 2006; Turner & Miller 2009); local flares (Czerny et al. 2004), etc.
The AGN type with the strongest X-ray variability is the class of Narrow-Line Seyfert 1 galaxies (NLS1s, e.g. Osterbrock & Pogge, 1985). In addition, NLS1s show the steepest X-ray spectra seen among all AGN (e.g. Boller et al. 1996, Brandt et al. 1997, Leighly 1999a, b, Grupe et al. 2001). Most of their observed properties, like spectral slopes, FeII and \[OIII\] line ratios, CIV shifts, etc., appear to be driven by the relatively high Eddington ratio $L/L_{\rm Edd}$ in these objects (e.g. Sulentic et al. 2000, Boroson 2002, Grupe 2004, Bachev et al. 2004).
As far as the optical/UV variability is concerned, the picture is even more puzzling. There are many factors that can contribute to the variations of the optical flux, but most of them can only account for the long-term (months to years) changes. In many objects, however, short-term (day-to-week) optical/UV variations are often reported, simultaneous with or shortly lagging behind the X-ray variations. An interesting idea that can explain such behaviour is reprocessing of the highly variable X-ray emission into the optical/UV bands.
In this paper we address the question of the relations between the X-ray and the optical/UV emission by studying the variability from X-rays to I-band of the NLS1 PG 1211+143. This NLS1 has been the target of almost all major X-ray observatories since EINSTEIN (Elvis et al. 1985). The X-ray continuum displays a strong and variable soft X-ray excess (Pounds & Reeves 2007). From XMM-Newton RGS data, Pounds et al. (2003) suggested the presence of high-velocity outflows in PG 1211+143, a result that was questioned by Kaspi & Behar (2006). However, high-velocity outflows seen in X-rays have been repeatedly reported (e.g. Leighly et al. 1997) and new XMM-Newton data of PG 1211+143 (Pounds & Page 2006) seem to confirm the previous claims made by Pounds et al. (2003).
Our primary goal is to find out if and how the X-ray variations are transferred into the longer-wavelength continuum. Time delays between the flaring X-ray emission, presumably coming from a compact central source, and the optical/UV light curves are expected, provided the X-rays are reprocessed in the outer, colder part of an accretion disk. Such a study may have implications for two important problems – the radial temperature distribution of an accretion disk (and hence the type of the disk) and the location of the X-ray source, based on how much the disk “sees” it.
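The expected wavelength dependence of such reprocessing lags can be made explicit with a short calculation of our own: for a standard thin disk with $T(R)\propto R^{-3/4}$, the radius whose thermal emission peaks at wavelength $\lambda$ scales as $R\propto\lambda^{4/3}$, so the light-travel delay behind a central X-ray source scales as $\tau\propto\lambda^{4/3}$. The snippet below only evaluates these relative scalings for nominal UBVRI effective wavelengths; the wavelengths and the unit reference lag are illustrative numbers, not values fitted in this work.

```python
# relative reprocessing lags for a T(R) ~ R^(-3/4) disk: tau(lambda) ~ lambda^(4/3)
bands = {"U": 3650.0, "B": 4450.0, "V": 5510.0, "R": 6580.0, "I": 8060.0}  # Angstrom, nominal
lam_ref, tau_ref = bands["B"], 1.0          # normalize to an arbitrary B-band lag of 1
for name, lam in bands.items():
    tau = tau_ref * (lam / lam_ref) ** (4.0 / 3.0)
    print(f"{name}: lambda = {lam:6.0f} A, relative lag = {tau:.2f}")
```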
This paper is organized as follows: In Section 2 we describe the [[*Swift*]{}]{} and ground-based optical monitoring observations. Section 3 focuses on presenting the results of this study and in Section 4 we discuss these results in the context of the general picture of AGN. Throughout the paper spectral indexes are denoted as energy spectral indexes with $F_{\nu} \propto \nu^{-\alpha}$. Luminosities are calculated assuming a $\Lambda$CDM cosmology with $\Omega_{\rm M}$=0.27, $\Omega_{\Lambda}$=0.73 and a Hubble constant of $H_0$=75 km s$^{-1}$ Mpc$^{-1}$.
Observations and reductions
===========================
Swift data
----------
The [[*Swift*]{}]{} Gamma-Ray Burst (GRB) explorer mission (Gehrels et al. 2004) monitored PG 1211+143 between 2007 March 08 and May 20. Note that scheduled observations were twice bumped by detections of Gamma-Ray Bursts[^1], explaining the absence of segments 15 and 20 (Table A1). After our monitoring campaign in 2007, PG 1211+143 was re-observed by [[*Swift*]{}]{} in February 2008 (segment 24). However, this observation was used to slew between two targets. Therefore, this observation is very short (188 s) and no X-ray spectra or UVOT photometry data were obtained. This observation only allows a count rate to be measured. A summary of all [[*Swift*]{}]{} observations is given in Table \[obs\_log\]. The [[*Swift*]{}]{} X-Ray Telescope (XRT; Burrows et al. 2005) was operating in photon counting mode (PC mode; Hill et al. 2004) and the data were reduced by the task [*xrtpipeline*]{} version 0.10.4, which is included in the HEASOFT package 6.1. Source photons were selected in a circular region with a radius of 47$^{''}$, and the background was taken from a nearby source-free region with $r=188^{''}$. Photons were selected with grades 0–12. The photons were extracted with [*XSELECT*]{} version 2.4. The spectral data were re-binned using [*grppha*]{} version 3.0.0 with 20 photons per bin. The spectra were analyzed with [*XSPEC*]{} version 12.3.1 (Arnaud 1996). The ancillary response function files (arfs) were created by [*xrtmkarf*]{} and corrected for vignetting and bad columns/pixels using the exposure maps. We used the standard response matrix [*swxpc0to12s0\_20010101v010.rmf*]{}. Especially during the low-state the number of photons during one segment is too small to derive a spectrum with decent signal-to-noise. Therefore we co-added the data of several segments to obtain source and background spectra. In order to examine
---
abstract: 'Trapped ultracold neutrons (UCN) have for many years been the mainstay of experiments to search for the electric dipole moment (EDM) of the neutron, a critical parameter in constraining scenarios of new physics beyond the Standard Model. Because their energies are so low, UCN preferentially populate the lower region of their physical enclosure, and do not sample uniformly the ambient magnetic field throughout the storage volume. This leads to a substantial increase in the rate of depolarization, as well as to shifts in the measured frequency of the stored neutrons. Consequences for EDM measurements are discussed.'
author:
- 'P.G. Harris'
- 'J.M. Pendlebury'
- 'N.E. Devenish'
bibliography:
- 'neutron\_edm.bib'
title: 'Gravitationally enhanced depolarization of ultracold neutrons in magnetic field gradients, and implications for neutron electric dipole moment measurements '
---
Introduction
============
Ultracold neutrons (UCN) are neutrons of extremely low energy, typically less than or of the order of 200 neV, which therefore have wavelengths that are long compared with the spacing between atomic nuclei in solids. The surfaces of many materials then appear as a positive potential barrier (the so-called Fermi potential) from which these neutrons reflect. This allows the storage of such neutrons in material bottles, typically for several minutes at a time, which in turn permits the study of their fundamental properties. One such study is the ongoing search for the electric dipole moment (EDM) of the neutron, of which the most recent measurement was carried out at the Institut Laue-Langevin, Grenoble, by a collaboration led by the University of Sussex and the Rutherford Appleton Laboratory,[@baker06] using apparatus at room temperature (in contrast to its cryogenic successor, now under development).
The internal volume of the neutron trap used in the room-temperature EDM experiment (RT-nEDM) was an upright cylinder 12 cm high, with quartz walls 37 cm in diameter and a roof and floor of aluminium coated with diamond-like carbon. Crucial to the analysis of the experimental data was the fact that the UCN, being of very low energy, tended to populate preferentially the lower part of the storage volume, whereas the cohabiting mercury ($^{199}$Hg) magnetometer[@green98] filled the volume uniformly. Any vertical magnetic-field gradient $\dBzdz$ applied to the volume would affect the two species differently, such that the gyromagnetic-ratio-corrected ratio of the neutron and mercury Larmor precession frequencies $$\label{eqn:R}
R = \left| \frac{\nu_n}{\nu_{\rm Hg}}\cdot \frac{\gamma_{\rm Hg}}{\gamma_{n}} \right|$$ would, to first order, be shifted by $$\label{eqn:DeltaR}
\Delta R = \pm \Delta h \cdot \frac{\dBzdz}{B_{0_z}},$$ where $\Delta h$ is the (always positive) difference in height between the centre of mass of the mercury and that of the UCN, and the $\pm$ sign depends upon the relative directions of $B_{0_z}$ and $\dBzdz$: $R$ increases (i.e. $\Delta R$ becomes more positive) as the absolute magnitude of the field at the bottom of the storage cell (sampled preferentially by the neutrons) increases relative to that at the top of the cell.
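To get a feel for the size of this shift, the following short evaluation uses a representative height offset $\Delta h$ together with assumed, merely illustrative values of the holding field and of the vertical gradient (they are plausible orders of magnitude, not numbers taken from this paper).

```python
# Delta R = Delta_h * (dBz/dz) / B0z, with illustrative field values
delta_h = 2.8e-3     # m, representative height offset between Hg and UCN centres of mass
dBz_dz = 1.0e-9      # T/m, assumed vertical field gradient
B0z = 1.0e-6         # T, assumed magnitude of the holding field
print(f"Delta R = {delta_h * dBz_dz / B0z:.2e}")   # about 3e-6 for these numbers
```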
The Larmor precession frequency of the UCN was measured by means of the Ramsey method of separated oscillatory fields, for which a time $T$ = 130 s between the two r.f. pulses was used consistently. During this period, the UCN would suffer some loss of their (transverse) polarization. This study looks at some of the mechanisms and consequences of this so-called $T_2$ relaxation. For the sake of example, all values of the various parameters used in modelling the phenomenon (storage cell size, Fermi potential, $B_{0_z}$ magnitude etc.) are those appropriate to the RT-nEDM experiment.
UCN density distributions
=========================
It is convenient to refer to the energy of UCN in terms of the maximum height attainable within Earth’s gravitational field. Phase space arguments can be used to demonstrate[@golub_UCN_book; @pendlebury94] that a population of trapped UCN each of energy $\epsilon$ has a density distribution with height $h$ of the form $$\label{eqn:height_dist}
n(h) = \left(1-\frac{h}{\epsilon}\right)^{1/2}n(0).$$ Integration and inversion of this function shows that the height distribution of such UCN within a storage cell of height $H$ may be generated from numbers $X$ distributed uniformly between 0 and 1 via the equation $$\label{eqn:h_generator}
h = \epsilon\left[1-\left(1-kX\right)^{2/3}\right].$$ The constant $k=1$ when $\epsilon<H$, and $k=1-\left(1-H/\epsilon\right)^{3/2}$ otherwise. We note that the average height of the UCN within this population is $$\label{eqn:average_height}
\left<h\right> = -\frac{\epsilon}{k}\left[0.6-k-0.6\left(1-k\right)^{5/3}\right].$$
UCN may be generated by capturing the very low-energy tail of the Maxwell-Boltzmann distribution within a thermal source, or else by downscattering from e.g. liquid helium in a superthermal source. In either case, the energy distribution tends to be close to $$\label{eqn:energy_distn}
n(\epsilon)d\epsilon \propto \epsilon^{1/2}d\epsilon.$$ By the time the UCN are stored, this distribution can change: for example, allowing the UCN to fall under gravity will shift the entire energy distribution upwards; or passage through a polarising foil can remove those of low energy. The top of the energy distribution tends to have a fairly sharp cut-off, corresponding to the Fermi potential of the storage vessel. In the case of RT-nEDM, the UCN rose under gravity after passage through a polarising foil, and the bottom of the storage cell was positioned such that those with just enough energy to pass through the foil would also have just enough energy to reach the cell. Here, therefore, the energy distribution is modelled with the simple function of \[eqn:energy\_distn\], using the 93 cm equivalent height Fermi potential of the quartz walls of the vessel as the cutoff energy. As above, integration and inversion yields a generating function $$\label{eqn:E_generator}
\epsilon = \epsilon_{F}Y^{2/3},$$ where $\epsilon_{F}$ is the (Fermi potential) cut-off energy, and the numbers $Y$ are distributed uniformly between 0 and 1.
The distribution of average heights of a population of UCN with such an energy distribution is shown in \[fig:height\_dist\]. The centre of mass of this modelled population of UCN is 3.0 mm below the centre of the storage trap, in good agreement with the 2.81 mm reported in Baker et al.[@baker06]. There is a small but extended tail, amounting to some 4.6% of the total population, of neutrons that are not sufficiently energetic to reach the lid of the bottle. Over time, in a vertical magnetic-field gradient, two processes contributing to $T_2$ depolarisation come into play: (a) There is an energy dependence to the natural depolarisation rate in a magnetic-field gradient, because of the different rates at which the neutrons sample the measurement volume. This will be referred to as the [*intrinsic*]{} component, and is modelled in this study by means of a simulation. It is applicable even in the absence of a gravitational field. (b) Under gravity, UCN at different average heights effectively sample different magnetic fields, and therefore on average precess at different rates. This will be referred to as the [*enhanced*]{} component, and is here modelled by means of the analytic distributions described above.
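The quoted numbers can be checked with a few lines of Monte Carlo based directly on the generating functions for $h$ and $\epsilon$ above; the sketch below is our own verification, using $H=12$ cm and $\epsilon_F=93$ cm as stated in the text, and returns a centre of mass a few millimetres below the trap centre together with a few-percent fraction of neutrons unable to reach the lid.

```python
import numpy as np

rng = np.random.default_rng(1)
H, eps_F = 0.12, 0.93                    # trap height and Fermi-potential cutoff [m]
n = 1_000_000

Y = rng.random(n)
eps = eps_F * Y ** (2.0 / 3.0)           # energies drawn from n(eps) ~ eps^(1/2)
k = np.where(eps < H, 1.0, 1.0 - (1.0 - np.minimum(H / eps, 1.0)) ** 1.5)
X = rng.random(n)
h = eps * (1.0 - (1.0 - k * X) ** (2.0 / 3.0))   # heights from n(h) ~ (1 - h/eps)^(1/2)

print("centre of mass below trap centre [mm]:", 1e3 * (H / 2 - h.mean()))
print("fraction unable to reach the lid:", (eps < H).mean())
```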
Simulation of UCN depolarisation in magnetic-field gradients
============================================================
Other studies have considered $T_1$ (longitudinal) and $T_2$ (transverse) relaxation rates of atoms in various configurations of electromagnetic fields and storage trap geometries.[@gamblin65; @schearer65; @cates88; @cates88b; @mcgregor90; @schmid08] An approach often adopted is that of the autocorrelation function, as outlined by McGregor.[@mcgregor90] In this instance, however, the situation is complicated by the parabolic nature of the orbits of the UCN moving under gravity. For this study, therefore, a Monte Carlo simulation has been developed, in which the UCN move in ballistic trajectories within the RT-nEDM cylindrical trap described above, and their spins evolve classically according to the solution $$\begin{aligned}
\vec{\sigma}(t) &= \left(\vec{\sigma}_0-\frac{\left(\vec{\sigma}_0\cdot \vec{B}\right)\vec{B}}{B^2}\right)\cos\left(\omega t\right) \\
---
abstract: 'We have recently identified metal-sandwich (MS) crystal structures and shown with [*ab initio*]{} calculations that the MS lithium monoboride phases are favored over the known stoichiometric ones under hydrostatic pressure \[Phys. Rev. B 73, 180501(R) (2006)\]. According to previous studies, synthesized lithium monoboride tends to be boron-deficient; however, the mechanism leading to this phenomenon is not fully understood. We propose a simple model that explains the experimentally observed off-stoichiometry and show that compared to such boron-deficient phases the MS-LiB compounds still have lower formation enthalpy under high pressures. We also investigate the stability of MS phases for a large class of metal borides. Our [*ab initio*]{} results suggest that MS noble metal borides are less unstable than the corresponding AlB$_2$-type phases but not stable enough to form under equilibrium conditions.'
author:
- 'Aleksey N. Kolmogorov and Stefano Curtarolo'
title: Theoretical study of metal borides stability
---
1. Introduction {#section.introduction}
===============
The interest in the AlB$_2$ family of metal diborides re-emerged after the discovery of superconductivity in MgB$_2$ with a surprisingly high transition temperature of 39 K[@origin]. Boron $p$-states have been shown to be key for both stability and superconductivity in these compounds[@Kortus; @Shein; @Oguchi]. MgB$_2$ is a unique metal diboride because it has a significant density of boron $p\sigma$-states at the Fermi level which give rise to the high T$_c$ superconductivity, and yet enough of them are filled for the compound to be structurally stable[@Kortus; @Shein; @Oguchi]. The effectively hole-doped noble- and alkali-metal diborides would have higher $p\sigma$ density of states (DOS) at E$_F$, but they have been demonstrated to be unstable under normal conditions[@Oguchi]. The effort to achieve higher T$_c$ has thus primarily focused on doping magnesium diboride with various metals; however, doping this material has proven to be difficult[@dope_review] and no improvement on T$_c$ has yet been reported. According to a recent theoretical study of nonlocal screening effects in metals, MgB$_2$ may already be optimally doped[@peihong]. Lithium borocarbide with a doubled AlB$_2$ unit cell has been suggested as a possible high-T$_c$ superconductor under hole-doping[@LiBC], but disorder in the heavily doped Li$_x$BC appears to forbid superconductivity above 2K[@LixBC].
In this work we investigate whether there could be stable high-T$_c$ superconducting metal borides in configurations beyond the standard AlB$_2$ prototype. We have recently proposed metal-sandwich (MS) structures MS1 and MS2, which also have $sp^2$ layers of boron but separated by two metal layers[@MGB]. Despite their rather simple unit cells these structures have apparently never been considered before. As we demonstrate below, identification of the MS structures is not straightforward because they represent a local minimum not usually explored with current compound prediction strategies[@MGB]. We reveal trends in the cohesion of MS phases by calculating formation energies for a large class of metal borides and show that some monovalent-metal borides benefit from having additional layers of metal. The MS noble-metal borides still have positive formation energy, but they are less unstable than the AlB$_2$-type phases. This result helps resolve the question of what phases would form first in the noble-metal boride systems under non-equilibrium conditions[@AgxB2; @hype; @PF; @Ag_laser].
Our main finding concerns the Li-B system, in which the MS lithium monoboride is stable enough to compete against the known stoichiometric phases. According to our previous [*ab initio*]{} calculations the MS lithium monoboride is comparable in energy to these phases under normal conditions, but it becomes the ground state at 50% concentration under moderate hydrostatic pressures[@MGB]. Here we extend the analysis to non-stoichiometric Li-B phases which could potentially intervene in the synthesis of the MS phases. In particular, synthesized lithium monoboride with linear chains of boron is known to be boron-deficient for reasons not fully understood so far. We simulate the incommensurate LiB$_y$ phases (notation explained in Ref. [@xy]) by constructing a series of small periodic Li$_{2n}$B$_m$ structures and show that the minimum formation energy is achieved for $y\approx0.9$, in very good agreement with the experimentally observed values. Using this simple model of the off-stoichiometry phases with linear chains of boron, we demonstrate that, relative to them, MS-LiB still has a lower formation enthalpy under high pressures. Simulations of other alkali-metal borides, MB$_y$ (M = Na, K, Rb, Cs), suggest that these nearly stoichiometric phases might form under moderate pressures.
The paper is divided into the following sections: 2) simulation details; 3) construction of the MS prototypes; 4) stability of MS phases for a large class of metal borides; 5) detailed investigation of the Li-B system; 6) simulations of other monovalent and higher-valent metal borides; 7) summary of the electronic and structural properties of the MS phases.
2. Computation details {#section.methods}
======================
Present [*ab initio*]{} calculations are performed with Vienna Ab-Initio Simulation Package [VASP]{} [@kresse1993; @kresse1996b] with Projector Augmented Waves (PAW) [@bloechl994] and exchange-correlation functionals as parametrized by Perdew, Burke, and Ernzerhof (PBE)[@PBE] for the Generalized Gradient Approximation (GGA). Because of a significant charge transfer between metal and boron in most structures considered we use PAW pseudopotentials in which semi-core states are treated as valence. This is especially important for the Li-B system as discussed in Refs. [@MGB; @US_PAW]. Simulations are carried out at zero temperature and without zero-point motion; spin polarization is used only for magnetic alloys. We use an energy cutoff of 398 eV and at least 8000/(number of atoms in unit cell) ${\bf k}$-points distributed on a Monkhorst-Pack mesh [@MONKHORST_PACK]. We also employ an augmented plane-wave+local orbitals (APW+lo) code [WIEN2K]{} to plot characters of electronic bands[@WIEN2K]. All structures are fully relaxed. Our careful tests show that the relative energies are numerically converged to within 1$\sim$2 meV/atom.
Construction of binary phase diagrams A$_x$B$_{1-x}$ is based on the calculated formation enthalpy $H_{f}$, which is determined with respect to the most stable structures of the pure elements. For boron there are two competing phases $\alpha$-B and $\beta$-B[@bB]; we use $\alpha$-B (Ref. [@Oguchi]), theoretically shown to be the more stable phase at low temperatures and high pressures [@bB]. A structure at a given composition $x$ is considered stable (at zero temperature and without zero-point motion) if it has the lowest formation enthalpy of any structure at this composition and if on the binary phase diagram $H_{f}(x)$ it lies below a [*tie-line*]{} connecting the two stable structures closest in composition to $x$ on each side.
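The tie-line construction is simply the lower convex hull of the $(x, H_{f})$ points, with the pure elements pinned at $H_{f}=0$. The sketch below is our own illustration of this stability test; the compositions and enthalpies are invented numbers, not the calculated Li-B values.

```python
def lower_hull(points):
    """Lower convex hull (Andrew's monotone chain) of (x, H_f) points, left to right.
    The input must contain the pure-element references (0, 0) and (1, 0)."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # drop the last point if it lies on or above the segment from o to p
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# invented illustrative enthalpies (eV/atom) versus metal fraction x
candidates = [(0.0, 0.0), (1.0, 0.0),        # pure boron and pure metal references
              (0.50, -0.15), (0.53, -0.17),  # stoichiometric and boron-deficient phases
              (0.25, -0.05), (0.75, +0.02)]
print(lower_hull(candidates))                # only the hull members are ground states
```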
3. Identification of MS prototypes {#section.identification}
==================================
Data-mining of quantum calculations (DMQC), introduced in our previous work[@SC1], is a theoretical method to predict the structure of materials through efficient re-use of [*ab initio*]{} results. The DMQC iteratively determines correlations in the calculated energies on a chosen library of binary alloys and structure types. The last work has demonstrated that for a set of 114 crystal structures and 55 binary metallic alloys the method gives an almost perfect prediction of the ground states (within the library) in a fraction of all possible computations[@SC1; @Morgan]. The speed-up (commonly by a factor from 3 to 4) is achieved by the method’s rational strategy for suggesting the next phase to be evaluated. An essential feature of these calculations is the full relaxation of the considered structures, which ensures an accurate determination of the correlations in the chosen library[@DMQC].
We have recently begun expanding the $114\times55$ library of [*ab initio*]{} energies of binary alloys[@SC1] to include metal borides. Boron tends to form covalent bonds in intermetallic compounds; to have this correlation in future predictions with the DMQC we needed first to add a few compound-forming metal-boride systems into the library. Introduction of a new system involves calculations of energies for all the prototype entries in the library. Surprisingly, in the very first system considered, Mg-B, one of the fcc structures with 4-atom unit cell at 50% concentration, A$_2$B$_2$ fcc-(111) (or V2 [@Zunger1]), relaxed almost all the way down to the AlB$_2$-MgB$_2$$\leftrightarrow$hcp-Mg tie-line. Significant relaxations are not uncommon in our simulations; they usually correspond
---
abstract: 'The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be intended as *heterogeneous proxytypes*. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis - as well as a novel categorization algorithm called DELTA - showing how to integrate the representational and reasoning assumptions of the theory-theory of concepts with those ascribed to the prototype and exemplar-based theories.'
author:
- Antonio Lieto
bibliography:
- 'bibliography.bib'
title: |
Heterogeneous Proxytypes Extended:\
Integrating Theory-like Representations and Mechanisms with Prototypes and Exemplars
---
Introduction
============
The proposal of characterizing the representational system of cognitive artificial agents by considering conceptual representations as *heterogeneous proxytypes* was introduced in [@lieto2014computational][^1] and has recently been employed and successfully tested in systems like DUAL-PECCS [@lieto2016dual; @lieto15ijcai; @lieto2016towards], later integrated with diverse cognitive architectures such as ACT-R [@anderson2004integrated], CLARION [@sun06clarion], SOAR [@laird2012soar] and Vector-LIDA [@snaider2014vector]. The main contribution of this work is that it offers a proposal to reconcile, under a heterogeneous representational perspective, not only prototype- and exemplar-based representations and reasoning procedures, but also the representational and reasoning assumptions ascribed to the so-called theory-theory of concepts [@murphy2002big]. In doing so, the paper proposes a novel categorization algorithm, called *DELTA* (i.e. unifie**D** Cat**E**gorization a**L**gorithm for he**T**erogeneous represent**A**tions), able to unify and integrate, in a cognitively oriented perspective, all the common-sense categorization mechanisms available in the cognitive science literature. The rest of the paper is organized as follows: Section 2 provides an overview of the main representational paradigms proposed by the Cognitive Science and Cognitive Modelling communities. Section 3 briefly synthesizes the representational framework intending concepts as *heterogeneous proxytypes*, showing how this theoretical proposal has actually been implemented and successfully tested in the DUAL-PECCS system. Section 4 proposes a closer analysis of the findings of the theory-theory of concepts, while Section 5 proposes a novel and extended categorization algorithm integrating the theory-theory representational and reasoning mechanisms with those involving both exemplars and prototypes.
Prototypes, Exemplars, Theories and Proxytypes {#prototypes_exemplars_proxytypes}
==============================================
In the Cognitive Science literature, different theories about the nature of concepts have been proposed. According to the so-called classical theory, concepts can be simply defined in terms of sets of necessary and sufficient conditions. This theory was dominant until the mid ’$70$s of the last century, when Rosch’s experimental results demonstrated its inadequacy for ordinary – or common-sense – concepts [@rosch75cognitive]. Rosch’s results suggested, on the other hand, that ordinary concepts are characterized and organized in our mind in terms of *prototypes*. Since then, different theories of concepts have been proposed to explain different representational and reasoning aspects concerning the problem of typicality: the prototype theory, the exemplar theory and the theory-theory. According to the *prototype* view, knowledge about categories is stored in terms of prototypes, i.e., in terms of some representation of the “best” instance of the category. In this view, the concept *bird* should coincide with a representation of a typical bird (e.g., a robin). In the simpler versions of this approach, prototypes are represented as (possibly weighted) lists of typical features. According to the *exemplar* view, a given category is mentally represented as a set of specific exemplars explicitly stored in memory: the mental representation of the concept *bird* is a set containing the representation of (some of) the birds we encountered during our past experience. Another well-known typicality-based theory of concepts is the so-called *theory-theory* [@murphy2002big]. This approach adopts some form of holistic point of view about concepts. According to some versions of the theory-theory, concepts are analogous to theoretical terms in a scientific theory. For example, the concept *cat* is individuated by the role it plays in our mental theory of zoology. In other versions of the approach, concepts themselves are identified with micro-theories of some sort. For example, the concept *cat* should be identified with a mentally represented microtheory about cats.
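To make the contrast between the first two accounts concrete, the toy sketch below (our own illustration, not the DELTA algorithm introduced later in this paper) categorizes a stimulus either by its distance to a stored prototype, i.e. the average feature vector of a category, or by its distance to the nearest stored exemplar; the feature vectors and category labels are invented for the example.

```python
import numpy as np

# invented feature vectors: [size, has_wings, sings, domestic]
exemplars = {
    "bird": [np.array([0.2, 1.0, 1.0, 0.1]),     # robin-like exemplar
             np.array([0.9, 1.0, 0.0, 0.0])],    # ostrich-like exemplar
    "cat":  [np.array([0.4, 0.0, 0.0, 1.0])],
}
prototypes = {c: np.mean(xs, axis=0) for c, xs in exemplars.items()}

def categorize_prototype(stimulus):
    """Prototype-based: choose the category whose average representation is closest."""
    return min(prototypes, key=lambda c: np.linalg.norm(stimulus - prototypes[c]))

def categorize_exemplar(stimulus):
    """Exemplar-based: choose the category of the single closest stored instance."""
    return min(exemplars,
               key=lambda c: min(np.linalg.norm(stimulus - e) for e in exemplars[c]))

stimulus = np.array([0.85, 1.0, 0.0, 0.0])       # a large, silent, winged animal
print(categorize_prototype(stimulus), categorize_exemplar(stimulus))
```

With suitably chosen stimuli the two strategies can disagree, which is part of the empirical motivation for treating them as distinct, co-existing bodies of knowledge.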
Although these approaches have largely been considered as competing ones (since they propose different models and predictions about how we organize and reason on conceptual information), they turned out not to be mutually exclusive [@malt1989line]. Rather, they seem to succeed in explaining different classes of cognitive *phenomena*, such as the fact that human subjects use different representations to categorize concepts. In particular, it seems that we can use - in different situations - exemplars, prototypes or theories [@smith1998prototypes; @murphy2002big; @keil1989concepts]. Such experimental evidence led to the development of the so-called “heterogeneous hypothesis” about the nature of concepts: this approach assumes that concepts do not constitute a unitary phenomenon, and hypothesizes that different types of conceptual representations may co-exist: prototypes, exemplars, theory-like or classical representations [@machery2009doing]. All such representations, in this view, constitute different *bodies of knowledge* and contain different types of information associated with the same conceptual entity. Furthermore, each body of conceptual knowledge is assumed to be characterized by the specific processes in which such representations are involved (e.g., in cognitive tasks like recognition, learning, categorization, *etc.*). In particular, prototypes, exemplars and theory-like default representations are associated with the possibility of dealing with non-monotonic strategies of reasoning and categorization, while the classical representations (i.e., those based on necessary and/or sufficient conditions) are associated with standard deductive reasoning mechanisms [^2].
In recent years an alternative theory of concepts has been proposed: the *proxytype theory*. It postulates a biological localization and interaction between different brain areas for dealing with conceptual structures. Such localization has a direct counterpart in the well-known distinction between *long-term* and *working memory* [@prinz2002furnishing]. In addition, such a characterization is particularly interesting for the explanation of phenomena such as, for example, the activation (and the retrieval) of conceptual information. In this setting, concepts are seen as *proxytypes*. A *proxytype* is any element of a complex representational network *stored in long-term memory* corresponding to a particular *category* that can be tokenized in working memory to ‘go proxy’ for that category [@prinz2002furnishing]. In other words, the proxytype theory, inspired by the work of Barsalou [@barsalou1999perceptual], considers concepts as *temporary constructs* of a given category, activated (tokenized) in working memory as a result of conceptual processing activities, such as concept identification, recognition and retrieval.
Heterogeneous Proxytypes
========================
In the original formulation of the proxytype theory, however, proxytypes have been depicted as monolithic conceptual structures, primarily intended as prototypes [@de2005prinz]. A revised view of this approach has recently been proposed, hypothesizing the availability of a wider range of representation types than just prototypes [@lieto2014computational]. These correspond to the kinds of representations hypothesized by the above-mentioned heterogeneous approach to concepts. In this sense, proxytypes are assumed to be heterogeneous in nature (i.e., they are assumed to be composed of heterogeneous networks of conceptual representations and not only of a monolithic one)[^3].
In this renewed formulation, heterogeneous representations (such as *prototypes*, *exemplars*, *theory-like* structures, *etc.*) for each conceptual category are assumed to be *stored in long-term memory*. They can be activated and accessed by resorting to different categorization strategies. In this view, each representation has its own associated access procedures. In the following, I will briefly present how this theoretical hypothesis has been implemented in the DUAL-PECCS categorization system, and I will use the latter system as a computational referent for showing how the proposals presented in this paper can extend both the system itself and, more importantly, its underlying theoretical framework.
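To make this framework more tangible, the sketch below (written purely for illustration; it is not code from DUAL-PECCS, and all class, field and function names are hypothetical) shows one possible way to organize a heterogeneous proxytype as several bodies of knowledge stored in long-term memory, each with its own access procedure that tokenizes it into working memory.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Concept:
    """Hypothetical heterogeneous proxytype: one conceptual entity with
    several co-existing bodies of knowledge stored in long-term memory."""
    name: str
    prototype: Dict[str, float] = field(default_factory=dict)        # weighted typical features
    exemplars: List[Dict[str, float]] = field(default_factory=list)  # stored specific instances
    theory: Dict[str, str] = field(default_factory=dict)             # micro-theory style relations
    classical: Dict[str, bool] = field(default_factory=dict)         # necessary/sufficient conditions

def proxyfy(concept: Concept, strategy: str):
    """Tokenize ('proxyfy') one body of knowledge into working memory,
    depending on the categorization strategy currently in use."""
    accessors: Dict[str, Callable[[], object]] = {
        "prototype": lambda: concept.prototype,
        "exemplar": lambda: concept.exemplars,
        "theory": lambda: concept.theory,
        "classical": lambda: concept.classical,
    }
    return accessors[strategy]()

# Example: a 'bird' concept with prototype and exemplar bodies of knowledge.
bird = Concept(
    name="bird",
    prototype={"flies": 0.9, "has_feathers": 1.0, "sings": 0.6},
    exemplars=[{"flies": 1.0, "has_feathers": 1.0, "sings": 1.0}],  # e.g. a robin seen in the past
)
print(proxyfy(bird, "prototype"))
```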
Heterogeneous Proxytypes in DUAL-PECCS
--------------------------------------
DUAL-PECCS [@lieto15ijcai; @lieto2016dual] is a cognitive categorization system explicitly designed and implemented under the heterogeneous proxytypes assumption[^4], both at the representational level (that is: it is equipped with a hybrid knowledge base composed of heterogeneous representations, each endowed with specific reasoning mechanisms) and at the level of the ‘proxyfication’ mechanisms (i.e.: the set of procedures implementing the tokenization of the different representations in working memory). The heterogeneous conceptual architecture of DUAL-PECCS includes prototypes, exemplars and classical representations. All these different bodies of knowledge point to the same conceptual entity (the anchoring for these different types of representations is obtained via WordNet, see again [@lieto2016dual]). An example of the heterogeneous conceptual architecture of DUAL-PECCS is
---
abstract: 'We consider Vlasov-type scaling for the Glauber dynamics in continuum with a positive integrable potential, and construct rescaled and limiting evolutions of correlation functions. Convergence to the limiting evolution for the positive density system in infinite volume is shown. The chaos preservation property of this evolution makes it possible to derive a non-linear Vlasov-type equation for the particle density of the limiting system.'
author:
- 'Dmitri Finkelshtein[^1]'
- 'Yuri Kondratiev[^2]'
- 'Oleksandr Kutoviy[^3]'
title: Vlasov scaling for the Glauber dynamics in continuum
---
Introduction
============
Kinetic equations are a useful approximation for the description of dynamical processes in multi-body systems, see, e.g., the reviews by H. Spohn [@Spo1980], [@Spo1991]. Among them, the Vlasov equation has an important role in physics (in particular, the physics of plasma). It describes the Hamiltonian motion of an infinite particle system in the mean-field scaling limit when the influence of weak long-range forces is taken into account. The convergence of the Vlasov scaling limit was shown rigorously by W. Braun and K. Hepp [@BH1977] (for the Hamiltonian dynamics) and by R. L. Dobrushin [@Dob1979] (for more general deterministic dynamical systems). However, the resulting Vlasov-type equations for particle densities are considered in classes of integrable functions (or, in the weak form, of finite measures). This, in fact, restricts us to the case of finite volume systems or systems with zero mean density in an infinite volume. A detailed analysis of Vlasov-type equations for integrable functions is presented in the recent paper by V. V. Kozlov [@Koz2008].
In [@FKK2010a], we proposed a general approach to study the Vlasov-type scaling for some classes of stochastic evolutions in the continuum, in particular, for spatial birth-and-death Markov processes. The approaches mentioned above are not applicable to these dynamics (even in a finite volume) due to essential reasons (see [@FKK2010a] for details). One of them is the possible variation of the particle number during the evolution. More essentially, for these processes a description in terms of proper stochastic evolution equations for the particle motion is, generally speaking, not available. There are only a few works concerning general spatial birth-and-death evolutions, see [@Pre1975], [@HS1978], [@GK2006], [@GK2008], [@Pen2008], [@Qi2008]. However, the conditions for the existence (in different senses) of the evolutions considered therein are quite far from being general.
Therefore, we looked for an alternative approach to the derivation of kinetic Vlasov-type equations from stochastic dynamics. The correct Vlasov limit can be easily guessed from the BBGKY hierarchy for the Hamiltonian system, see, e.g., [@Spo1980]. Such a heuristic derivation does not assume the integrability condition for the density but, until now, it could not be made rigorous due to the lack of detailed information about the properties of solutions to the BBGKY hierarchy. Our approach is based on this observation applied in a new dynamical framework. Note that we already know that many stochastic evolutions in continuum admit effective descriptions in terms of hierarchical equations for correlation functions which generalize the BBGKY hierarchy from the Hamiltonian to the Markov setting, see, e.g., [@FKO2009] and the references therein. Even more, these hierarchical equations are often the only available technical tools for the construction of the considered dynamics [@KKM2008], [@KKZ2006], [@FKK2009].
Developing this point of view, our scheme for the Vlasov scaling of stochastic dynamics is based on the proper scaling of the hierarchical equations. This scheme also has a clear interpretation in terms of scaled Markov generators. An application of the considered scaling leads to a limiting hierarchy which possesses a chaos preservation property. Namely, if we start from a Poissonian (non-homogeneous) initial state of the system, then this property is preserved during the time evolution. Moreover, the special structure of the interaction in the resulting virtual Vlasov system gives a non-linear evolution equation for the density of the evolving Poisson state.
The control of the convergence of Vlasov scalings for the considered hierarchies is a quite difficult technical problem which should be analyzed separately for any particular model. In the present paper, we solve this problem for the Glauber dynamics in continuum. These dynamics have prescribed reversible states, which are grand canonical Gibbs measures. The corresponding equilibrium dynamics, which preserve the initial Gibbs state in the time evolution, were considered in, e.g., [@KL2005], [@KLR2007], [@KMZ2004], [@FKL2007]. Note that, in applications, the time evolution of an initial state is of primary interest. Therefore, we understand the considered stochastic (non-equilibrium) dynamics as the evolution of initial distributions of the system. Actually, the corresponding Markov process (provided it exists) itself provides general technical equipment to study this problem. Moreover, using the techniques developed in [@GK2006], it is possible to construct this Markov process as a solution of a stochastic differential equation. Unfortunately, this approach does not give any information about the properties of the corresponding correlation functions, which we need for the study of the Vlasov scaling, as was mentioned above.
However, we note that the transition from the micro-state evolution corresponding to a given initial configuration to the macro-state dynamics is a well-developed concept in the theory of infinite particle systems. This point of view appeared initially in the framework of the Hamiltonian dynamics of classical gases, see, e.g., [@DSS1989]. Again, the lack of general Markov process techniques for the considered systems makes it necessary to develop alternative approaches to study the state evolutions in the Glauber dynamics. Such approaches were realized in [@KKM2008], [@KKZ2006], [@FKKZ2010], [@FKK2010]. There, the time evolution of measures on configuration spaces was described in terms of an infinite system of evolution equations for the corresponding correlation functions. The latter system is the Glauber-evolution analog of the famous BBGKY hierarchy of Hamiltonian dynamics.
Here we extend the approximation approach proposed in [@FKKZ2010], [@FKK2010] to the Vlasov scaling for the Glauber dynamics in continuum. We construct and study semigroups corresponding to the properly rescaled Markov generator of the Glauber dynamics (Propositions \[descsemigroupexist\] and \[sun-inv\]). For an integrable and bounded potential, we prove the convergence of these semigroups to the limiting semigroup which describes the Vlasov evolution (Theorem \[maintheorem\]). We derive the corresponding Vlasov-type equation from this evolution (Theorem \[Vlasovscheme\]). Note that the stationary solution of this equation satisfies the well-known Kirkwood–Monroe equation of freezing theory (Remark \[RemarkKirkwoodMonroe\]).
Glauber dynamics in continuum
=============================
Basic facts and notation
------------------------
Let ${\mathcal{B}}({{\mathbb{R}}^d})$ be the family of all Borel sets in ${{\mathbb{R}}^d}$, $d\geq 1$; ${\mathcal{B}}_{\mathrm{b}}
({{\mathbb{R}}^d})$ denotes the system of all bounded sets in ${\mathcal{B}}({{\mathbb{R}}^d})$.
The configuration space over space ${{\mathbb{R}}^d}$ consists of all locally finite subsets (configurations) of ${{\mathbb{R}}^d}$, namely, $$\label{confspace}
\Gamma =\Gamma_{{\mathbb{R}}^d} :=\Bigl\{ \gamma \subset
{{\mathbb{R}}^d} \Bigm| |\gamma _\Lambda |<\infty, \ \mathrm{for \ all } \ \Lambda \in {\mathcal{B}}_{\mathrm{b}} ({{\mathbb{R}}^d})\Bigr\}.$$ Here $\gamma_\Lambda:=\gamma\cap\Lambda$, and $|\cdot|$ means the cardinality of a finite set. The space $\Gamma$ is equipped with the vague topology, i.e., the minimal topology for which all mappings $\Gamma\ni\gamma\mapsto
\sum_{x\in\gamma}
f(x)\in{\mathbb{R}}$ are continuous for any continuous function $f$ on ${{\mathbb{R}}^d}$ with compact support; note that the summation in $\sum_{x\in\gamma} f(x)$ is taken over finitely many points of $\gamma$ which belong to the support of $f$. In [@KK2006], it was shown that $\Gamma$ with the vague topology can be metrized in such a way that it becomes a Polish space (i.e., a complete separable metric space). Corresponding to this topology, the Borel $\sigma
$-algebra ${\mathcal{B}}(\Gamma )$ is the smallest $\sigma $-algebra for which all mappings $\Gamma \ni \gamma \mapsto |\gamma_ \Lambda |\in{\mathbb{N}}_0:={\mathbb{N}}\cup\{0\}$ are measurable for any $\Lambda\in{\mathcal{B}}_{\mathrm{b}}({{\mathbb{R}}^d})$.
The space of $n$-point configurations in an arbitrary $Y\in{\mathcal{B}}({{\mathbb{R}}^d})$ is defined by $$\Gamma^{(n)}_Y:=\Bigl\{ \eta \subset Y \Bigm| |\eta |=n\Bigr\} ,\quad n\in {\mathbb{N}}.$$ We set also $\Gamma^{(0)}_Y:=\{\emptyset\}$. As a set,
---
abstract: 'Non-coherent electronic transport in metallic nanowires exhibits different carrier temperatures for the non-equilibrium forward and backward populations in the presence of electric fields. Depending on the mean free path that characterizes inter-branch carrier backscattering, transport regimes vary between the ballistic and diffusive limits. In particular, we show that the simultaneous measurements of the electrical characteristics and the carrier distribution function offer a direct way to extract the carrier mean free path even when it is comparable to the conductor length. Our model is in good agreement with the experimental work on copper nanowires by Pothier [*et al.*]{} \[Phys. Rev. Lett. [**79**]{}, 3490 (1997)\] and provides an elegant interpretation of the inhomogeneous thermal broadening observed in the local carrier distribution function as well as its scaling with external bias.'
address:
- '$^1$ Beckman Institute and Department of Physics, University of Illinois at Urbana-Champaign, IL 61801, USA.'
- '$^2$ Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL 61801, USA.'
author:
- 'M. A. Kuroda$^1$, J.-P. Leburton$^{1,2}$'
bibliography:
- 'report.bib'
title: Carrier mean free path and temperature imbalance in mesoscopic wires
---
[*Keywords*]{}: diffusive transport, nanowires, nanotubes, mean free path
The present understanding of transport in mesoscopic systems relies on two different approaches: Landau’s theory of Fermi liquids [@baymlandau] and the theory of Tomonaga-Luttinger liquids (TLL) [@solyom1979]. The latter describes correlated one-dimensional (1D) systems and is characterized by a power-law decay of the distribution function at the Fermi level at $T=0$, instead of the discontinuity observed in Fermi liquids. Recent progress in fabrication technology has made available a variety of material structures to study the electronic properties of quasi-1D systems like quantum wires [@hu1999], nanotubes [@iijima1991] and nanoribbons [@tapaszto2008]. Despite renewed experimental efforts, TLL features of 1D systems have yet to be indisputably proven, as their manifestation is fundamentally limited by their sensitivity to disorder and surface roughness, and by the fact that the inherent perturbations induced by the measurement should be much smaller than thermal fluctuations. These undesired effects cause a loss of coherence amongst particles, and transport becomes diffusive.
In the past, transport experiments on copper nanowires at low temperature [@pothier1997] have shown quasi-particle distributions with a two-step profile, a shape expected in a regime with no carrier interactions due to the superposition of the distribution functions in the leads [@nagaev1995]. However, the thermal broadening of the local distribution function (hot-carrier effects) suggested significant carrier scattering, thereby invalidating the assumption of no interactions.
In this paper we show that in non-coherent transport beyond the diffusive limit the non-equilibrium carrier distribution in 1D systems cannot be described by a single energy distribution, but rather as a superposition of two distinct (forward and backward) carrier populations coupled by mutual scattering. Depending on the strength of the coupling between the two populations, the transport regime varies between the well-known ballistic and diffusive limits. The model provides a direct correlation between the non-uniform thermal broadening of the carrier distribution and the mean free path (even when the latter is comparable to the channel length) and presents good agreement with the experimental work on mesoscopic copper wires [@pothier1997], describing both the inhomogeneous local thermal broadening and the scaling law of the carrier distribution function observed at high bias. We also discuss the conditions for the observation of these phenomena in other material systems.
![(a) Band structure of the two-branch system. (b) Forward and backward carrier distribution functions out of thermal equilibrium in the presence of a field. (c) Effective quasi-particle distribution function.[]{data-label="fig:distrib"}](./distrib.eps){width="4in"}
We assume that close to the Fermi level the energy dispersion in the 1D conductor is well described by linear branches $E_\pm(k)$[^1], as shown in Fig. \[fig:distrib\].a. For simplicity we only consider two branches, but our model and conclusions can be extended to mesoscopic systems with multiple bands since, as we show later, in 1D conductors the current and heat flow do not depend on the branch Fermi velocity of the system. We assume that each of these two branches exhibits a $2g_c$-degeneracy (where the factor of 2 accounts for the spin). We group the carrier populations according to the sign of the Fermi velocity ($v_F= \hbar^{-1} \partial_k E_\pm$). The effective intra-branch electron-electron (e-e) scattering thermalizes the distribution (i.e. $\tau_{e-e}^{intra}\rightarrow 0$), causing the loss of coherence. Hence, each of these populations is described by a Fermi distribution function $f_\eta(E)$ (with $\eta = +,-$) [@kuroda2008]. In the presence of an electric field a population imbalance between the branches arises because of inefficient inter-branch carrier scattering, which creates a quasi-Fermi level difference ($\mu_+ \neq \mu_-$) and disrupts the thermal equilibrium between the two populations ($T_+\neq T_-$), as depicted in Fig. \[fig:distrib\].b. Under these conditions we have recently shown that the net carrier and heat transport is expressed as [@kuroda2008]: $$\begin{aligned}
I = g_c G_q \frac{\mu_+-\mu_-}{e}\label{eq:current}\\
U = \frac{g_c}{2} \left(G_{th}^+ T_+ -G_{th}^- T_-\right)\label{eq:heatflow}\end{aligned}$$ in terms of the quantum electric ($G_q = e^2/(\pi \hbar) $) [@datta] and thermal ($G_{th}^\pm = \pi k_B^2 T_\pm /(3\hbar)$) [@rego1997] conductances, respectively. Neither the current nor the heat flow depends on the magnitude of the branch Fermi velocity because of the system dimensionality. Because of the constant density of states, the local carrier distribution function measured experimentally [@pothier1997; @pothier1997b] is the average of the branch distribution functions: $$f(E) = \frac{1}{2}\left[f_+(E)+f_-(E)\right] \label{eq:distfunc}$$ as shown in Fig. \[fig:distrib\].c. Two steps in the distribution function are clearly observed when $|\mu_+-\mu_-| \gg k_BT_+,k_BT_-$.
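As a quick numerical illustration of Eq. \[eq:distfunc\] (a minimal sketch with arbitrary parameter values, not the experimental ones of [@pothier1997]), averaging two Fermi functions with distinct quasi-Fermi levels and temperatures reproduces the two-step profile, with plateaus near $1$, $1/2$ and $0$ when $|\mu_+-\mu_-| \gg k_BT_\pm$:

```python
import numpy as np

def fermi(E, mu, kT):
    """Fermi-Dirac distribution."""
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

# Arbitrary illustrative values (energies and k_B*T in eV).
E = np.linspace(-0.3, 0.3, 601)
mu_plus, mu_minus = +0.1, -0.1      # quasi-Fermi levels of the forward/backward branches
kT_plus, kT_minus = 0.01, 0.015     # unequal branch temperatures

# Local distribution: average of the two branch Fermi functions.
f_local = 0.5 * (fermi(E, mu_plus, kT_plus) + fermi(E, mu_minus, kT_minus))

for e, f in zip(E[::100], f_local[::100]):
    print(f"E = {e:+.2f}  f(E) = {f:.3f}")   # plateaus near 1, 0.5 and 0
```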
We denote by $\lambda$ the mean free path characterizing interactions amongst carriers in [*different*]{} branches, which tend to restore the equilibrium between the two populations. We assume these interactions involve inter-branch e-e, impurity or acoustic phonon (if the sound velocity $v_s\ll v_F$) processes, and only induce quasi-particle backscattering, i.e. no energy is transferred from the carrier populations to the external system. In this case the effective electric field $F$ along the channel has been shown to be [@kuroda2008]: $$F = \frac{I}{g_c G_q \lambda} \label{eq:field}.$$ By integrating this equation along the channel, assuming that the mean free path remains constant and using Eq. \[eq:current\], we find the drain-source bias voltage $V_{ds}$: $$V_{ds} = V_{c}+V_{ch} = \frac{\mu_+-\mu_-}{e}
\left(1+\frac{L}{\lambda}\right) \label{eq:Vds}.$$ The magnitudes $V_c$ and $V_{ch}$ denote the voltage drops at the contacts and along the channel, respectively, where the former is due to the quantum contact resistance [@datta]. We have also shown that the temperature profiles for forward and backward populations are given by: $$\pm G_{th}^\pm \partial_xT_\pm = \frac{I F}{2} \mp
\frac{U}{\lambda}$$ under the influence of quasi-particle backscattering. In particular, direct integration of this equation with the boundary conditions $T_+(-L/2) = T_{0+}$ and $T_-(L/2)
= T_{0-}$ (perfectly absorbing contacts) yields: $$\begin{aligned}
T_\pm(x) = \sqrt{T_{0\pm}^2\pm\frac{(1/2\pm \tilde{x}) \left(T_{0-}^2 -
T_{0+}^2\right)}{(1 + \tilde{\lambda})}+ \frac{\left(1/2 \pm
\tilde{x}\right) (1/2 \mp \tilde{x} +
\tilde{\lambda})}{(1+\tilde{\lambda})^2 } \frac{V_{ds}^2
}{\mathcal{L}}}\label{eq:tempprof}\end{aligned}$$ where $\mathcal{L}=\frac{\pi^2}{3} \left(\frac{k_B}{e}\right)^2$ is the Lorenz number. The variables $\tilde{\lambda}$ and $\tilde{x}$ stand for the rescaled position ($\tilde{x} = x/L$) and mean free path ($\tilde{\lambda}=\lambda/L
---
abstract: 'We derive the general Lagrangian and propagator for a vector-spinor field in $d$ dimensions and show that the physical observables are invariant under the so-called point transformation symmetry. Until now the symmetry has not been exploited in any non-trivial way, presumably because it is not an invariance of the classical action nor is it a gauge symmetry. Nevertheless, we develop a technique for exploring the consequences of the symmetry, leading to a conserved vector current and charge. The current and charge are identically zero in the free field case and only contribute in a background such as an electromagnetic or gravitational field. The current can couple spin-$\frac{3}{2}$ fields to vector and scalar fields and may have important consequences in intermediate energy hadron physics as well as in linearized supergravity. The consistency problem which plagues higher spin field theories is then discussed and some ideas regarding the possibility of solutions are presented.'
author:
- |
Terry Pilling[^1]\
Department of Physics,\
North Dakota State University,\
Fargo, ND 58105-5566
title: 'Symmetry of massive Rarita-Schwinger fields'
---
Introduction
============
Theories of interacting high spin fields[^2] have been a subject of considerable interest for many years. This is partly due to the many particles with spin $\geq \frac{3}{2}$ seen in accelerator laboratories and also because there is currently no general field theory description which is relativistic, interacting and also free of inconsistencies[^3]. Over the years one interacting theory after another has been shown to be inconsistent, leading many to suggest that all higher spin fields must be composite. On the other hand, higher spin elementary particles, such as the gravitino, play an important role in supersymmetry, which itself represents a fundamental building block of many modern unification schemes. Thus we would like to remain hopeful that a solution to the consistency problems can be found within point particle field theory. Perhaps our interpretation regarding the physical degrees of freedom is misguided or perhaps, as is our present concern, we have neglected symmetries or other aspects of nature that should be included. The hope is that if all of the symmetries are properly included, the result will be a consistent theory. That this hope is reasonable is exemplified by the fact that consistent solutions have already been found in restricted scenarios with curved backgrounds, cosmological constant tuning and Planck scale masses [@madore1975; @deser1977; @rindani1986; @rindani1991; @deser2001; @deser2001-b].
The consistency problems seem to exist for most interacting higher spin field theories and are a main concern of many theorists working in the field and so we will devote the final section of this paper to a discussion of the problem and touch on some possible consequences of the symmetry. Perhaps the ideas that we present will inspire some new angles of attack on the problem.
The main goal of this paper is a review of the $d$-dimensional theory of interacting Rarita-Schwinger fields and an exploration of the symmetries. Our hope is that the general expressions and the new interactions that we present here will be of use in formulating effective theories of interacting hadrons as well as in work involving the massive gravitinos of supergravity such as, for example, the AdS/CFT correspondence [@volovich1998; @rashkov1999; @koshelev1998; @matlock1999]. Perhaps the AdS/CFT results can be extended to the case wherein the Rarita-Schwinger fields are not fixed at the start as non-interacting and on-shell[^4].
We begin in sections \[spincontent\] and \[conditions\] by (re-)deriving the most general lagrangian and propagator for a Rarita-Schwinger spin-$\frac{3}{2}$ field[^5] using the method of Aurilia and Umezawa [@aurilia1969] extended to $d$-dimensions.
Since we are using the vector-spinor representation of spin-$\frac{3}{2}$, we find the usual result that a lower spin content is retained in the field in order to maintain the desirable properties of the action such as hermiticity, linearity in derivatives and non-singular behavior. However, recently there have been other promising ideas where the lower spin content is given a physical interpretation [@kaloshin2004] or where vector spinor description of spin-$\frac{3}{2}$ is replaced by a pure spin-$\frac{3}{2}$ field [@kirchbach2001; @ahluwalia1992; @ahluwalia1993; @kirchbach2002].
Considering the lower spin components as unphysical, as we do here, leads to a non-unique action depending on an arbitrary complex parameter measuring the lower spin content of the theory. Various choices of the parameter are seen to reduce the general expression to the spin-$\frac{3}{2}$ actions found in the literature. We formulate the equations in $d$ spacetime dimensions in anticipation of diverse applications from effective theories of hadronic interactions involving the spin-$\frac{3}{2}$ baryons to applications in arbitrary dimensional supergravity theories. For example, both the composite $\Delta(1232)$ resonance found in low and intermediate energy nucleon scattering experiments and the gravitino of N-extended supersymmetry after spontaneous symmetry breaking are thought to be described by the massive, spin-$\frac{3}{2}$, Rarita-Schwinger field that we study here.
In section \[group\] we examine the properties of the so-called ‘point’ or ‘contact’ transformations. These form a non-unitary group of transformations of the fields which shifts the parameter, amounting to a sort of rotation among the spin-$\frac{1}{2}$ degrees of freedom. The path integral is seen to be invariant under point transformations, which implies that physical correlation functions are invariant under a redefinition of the arbitrary parameter. That the parameter is arbitrary is well known and this has caused many authors to simply fix it to a convenient value. Unfortunately, this has served to hide some of the freedom of the theory. We restore the explicit parameter dependence and, in sections \[interactions\] and \[implications\], we derive and examine new conserved currents resulting from the symmetry.
Finally, in section \[consistency\] we discuss the consistency problems and point out a few ideas of how the conserved charge found in section \[implications\] might be useful in that context. The analysis we have used should also be generalizable to higher spins whenever the theory contains auxiliary fields of lower spin and has a similar symmetry group involving them.
Spin content of the Rarita-Schwinger field {#spincontent}
==========================================
In this section we give a decomposition of the Rarita-Schwinger field into separate spin blocks and derive some general formulas and identities that will be needed later. The result of this and the following section is the expression for the most general free lagrangian. The reader only interested in the result may want to turn immediately to equation (\[action1\]) or (\[Action1\]) below.
A commonly used formulation of the spin-$\frac{3}{2}$ field is the vector-spinor representation[^6] given by Rarita and Schwinger in 1941 [@rarita1941]. The vector-spinor transforms under the Lorentz group as[^7] $$\label{spindecomp}
\left(\frac{1}{2},\frac{1}{2}\right) \otimes \left[ \left(\frac{1}{2}, 0
\right) \oplus \left( 0, \frac{1}{2} \right) \right]
= \left(1, \frac{1}{2}\right) \oplus \left(\frac{1}{2}, 1 \right)
\oplus \left( 0, \frac{1}{2} \right) \oplus \left(\frac{1}{2}, 0 \right)$$ whereas the spin decomposition of the field [*in the rest frame*]{} [@kirchbach2002; @kaloshin2004] is $$\text{spin } \psi_\mu^A = \left( 1 + 0 \right) \otimes \frac{1}{2}
= \frac{3}{2} + \frac{1}{2} + \frac{1}{2}.$$ The vector-spinor field thus contains two spin-$\frac{1}{2}$ components in addition to the physical spin-$\frac{3}{2}$ component. The decomposition of the spin-$\frac{3}{2}$ field that we will use is given by choosing $\left( 0, \frac{1}{2} \right) = p_\mu \psi^\mu$, where $p_\mu = i \partial_\mu$. The complimentary part is $$\left(1, \frac{1}{2}\right) = \left( g_{\mu \nu} - \frac{p_\mu p_\nu}{p^2} \right) \psi^\nu,$$ which can then be written in terms of spin-$\frac{3}{2}$ and spin-$\frac{1}{2}$ projectors as $$g_{\mu \nu} - \frac{p_\mu p_\nu}{p^2}
= \left(P^{\frac{3}{2}}\right)_{\mu \nu} + \left(P^{\frac{1}{2}}_{11}\right)_{\mu \nu}.$$ Defining $\left(P^{\frac{1}{2}}_{22}\right)_{\mu \nu}
= \frac{p_\mu p_\nu}{p^2}$ we have an expansion of the identity $$\label{expanse1}
g_{\mu \nu} = \left(P^{\frac{3}{2
---
abstract: 'Dynamics of interactions play an increasingly important role in the analysis of complex networks. A modeling framework to capture this are temporal graphs which consist of a set of vertices (entities in the network) and a set of time-stamped binary interactions between the vertices. We focus on enumerating $\Delta$-cliques, an extension of the concept of cliques to temporal graphs: for a given time period $\Delta$, a $\Delta$-clique in a temporal graph is a set of vertices and a time interval such that all vertices interact with each other at least after every $\Delta$ time steps within the time interval. Viard, Latapy, and Magnien \[ASONAM 2015, TCS 2016\] proposed a greedy algorithm for enumerating all maximal $\Delta$-cliques in temporal graphs. In contrast to this approach, we adapt the Bron-Kerbosch algorithm—an efficient, recursive backtracking algorithm which enumerates all maximal cliques in static graphs—to the temporal setting. We obtain encouraging results both in theory (concerning worst-case running time analysis based on the parameter “$\Delta$-slice degeneracy” of the underlying graph) as well as in practice[^1] with experiments on real-world data. The latter culminates in an improvement for most interesting $\Delta$-values concerning running time in comparison with the algorithm of Viard, Latapy, and Magnien.'
author:
- 'Anne-Sophie Himmel'
- Hendrik Molter
- Rolf Niedermeier
- Manuel Sorge
bibliography:
- 'literature.bib'
title: 'Adapting the Bron-Kerbosch Algorithm for Enumerating Maximal Cliques in Temporal Graphs[^2]'
---
Introduction
============
Network analysis is one of the main pillars of data science. Focusing on networks that are modeled by undirected graphs, a fundamental primitive is the identification of complete subgraphs, that is, cliques. This is particularly true in the context of detecting communities in social networks. Finding a maximum-cardinality clique in a graph is a classical NP-hard problem, so super-polynomial worst-case running time seems unavoidable. Moreover, often one wants to solve the more general task of not only finding one maximum-cardinality clique but to list *all maximal* cliques. Their number can be exponential in the graph size. The famous Bron-Kerbosch algorithm (“Algorithm 457” in *Communications of the ACM 1973*, [@bron1973algorithm]) addresses this task and still today forms the basis for the best (practical) algorithms to enumerate all maximal cliques in static graphs [@ELS13]. However, to realistically model many real-world phenomena in social and other network structures, one has to take into account the dynamics of the modeled system of interactions between entities, leading to so-called temporal networks. In a nutshell, compared to the standard static networks, the interactions in temporal networks (edges) appear sporadically over time (while the vertex set remains static). Indeed, as @nicosia2013graph pointed out, in many real-world systems the interactions among entities are rarely persistent over time and the non-temporal interpretation is an “oversimplifying approximation”. In this work, we use the standard model of temporal graphs. A temporal graph consists of a vertex set and a set of edges, each with an integer time-stamp. The generalization of a clique to the temporal setting that we study is called [$\Delta$-clique]{} and was introduced by @Viard2015Dyno [@viard2015computing]. Intuitively, being in a [$\Delta$-clique]{} means to be regularly in contact with all other entities in this [$\Delta$-clique]{}. In slightly more formal terms, each pair of vertices in the [$\Delta$-clique]{} has to be in contact in at least every $\Delta$ time steps. A fully formal definition is given in Section \[sec:preliminaries\]. We present an adaption of the framework of Bron and Kerbosch to temporal graphs. To this end, we overcome several conceptual hurdles and propose a temporal version of the Bron-Kerbosch algorithm as a new standard for efficient enumeration of maximal [$\Delta$-clique]{}s in temporal graphs.
Related Work
------------
Our work relates to two main lines of research. First, enumerating $\Delta$-cliques in temporal graphs generalizes the enumeration of maximal cliques in static graphs, this being the subject of many different algorithmic approaches (sometimes also exploiting specific properties such as the “degree of isolation” of the cliques searched for) [@bron1973algorithm; @ELS13; @II09; @HKMN09; @komusiewicz2009isolation; @tomita2006worst]. Indeed, clique finding is a special case of dense subgraph detection. Second, more recently, mining dynamic or temporal networks for periodic interactions [@LB10] or preserving structures [@uno2015mining] (in particular, this may include cliques as a very fundamental pattern) has gained increased attention. Our work is directly motivated by the study of @Viard2015Dyno [@viard2015computing] who introduced the concept of $\Delta$-cliques and provided a corresponding enumeration algorithm for $\Delta$-cliques. In fact, following one of their concluding remarks on future research possibilities, we adapt the Bron-Kerbosch algorithm to the temporal setting, thereby outperforming their greedy-based approach in most cases.
Results and Organization
------------------------
Our main contribution is to adapt the Bron-Kerbosch recursive backtracking algorithm for clique enumeration in static graphs to temporal graphs. In this way, we achieve a significant speedup for most interesting time period values $\Delta$ (typically two orders of magnitude of speedup) when compared to a previous algorithm due to @Viard2015Dyno [@viard2015computing] which is based on a greedy approach. We also provide a theoretical running time analysis of our Bron-Kerbosch adaption employing the framework of parameterized complexity analysis. The analysis is based on the parameter “$\Delta$-slice degeneracy” which we introduce, an adaption of the degeneracy parameter that is frequently used in static graphs as a measure for sparsity. This extends results concerning the static Bron-Kerbosch algorithm [@ELS13]. A particular feature to achieve high efficiency of the standard Bron-Kerbosch algorithm is the use of pivoting, a procedure to reduce the number of recursive calls of the Bron-Kerbosch algorithm. We show how to define this and make it work in the temporal setting, where it becomes a significantly more delicate issue than in the static case. In summary, we propose our temporal version of the Bron-Kerbosch approach as a current standard for enumerating maximal cliques in temporal graphs.
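For reference, the static algorithm that we adapt can be sketched in a few lines. The following is a standard Bron-Kerbosch implementation with pivoting for static graphs (illustrative only; it is not our temporal variant, whose handling of time intervals and pivoting is developed in Sections \[sec:preliminaries\] and \[section:bronKerboschDelta\]).

```python
def bron_kerbosch_pivot(R, P, X, adj, cliques):
    """Enumerate all maximal cliques of a static graph.
    R: current clique, P: candidate vertices, X: already processed vertices,
    adj: dict mapping each vertex to its set of neighbours."""
    if not P and not X:
        cliques.append(set(R))  # R cannot be extended, hence maximal
        return
    # Pivoting: pick u maximizing |P & N(u)| so that fewer vertices are branched on.
    pivot = max(P | X, key=lambda u: len(P & adj[u]))
    for v in list(P - adj[pivot]):
        bron_kerbosch_pivot(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Usage on a small static graph: a triangle {1, 2, 3} plus the pendant edge {3, 4}.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
bron_kerbosch_pivot(set(), set(adj), set(), adj, cliques)
print(cliques)  # e.g. [{1, 2, 3}, {3, 4}]
```

The pivot restricts the branching to $P \setminus N(u)$; it is exactly this step that becomes more delicate in the temporal setting, where the relevant neighbourhood additionally depends on the time interval under consideration.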
The paper is organized as follows. In Section \[sec:preliminaries\], we introduce all main definitions and notations. In addition, we give a description of the original Bron-Kerbosch algorithm as well as two extensions: pivoting and degeneracy ordering. In Section \[section:bronKerboschDelta\], we propose an adaption of the Bron-Kerbosch algorithm to enumerate all maximal $\Delta$-cliques in a temporal graph, prove the correctness of the algorithm and give a running time upper bound. Furthermore, we adapt the idea of pivoting to the temporal setting. In Section \[sec:degeneracy\] we adapt the concept of degeneracy to the temporal setting and give an improved running time bound for enumerating all maximal $\Delta$-cliques. In Section \[section:implExp\], we present the main results of the experiments on real-world data sets. We measure the $\Delta$-slice degeneracy of real-world temporal graphs, we study the efficiency of our algorithm, and compare its running time to the algorithm of @Viard2015Dyno, showing a significant performance increase due to our Bron-Kerbosch approach. We conclude in Section \[sec:conclusion\], also presenting directions for future research.
Preliminaries {#sec:preliminaries}
=============
In this section we introduce the most important notations and definitions used throughout this article.
Graph-Theoretic Concepts
------------------------
In the following we provide definitions of adaptations to the temporal setting for central graph-theoretic concepts.
### Temporal Graphs
The motivation behind temporal graphs, which are also referred to as temporal networks [@holme2012temporal], time-varying graphs [@nicosia2013graph], or link streams [@Viard2015Dyno], is to capture changes in a graph that occur over time. In this work, we use the well-established model where each edge is given a time stamp [@Viard2015Dyno; @holme2012temporal; @boccaletti2014structure]. Assuming discrete time steps, this is equivalent to a sequence of static graphs over a fixed set of vertices [@michail2015introduction; @Erlebach0K15]. Formally, the model is defined as follows.
A *temporal graph* $\mathbb{G}=(V,E,T)$ is defined as a triple consisting of a set of vertices $V$, a set of *time-edges* $E \subseteq \binom{V}{2} \times T$, and a time interval $T=[\alpha, \omega]$, where $\alpha, \omega \in \mathbb{N}$, $T \subseteq \mathbb{N}$ and $\omega -\alpha$ is the *lifetime* of the temporal graph $\mathbb{G}$.
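As a small self-contained illustration of this model and of the $\Delta$-clique condition described informally above (the data and function below are made up for this example and are not part of our implementation):

```python
from itertools import combinations

# A temporal graph given by its set of time-edges ({u, v}, t); the vertex set
# and the time interval T are left implicit in this toy example.
time_edges = {(frozenset({1, 2}), 1), (frozenset({1, 2}), 5),
              (frozenset({1, 3}), 2), (frozenset({1, 3}), 6),
              (frozenset({2, 3}), 3), (frozenset({2, 3}), 7)}

def is_delta_clique(vertices, a, b, delta):
    """Check whether (vertices, [a, b]) forms a Delta-clique: every pair of
    vertices is in contact at least once in every window of length delta inside [a, b]."""
    for u, v in combinations(vertices, 2):
        stamps = sorted(t for e, t in time_edges if e == frozenset({u, v}) and a <= t <= b)
        if not stamps:
            return False
        # Neither the borders of [a, b] nor two consecutive contacts may be more than delta apart.
        if stamps[0] - a > delta or b - stamps[-1] > delta:
            return False
        if any(t2 - t1 > delta for t1, t2 in zip(stamps, stamps[1:])):
            return False
    return True

print(is_delta_clique({1, 2, 3}, a=1, b=7, delta=4))  # True
print(is_delta_clique({1, 2, 3}, a=1, b=7, delta=2))  # False, e.g. pair {1, 2} has a gap of 4
```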
The notation
---
abstract: 'The locations of the images in a multiple-image gravitational lens system are strongly dependent on the orientation angle of the mass distribution. As such, we can use the locations of the images and the photometric properties of the visible matter to constrain the properties of the dark halo. We apply this to the optical Einstein Ring system 0047-2808 and find that the dark halo is almost spherical and is aligned in the same direction as the stars to within a few degrees.'
author:
- 'Randall B. Wayth'
- 'Rachel L. Webster'
title: 'The dark matter halo of the gravitational lens galaxy 0047-2808'
---
Introduction
============
Numerical simulations of Cold Dark Matter (CDM) have been very successful in reproducing the observed large scale structure of the universe. The CDM model predicts that the dark matter (DM) haloes of today’s galaxies are assembled through successive mergers of smaller haloes. Simulations using only dark matter predict that the haloes should be quite prolate; however, it is not clear how gas and/or stars interacting with the dark matter will change the shape of the halo. Studies have suggested that the DM halo can become more or less cuspy ([El-Zant]{}, [Shlosman]{}, & [Hoffman]{}, 2001; [Tissera]{} & [Dominguez-Tenreiro]{}, 1998) and rounder ([Evrard]{}, [Summers]{}, & [Davis]{}, 1994; [Dubinski]{}, 1994) after the interaction with stars and gas. An important test of galaxy formation and evolution models will be to compare the predicted shape and profile of galaxy haloes with observed haloes. Thus, simple questions such as: “Do we expect the visible and dark matter to be aligned in elliptical galaxies?” and “Is the dark matter density in the central regions changed by the gravitational dominance of the stars?” must be answered with observations. For instance: the Milky Way, despite being a spiral galaxy, appears to have an almost spherical halo ([Ibata]{} [et al.]{}, 2001).
Gravitational lensing offers a method to tightly constrain the shape of DM haloes in the population of medium redshift ($0.1 < z < 1.0$) lens galaxies. The image positions in a lens system are highly sensitive to the orientation of the overall mass profile. [Keeton]{}, [Kochanek]{}, & [Falco]{} (1998) showed that the *overall* mass distribution is typically aligned with the visible matter using a sample of lens galaxies and a simple SIE mass model. However, depending on the lens galaxy, the stellar mass can contribute a substantial fraction of the total mass inside the image. The extreme case is the lensed QSO 2237+0305 where the dark matter constitutes only 4% of the projected mass inside the images ([Trott]{} & [Webster]{}, 2002). In this case we expect the visible matter orientation and the total matter orientation derived from a lensing analysis to be very similar. The logical next step is to use a more complicated (stars + halo) model for the lens galaxy to determine the properties of the DM halo alone.
In this paper we use an implementation of the LensMEM algorithm ([Wallington]{}, [Kochanek]{}, & [Narayan]{}, 1996) and a stars+halo lens model to study the optical Einstein Ring 0047-2808 ([Warren]{} [et al.]{}, 1999, 1996) using data from the HST. This system is well suited for the study because it is an isolated lens galaxy so we expect any external shear contributions to be small. The system is a $z=0.485$ elliptical which is lensing a background starbursting galaxy at $z=3.6$.
The algorithm we employ performs a non-parametric source reconstruction to match the observed data for a given lens model. The goodness-of-fit of the model is calculated using a $\chi^2$ taking into account the degrees of freedom used in the source. In this paper we assume $H_0=70$ kms$^{-1}$Mpc$^{-1}$ and $(\Omega_m,\Omega_{\lambda})=(0.3,0.7)$.
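The assumed cosmology enters through the angular-diameter distances that convert observed angles into physical scales at the lens and source redshifts. The short sketch below (standard flat $\Lambda$CDM distances evaluated by numerical quadrature; illustrative code, not part of the lensing analysis itself) computes the conversion at the lens redshift:

```python
import numpy as np
from scipy.integrate import quad

H0, Om, Ol = 70.0, 0.3, 0.7   # km/s/Mpc and density parameters assumed in the text
c = 299792.458                # speed of light in km/s

def angular_diameter_distance(z):
    """Angular-diameter distance in Mpc for a flat Lambda-CDM cosmology."""
    E = lambda zz: np.sqrt(Om * (1.0 + zz)**3 + Ol)
    D_C, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)   # comoving distance in units of c/H0
    return (c / H0) * D_C / (1.0 + z)

z_lens = 0.485
kpc_per_arcsec = angular_diameter_distance(z_lens) * 1e3 * np.pi / (180.0 * 3600.0)
print(kpc_per_arcsec)   # roughly 6 kpc per arcsecond at the lens redshift
```

At the lens redshift this gives roughly 6 kpc per arcsecond, which is the conversion behind the physical scale lengths quoted later for the halo core radius.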
Method
======
The data was reduced as described in [Wayth]{} [et al.]{} (2002). The final image of the “ring” is 133x133 with $0.05\arcsec$ pixels as shown in Figure \[fig:img\_and\_model\]. The lens galaxy was best fit with a Sersic profile, where the surface brightness as a function of radius $r$ is $\Sigma=\Sigma_{1/2} \exp\{-B(n)\lbrack(r/r_{1/2})^{1/n}-1\rbrack\}$. The parameter $n$ quantifies the shape of the profile: the values $n=0.5$, $n=1$, and $n=4$ correspond to the Gaussian, exponential, and de Vaucouleurs profiles. Profiles with larger $n$ are more cuspy. $B(n)$ is a constant for a particular $n$ and we used the series asymptotic solution for $B(n)$ provided by [Ciotti]{} & [Bertin]{} (1999). Additional parameters used for the light profile are the axis ratio ($q$) and orientation angle ($\theta_s$). The fitted parameters are shown in Table \[tab:phot\_fits\].
------------------------- ---------
$R_{1/2}$ (pixels) $21.69$
$\Sigma_{1/2}$ (counts) $0.7$
$q$ $0.693$
$\theta_s$ ($\deg$) $125$
n $3.115$
------------------------- ---------
: Photometric parameters for the lens galaxy.[]{data-label="tab:phot_fits"}
We model the galaxy stellar component with fixed parameters from the photometry and allow only the M/L to vary. The halo is modelled as a Pseudo-Isothermal Elliptic Potential (PIEP) with a finite core. The PIEP model is defined by the lensing potential $\psi = b[r_c^2 + (1-\epsilon)x^2 + (1+\epsilon)y^2]^{1/2}$ where $r_c$ is the core radius, $b$ is the mass scale (Einstein radius) and $\epsilon$ is the ellipticity. An additional parameter is used for the orientation angle ($\theta_h$, measured anti-clockwise from horizontal). It is worth noting that the lens can be fit with the PIEP model alone (without a core) with the parameters $b=1.165, \epsilon=0.08$ and $\theta_h=129$. We use this mass scale for the halo model. The source in this system actually has two distinct components. The two-component model explains the location and brightness of all features in the image with a standard lens model. Figure \[fig:img\_and\_model\] shows the model source and corresponding image for the plain PIEP model.
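For concreteness, the photometric and halo models can be evaluated directly from the quantities quoted above. The sketch below is illustrative only: $B(n)$ is truncated to the leading terms of the Ciotti & Bertin expansion, and the PIEP potential is written in the halo principal-axis frame, i.e. without the orientation angle $\theta_h$.

```python
import numpy as np

def B(n):
    """Leading terms of the Ciotti & Bertin (1999) asymptotic expansion for the Sersic constant."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic(r, sigma_half=0.7, r_half=21.69, n=3.115):
    """Sersic surface brightness with the fitted photometric parameters (pixel units)."""
    return sigma_half * np.exp(-B(n) * ((r / r_half)**(1.0 / n) - 1.0))

def piep_potential(x, y, b=1.165, eps=0.08, rc=0.0):
    """PIEP lensing potential psi = b*sqrt(rc^2 + (1 - eps)*x^2 + (1 + eps)*y^2),
    written here in the halo principal-axis frame (orientation angle omitted)."""
    return b * np.sqrt(rc**2 + (1.0 - eps) * x**2 + (1.0 + eps) * y**2)

print(sersic(21.69))             # recovers Sigma_{1/2} = 0.7 counts at the half-light radius
print(piep_potential(1.0, 0.5))  # potential at an arcsecond-scale position
```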
The mass enclosed inside the image is tightly constrained by the Einstein radius. We use this constraint to normalise the stellar M/L for a halo of a given core radius. A large core is equivalent to a constant M/L mass model, whereas a small core will generate an unrealistically low M/L for the observed stellar component of the lens.
In preliminary tests, we found that we cannot fit the data for $r_c \ga 7\arcsec$ (42kpc physical scale length) i.e. constant M/L models cannot fit the data. Therefore we have restricted our analysis to halo core radii $< 7\arcsec$. For the range of allowed core radius values, we have calculated the range of halo ellipticity and orientation angle which produce acceptable fits to the data.
Results
=======
Figure \[fig:halores\] plots the acceptable range of halo ellipticity and orientation angle as a function of core radius. On the left, we see that the halo ellipticity is consistently less than the stellar ellipticity. For $1.5\arcsec < r_c < 2.5\arcsec$, the data permit a halo with projected mass density which is circular, although in all cases the best solution has a halo with non-zero ellipticity.
On the right of Figure \[fig:halores\], the plot shows that the halo orientation angle is independent of the core radius and is in the same direction as the projected stellar major axis (within errors). The acceptable range of orientation angles for $1.5\arcsec < r_c < 2.5\arcsec$ are for non-zero ellipticity.
Conclusion
==========
By using a lens model which separates the stars from the halo, we have been able to determine some of the basic properties of the dark matter halo in the lens system 0047-2808. We find that the projected mass distribution of the halo is not circular, but it is substantially rounder than the observed stellar distribution. A small range of halo core radius values ($1.5\arcsec < r_c < 2.5\arcsec$) allows the projected halo mass to be circular.
The halo’s core, modelled as a constant density region, must be $< 7\arcsec$ to fit the observation. The core size could be further constrained by applying realistic limits to the stellar M/L which we intend to do in further work.
Finally, we find that although the halo is less elliptical than the stars, the orientation angle of the star’s and halo’s major
---
abstract: 'Many imaging problems require solving an inverse problem that is ill-conditioned or ill-posed. Imaging methods typically address this difficulty by regularising the estimation problem to make it well-posed. This often requires setting the value of the so-called regularisation parameters that control the amount of regularisation enforced. These parameters are notoriously difficult to set a priori, and can have a dramatic impact on the recovered estimates. In this paper, we propose a general empirical Bayesian method for setting regularisation parameters in imaging problems that are convex w.r.t. the unknown image. Our method calibrates regularisation parameters directly from the observed data by maximum marginal likelihood estimation, and can simultaneously estimate multiple regularisation parameters. A main novelty is that this maximum marginal likelihood estimation problem is efficiently solved by using a stochastic proximal gradient algorithm that is driven by two proximal Markov chain Monte Carlo samplers, thus intimately combining modern high-dimensional optimisation and stochastic sampling techniques. Furthermore, the proposed algorithm uses the same basic operators as proximal optimisation algorithms, namely gradient and proximal operators, and it is therefore straightforward to apply to problems that are currently solved by using proximal optimisation techniques. We also present a detailed theoretical analysis of the proposed methodology, including asymptotic and non-asymptotic convergence results with easily verifiable conditions, and explicit bounds on the convergence rates. The proposed methodology is demonstrated with a range of experiments and comparisons with alternative approaches from the literature. The considered experiments include image denoising, non-blind image deconvolution, and hyperspectral unmixing, using synthesis and analysis priors involving the $\ell_1$, total-variation, total-variation and $\ell_1$, and total-generalised-variation pseudo-norms.'
author:
- 'Ana F. Vidal [^1]'
- 'Valentin De Bortoli [^2]'
- 'Marcelo Pereyra [^3]'
- 'Alain Durmus [^4]'
bibliography:
- 'refs.bib'
title: 'Maximum likelihood estimation of regularisation parameters in high-dimensional inverse problems: an empirical Bayesian approach'
---
Acknowledgements
================
We are grateful to Dr. Charles Deledalle for providing us with a SUGAR implementation for an ADMM solver available at <https://github.com/deledalle/sugar/blob/master/solvers/admm.m>. AD acknowledges financial support from Polish National Science Center grant: NCN UMO-2018/31/B/ST1/00253.
[^1]: Email: af69@hw.ac.uk
[^2]: Email: valentin.debortoli@cmla.ens-cachan.fr
[^3]: Email: m.pereyra@hw.ac.uk
[^4]: Email: alain.durmus@cmla.ens-cachan.fr Part of this work has been presented at the 25th IEEE International Conference on Image Processing (ICIP) [@vidal2018maximum]
---
abstract: 'In this paper, we investigate how semantic relations between concepts extracted from medical documents can be employed to improve the retrieval of medical literature. Semantic relations explicitly represent relatedness between concepts and carry high informative power that can be leveraged to improve the effectiveness of retrieval functionalities of clinical decision support systems. We present preliminary results and show how relations are able to provide a sizable increase in precision for several topics, while having no impact on others. We then discuss some future directions to minimize the impact of negative results while maximizing the impact of good results.'
author:
- 'Maristella Agosti, Giorgio Maria Di Nunzio, Stefano Marchesin, Gianmaria Silvello'
bibliography:
- 'Marchesin.bib'
title: A Relation Extraction Approach for Clinical Decision Support
---
[*CCS Concepts*]{}: Information systems → Document representation; Ontologies; Query reformulation; Information extraction; Specialized information retrieval.
---
author:
- Katarina Uzelac
- Zvonko Glumac
- 'Osor S. Barišić'
date: 'Received: date / Revised version: date'
title: 'Short-time dynamics in the 1D long-range Potts model'
---
Introduction
============
Short-time dynamics (STD) in systems quenched to criticality has attracted considerable attention in the last decade due to the appealing fact that, even in the early period of relaxation to equilibrium, systems exhibit universal scaling properties which involve both static and dynamic critical exponents [@JSS89; @Huse]. The interest in this phenomenon exists at different levels. From a practical point of view, it offers a useful numerical tool for calculating both dynamic and static critical properties, in which the critical slowing down is turned into an advantage. From a fundamental point of view, it opened a series of questions of current interest, from the universal amplitudes to the universality of the fluctuation-dissipation ratio [@Cugliandolo93], in the wider context of ageing phenomena in pure systems [@CG05]. One of the first points of conceptual interest was the emergence of a new independent universal dynamical exponent describing the initial increase of the magnetization in this regime [@JSS89], but related also to the persistence probability of the global order parameter [@Majumdar96]. Since the STD was formulated in the context of the dynamical renormalization group (RG) and the new exponent was evaluated within the $\epsilon$-expansion [@JSS89], it has been further investigated, mostly numerically, in a variety of models in two and three dimensions for equilibrium phase transitions [@SZ95; @OSchYZ97; @daSilva02; @Zheng98] and also for out-of-equilibrium ones [@TO98].
Quite a few studies were carried out on models with long-range (LR) interactions. The RG approach of Janssen et al. [@JSS89] was extended to the case of power-law decaying interactions of the form $r^{-d-\sigma}$ in the same continuous n-vector model [@CGLMS00], in the random Ising model [@Chen02], and in the kinetic spherical model [@CGLY00; @BDH07]. Studies of STD at criticality in discrete models with LR interactions, where such an approach does not apply, are still absent. There, the numerical “advantage” is rather reduced due to the fast relaxation in the presence of LR interactions.
In this paper we present a first, preliminary numerical study of the 1D LR Potts model, useful as a paradigm that comprises different universality classes obtained by varying the number of states $q$. We show that, in spite of the difficulties of the numerical approach in the LR case, the scaling properties characteristic of the STD may be well reproduced with a reasonable numerical effort, and we derive the two dynamical critical exponents over a wide range of the range parameter $\sigma$ for two different universality classes.
The outline of the paper is as follows. In Section \[sec:Model\] we give an overview of the model and of the basic STD properties considered in the paper, followed by the details of our numerical approach. Section \[sec:Results\] contains the results for two special cases of the Potts model: $q=2$, corresponding to the Ising model, which is compared to the previous RG results, and $q=3$, where new results are derived in the regime where the transition is of second order. The conclusion is given in Section \[sec:Conclusion\].
Model and short-time dynamics approach {#sec:Model}
======================================
We consider the 1D Potts model defined by the Hamiltonian $$H = - \sum_{i < j} \; \frac{J}{|i - j|^{1+\sigma}} \; \delta _{s_i, s_j} \; ,
\label{hamilt}$$ where $J>0$, $s_i$ denotes a $q$-state Potts spin at the site $i$, $\delta$ is the Kronecker symbol and the summation is over all the pairs of the system. Hereafter $J=k_B=1$ is used. As is well known [@ACCN88], for $0 <\sigma \leq 1$ the model (\[hamilt\]) has a phase transition at nonzero temperature for all $q$. Only a few exact results are available for its equilibrium critical behavior, but the model was studied in detail by several approximate methods [@GU93; @LB97; @BDD99]. It has a rather complicated phase diagram in the $(q,\sigma)$ plane, involving a variety of critical regimes similar to that encountered in the $(q,D)$ plane of the same model with short-range (SR) interactions. This gives additional motivation to also examine the dynamical scaling properties in the STD regime as functions of $q$ and $\sigma$.
In the present work we are interested in two special cases, $q=2$ and $q=3$, in the range of the parameter $\sigma$ corresponding to the nontrivial (non-mean-field (MF)) critical regime, where the initial slip of the magnetization can be observed. For $q=2$ this is accomplished for $0.5 <\sigma <1$ [@FMN72]. In the latter case, $q=3$, which belongs to a different universality class, this region is restricted to $\sigma_{c}(q=3) < \sigma < 1$, where $\sigma_{c}(q) > 0.5$ denotes the point of the onset of the first-order phase transition, occurring for $q>2$ and known only approximately [@UG97GU98; @ReynalDiep04]. For these two cases we shall study the nonequilibrium evolution to criticality at early times of several quantities: the magnetization, the autocorrelation function and the time correlations of the magnetization. Let us first briefly recall their scaling properties in the STD regime and explain how they are implemented for the model (\[hamilt\]).
STD approach
------------
As shown by Janssen [*et al*]{} [@JSS89], if the system is brought out of equilibrium by a quench from high temperature to criticality, and left to evolve following the nonconservative dynamics of Model A (in the sense of reference [@HH77]), then, during the early stage of relaxation it will display universal scaling properties characterized by the static exponents and the new universal dynamic exponent. Consequently, in the system of size $L$ after a quench from high temperature to the critical region in the presence of small initial magnetization $m_0$, the magnetization will obey the scaling relation $$M(t, \tau, L, m_0) = b^{-\beta/\nu} M(t/b^z, b^{1/\nu} \tau, L/b, b^{x_0} m_0),
\label{scgen}$$ where $\tau = (T-T_c)/T_c$, $b$ is a scaling factor and $\beta, \nu$ are the static critical exponents. Besides the dynamical exponent $z$, the scaling involves a new exponent $x_0$ as the anomalous dimension of the initial magnetization $m_0$.
At criticality ($\tau = 0$), and for $L\gg \xi$, equation (\[scgen\]) may be reduced to $$M(t, m_0) = t^{-\beta/(\nu z)} M(1, t^{x_0/z} m_0).
\label{scTc}$$ For early times satisfying $t \ll t_x \approx m_0^{-z/x_0}$, but larger than the microscopic time $t_{micro}$, the r. h. s. can be expanded giving the power-law increase of the magnetization known as the initial slip, $$M(t) \sim m_0 t^{\theta'},
\label{deftheta1}$$ with $\theta' = x_0/z - \beta/(\nu z)$. The magnetization in the model (\[hamilt\]) is defined in a standard way $$\label{eq:m1}
M(t) = \langle M_1(t) \rangle\; = \frac{q}{(q-1)\;L}\;\left< \sum_{i}\; \left( \delta_{s_i(t),1} - \frac{1}{q} \right) \right>,$$ where $1$ denotes the preferential direction among $q$ possible Potts states ${\alpha}$. The brackets $\langle...\rangle$ denote the average over initial conditions and random force.
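As a concrete illustration of Eq. (\[eq:m1\]), the following minimal Python sketch evaluates the order parameter for a single spin configuration; the ensemble average over initial conditions and runs is assumed to be taken outside this function, and the array layout is our own assumption rather than part of the paper.

```python
import numpy as np

def potts_magnetization(spins, q):
    """Order parameter of Eq. (eq:m1) for one configuration.

    spins : integer array of Potts states in {1, ..., q} on a chain of length L
    q     : number of Potts states; state 1 is the preferential direction
    """
    L = spins.size
    return (q / ((q - 1) * L)) * np.sum((spins == 1) - 1.0 / q)
```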
During the short time after the quench, the correlation length is small compared to the system size, and the exponent $\theta'$ can be derived directly from the power law (\[deftheta1\]) by performing simulations on the chain of a single large size and averaging over a great number of independent runs.
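A minimal sketch of how the initial-slip exponent could be extracted from such runs is given below; it assumes the run-averaged magnetization $M(t)$ has already been measured, and the fit window $[t_{\min}, t_{\max}]$ (playing the role of $t_{micro} \ll t \ll t_x$) is left to the user.

```python
import numpy as np

def initial_slip_exponent(t, m, t_min, t_max):
    """Estimate theta' from M(t) ~ m0 * t**theta' by a linear least-squares
    fit of log M versus log t restricted to the window [t_min, t_max]."""
    t = np.asarray(t, dtype=float)
    m = np.asarray(m, dtype=float)
    mask = (t >= t_min) & (t <= t_max) & (m > 0)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(m[mask]), 1)
    return slope  # the exponent theta'
```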
In the absence of the initial magnetization ($m_0 = 0$), equation (\[scgen\]) gives the scaling relation for the $k$-th moment of the magnetization, $$M^{(k)}(t,L) = b^{-k\beta/\nu} M^{(k)}(t/b^{z},L/b).
\label{eq:scMk}$$ In early times, when $\xi(t) \ll L$, the second moment also displays a power-law behavior, $$M^{(2)}(t,L) \sim t^{(d-2\beta/\nu)/z},
\label{eq:Mk}$$ which can be used to
---
author:
- |
Bahman Dehnadi ${}^a$\
- |
Andre H. Hoang ${}^{b,c}$\
- |
Vicent Mateu ${}^{c,d,e}$\
- |
S. Mohammad Zebarjad ${}^a$\
${}^a$ Shiraz University, Physics Department, Shiraz 71454, Iran.\
${}^b$ University of Vienna, Faculty of Physics, Boltzmanngasse 5, A-1090 Vienna, Austria.\
${}^c$ Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, D-80805 München, Germany.\
${}^d$ Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139.\
${}^e$ Instituto de Física Corpuscular, UVEG - Consejo Superior de Investigaciones Científicas, Apartado de Correos 22085, E-46071, Valencia, Spain.
bibliography:
- 'charm2.bib'
title: 'Charm Mass Determination from QCD Charmonium Sum Rules at Order $\alpha_s^3$'
---
Introduction {#sectionintroduction}
============
Accurate determinations of the charm quark mass are an important ingredient in the prediction of inclusive and radiative $B$ decays or exclusive kaon decays such as $K\to\pi\nu\bar{\nu}$. Since these decays are instruments to either measure CKM matrix elements or to search for new physics effects, appropriate and realistic estimates of the uncertainties are also an important element of these analyses [@Antonelli:2009ws].
One of the most powerful methods to determine the charm quark mass is based on sum rules for the charm-anticharm production rate in $e^{+}e^{-}$ annihilation [@Novikov:1977dq]. Here, moments of the correlation function of two charm vector currents at zero momentum transfer $$\begin{aligned}
\label{momentdef1}
M_{n}^{{\rm th}} & = & \dfrac{12\pi^2 Q_c^2}{n!}\,\dfrac{{\rm d}}{{\rm
d}q^{2n}}\left.\Pi(q^{2})\right|_{q^{2}=0}\,,\\
\left(g_{\mu\nu}q^{2}-q_{\mu}q_{\nu}\right)\,\Pi(q^{2})
& = &
-\, i\int\mathrm{d}x\, e^{iqx}\left\langle \,0\left|T\,
j_{\mu}(x)j_{\nu}(0)\right|0\,\right\rangle
\,,\nonumber \\[2mm]
j^{\mu}(x)
& = &
\bar{\psi}(x)\gamma^{\mu}\psi(x)
\,,\nonumber \end{aligned}$$ $Q_c$ being the charm quark electric charge, can be related to weighted integrals of the normalized charm cross section $$\begin{aligned}
\label{momentdef2}
M_{n} & = &
\int\dfrac{{\rm d}s}{s^{n+1}}R_{e^{+}e^{-}\to\, c\bar{c}\,+X}(s)\,,\\
R_{e^{+}e^{-}\to\, c\bar{c}\,+X}(s) & = & \dfrac{\sigma_{e^{+}e^{-}\to\, c\bar{c}\,+X}(s)}{\sigma_{e^{+}e^{-}\to\,\mu^{+}\mu^{-}}(s)}\,,\nonumber \end{aligned}$$ which can be obtained from experiments. For small values of $n$ such that $m_{c}/n\gtrsim\Lambda_{{\rm QCD}}$ the theoretical moments $M_{n}^{{\rm th}}$ can be computed in an operator product expansion (OPE) where the dominant part is provided by perturbative QCD supplemented by small vacuum condensates that parametrize nonperturbative effects [@Shifman:1978bx; @Shifman:1978by]. The leading gluon condensate power correction term has a surprisingly small numerical effect and is essentially negligible for the numerical analysis as long as $n$ is small.
This allows to determine the charm mass in a short distance scheme such as $\overline{{\rm MS}}$ to high precision. This method to determine the $\overline{{\rm MS}}$ charm mass is frequently called charmonium sum rules. For the theoretical moments the perturbative part of the OPE is known at ${\mathcal O}(\alpha_{s}^{0})$ and ${\mathcal O}(\alpha_{s})$ for any value of $n$ [@Kallen:1955fb]. At ${\mathcal O}(\alpha_{s}^{2})$ the first 30 moments are known [@Boughezal:2006uu; @Maier:2007yn], and to ${\mathcal O}(\alpha_{s}^{3})$ for $n=1$ [@Chetyrkin:2006xg; @Boughezal:2006px], $n=2$ [@Maier:2008he], and $n=3$ [@Maier:2009fz]. Higher moments at ${\cal O}(\alpha_s^3)$ have been determined by a semianalytical procedure [@Hoang:2008qy; @Kiyo:2009gb] (see also [@Greynat:2010kx]). The Wilson coefficient of the gluon condensate contribution is known to ${\mathcal O}(\alpha_{s})$ [@Broadhurst:1994qj]. On the experimental side the total hadronic cross section in $e^{+}e^{-}$ annihilation is known from various experimental measurements for c.m. energies up to $10.538\,$GeV. None of the experimental analyses actually ranges over the entire energy region between the charmonium region and $10.538$ GeV, but different analyses overlapping in energy exist such that energies up to $10.538\,$GeV are completely covered [@Bai:1999pk; @Bai:2001ct; @Ablikim:2004ck; @Ablikim:2006aj; @Ablikim:2006mb; @:2009jsa; @Osterheld:1986hw; @Edwards:1990pc; @Ammar:1997sk; @Besson:1984bd; @:2007qwa; @CroninHennessy:2008yi; @Blinov:1993fw; @Criegee:1981qx; @Siegrist:1976br; @Rapidis:1977cv; @Abrams:1979cx; @Siegrist:1981zp].[^1] Interestingly, to the best of our knowledge, the complete set of all available experimental data on the hadronic cross section has never been used in previous charmonium sum rule analyses to determine the experimental moments. Rather, sum rule analyses have relied heavily on theoretical input using different approaches to determine the corresponding “experimental error” and intrinsically leading to a sizable modeling uncertainty for energy regions below $10.538\,$GeV for low values of $n$ [@Hoang:2004xm].
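As an illustration of Eq. (\[momentdef2\]), a simple numerical recipe for the experimental moments is a weighted quadrature of the measured $R$ ratio; the sketch below uses the trapezoidal rule and assumes the data have already been combined onto a common energy grid (the variable names are ours and do not refer to any specific analysis).

```python
import numpy as np

def experimental_moment(sqrt_s, R, n):
    """Approximate M_n = int ds / s**(n+1) * R(s) with the trapezoidal rule.

    sqrt_s : sorted array of c.m. energies (GeV)
    R      : normalized charm cross section at those energies
    n      : moment index
    """
    s = np.asarray(sqrt_s, dtype=float) ** 2
    integrand = np.asarray(R, dtype=float) / s ** (n + 1)
    ds = np.diff(s)
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * ds)
```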
The most recent charmonium sum rule analysis based on Eqs. (\[momentdef1\]) and (\[momentdef2\]), carried out by Kühn et al. [@Chetyrkin:2009fv; @Kuhn:2007vp] using input from perturbative QCD (pQCD) at ${\mathcal O}(\alpha_{s}^{3})$ for the perturbative contribution, obtained $\overline{m}_{c}(\overline{m}_{c})=
1279\pm(2)_{{\rm pert}}\pm(9)_{{\rm exp}}
\pm(9)_{\alpha_{s}}\pm(1)_{{\rm \left\langle GG\right\rangle}}\,$ MeV where the first error is the perturbative uncertainty and the second is the experimental one. The third and the fourth uncertainties come from $\alpha_s$ and the gluon condensate correction, respectively. To our knowledge this result, the outcome of similar analyses in Ref. [@Chetyrkin:2006xg] and by Boughezal, Czakon and Schutzmeier [@Boughezal:2006px][^2], and a closely related analysis based on lattice results instead of data for pseudoscalar moments [@Allison:2008xk; @McNeile:2010ji] represent the analyses with the highest precision achieved so far in the literature. If confirmed, any further investigations and attempts concerning a more precise charm quark $\overline{{\rm MS}}$ mass would likely be irrelevant for any foreseeable future.
We therefore find it warranted to reexamine the charmonium sum rule analysis with special attention to the way in which perturbative and experimental uncertainties have been treated in Refs. [@Chetyrkin:2009fv; @Kuhn:2007vp]. A closer look into their analysis reveals that the quoted perturbative uncertainty results from a specific way of arranging the $\alpha_s$ expansion for the charm mass extractions and, in addition, from setting the $\overline{{\rm MS}}$ renormalization scales in $\alpha_{s}$ and in the charm mass (which we call $\mu_{\alpha}$ and $\mu_{m}$, respectively) equal to each other (i.e., they use $\mu_{\alpha}=\mu_{m}$). Moreover, concerning the experimental moments, only data up to $\sqrt{s}=4.8\,$GeV from the BES experiments [@Bai:2001ct; @Ablikim:
---
abstract: 'Let $W,W''\subseteq G$ be nonempty subsets in an arbitrary group $G$. $W''$ is said to be a complement to $W$ if $WW''=G$ and it is minimal if no proper subset of $W''$ is a complement to $W$. We show that, if $W$ is finite then every complement of $W$ has a minimal complement, answering a problem of Nathanson. We also give necessary and sufficient conditions for the existence of minimal complements of a certain class of infinite subsets $W$ in finitely generated abelian groups, partially answering another problem of Nathanson.'
address:
- 'Universität Wien, Fakultät für Mathematik, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria.'
- 'Department of Mathematics, Indian Institute of Science Education and Research Bhopal, Bhopal Bypass Road, Bhauri, Bhopal 462066, Madhya Pradesh, India'
author:
- Arindam Biswas
- Jyoti Prakash Saha
title: On minimal complements in groups
---
Introduction
============
Motivation
----------
Let $(G,.)$ be a group and $W\subseteq G$ be a nonempty subset. A nonempty set $W'\subseteq G$ is said to be a complement to $W$ if $$WW' = G.$$ Let $\mathcal{W}$ denote the set of all complements of $W$. Then it is clear that $\mathcal{W}\neq \emptyset$ (since $G\in \mathcal{W}$) and that the elements of $\mathcal{W}$ form a partially ordered set under inclusion.
A complement $W'$ to $W$ is minimal if no proper subset of $W'$ is a complement to $W$, i.e., $$WW' = G \,\text{ and }\, W.(W'\setminus \lbrace w'\rbrace)\subsetneq G \,\,\, \forall w'\in W'.$$
Given a minimal complement $W'$ of $W$, we see that the right translation $W'g$ is also a minimal complement of $W$ and $W'$ is a minimal complement of $gW$ for all $g \in G$. Thus, the existence of a minimal complement of a nonempty subset is equivalent to the existence of a minimal complement of any of its left translates.
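To make the definitions concrete, here is a small brute-force sketch (ours, purely illustrative and unrelated to the proofs below) that checks whether a set is a minimal complement in the finite cyclic group $\mathbb{Z}/n\mathbb{Z}$, written additively.

```python
def is_complement(W, Wp, n):
    """Check whether W + W' = Z/nZ (the group written additively)."""
    return {(w + wp) % n for w in W for wp in Wp} == set(range(n))

def is_minimal_complement(W, Wp, n):
    """W' is a minimal complement of W if it is a complement and removing
    any single element of W' destroys the covering property."""
    if not is_complement(W, Wp, n):
        return False
    return all(not is_complement(W, Wp - {x}, n) for x in Wp)

# Example in Z/6Z: W = {0, 1} and W' = {0, 2, 4}.
print(is_minimal_complement({0, 1}, {0, 2, 4}, 6))  # True
```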
It was shown by Nathanson (see [@NathansonAddNT4 Theorem 8]) that for a non-empty, finite subset $W$ in the additive group $\mathbb{Z}$, any complement to $W$ has a minimal complement. In the same paper he asked the following questions:
[@NathansonAddNT4 Problem 11] \[nathansonprob11\] “Let $W$ be an infinite set of integers. Does there exist a minimal complement to $W$? Does there exist a complement to $W$ that does not contain a minimal complement?”
[@NathansonAddNT4 Problem 12] \[nathansonprob12\] “Let $G$ be an infinite group, and let $W$ be a finite subset of $G$. Does there exist a minimal complement to $W$? Does there exist a complement to $W$ that does not contain a minimal complement?”
[@NathansonAddNT4 Problem 13] \[nathansonprob13\] “Let $G$ be an infinite group, and let $W$ be an infinite subset of $G$. Does there exist a minimal complement to $W$? Does there exist a complement to $W$ that does not contain a minimal complement?”
Since then the problems have generated considerable interest. Chen and Yang in 2012 gave examples of two infinite sets $W_1,W_2 \subset \mathbb{Z}$, such that $W_1$ has a complement that does not contain a minimal complement and every complement to $W_2$ contains a minimal complement (see [@ChenYang12]). They also gave certain necessary and certain sufficient conditions on the infinite set $W \subset \mathbb{Z}$ such that $W$ has a minimal complement (see [@ChenYang12 Theorem 1, 2]). Very recently, Kiss, Sándor and Yang [@KissSandorYangJCT19] succeeded in giving necessary and sufficient conditions for the existence of minimal complements of several other classes of infinite sets in $\mathbb{Z}$ (which were not covered in the previous work of Chen and Yang). See [@KissSandorYangJCT19 Theorems 1, 2, 3].
Statement of results
--------------------
All the aforementioned progress was in the setting of Question \[nathansonprob11\]. In this article, we deal with Questions \[nathansonprob12\] and \[nathansonprob13\]. Specifically, we show the following.
\[Theorem \[theorem1\]\] \[theorem1.1\] Let $G$ be an arbitrary group with $S$ a nonempty finite subset of $G$. Then every complement of $S$ in $G$ has a minimal complement.
See Section \[sec2\], Theorem \[theorem1\]. This answers Question \[nathansonprob12\] of Nathanson.\
We turn to Question \[nathansonprob13\]. Before commencing the discussion in detail, we state that it has a simple answer when the subset $W$ is a subgroup. Namely, in that case a minimal complement always exists. See Proposition \[prop4.1\]. For general infinite subsets, the situation is more delicate. To give an answer to Question \[nathansonprob13\], one needs to consider the infinite subsets which have less algebraic structure.
Our goal in this case is to establish the existence of minimal complements for a large class of infinite sets in finitely generated abelian groups. This will give the claimed partial solution to this question of Nathanson. In section \[Sec: Minimal complement\], we focus on the minimal complements of certain infinite subsets of free abelian groups of finite rank, which are of the form ${\ensuremath{\mathbb{Z}}}^d$ for some integer $d\geq 1$. It is interesting to consider the subsets $X$ of ${\ensuremath{\mathbb{Z}}}^d$ such that $x + {\ensuremath{\mathbb{N}}}u_1 + \cdots + {\ensuremath{\mathbb{N}}}u_d$ is contained in $X$ for any $x\in X$, i.e., $$\label{Eqn: preperiodic}
X \supseteq X + {\ensuremath{\mathbb{N}}}u_1 + \cdots + {\ensuremath{\mathbb{N}}}u_d,$$ where $u_1, \cdots, u_d$ are elements of ${\ensuremath{\mathbb{Z}}}^d$ satisfying no nontrivial ${\ensuremath{\mathbb{Z}}}$-linear relation. However, it turns out that such sets do not necessarily have a minimal complement. For instance, ${\ensuremath{\mathbb{Z}}}\times {\ensuremath{\mathbb{N}}}$ is a subset of ${\ensuremath{\mathbb{Z}}}^d$ which satisfies Equation , but does not have any minimal complement.
To obtain examples of infinite subsets of ${\ensuremath{\mathbb{Z}}}^d$ having minimal complements, we consider the periodic subsets of ${\ensuremath{\mathbb{Z}}}^d$ (these are subsets of ${\ensuremath{\mathbb{Z}}}^d$ satisfying Equation along with a finiteness condition, see Definition \[Defn: periodic\]). Unfortunately, there exist periodic subsets of ${\ensuremath{\mathbb{Z}}}^d$, which do not admit any minimal complement (see Proposition \[Cor: Nd minimal complement\]). So we consider a more general class of subsets of ${\ensuremath{\mathbb{Z}}}^d$ satisfying a weaker version of Equation along with certain finiteness condition (which we call *eventually periodic subsets*, see Definition \[Defn: periodic\] - these are $d$-dimensional analogues of the eventually periodic sets in $\mathbb{Z}$ considered by Kiss-Sándor-Yang in [@KissSandorYangJCT19]). Given an eventually periodic subset $W$ of ${\ensuremath{\mathbb{Z}}}^d$ (with periods $u_1, \cdots, u_d$), by Theorem \[Thm: Structure of Eventually Periodic\], there exist subsets ${\ensuremath{\mathcal {W}}}, {\ensuremath{\mathscr{W}}}$ of $W$ such that $$W = {\ensuremath{\mathscr{W}}}\sqcup ({\ensuremath{\mathcal {W}}}+ ({\ensuremath{\mathbb{N}}}u_1 + \cdots + {\ensuremath{\mathbb{N}}}u_d))$$ holds. It turns out that certain eventually periodic subsets of ${\ensuremath{\mathbb{Z}}}^d$ have minimal complements. The following result provides a necessary condition for an eventually periodic subset of ${\ensuremath{\mathbb{Z}}}^d$ to have a minimal complement. We prove the following results -
\[Theorem \[Thm: existence of min complement implies\]\] \[theorem1.2\] Let $W$ be an eventually periodic subset of ${\ensuremath{\mathbb{Z}}}^d$ with periods $u_1, \cdots, u_d$. Let ${\ensuremath{\mathscr{W}}}_1$ be as in Theorem \[Thm: existence of min complement implies\]. Suppose $W$ has a minimal complement in ${\ensuremath{\mathbb{Z}}}^d$. Then ${\ensuremath{\mathscr{W}}}_1$ is nonempty and there exists a nonempty finite subset ${\ensuremath{\mathcal{M}}}$ of ${\ensuremath{\mathbb{Z}}}^d$ such that the following conditions hold.
1. The map $\pi: ({\ensuremath{\mathcal{M}}}+ ({\ensuremath{\mathcal {
---
abstract: 'This paper introduces and analyses the new grid-based tensor approach for approximate solution of the eigenvalue problem for linearized Hartree-Fock equation applied to the 3D lattice-structured and periodic systems. The set of localized basis functions over spatial $(L_1,L_2,L_3)$ lattice in a bounding box (or supercell) is assembled by multiple replicas of those from the unit cell. All basis functions and operators are discretized on a global 3D tensor grid in the bounding box which enables rather general basis sets. In the periodic case, the Galerkin Fock matrix is shown to have the three-level block circulant structure, that allows the FFT-based diagonalization. The proposed tensor techniques manifest the twofold benefits: (a) the entries of the Fock matrix are computed by 1D operations using low-rank tensors represented on a 3D grid, (b) the low-rank tensor structure in the diagonal blocks of the Fock matrix in the Fourier space reduces the conventional 3D FFT to the product of 1D FFTs. We describe fast numerical algorithms for the block circulant representation of the core Hamiltonian in the periodic setting based on low-rank tensor representation of arising multidimensional functions. Lattice type systems in a box with open boundary conditions are treated by our previous tensor solver for single molecules, which makes possible calculations on large $(L_1,L_2,L_3)$ lattices due to reduced numerical cost for 3D problems. The numerical simulations for box/periodic $(L,1,1)$ lattice systems in a 3D rectangular “tube” with $L$ up to several hundred confirm the theoretical complexity bounds for the tensor-structured eigenvalue solvers in the limit of large $L$.'
author:
- 'V. KHOROMSKAIA, [^1]'
- 'B. N. KHOROMSKIJ [^2]'
title: 'Tensor Numerical Approach to Linearized Hartree-Fock Equation for Lattice-type and Periodic Systems'
---
*AMS Subject Classification:* 65F30, 65F50, 65N35, 65F10
*Key words:* Hartree-Fock equation, tensor-structured numerical methods, 3D grid-based tensor approximation, Fock operator, core Hamiltonian, periodic systems, lattice summation, block circulant matrix, Fourier transform.
Introduction {#sec:introduct}
============
The efficient numerical simulation of periodic and perturbed periodic systems is one of the most challenging computational tasks in quantum chemistry calculations of crystalline, metallic and polymer-type compounds. The reformulation of the nonlinear Hartree-Fock equation for periodic molecular systems based on the Bloch theory [@Bloch:1925] has been addressed in the literature for more than forty years, and nowadays there are several implementations mostly relying on the analytic treatment of arising integral operators [@CRYSTAL:2000; @CRYSCOR:12; @GAUSS:09]. The mathematical analysis of spectral problems for PDEs with periodic-type coefficients has been an attractive topic in the recent decade, see [@CancesDeLe:08; @CanEhrMad:2012; @Ortn:ArX] and the references therein. However, the systematic developments and optimization of the basic numerical algorithms in the Hartree-Fock calculations for large lattice-structured compounds are still largely unexplored.
Grid-based approaches for single molecules and moderate size systems based on the locally adaptive grids and multiresolution techniques have been discussed (see [@HaFaYaBeyl:04; @SaadRev:10; @Frediani:13; @CanEhrMad:2012; @Ortn:ArX; @BiVale:11; @RahOsel:13] and references therein).
In this paper, we consider the Hartree-Fock equation for extended systems composed of atoms or molecules, determined by means of an $(L_1, L_2, L_3)$ lattice in a box, both for open boundary conditions and in the periodic setting (supercell). The grid-based tensor-structured method is applied (see [@KhKhFl_Hart:09; @VKH_solver:13; @KhorSurv:10; @VeBoKh:Ewald:14] and references therein) to calculate the core Hamiltonian in the localized Gaussian-type basis sets living on a box/periodic spatial lattice. To perform numerical integration by using low-rank tensor formats we represent all basis functions on the fine global grid covering the whole computational box (supercell). The Hartree-Fock equation for periodic systems is reformulated as the eigenvalue problem for large block circulant matrices which are diagonalizable in the Fourier space, that allows efficient computations on large lattices of size $L=\max\{L_1,L_2,L_3\}$. In the following we consider the model problem for the Fock operator confined to the core Hamiltonian part.
One of the severe difficulties in the Hartree-Fock calculations for lattice-structured periodic or box-restricted systems is the computation of 3D lattice sums of a large number of long-distance Coulomb interaction potentials. This problem is traditionally treated by the so-called Ewald-type summation techniques [@Ewald:27] combined with the fast multipole expansion or/and FFT methods. Notice that the traditional approaches for lattice summation by the Ewald-type methods scale as $O(L^3 \log L)$ at least, for both periodic and box-type lattice sums. We apply the recent lattice summation method [@VeBoKh:Ewald:14] by assembled rank-structured tensor decomposition, which reduces the asymptotic cost at this computational step to linear scaling in $L$, i.e. $O(L)$.
In the presented approach the Fock matrix is calculated directly by 3D grid-based tensor numerical methods in the basis set of localized Gaussian-type-orbitals (GTO) specified by $m_0$ elements in the 3D unit cell [@VeKh_Diss:10; @VKH_solver:13]. Hence, we do not impose explicitly the periodicity-like features of the solution by means of the approximation ansatz that is normally the case in the Bloch formalism. Instead, the periodic properties of the considered system appear implicitly through the block structure in the Fock matrix. In periodic case this matrix is proved to inherit the three-level symmetric block circulant form, that allows its efficient diagonalization in the Fourier basis [@KaiSay_book:99; @Davis]. In the case of $d$-dimensional lattice ($d=1,2,3$), the weak overlap between lattice translated basis functions improves the block sparsity thus reducing the storage cost to $O(m_0^2 L)$, while the FFT-based diagonalization procedure amounts to $O(m_0^2 L^d \log L)$ operations. Introducing the low-rank tensor structure into the diagonal blocks of the Fock matrix represented in the Fourier space, and using the initial block-circulant structure it becomes possible to further reduce the numerical costs to linear scaling in $L$, $O(m_0^2 L \log L)$. We present numerical tests in the case of a rectangular 3D “tube” composed of $(L, 1, 1)$ cells with $L$ up to several hundred.
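The FFT-based diagonalization used here can be illustrated on a one-level block circulant matrix; the following sketch (a simplified toy, with randomly generated blocks standing in for the actual Fock-matrix blocks) checks that the eigenvalues obtained from the Fourier blocks agree with a dense diagonalization.

```python
import numpy as np

def block_circulant_eigvals(blocks):
    """Eigenvalues of a one-level block circulant matrix whose first block
    row is blocks[0], ..., blocks[L-1], each of size m0 x m0.

    A 1D FFT over the cell index block-diagonalizes the matrix; its spectrum
    is the union of the spectra of the L Fourier blocks."""
    A = np.stack(blocks)              # shape (L, m0, m0)
    A_hat = np.fft.fft(A, axis=0)     # one Fourier block per quasimomentum
    return np.concatenate([np.linalg.eigvals(Ak) for Ak in A_hat])

# Sanity check against a dense eigenvalue computation.
rng = np.random.default_rng(0)
L, m0 = 4, 3
blocks = [rng.standard_normal((m0, m0)) for _ in range(L)]
C = np.block([[blocks[(k - j) % L] for k in range(L)] for j in range(L)])
ev_fft = np.sort(np.abs(block_circulant_eigvals(blocks)))
ev_dense = np.sort(np.abs(np.linalg.eigvals(C)))
print(np.allclose(ev_fft, ev_dense))  # True
```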
In the new approach one can potentially benefit from the additional flexibility that allows to treat slightly perturbed periodic systems in a straightforward way. Such situations may arise, for example, in the case of finite extended systems in a box (open boundary conditions) also considered in this paper, or for slightly perturbed periodic compounds, say for quasi-periodic systems with vacancies [@BGKh:12]. The proposed numerical scheme can be applied in the framework of self-consistent Hartree-Fock calculations, in particular, in the reduced Hartree-Fock model [@CancesDeLe:08], where the similar block-structure in the Fock matrix can be observed. The Wannier-type basis constructed by the lattice translation of the initial localized molecular orbitals precomputed on the reference unit cell, can be also adapted to our framework.
Furthermore, the arising block-structured matrix representing the stiffness matrix $H$ of the core Hamiltonian, as well as some auxiliary function-related tensors, can be shown to be well suited for further optimization by imposing the low-rank tensor formats, and in particular, the quantics-TT (QTT) tensor approximation [@KhQuant:09] of long vectors, which especially benefits in the limiting case of large $L$-periodic systems. In the QTT approach the algebraic operations on the 3D $n\times n\times n$ Cartesian grid can be implemented with logarithmic cost $O(\log n)$. Literature surveys on tensor algebra and rank-structured tensor methods for multi-dimensional PDEs can be found in [@Kolda; @KhorSurv:10; @GraKresTo:13], see also [@HaKhSaTy:08; @DoKhSavOs_mEIG:13].
The rest of the paper is organized as follows. Section \[sec\_MLBlock-circ\] recalls the main properties of the multilevel block circulant matrices with special focus on their diagonalization by FFT. Section \[sec:core\_H\] includes the main results on the analysis of core Hamiltonian on lattice structured compounds. In particular, section \[Core\_Hamil\] describes the tensor-structured calculation of the core Hamiltonian for large lattice-type molecular/atomic systems. We recall tensor-structured calculation of the Laplace operator and fast summation of lattice potentials by assembled canonical tensors. The complexity reduction due to low-rank tensor structures in the matrix blocks is discussed (see Proposition \[prop:low
---
abstract: 'We show that conservation laws in quantum mechanics naturally lead to metric spaces for the set of related physical quantities. All such metric spaces have an “onion-shell” geometry. We demonstrate the power of this approach by considering many-body systems immersed in a magnetic field, with a finite ground state current. In the associated metric spaces we find regions of allowed and forbidden distances, a “band structure” in metric space directly arising from the conservation of the $z$ component of the angular momentum.'
author:
- 'P. M. Sharp and I. D’Amico'
bibliography:
- 'References.bib'
title: Metric Space Formulation of Quantum Mechanical Conservation Laws
---
Introduction
============
Conservation laws are a central tenet of our understanding of the physical world. Their tight relationship to natural symmetries was demonstrated by Noether in 1918 [@Noether1918] and has since been a fundamental tool for developing theoretical physics. In this paper we demonstrate how these laws induce appropriate “natural” metrics on the related physical quantities. Conservation laws are central to the behavior of physical systems and we show how this relevant physics is translated into the metric analysis. We argue that this alternative picture provides a new powerful tool to study certain properties of many-body systems, which are often complex and hardly tractable when considered within the usual coordinate space-based analysis, but may become much simpler when analyzed within metric spaces. We exemplify this concept by considering functional relationships fundamental to current density functional theory (CDFT) [@Vignale1987; @Vignale1988].
We will first introduce a way to derive appropriate “natural” metrics from a system’s conservation laws. Second, as an example application of the approach, we will explicitly consider an important class of systems – systems with applied external magnetic fields. In contrast with those to which standard density functional theory (DFT) [@Dreizler1990] can be applied, systems subject to external magnetic fields are not simply characterized by their particle densities as even their ground states may display a finite current [@Vignale1987; @Vignale1988]. These systems are of great importance, e.g., due to the emerging quantum technologies of spintronics and quantum information where, for example, few electrons in nano- or microstructures immersed in magnetic fields are proposed as hardware units [@Takahashi2010; @daSilva2009; @Brandner2013; @Amaha2013; @Castellanos-Beltran2013].
To analyze systems immersed in a magnetic field, we will introduce a metric associated with the paramagnetic current density, which can be associated with the angular momentum components. We will show that, at least for systems which preserve the $z$ component of the angular momentum, the paramagnetic current density metric space displays an “onion-shell” geometry, directly descending from the related conservation law. In recent work [@D'Amico2011; @Artacho2011; @D'Amico2011b] appropriate metrics for characterizing wavefunctions and particle densities within quantum mechanics were introduced. It was shown that wavefunctions and their particle densities both form metric spaces with an “onion-shell” structure [@D'Amico2011]. We will show that, within the same general procedure used for the paramagnetic current, these metrics descend from the respective conservation laws. We will then focus on ground states and characterize them not only through the mapping between wavefunctions and particle densities, but importantly through mappings involving the paramagnetic current density. In fact, for systems with an applied magnetic field, ground state wavefunctions are characterized uniquely only by knowledge of both particle *and* paramagnetic current densities (and vice versa), as demonstrated within CDFT [@Vignale1987; @Vignale1988].
The rest of this paper is organized as follows: In Sec. \[metric\] we introduce our general approach to derive metric spaces from conservation laws. We demonstrate the application of this approach to wavefunctions, particle densities, and paramagnetic current densities in Sec. \[apply\]. We consider systems subject to magnetic fields in Sec. \[cdft\]. Here we use the metrics derived from our approach to study the fundamental theorem of CDFT. We present our conclusions in Sec. \[conclusion\].
Derivation of Metric Spaces from Conservation Laws {#metric}
==================================================
A metric or distance function $D$ over a set $X$ satisfies the following axioms for all $x,y,z \in X$ [@Megginson1998; @Sutherland2009]: $$\begin{aligned}
D(x,y) &\geqslant 0\ \text{and}\ D(x,y)=0 \iff x=y, \label{axiom1}\\
D(x,y) &= D(y,x), \label{axiom2}\\
D(x,y) &\leqslant D(x,z)+D(z,y), \label{axiom3}\end{aligned}$$ with (\[axiom3\]) known as the triangle inequality. The set $X$ with the metric $D$ forms the metric space $(X,D)$. It can be seen from the axioms (\[axiom1\]) - (\[axiom3\]) that many metrics could be devised for the same set, some trivial. Here we introduce “natural” metrics associated to conservation laws: this will avoid arbitrariness and in turn will ensure that the proposed metrics stem from core characteristics of the systems analyzed and contain the related physics.
In quantum mechanics, many conservation laws take the form $$\label{conservation}
\int {\left|f(x)\right|}^{p} dx = c$$ for $0<c<\infty$. For each value of $1\leqslant p<\infty$, the entire set of functions that satisfy (\[conservation\]) belong to the $L^p$ vector space, where the standard norm is the $p$ norm [@Megginson1998] $$\label{lp_norm}
{\left|\left|f(x)\right|\right|}_p =\left[\int {\left|f(x)\right|}^{p} dx \right]^{\frac{1}{p}}.$$ From any norm a metric can be introduced in a standard way as $D(x,y)={\left|\left|x-y\right|\right|}$ so that with $p$ norms we get $$\label{lp_metric}
D_{f}(f_1,f_2):={\left|\left|f_1-f_2\right|\right|}_p.$$ However before assuming this metric for the physical functions related to the conservation laws, an important consideration must be made: Eq. (\[lp\_metric\]) has been derived assuming the ensemble $\{f\}$ to be a vector space; this is in fact necessary to introduce a norm. If we want to retain the metric (\[lp\_metric\]), but restrict it to the ensemble of *physical* functions satisfying (\[conservation\]), which does not necessarily form a vector space, we must show that (\[lp\_metric\]) is a metric for this restricted function set. This can be done using the general theory of metric spaces: given a metric space $(X,D)$ and $S$ a non empty subset of $X$, $(S,D)$ is itself a metric space with the metric $D$ inherited from $(X,D)$. The metric axioms (\[axiom1\]) - (\[axiom3\]) automatically hold for $(S,D)$ because they hold for $(X,D)$ [@Megginson1998; @Sutherland2009]. Hence, we have a metric for the functions of interest, as their sets are non empty subsets of the respective $L^p$ sets.
The metric (\[lp\_metric\]) is then the one that *directly descends* from the conservation law (\[conservation\]). Conversely any conservation law which can be recast as (\[conservation\]) (for example conservation of quantum numbers) can be interpreted as inducing a metric on the appropriate, physically relevant, subset of $L^{p}$ functions. This provides a general procedure to derive “natural” metrics from physical conservation laws.
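A minimal numerical sketch of the metric in Eq. (\[lp\_metric\]) for functions sampled on a uniform 1D grid is given below; the Gaussian profiles are only placeholders chosen to satisfy the normalization and are not data from this paper. For particle densities, $p=1$ and the conservation law fixes the integral to the particle number $N$.

```python
import numpy as np

def lp_metric(f1, f2, dx, p=1):
    """D(f1, f2) = ||f1 - f2||_p for functions sampled with grid spacing dx."""
    return (np.sum(np.abs(f1 - f2) ** p) * dx) ** (1.0 / p)

# Two 1D "densities" normalized to the same particle number N = 2.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
rho1 = 2.0 * np.exp(-x ** 2) / np.sqrt(np.pi)
rho2 = 2.0 * np.exp(-(x - 1.0) ** 2) / np.sqrt(np.pi)
print(lp_metric(rho1, rho2, dx, p=1))  # lies between 0 and the upper bound 2N
```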
Applications of the Metric Space Approach {#apply}
=========================================
We now consider specific quantum mechanical functions and conservation laws. Following Ref. [@D'Amico2011] we use a convention where wavefunctions are normalized to the particle number $N$ [^1]. Then the particle density of an $N$-particle system and its paramagnetic current density are defined as $$\begin{aligned}
\rho({\mathbf{r}})&=\int {\left|\psi\left({\mathbf{r}},{\mathbf{r}}_{2},\ldots,{\mathbf{r}}_{N}\right)\right|}^{2} d{\mathbf{r}}_{2}\ldots d{\mathbf{r}}_{N},\label{density}\\
{\mathbf{j}}_{p}({\mathbf{r}})&=-\frac{i}{2}\int \left(\psi^{\ast}\nabla\psi - \psi\nabla\psi^{\ast}\right) d{\mathbf{r}}_{2}\ldots d{\mathbf{r}}_{N}.\label{current}\end{aligned}$$ First of all we note that $\psi\left({\mathbf{r}}_1,{\mathbf{r}}_{2},\ldots ,{\mathbf{r}}_{N}\right)$ and $\rho({\mathbf{r}})$ are subject to the following conservation laws (wavefunction norm and particle conservation): $$\begin{aligned}
&\int{\left|\frac{\psi\left({\mathbf{r}}_1,{\mathbf{r}}_{2},\ldots ,{\mathbf{r}}_{N}\right)}{\sqrt{N}}\right|}^{2}d{\mathbf{r}}_{1}\ldots d{\mathbf{r}}_{N} = 1,\label{psi_cons}\\
&\int\rho({\mathbf{r}}) d
---
abstract: 'We examine the convexity and tractability of the two-sided linear chance constraint model under Gaussian uncertainty. We show that these constraints can be applied directly to model a larger class of nonlinear chance constraints as well as provide a reasonable approximation for a challenging class of quadratic chance constraints of direct interest for applications in power systems. With a view towards practical computations, we develop a second-order cone outer approximation of the two-sided chance constraint with provably small approximation error.'
author:
- Miles Lubin
- Daniel Bienstock
- Juan Pablo Vielma
bibliography:
- 'refs.bib'
date: February 2016
title: 'Two-sided linear chance constraints and extensions'
---
Introduction
============
Chance constraints (or probabilistic constraints) were among the first extensions proposed to linear programming as a natural formulation for treating constraints where some of the coefficients are uncertain at the time of optimization [@CharnesCooper]. In the chance constraint model, we suppose that the uncertain values follow a known distribution and enforce that the constraint holds with high probability as a function of the decision variables.
Nemirovski and Shapiro [@NS2007] observe that, in general, convexity and tractability results in chance constraints are a rare combination. When the corresponding deterministic constraint is convex, the chance constraint may be nonconvex. And even for those chance constraints which are in fact convex, the authors [@NS2007] cite examples where such constraints remain computationally intractable because it is NP-Hard to test if the constraint is satisfied. For linear chance constraints of the form $$\label{eq:onesidechance}
\mathbb{P}(x^T\xi \le b) \ge 1-\epsilon,$$ where $x \in \mathbb{R}^n$ and $b \in \mathbb{R}$ are decision variables, the constraint is known to be convex (that is, the set $\{ (x,b) : \mathbb{P}(x^T\xi \le b) \ge 1-\epsilon\}$ is convex) and computationally tractable when $\xi$ has an *elliptical log-concave* distribution [@Lagoa05], examples of which include the multivariate Gaussian distribution and few others. The computational challenges presented by chance constraints have motivated approximation schemes [@NS2007] and alternative formulations such as robust optimization [@BenTalNemirovskiRobust2000].
Even more challenging than linear chance constraints, *joint chance constraints* require that a set of linear constraints hold jointly with high probability. Prékopa [@PrekopaBook] reviews many of the standard results. In particular, he proves convexity of the constraint $\mathbb{P}(x \ge \xi) \ge 1-\epsilon$ with respect to $x\in \mathbb{R}^n$ when $\xi$ follows a multivariate continuous log-concave distribution and of the constraint $\mathbb{P}(Tx \ge 0) \ge 1-\epsilon$ when some elements of the matrix $T$ are random with a joint Gaussian distribution and have a specialized covariance structure between the rows of $T$ (further generalized by [@Copulas]). Van Ackooij et al. [@VanAckooij10] consider *rectangular* chance constraints of the form $\mathbb{P}(a \le \xi \le b) \ge 1-\epsilon$ with respect to vectors $a$ and $b$ where $\xi$ follows a multivariate Gaussian distribution. Their model does not allow for products between random variables and decision variables.
The basic model we consider in this work, which is a special case of a joint chance constraint, is the two-sided chance constraint $$\label{eq:twosideintro}
\mathbb{P}(a \le x^T\xi \le b) \ge 1-\epsilon,$$ where $a \in \mathbb{R}, b \in \mathbb{R}$, and $x \in \mathbb{R}^n$ are decision variables, and $\xi$ is jointly Gaussian with known mean and covariance. In Section \[sec:cvx2side\], we prove that this constraint is in fact convex in $a$, $b$ and $x$ given $\epsilon \le \frac{1}{2}$. The proof, which we believe is the first, follows from a geometrical insight combined with standard tools for chance constraints such as log-concavity. The major methodological contributions of this work lie in the subsequent generalizations of the model and in our analysis of the computational tractability of the chance constraint. In Section \[sec:exact\_extensions\] we show that a number of seemingly more complex and nonlinear constraints can be formulated by using the two-sided constraint . In Section \[sec:tractability\], we demonstrate computational tractability of these constraints under a modern mathematical optimization lens. In addition to an exact derivative-based nonlinear formulation, we develop an approximate second-order cone (SOC) formulation for with provable approximation quality. This SOC formulation permits one to incorporate such constraints into large-scale models solvable by state-of-the-art commercial and open-source software.
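Since $x^T\xi$ is itself Gaussian with mean $\mu^T x$ and variance $x^T\Sigma x$, feasibility of the two-sided constraint at a fixed point can be checked in closed form; the snippet below is a small illustration with arbitrary numbers, not part of the formulation developed later.

```python
import numpy as np
from scipy.stats import norm

def two_sided_prob(x, a, b, mu, Sigma):
    """P(a <= xi^T x <= b) for xi ~ N(mu, Sigma)."""
    m = mu @ x
    s = np.sqrt(x @ Sigma @ x)
    return norm.cdf((b - m) / s) - norm.cdf((a - m) / s)

mu = np.array([1.0, 0.5])
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])
x = np.array([1.0, 2.0])
print(two_sided_prob(x, a=0.0, b=4.0, mu=mu, Sigma=Sigma))  # ~0.975; compare with 1 - eps
```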
Using as a primitive, we develop an approximation for the more challenging chance constraint $$\label{eq:quadintro}
\mathbb{P}((a^T\xi + b)^2 + (c^T\xi + d)^2 \le k) \ge 1-\epsilon,$$ where $a,c \in \mathbb{R}^n$, $b,d,k \in \mathbb{R}$ are decision variables, and $\xi$ is jointly Gaussian with known mean and covariance. This constraint is motivated by applications in power systems which we discuss in Section \[sec:motivation\]. In Section \[sec:quadapprox\], we study the constraint in detail and compare a number of approximation schemes, ultimately demonstrating that our approximation based on two-sided constraints is reasonable and of practical interest for its tractability.
Motivation {#sec:motivation}
==========
The basic question which motivates this work is the short-term planning problem, known as *optimal power flow* (OPF), which is solved as part of the real-time operation of the power grid to determine the minimum-cost production levels of controllable generators subject to reliably delivering electricity to customers across a large geographical area [@OPFreview; @BergenBook]. Conceptually, OPF is similar to a network flow problem with the additional complication that power flows according to the nonlinear Kirchhoff laws. On top of the nonlinear power flow laws, we aim to consider the uncertainty in production levels of renewable energy sources such as wind and solar photovoltaic.
In its traditional, deterministic form, OPF seeks to minimize total production costs $$\operatorname*{minimize}_{p,\theta,f} \sum_{i \in \mathcal{G}} c_{i}p_i\label{eq:Det_OPF}$$ $$\begin{gathered}
\sum_{n : \{b,n\} \in \mathcal{L}} f_{bn} - \sum_{m : \{m,b\} \in \mathcal{L}} f_{mb} = \sum_{i \in G_b} p_i + w_b - d_b, \quad \forall b \in \mathcal{B},\label{eq:balance} \\
\label{eq:gencapacity} p_{i}^{min} \leq p_i \leq p_i^{max}, \quad \forall i \in \mathcal{G},\\
\label{eq:flowdef} f_{mn} = \beta_{mn}(\theta_m - \theta_n), \quad \forall \{m,n\} \in \mathcal{L}, \\
\label{eq:Det_OPF_end} -f_{mn}^{max} \leq f _{mn} \leq f_{mn}^{max}, \quad \forall \{m,n\} \in \mathcal{L}, \end{gathered}$$ where $\mathcal{B}$ is the set of nodes (buses) in the grid, $\mathcal{G}$ is the set of generators, $G_b$ is the set of generators located at node $b$, and $\mathcal{L}$ is the set of edges (transmission lines). Decision variables $p_i$ denote the production levels of generator $i$, and the variables $f_{mn}$ denote the flow from node $m$ to node $n$. The value $d_b$ is the demand at each node (assumed to be known), and the value $w_b$ is the forecast production level from renewable energy sources (again assumed to be known). Constraint is the familiar flow balance constraint which balances supply with demand at each node. Constraints and enforce the capacities of the generators and transmission lines, respectively. The constraint links the flows to the bus angles $\theta$ and arises from the standard “DC” linearization of the nonlinear power flow laws; hence, this formulation is often called DCOPF. The formulation as stated above is efficiently solvable by linear programming on large-scale systems with tens of thousands of nodes within real-time operational constraints.
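For concreteness, a toy two-bus instance of the deterministic DCOPF above can be solved with an off-the-shelf LP solver; the numbers below are hypothetical and the model omits wind and demand at bus 1, but the constraints mirror Eqs. (\[eq:balance\])–(\[eq:Det\_OPF\_end\]).

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-bus DC optimal power flow: a cheap generator at bus 1, an expensive
# one at bus 2, all demand (d2 = 100 MW) at bus 2, and one line 1-2 with
# susceptance beta and flow limit f_max = 60 MW (all numbers hypothetical).
c1, c2 = 10.0, 50.0
d2, beta, f_max = 100.0, 10.0, 60.0
p_max = 150.0

# Decision vector x = [p1, p2, theta2, f12]; theta1 = 0 is the slack bus.
cost = [c1, c2, 0.0, 0.0]
A_eq = [[1.0, 0.0, 0.0, -1.0],   # bus 1 balance:  p1 - f12 = 0
        [0.0, 1.0, 0.0,  1.0],   # bus 2 balance:  p2 + f12 = d2
        [0.0, 0.0, beta, 1.0]]   # flow law:       f12 = beta*(0 - theta2)
b_eq = [0.0, d2, 0.0]
bounds = [(0, p_max), (0, p_max), (None, None), (-f_max, f_max)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # the line limit binds: p1 = 60, p2 = 40
```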
Our motivation is to address two major deficiencies in the standard DCOPF model. The first major deficiency is the deterministic nature of the model. In particular, the amount of power generated by renewable energy sources such as wind is highly variable and must be accounted for in short-term planning.
The line of work by [@ccopf-sirev; @JuMPChanceCaseStudy] addresses this deficiency by introducing chance constraints. More specifically, Bienstock et al. [@ccopf-
Why QCD Explorer stage of the LHeC should have high(est) priority
S. A. Çetin$^{a}$, S. Sultansoy$^{b}$, G. Ünel$^{c}$
$^{a}$[Doğuş University, Istanbul, Turkey]{}
$^{b}$[TOBB University of Economics and Technology, Ankara, Turkey\
and ANAS Institute of Physics, Baku, Azerbaijan ]{}
$^{c}$[University of California, Irvine, USA]{}
*Abstract: The QCD Explorer will give the opportunity to elucidate the origin of the 98.5% portion of the visible universe’s mass, clarify the nature of the strong interactions from parton to nuclear level and provide precision pdf’s for the LHC. Especially the $\gamma{}$-nucleus option seems to be very promising for QCD studies.*
Linac-ring type colliders have two main goals: to explore TeV scale with lepton-hadron and photon-hadron collisions and to achieve highest luminosity at flavor factories (the history of corresponding proposals can be found in \[1\]). This note concentrates on the first goal which is represented by the linac-ring option of the Large Hadron electron Collider (LHeC), proposed to explore the highest energy proton and ion beams available at the LHC probed by energetic electron or gamma beams from a linac tangent to the LHC. The Conceptual Design Report (CDR) of the LHeC project which is published in \[2\], investigates two options for the collider: Linac-Ring (LR) type collider where electrons are provided by linac and Ring-Ring (RR) option which assumes an additional electron ring in the LHC tunnel.
The idea of the LHC based linac-ring type ep/$\gamma{}$p collider includes two stages: QCD Explorer (E$_{e }$= 60, 140 GeV) and Energy Frontier (E$_{e }$$\geq{}$ 500 GeV). The first stage is mandatory for a deeper understanding of the strong interactions and an adequate interpretation of the LHC data, which requires precision pdf’s. The second stage, which actually depends on the outcomes of the LHC and is hence called provisional, will mainly have great potential for BSM physics, complementary to the LHC and exceeding that of a possible ILC. It should be noted that the Energy Frontier, as well as the $\gamma{}$p options of both the QCD Explorer and the Energy Frontier, can only be realised with the linac-ring option.
Today, the LR option is considered as the basic one for the LHeC. Actually, this decision was almost obvious from the beginning due to the complications in constructing by-pass tunnels around the existing experimental caverns and installing the e-ring in the already commissioned tunnel. Let us recall that the CDR stage of the LHC also assumed ep collisions using the already existing LEP ring; but it turned out that the LHC installation required dismantling of LEP from the tunnel.
Now that LR is the choice for the LHeC, the Energy Recovery Linac (ERL) is being pushed as the basic choice instead of the single-pass option, using the argument that it could provide an order of magnitude higher luminosity. Nevertheless, keeping in mind that such higher luminosity is not necessary for the QCD Explorer, it is likely that the single-pass option will become dominant soon; however, we believe the sooner the better. It should be mentioned that a very important advantage of the LR option, namely the opportunity to construct a $\gamma{}$p/$\gamma{}$A collider, loses its strength at the ERL based LHeC; moreover, the single-pass option will give the opportunity to increase the energy of the electrons by lengthening the linac further.
Concerning the physics program of the QCD Explorer, putting forward the search for SUSY or other BSM physics or even detailed study of the Higgs boson as the main goal would have serious drawbacks. The uniqueness of such a machine lies in its potential to probe the nature of the strong interactions from parton to nuclear level and provide precision pdf’s for the adequate interpretation of the LHC results. It is well known that big challenges still exist in the QCD part of the Standard Model like understanding confinement and quark-gluon plasma. QCD Explorer will give the opportunity to reach very small x$_{g}$ region \[3\] shedding light on confinement. Then according to vector meson dominance the $\gamma{}$A collider will act as a $\rho{}$A collider which will give an opportunity to investigate formation of the quark gluon plasma at very high temperatures and low densities.
In light of the discussions presented above we propose the following phases for QCD Explorer based on single-pass linac option. First phase: ep collider with luminosity of 10$^{32}$cm$^{-2}$s$^{-1}$ and eA collider with luminosity of AxL$_{eA}$=10$^{31}$cm$^{-2}$s$^{-1}$ which seems sufficient for QCD studies. Second Phase: $\gamma{}$p and $\gamma{}$A collider with similar luminosities. Third Phase: construction of a second single-pass linac for energy recovery \[4\] to achieve much higher luminosities. Fourth Phase: lengthening the single-pass linac to switch to Energy Frontier stage.
With the discovery of the long sought Higgs boson, the electroweak sector of the Standard Model has filled its gaps. At this point it is worth mentioning that the Higgs Mechanism accounts for only $\sim$1.5% of the mass of the visible universe and the rest, $\sim$98.5% is provided by the QCD. Hence another strong motivation of the QCD Explorer is to better understand the formation of the visible universe.
In conclusion, we hope that the presented qualitative arguments justify the necessity of the QCD Explorer for the future of the high energy physics.
**References**
1\. A.N. Akay, H. Karadeniz and S. Sultansoy, *Review of Linac-Ring Type Collider Proposals*, Int. J. Mod. Phys. A25 (2010) 4589-4602; e-Print: arXiv:0911.3314 \[physics.acc-ph\]
2\. J L Abelleira Fernandez et al. (LHeC Study Group), *A Large Hadron Electron Collider at CERN: Report on the Physics and Design Concepts for Machine and Detector*, J. Phys. G. 39 (2012) 075001; e-Print: arXiv:1206.2913 \[physics.acc-ph\]
3\. U. Kaya, S. Sultansoy, G. Unel, *Probing small x(g) region with the LHeC based gamma-p colliders*, Nov 2012, e-Print: arXiv:1211.5061 \[hep-ph\]
4\. V. Litvinenko, *LHeC with \~100% energy recovery linac*, 2nd CERN-ECFA-NuPECC workshop on LHeC, Divonne-les-Bains, 1-3 Sep (2009).
---
abstract: 'A new mechanism of magnetoresistivity in itinerant metamagnets with structural disorder is introduced, based on an analysis of experimental results on the magnetoresistivity, susceptibility, and magnetization of structurally disordered alloys (Y$_{1-x}$Gd$_{x}$)Co$_{2}$. In this series, YCo$_{2}$ is an enhanced Pauli paramagnet, whereas GdCo$_{2}$ is a ferrimagnet (T$_{\rm c}$=400 K) with the Gd sublattice coupled antiferromagnetically to the itinerant Co-3d electrons. The alloys are paramagnetic for $x < 0.12$. Large positive magnetoresistivity has been observed in the alloys with a magnetic ground state at temperatures T$<$T$_{\rm c}$. We show that this unusual feature is linked to a combination of structural disorder and the metamagnetic instability of the itinerant Co-3d electrons. This new mechanism of magnetoresistivity is common to a broad class of materials featuring static magnetic disorder and itinerant metamagnetism.'
author:
- 'A. T. Burkov, A. Yu. Zyuzin'
- 'T. Nakama, K. Yagasaki'
title: 'Anomalous magnetotransport in (Y$_{1-x}$Gd$_{x}$)Co$_{2}$ alloys: interplay of disorder and itinerant metamagnetism.'
---
Introduction
============
The interplay of structural disorder and magnetic interactions opens a rich field of new physical phenomena. Among them are the actively discussed possibility of disorder-induced Non-Fermi Liquid (NFL) behavior near a magnetic Quantum Critical Point (QCP) as well as a broader scope of effects of disorder on magnetotransport. [@Hertz76; @Millis93; @Varma2001] Structurally disordered alloys Y$_{1-x}$Gd$_{x}$Co$_{2}$ are quasi-binary solid solutions of the Laves phase compounds YCo$_2$ and GdCo$_2$. The compounds belong to a large family of isostructural compounds RCo$_2$. YCo$_2$ is an enhanced Pauli paramagnet whose itinerant Co-3d electron system is close to magnetic instability. In an external magnetic field of about 70 T this system undergoes a metamagnetic transition into a ferromagnetic (FM) ground state.[@Goto89] GdCo$_2$ is, on the other hand, a ferrimagnet with a Curie temperature of 400 K in which the spontaneous magnetization of 4f moments is anti-parallel to the induced magnetization of the Co-3d band. Compounds of this family and their alloys provide a convenient ground for experimental studies of magnetotransport phenomena. The electronic structure in the vicinity of the Fermi energy, which is important for transport, is composed mainly of Co-3d states and is, to the first approximation, the same for all compounds of the RCo$_2$ family. It has been found that the main contribution to the resistivity of RCo$_2$ comes from the scattering of conduction electrons on magnetic fluctuations due to strong s–d exchange coupling [@Gratz95]; therefore the transport properties are expected to be especially sensitive to the magnetic state of the sample. GdCo$_2$ occupies a special place in the RCo$_2$ family since the Gd 4f magnetic moment has no orbital contribution and, therefore, crystal-field effects are not important for this compound.
The experimental results on the transport properties of Y$_{1-x}$Gd$_{x}$Co$_{2}$ alloys have been partly published in our previous article. [@Nakama2001] Here we analyze these and new experimental results in order to reveal the physical mechanism of the anomalous magnetotransport properties observed in the alloys. In this paper we will discuss the magnetotransport properties of the FM alloys. The properties of the paramagnetic alloys will be published elsewhere.
Experimental
============
Samples of Y$_{1-x}$Gd$_{x}$Co$_{2}$ were prepared from pure components by melting in an arc furnace under a protective Ar atmosphere and were subsequently annealed in vacuum at 1100 K for about one week. An X-ray analysis showed no traces of impurity phases. A four–probe dc method was used for electrical resistivity measurements. Magnetoresistivity (MR) was measured with longitudinal orientation of electrical current with respect to the magnetic field. The size of the samples was typically about 1$\times $ 1$ \times $10 mm$^{3}$. Magnetization was measured by a SQUID magnetometer for samples from the same ingot as that used for the resistivity and AC susceptibility measurements.
Experimental results
====================
The magnetic phase diagram of the Y$_{1-x}$Gd$_{x}$Co$_{2}$ system inferred from the transport and magnetic measurements [@Nakama2001] is shown in Fig. \[PasDiag\].
![The upper panel shows the ordering temperature T$_{\rm
c}$ $\blacksquare $ (right y-axis), and the MR $\bigoplus $ (left axis) of the Y$_{1-x}$Gd$_{x}$Co$_{2}$ system [@Nakama2001]. The MR was measured at T = 2 K in magnetic field of 15 T. The dotted vertical lines indicate phase boundaries at zero temperature. The lower panel displays normalized resistivity $\frac{\rho(2~K)}{\rho(300~K)}$.[]{data-label="PasDiag"}](fig1.eps){width="1.0\linewidth"}
The Curie temperature $T_{\mathrm{c}}$ decreases with increasing content of Y and eventually drops to zero. A precise determination of the critical concentration $x_c$ which separates the magnetically ordered ground state and the paramagnetic region is difficult, since at the onset of the long range order its signatures in the magnetic and transport properties are very weak. The first firm evidence of the long range order is found for the alloy with x=0.14 in the ac susceptibility at T=27 K, Fig. \[Sus\].
![The ac susceptibility of the Y$_{1-x}$Gd$_{x}$Co$_{2}$ alloys. Note that the experimental data for YCo$_2$ and for the alloy with $x=0.14$ are multiplied by a factor of 20.[]{data-label="Sus"}](fig2.eps){width="1.0\linewidth"}
Quantum critical scaling theory predicts that when the Curie temperature of a FM system depends continuously on an external parameter $x$, this dependence is expressed as [@Millis93]: $$T_{\rm c}\varpropto \left| x-x_{\rm c}\right| ^{\frac{z}{d+z-2}}$$ with critical index $z=3$ for a FM system of spatial dimension $d=3.$ The experimental T$_{\rm c}$ vs. $x$ dependence does follow this relation, but with an additional kink at $x=x_{\rm t}$. A possible origin of this kink will be discussed later. Linear extrapolation of the phase separation line on the phase diagram, Fig. \[PasDiag\], to $T_{\rm c} =0$ gives the critical concentration $x_c=0.12$. We do not claim, however, that a QCP exists in this alloy system. Direct experimental verification that T$_{\rm c}$ $ \rightarrow$ 0 as $x$ approaches $x_{\rm c}$ from the magnetically ordered state is difficult for a disordered alloy system.
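The scaling relation above suggests a simple way to estimate $x_c$ from the measured $T_c(x)$ points. The following Python sketch fits the form $T_c = A\,(x-x_c)^{z/(d+z-2)}$ with the exponent fixed at $3/4$ ($z=3$, $d=3$); the data points in it are illustrative placeholders, not the measured values of this work.

```python
# Sketch: fit T_c(x) to the quantum-critical scaling form
# T_c = A * (x - x_c)**(z/(d+z-2)), with z = 3, d = 3  ->  exponent 3/4.
# The data points below are illustrative placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

x  = np.array([0.14, 0.16, 0.20, 0.25, 0.30, 0.40])     # Gd content (hypothetical)
Tc = np.array([27.0, 45.0, 75.0, 105.0, 130.0, 175.0])  # ordering temperature in K (hypothetical)

def scaling(x, A, xc):
    # exponent z/(d+z-2) = 3/4 for z = 3, d = 3
    return A * np.clip(x - xc, 0.0, None) ** 0.75

(A_fit, xc_fit), _ = curve_fit(scaling, x, Tc, p0=[400.0, 0.12])
print(f"A = {A_fit:.1f} K, x_c = {xc_fit:.3f}")
```

In practice one would restrict such a fit to the critical region and propagate the experimental uncertainties of the ordering temperatures.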
A very surprising result is the positive MR in the FM phase at low temperatures, see Figs. \[PasDiag\] and \[DRvsT\]. The well-known theoretical result for the MR of a localized-moment ferromagnet was derived long ago by Kasuya and de Gennes. [@Kasuya56] According to their theory, the MR of a metallic ferromagnet should be negative, with a maximum absolute value at the Curie temperature, approaching zero both as T $\rightarrow$ 0 and in the limit of high temperatures. Qualitatively this behavior has been supported by experiment, as well as by later, more detailed theoretical calculations. The present experimental results are in qualitative agreement with this theoretical behavior only for alloys with large Gd content ($x \geqslant 0.4$) (Fig. \[DRvsT\]). The MR of the FM alloys with compositions $0.14<x<0.3$ differs fundamentally from this theoretical behavior. Let us note that this composition range falls into the region of the phase diagram between the paramagnetic phase and the additional phase boundary indicated by the kink in the T$_c$ vs $x$ dependence, see Fig. \[PasDiag\]. The MR of these alloys is positive below the Curie temperature and is very large.
![The upper panel shows the temperature dependence of the MR of the Y$_{1-x}$Gd$_{x}$Co$_{2} $ alloys, measured in field of 15 T. Large positive MR of FM alloys ($x\leq0.3$) is observed at low temperatures. The field dependencies of MR, measured at T=2 K, are presented on the lower panel.[]{data-label="DRvsT"}](fig3.eps){width="1.1\linewidth"}
The known mechanisms of a positive MR cannot explain the experimental data. A rough estimate of the Lorentz-force-driven MR can be obtained from a comparison with the MR of pure YCo$_2$. [@Burkov98] In the purest samples of YCo$_2$ (with a residual resistivity of about 2 $\mu \Omega $cm) the Lorentz-force-driven positive contribution to the total MR does not exceed 5%. On the other hand, the resistivity of the FM alloys at low temperatures falls into the range from 30 to 100 $\mu \Omega $ cm, i.e. at least one order of magnitude larger than
---
abstract: 'Direct evaluation of the 1-loop fluctuation determinant of non-static degrees of freedom in a complete static background is advocated to be more efficient for the determination of the effective three-dimensional model of the electroweak phase transition than the one-by-one evaluation of Feynman diagrams. The relation of the couplings and fields of the effective model to those of the four-dimensional finite temperature system is determined in the general ’t Hooft gauge with full implementation of renormalisation effects. Only field renormalisation constants display dependence on the gauge fixing parameter. Characteristics of the electroweak transition are computed from the effective theory in Lorentz-gauge. The dependence of various physical observables on the three-dimensional gauge fixing parameter is investigated.'
author:
- |
[A. Jakovác$^{1}$ and A. Patkós$^{2}$]{}\
[Department of Atomic Physics]{}\
[Eötvös University, Budapest, Hungary]{}\
title: |
Finite Temperature Reduction\
of the SU(2) Higgs-Model\
with Complete Static Background\
---
A new wave of investigations of finite temperature gauge theories is driven by the challenge of the matter-antimatter asymmetry of the Universe. Anomalous baryon number violating processes thermally excited near the electroweak phase transition certainly have had an impact on any [*a priori*]{} asymmetry. Additional non-equilibrium and CP-violating effects, occurring during the transition, might have contributed to the generation of the present day value of the asymmetry.
Temperature introduces a natural mass-scale into the relevant field theory. It builds up a hierarchy among the fluctuations, which should be exploited in the evaluation of the partition function. Heavy modes with non-zero Matsubara index are important for the accurate determination of the couplings between the (almost) T-scale independent static modes, which drive the phase transition. This physical picture is the content of the dimensional reduction [@pisarski; @landsman] of finite temperature field theories. The validity of the assumed mass-hierarchy should be checked carefully after each reduction step.
A correctly reduced 3-d effective model offers important advantages from the point of view of the application of standard methods of statistical physics to the electroweak phase transition [@arnold1; @march; @gleiser]. Also lattice simulations are greatly facilitated if the full 4-d system is replaced by the corresponding 3-d effective model [@kajantie; @farakos1; @karsch], since the extreme weak coupling situation makes the simulation of the 4-d system a particularly involved task [@bunk; @montvay].
We emphasize that for the success of the above strategies the most faithful possible mapping of the 4-d couplings onto the temperature dependent 3-d ones is essential. For instance, in the renormalisation group flow of the 3-d model [*dim 6*]{} operators might play an important role. The determination of their weights in the Lagrangian of the effective model with the help of the usual Feynman diagram technique requires calculations of increased complexity.
The first complete determination of the reduced model up to [*dim 4*]{} operators in the 1-loop approximation, including field renormalisation effects, has been published very recently [@farakos2]. The authors evaluate all relevant Feynman diagrams with two and four zero-momentum external field insertions. The computation has been performed in the Landau gauge using dimensional regularisation, followed by the application of the $\overline {{\rm MS}}$ renormalisation scheme.
In this note we present evidence that the evaluation of the functional fluctuation determinant in a complete static background ($A_i^a({\bf
x}), A_{4}^{a}({\bf x}), \Phi ({\bf x})$) offers a simpler and more compact calculational scheme. It allows the unified determination of all renormalisation constants of the 4-d theory, and in principle it is easily extendable also to the computation of the higher dimensional operators. (After the completion of our investigation we received a paper by Chapman [@chapman], where an analogous calculation has been performed for SU(N) pure gauge theories up to [*dim 6*]{} operators. The calculational technique, however, was fully different.)
Since the method of symbolic evaluation of the functional determinant with constant complete background is of equal difficulty for any member of a certain gauge class, without any extra complication one is able to study the dependence of the action of the effective theory on the gauge fixing parameter. Specifically, we shall perform the reduction with general ’t Hooft gauge fixing, applying 3-d momentum cut-off regularisation. The normalisation of the scalar potential piece of the effective action will be fixed by imposing Linde’s conditions [@linde].
We shall show that the effective theory and the expressions of the 3-d couplings do not depend on the gauge fixing parameter. The 1-loop effective potential of the 3-d theory will be determined next in the general (three-dimensional) Lorentz gauge, and the dependence of the critical data ($T_c$, order parameter discontinuity, etc.) on the parameter of the 3-d gauge fixing will be discussed. This point essentially follows [@arnold2], going beyond it in the implementation of the detailed relation between the couplings of the 3-d theory and the 4-d ones, and in the numerical evaluation of the physical characteristics of the transition, not restricting the discussion to analytic perturbative considerations.

1. The model under consideration is the SU(2) gauge+scalar theory at finite temperature $$S=\int_{0}^{\beta}d\tau \int d^{3}x\bigl [{1\over 4}F_{mn}^{a}F_{mn}^{a}+
{1\over 2}(D_{m}\Phi)^{\dagger}(D_{m}\Phi)+V(\Phi )\bigr ],$$ $$V(\Phi )= {1\over 2}m^{2}\Phi^{\dagger}\Phi
+{\lambda\over 24}(\Phi^{\dagger}\Phi)^{2},$$ $$D_{m}\Phi =(\partial_{m}+igA_{m}^{a}\tau^{a}/2)\Phi,$$ m=1,...,4; a=1,2,3. (In eqs. (1-3) the renormalised parameters appear, the counterterms are not displayed explicitly, and the Euclidean metric is understood.) The 1-loop integration over the non-static modes will be performed with the full background, that is, all fields are split into a non-zero static and a non-static part: $$\begin{aligned}
&
A_m=A_m({\bf x})+a_m({\bf x},\tau),\nonumber\\
&
\Phi=\left(\matrix{ 0 \cr \Phi_{0}({\bf x})\cr}\right)
+\left(\matrix{\xi_{1}({\bf x},\tau )+i\xi_{2}({\bf x},\tau )\cr
\xi_{3}({\bf x},\tau )+i\xi_{4}({\bf x},\tau )\cr}
\right).
\label{eq4}\end{aligned}$$
We shall demonstrate that the full renormalised reduced action can be recovered by choosing the static background [*constant*]{} (with the most general orientation in the isospace). Upon substituting the decomposition (4) into (1) one separates terms containing the non-static fields up to second power, for the 1-loop integration. The piece depending only on the constant background takes the form: $$U^{(0)}=\beta V\bigl [{1\over 4}g^2(A_i\times A_j)^2+{1\over 2}
g^2(A_i\times A_4)^2+{1\over 8}g^2(A_i^2+A_4^2)\Phi_0^{\dagger}
\Phi_0+V(\Phi_0)\bigr ]$$ (i=1,2,3). The part quadratic in the non-static components will not be displayed explicitly, since its expression is lengthy and not enlightening. The only important point for us is that the fluctuations are characterised by a $16\times 16$ matrix, because the 12 gauge field components and the 4 real Higgs scalar components are fully coupled in the most general constant background.
The gauge fixing conditions imposed on the fluctuations $a_{m},
\xi_{\alpha} $ are $$\begin{aligned}
&
F^1 =(D_{\mu}(A)a_{\mu})^{1}-\alpha{g\Phi_0\over 2}\xi_2,\nonumber \\ &
F^2 =(D_{\mu}(A)a_{\mu})^2-\alpha{g\Phi_0\over 2}\xi_1,\nonumber \\ &
F^3 =(D_{\mu}(A)a_{\mu})^3+\alpha{g\Phi_0\over 2}\xi_4,\end{aligned}$$ ($D_{\mu}(A)$ is the covariant derivative in the background field $A$, $\alpha$ is the gauge fixing parameter). The corresponding Faddeev-Popov determinant is $$\det \{ [K^2 +g^2(A_i^2+A_4^2)+{\alpha \over 4}g^2\Phi_0^2 ]\delta^{a,b}-
2ig\epsilon^{abc}K_mA_m^c-g^2A_m^aA_m^b\},$$ where $K^2 ={\bf k}^2+\omega_n^2$.
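For fixed $K$ the operator inside this determinant is simply a $3\times 3$ matrix in the adjoint colour index. As a purely numerical illustration (a sketch with arbitrary placeholder values for the background, the coupling and the momentum, not tied to the parameter point studied below), it can be assembled and evaluated directly:

```python
# Sketch: evaluate the 3x3 colour-space Faddeev-Popov matrix
#   [K^2 + g^2(A_i^2 + A_4^2) + (alpha/4) g^2 Phi_0^2] delta^{ab}
#     - 2 i g eps^{abc} K_m A_m^c - g^2 A_m^a A_m^b
# for one momentum K and an arbitrary constant background.
# All numerical values are placeholders chosen only for illustration.
import numpy as np

g, alpha, phi0 = 0.6, 1.0, 1.5                            # coupling, gauge parameter, scalar background
A = np.random.default_rng(0).normal(size=(4, 3)) * 0.3    # A_m^a, m = 1..4, a = 1..3
K = np.array([0.4, -0.2, 0.7, 2.0 * np.pi])               # Euclidean 4-momentum (k, omega_n)
K2 = np.dot(K, K)

eps = np.zeros((3, 3, 3))                                 # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

A2 = np.sum(A ** 2)                                       # (A_i^2 + A_4^2), colour indices summed
M = (K2 + g**2 * A2 + 0.25 * alpha * g**2 * phi0**2) * np.eye(3, dtype=complex)
M -= 2j * g * np.einsum('abc,m,mc->ab', eps, K, A)        # - 2ig eps^{abc} K_m A_m^c
M -= g**2 * np.einsum('ma,mb->ab', A, A)                  # - g^2 A_m^a A_m^b

print("Faddeev-Popov determinant at this K:", np.linalg.det(M))
```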
Since the distinguishing difference of the proposed method relative to the conventional Feynman diagrams consists of the explicit evaluation of the fluctuation determinant in constant background, we are going to elaborate on certain technical details
---
abstract: 'Current steps attributed to resonant tunneling through individual InAs quantum dots embedded in a GaAs-AlAs-GaAs tunneling device are investigated experimentally in magnetic fields up to 28 T. The steps evolve into strongly enhanced current peaks in high fields. This can be understood as a field-induced Fermi-edge singularity due to the Coulomb interaction between the tunneling electron on the quantum dot and the partly spin polarized Fermi sea in the Landau quantized three-dimensional emitter.'
address: |
$^1$Institut für Festkörperphysik, Universität Hannover, Appelstra[ß]{}e 2, D-30167 Hannover, Germany\
$^2$Institut für Theoretische Physik, Universität Hannover, Appelstra[ß]{}e 2, D-30167 Hannover, Germany\
$^3$Grenoble High Magnetic Field Laboratory, MPIF-CNRS, B.P. 166, F-38042 Grenoble Cedex 09, France\
$^4$Physikalisch-Technische Bundesanstalt Braunschweig, Bundesallee 100, D-38116 Braunschweig, Germany
author:
- 'I. Hapke-Wurst,$^1$ U. Zeitler,$^1$ H. Frahm,$^2$ A. G. M. Jansen,$^3$ R. J. Haug,$^1$ and K. Pierz$^4$'
title: ' Magnetic-field-induced singularities in spin dependent tunneling through InAs quantum dots'
---
The interaction of the Fermi sea of a metallic system with a local potential can lead to strong singularities close to the Fermi edge. Such effects were predicted more than thirty years ago for the X-ray absorption and emission of metals [@theoXray] and were observed subsequently [@expXray]. Similar singularities, as a consequence of many body effects, are also known from the luminescence of quantum wells [@Lum]. Matveev and Larkin were the first to predict interaction-induced singularities in the tunneling current via a localized state [@Matveev:1992]; these were measured in several resonant tunneling experiments [@Geim:1994; @Cobden:1995; @Benedict:1998] from [*two-dimensional*]{} electrodes through a zero-dimensional system.
Here we report on singularities observed in the resonant tunneling from highly doped [*three-dimensional*]{} (3D) GaAs electrodes through an InAs quantum dot (QD) embedded in an AlAs barrier. These Fermi-edge singularities (FES) show a considerable magnetic field dependence and a strong enhancement in high magnetic fields where the 3D electrons occupy the lowest Landau level in the emitter. We observe an asymmetry in the enhancement for electrons of different spins, with an extremely strong FES for electrons carrying the majority spin of the emitter. The experimental observations are explained by a theoretical model taking into account the electrostatic potential experienced by the conduction electrons in the emitter due to the charged QD. We will show that the partial spin polarisation of the emitter causes extreme values of the edge exponent $\gamma$, not observed until now and going beyond the standard theory valid for $\gamma \ll 1$ [@Matveev:1992].
The active part of our samples consists of self-organized InAs QDs, 3-4 nm in height and 10-15 nm in diameter, embedded in the middle of a 10 nm-thick AlAs barrier and sandwiched between two 3D electrodes. The electrodes consist of a 15 nm undoped GaAs spacer layer and a GaAs buffer with graded doping. A typical InAs dot is sketched in inset (a) of Fig. \[steps\]; the vertical band structure across a dot is schematically shown in inset (b).
Current voltage ($I$-$V$) characteristics were measured in large area vertical diodes ($40\times 40~\mu$m$^2$) patterned on the wafer. In Fig. \[steps\] we show a part of a typical $I$-$V$-curve with several discrete steps. We have demonstrated previously that such steps can be assigned to single electron tunneling from 3D electrodes through individual InAs QDs [@Hapke:1999] consistent with other resonant tunneling experiments through self-organized InAs QDs [@tunnel].
For the positive bias voltages shown in Fig. \[steps\] the electrons tunnel from the bottom electrode into the base of an InAs QD and leave the dot via the top. The tunneling current is mainly determined by the tunneling rate through the effectively thicker barrier below the dot (single electron tunneling regime). A step in the current occurs at bias voltages where the energy level of a dot, $E_D$, coincides with the Fermi level of the emitter, $E_F$.
In the following we will concentrate on the step labeled (\*) in Fig. \[steps\]. Other steps in the same structure as well as steps observed in the $I$-$V$-characteristics of other structures show a very similar behavior.
After the step edge a slight overshoot in the tunneling current occurs consistent with other tunneling experiments through a localized impurity [@Geim:1994] or through InAs dots [@Benedict:1998]. This effect is caused by the Coulomb interaction between a localized electron on the dot and the electrons at the Fermi edge of the emitter. The decrease of the current $I(V)$ towards higher voltages $V >V_0$ follows a power law $I \propto (V-V_0)^{-\gamma}$ [@Geim:1994] ($V_0$ is the voltage at the step edge) with an edge exponent $\gamma = 0.02 \pm 0.01$.
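A minimal sketch (Python, with a synthetic trace standing in for the measured $I$-$V$ data and an assumed step-edge voltage) of how such an edge exponent could be extracted from the decay of the current above the step:

```python
# Sketch: extract the Fermi-edge exponent gamma from I(V) ~ (V - V0)^(-gamma)
# above the step edge, by a linear fit in log-log coordinates.
# The trace is synthetic; V0 and gamma_true are illustrative, not measured values.
import numpy as np

V0, gamma_true = 102.0e-3, 0.02                   # step-edge voltage (V), assumed exponent
V = V0 + np.linspace(0.5e-3, 10e-3, 80)           # bias range just above the step edge
I = 5e-12 * (V - V0) ** (-gamma_true)             # model current (A)
I *= 1 + 0.01 * np.random.default_rng(1).normal(size=V.size)   # 1% measurement noise

slope, _ = np.polyfit(np.log(V - V0), np.log(I), 1)
print(f"fitted edge exponent gamma = {-slope:.3f}")
```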
The evolution of step (\*) in a magnetic field applied parallel to the current direction is shown in Fig. \[Babh\]a. The step develops into two separate peaks with onset voltages marked as $V_\downarrow$ and $V_\uparrow$. The Landau quantization of the emitter leads to an oscillation of $V_\downarrow$ and $V_\uparrow$ and a shift to smaller voltages as a function of magnetic field, see Fig. \[Babh\]b. This reflects the magneto-quantum-oscillation of the Fermi energy in the emitter [@Bumbel; @Main:2000]. From the period and the amplitude of the oscillation we can extract a Fermi energy (at $B = 0$) $E_0 = 13.6~$meV and a Landau level broadening $\Gamma = 1.3~$meV in the 3D emitter. The measured $E_0 = 13.6~$meV agrees well with the value expected from the electron concentration at the barrier, derived from the doping profile in the electrodes.
For $B > 6$ T only the lowest Landau level remains occupied. With a level broadening $\Gamma = 1.3~$meV, the Fermi level $E_F$ for 15 T $<$ B $<$ 30 T is pinned to within less than $2~$meV of the bottom of the lowest Landau band, $E_L = \hbar \omega_c/2$. As a consequence the onset voltage shifts to lower values as $\alpha e\Delta V \approx -\hbar \omega_c/2$ with $\alpha = 0.34$. The diamagnetic shift of the energy level in the dot can be neglected compared to this shift of the Fermi energy in the emitter: for the dot investigated in [@Hapke:1999] with $r_0 = 3.7$ nm, the diamagnetic shift at 30 T is $\Delta E_D = 3.5$ meV, negligible compared to $E_L = 26~$meV.
The two distinct steps with onset voltages $V_\downarrow$ and $V_\uparrow$ originate from the spin-splitting of the energy level $E_D$ in the dot. Their distance $\Delta V_p$ is given by the Zeeman splitting $\Delta E_z = g_D \mu_B B = \alpha e \Delta V_p$ with an energy-to-voltage conversion factor $\alpha = 0.34$ [@explain-alpha]. As shown in Fig. \[Babh\]c $\Delta V_p$ is indeed linear in B, with a Landé factor $g_D = 0.8$ in agreement with other experiments on InAs dots [@Thornton:1998].
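The conversion from the measured splitting to the Landé factor is a one-line calculation; the following sketch (Python, with an illustrative field and splitting chosen to be consistent with $g_D \approx 0.8$, not actual data points) spells it out:

```python
# Sketch: Lande factor of the dot from the peak splitting Delta V_p,
# via g_D = alpha * e * Delta V_p / (mu_B * B).  Input values are illustrative.
e = 1.602176634e-19        # elementary charge (C)
mu_B = 9.2740100783e-24    # Bohr magneton (J/T)
alpha = 0.34               # energy-to-voltage conversion factor from the text

B = 20.0                   # magnetic field (T), hypothetical
dVp = 2.7e-3               # peak splitting (V), hypothetical
g_D = alpha * e * dVp / (mu_B * B)
print(f"g_D = {g_D:.2f}")  # ~0.8 for these inputs
```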
For low magnetic fields ($B \le 9~$T in our case, see the graph for $B = 9~$T in Fig. \[Babh\]a) the size of the steps is very similar for both spins and about half the size of the zero-field step. Also the slight overshoot in the current, as the signature of a Fermi edge singularity, is similar for both spin orientations and comparable to the zero-field case, with an edge exponent $\gamma < 0.05$ for all magnetic fields $B < 10~$T.
The form of the current steps changes drastically in high magnetic fields where only the lowest Landau level of the emitter remains occupied. In particular, the second current step at higher voltage evolves into a strongly enhanced peak with a peak current of one order of magnitude higher compared to the zero-field case.
Following [@Thornton:1998] we assume that $g_D$ is positive whereas the Landé factor in bulk GaAs is negative. This assumption is verified by the fact that the energetically lower lying state (first peak in Fig. \[Tabh\]) is thermally occupied at higher temperatures and can therefore be identified with the minority
---
abstract: 'We establish a Fredholm criterion for an arbitrary operator in the Banach algebra of singular integral operators with piecewise continuous coefficients on Nakano spaces (generalized Lebesgue spaces with variable exponent) with Khvedelidze weights over Carleson curves with logarithmic whirl points.'
address: ' Universidade do Minho, Centro de Matemática, Escola de Ciências, Campus de Gualtar, 4710-057 Braga, Portugal'
author:
- 'A. Yu. Karlovich'
title: ALGEBRAS OF SINGULAR INTEGRAL OPERATORS ON NAKANO SPACES WITH KHVEDELIDZE WEIGHTS OVER CARLESON CURVES WITH LOGARITHMIC WHIRL POINTS
---
Introduction
============
Fredholm theory of one-dimensional singular integral operators (SIOs) with piecewise continuous ($PC$) coefficients on weighted Lebesgue spaces was constructed by Gohberg and Krupnik [@GK92] and [@GK70; @GK71] at the beginning of the 1970s in the case of Khvedelidze weights and piecewise Lyapunov curves (see also the monographs [@CG81; @KS01; @LS87; @MP86]). Simonenko and Chin Ngok Min [@SCNM86] suggested another approach to the study of the Banach algebra of singular integral operators with piecewise continuous coefficients on Lebesgue spaces with Khvedelidze weights over piecewise Lyapunov curves. This approach is based on Simonenko’s local principle [@Simonenko65]. In 1992 Spitkovsky [@Spitkovsky92] made the next significant step: he proved a Fredholm criterion for an individual SIO with $PC$ coefficients on Lebesgue spaces with Muckenhoupt weights over Lyapunov curves. Finally, Böttcher and Yu. Karlovich extended Spitkovsky’s result to the case of arbitrary Carleson curves and Banach algebras of SIOs with $PC$ coefficients. After their work the Fredholm theory of SIOs with $PC$ coefficients is available in the maximal generality (that is, when the Cauchy singular integral operator $S$ is bounded on weighted Lebesgue spaces). We recommend the nice paper [@BK01] for a first reading about this topic and the book [@BK97] for a complete and self-contained exposition.
It is quite natural to consider the same problems in other, more general, spaces of measurable functions on which the operator $S$ is bounded. Good candidates for this role are rearrangement-invariant spaces (that is, spaces with the property that norms of equimeasurable functions are equal). These spaces have nice interpolation properties, and boundedness results can be extracted from known results for Lebesgue spaces by applying interpolation theorems. The author extended (some parts of) the Böttcher-Yu. Karlovich Fredholm theory of SIOs with $PC$ coefficients to the case of rearrangement-invariant spaces with Muckenhoupt weights [@K98; @K02]. Notice that necessary conditions for the Fredholmness of an individual singular integral operator with $PC$ coefficients are obtained in [@K03] for weighted reflexive Banach function spaces on which the operator $S$ is bounded.
Nakano spaces $L^{p(\cdot)}$ (generalized Lebesgue spaces with variable exponent) are a nontrivial example of Banach function spaces which are not rearrangement-invariant, in general. Many results about the behavior of some classical operators on these spaces have important applications to fluid dynamics (see [@DR03] and the references therein). Kokilashvili and Samko [@KS-GMJ] proved that the operator $S$ is bounded on weighted Nakano spaces for the case of nice curves, nice weights, and nice (but variable!) exponents. They also extended the Gohberg-Krupnik Fredholm criterion for an individual SIO with $PC$ coefficients to this situation [@KS-Proc] (see also [@Samko05]). The author [@K05] has found a Fredholm criterion and a formula for the index of an arbitrary operator in the Banach algebra of SIOs with $PC$ coefficients on Nakano spaces with Khvedelidze weights over either Lyapunov curves or Radon curves without cusps.
Very recently Kokilashvili and Samko [@KS-Memoirs] (see also [@Kokilashvili05 Theorem 7.1]) have proved a boundedness criterion for the Cauchy singular integral operator $S$ on Nakano spaces with Khvedelidze weights over arbitrary Carleson curves. Combining this boundedness result with the machinery developed in [@K03], we are able to prove a Fredholm criterion for an individual SIO on a Nakano space with a Khvedelidze weight over a Carleson curve satisfying a “logarithmic whirl condition" (see [@BK95], [@BK97 Ch. 1]) at each point. Further, we extend this result to the case of Banach algebras of SIOs with $PC$ coefficients, using the approach developed in [@BK95; @BK97; @K03; @K05].
The paper is organized as follows. In Section \[sect:preliminaries\] we define weighted Nakano spaces and discuss the boundedness of the operator $S$ on these spaces. Section \[sect:individual\] contains a Fredholm criterion for an individual SIO with $PC$ coefficients on weighted Nakano spaces. The proof of this result is based on the local principle of Simonenko type and factorization technique. In Section \[sect:tools\] we formulate the Allan-Douglas local principle and the two projections theorem. The results of Section \[sect:tools\] are the main tools allowing us to construct a symbol calculus for the Banach algebra of SIOs with $PC$ coefficients acting on a Nakano space with a Khvedelidze weight over a Carleson curve with logarithmic whirl points in Section \[sect:symbol\].
Preliminaries {#sect:preliminaries}
=============
Weighted Nakano spaces $L_w^{p(\cdot)}$
---------------------------------------
Function spaces $L^{p(\cdot)}$ of Lebesgue type with variable exponent $p$ were studied for the first time by Orlicz [@Orlicz31] in 1931, but notice that a different kind of Banach space is named after him. Inspired by the successful theory of Orlicz spaces, Nakano defined in the late forties [@Nakano50; @Nakano51] so-called *modular spaces*. He considered the space $L^{p(\cdot)}$ as an example of modular spaces. In 1959, Musielak and Orlicz [@MO59] extended the definition of modular spaces by Nakano. Actually, that paper was the starting point for the theory of Musielak-Orlicz spaces (generalized Orlicz spaces generated by Young functions with a parameter), see [@Musielak83].
Let $\Gamma$ be a Jordan (i.e., homeomorphic to a circle) rectifiable curve. We equip $\Gamma$ with the Lebesgue length measure $|d\tau|$ and the counter-clockwise orientation. Let $p:\Gamma\to(1,\infty)$ be a measurable function. Consider the convex modular (see [@Musielak83 Ch. 1] for definitions and properties) $$m(f,p):=\int_\Gamma|f(\tau)|^{p(\tau)}|d\tau|.$$ Denote by $L^{p(\cdot)}$ the set of all measurable complex-valued functions $f$ on $\Gamma$ such that $m(\lambda f,p)<\infty$ for some $\lambda=\lambda(f)>0$. This set becomes a Banach space when equipped with the *Luxemburg-Nakano norm* $$\|f\|_{L^{p(\cdot)}}:=\inf\big\{\lambda>0: \ m(f/\lambda,p)\le 1\big\}$$ (see, e.g., [@Musielak83 Ch. 2]). Thus, the spaces $L^{p(\cdot)}$ are a special case of Musielak-Orlicz spaces. Sometimes the spaces $L^{p(\cdot)}$ are referred to as Nakano spaces. We will follow this tradition. Clearly, if $p(\cdot)=p$ is constant, then the Nakano space $L^{p(\cdot)}$ is isometrically isomorphic to the Lebesgue space $L^p$. Therefore, sometimes the spaces $L^{p(\cdot)}$ are called generalized Lebesgue spaces with variable exponent or, simply, variable $L^p$ spaces.
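Since the modular $m(f/\lambda,p)$ is non-increasing in $\lambda$, the Luxemburg-Nakano norm can be located numerically by bisection. The following Python sketch does this on a crude discretization of the unit circle; the choices of $f$, $p(\cdot)$ and the grid are illustrative only and play no role in the argument of the paper.

```python
# Sketch: Luxemburg-Nakano norm ||f|| = inf{ lambda > 0 : m(f/lambda, p) <= 1 }
# on a discretized unit circle, approximating the modular
#   m(f, p) = int_Gamma |f(tau)|^{p(tau)} |d tau|
# by a Riemann sum.  The choices of f and p(.) are illustrative only.
import numpy as np

n = 4000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dtau = 2.0 * np.pi / n                     # arc-length element on the unit circle

f = 1.0 + np.cos(theta) ** 2               # sample function on Gamma
p = 2.0 + np.sin(theta) ** 2               # variable exponent with 1 < p(t) < infinity

def modular(lam):
    return np.sum((np.abs(f) / lam) ** p) * dtau

lo, hi = 1e-6, 1e6                         # bracket: modular(lo) > 1 > modular(hi)
for _ in range(200):                       # bisection in lambda
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if modular(mid) > 1.0 else (lo, mid)

print("Luxemburg-Nakano norm of f:", hi)
```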
We shall assume that $$\label{eq:reflexivity}
1<{\rm ess}\inf_{\!\!\!\!\!\!\!\!t\in\Gamma} p(t),
\quad
{\rm ess}\sup_{\!\!\!\!\!\!\!\!\!t\in\Gamma} p(t)<\infty.$$ In this case the conjugate exponent $$q(t):=\frac{p(t)}{p(t)-1}
\quad (t\in\Gamma)$$ has the same property.
A nonnegative measurable function $w$ on the curve $\Gamma$ is referred to as a [*weight*]{} if $0<w(t)<\infty$ almost everywhere on $\Gamma$. The [*weighted Nakano space*]{} is defined by $$L_w^{p(\cdot)}=
\big\{f\mbox{ is measurable on }\Gamma\mbox{ and }fw\in L^{p(\cdot)}\big\}.$$ The norm in $L_w^{p(\
---
abstract: 'Prior research has shown that autocorrelation and variance in voltage measurements tend to increase as power systems approach instability. This paper seeks to identify the conditions under which these statistical indicators provide reliable early warning of instability in power systems. First, the paper derives and validates a semi-analytical method for quickly calculating the expected variance and autocorrelation of all voltages and currents in an arbitrary power system model. Building on this approach, the paper describes the conditions under which filtering can be used to detect these signs in the presence of measurement noise. Finally, several experiments show which types of measurements are good indicators of proximity to instability for particular types of state changes. For example, increased variance in voltages can reliably indicate the location of increased stress, while growth of autocorrelation in certain line currents is a reliable indicator of system-wide instability.'
author:
- 'Goodarz Ghanavati, *Student Member, IEEE*, Paul D. H. Hines, *Senior Member, IEEE*, Taras I. Lakoba[^1]'
bibliography:
- 'Pow\_sys.bib'
nocite:
- '[@avalos2009equivalency; @grijalva2012individual]'
- '[@Browne2008]'
- '[@Anderson:1983]'
- '[@haesen2009probabilistic; @perninge2010risk; @bu2012probabilistic; @munoz2013affine]'
- '[@*]'
title: Identifying Useful Statistical Indicators of Proximity to Instability in Stochastic Power Systems
---
Power system stability, phasor measurement units, time series analysis, stochastic processes, principal component analysis, autocorrelation, critical slowing down.
Introduction
============
To make optimal use of constrained infrastructure, power systems frequently operate near their stability limits. Bifurcation theory provides a framework for understanding these instabilities [@alvarado1994computation]–[@perninge2010risk] and has motivated the development of new methods for online stability monitoring [@haque2003line]–[@kamwa2011robust].
This existing work has largely focused around deterministic power system models. However, real power systems are constantly influenced by stochastic perturbations in load and (increasingly) variable renewable generation. Because random fluctuations can substantially change the stability properties of a system [@wangfokker], several authors have proposed the use of stochastic approaches to stability analysis (e.g., [@nwankpa1992stochastic]–[@huang2013quasi]).
Indeed, outside of the power systems literature, there is growing evidence that complex systems show statistical early warning signs as they approach instability [@scheffer2009early; @Scheffer:2012]. This phenomenon, known as critical slowing down (CSD) [@mori1963relaxation], is the tendency of a dynamical system to return to equilibrium more slowly in response to perturbations as it approaches a critical bifurcation. Increasing autocorrelation and variance in measurements are two common signs of proximity to critical transitions in a variety of dynamical systems [@scheffer2009early]. However, not all measurements show these signs early enough to provide warning with sufficient time to take mitigating actions [@boerlijst2013catastrophic]. Understanding which variables provide useful early warning of instability is necessary for the practical application of these concepts. Doing so requires a detailed knowledge of how autocorrelation and variance change as a system’s state changes.
A few papers have studied the properties of variance and autocorrelation as indicators of instability in power systems. Reference [@Cotilla2012] showed, using simulations, that variance and autocorrelation of bus voltages increase before bifurcation. Reference [@podolsky2013] derives the autocorrelation function of a power system’s state vector near a saddle-node bifurcation and uses the result to estimate the collapse probability for power systems. In [@Dhople2013], a framework is proposed to study the impact of stochastic power injections on power system dynamics by computing the moments of the states. In [@ghanavati2013understanding], the authors showed that for some state variables, increases in autocorrelation and variance appear only when a power system is very close to the bifurcation, indicating that CSD does not always provide useful early warning of instability. Reference [@yuan2014stochastic] calculates the variance of state variables to analyze the impact of wind turbine mechanical power input fluctuations on small-signal stability.
The goal of this paper is to present a general method for estimating the autocorrelation and variance of state variables from a power system model and to use the results to determine which variables in a power system provide useful early warning of critical transitions in the presence of measurement noise. To this end, Sec. \[sec:Analytical\] presents a semi-analytical method for calculating the variance and autocorrelation of algebraic and differential variables. This method enables the fast calculation of voltage and current statistics for many potential operating scenarios in large power systems, and unlike the method in [@podolsky2013], is not limited to the immediate vicinity of a bifurcation. Sec. \[sec:Useful-early-signs\] illustrates the method using the 39-bus test case and shows that some variables are better indicators of proximity to instability than others. Sec. \[sec:Detectability\] extends the analysis to systems with measurement noise and presents a method for detecting CSD in the presence of measurement noise. Sec. \[sec:Stressed-Area\] uses this approach to identify stressed areas in a power network. Finally, our conclusions are presented in Sec. \[sec:Conclusions\].
Calculation of Autocorrelation and Variance in Multimachine Power Systems \[sec:Analytical\]
============================================================================================
This section presents a semi-analytical method for the fast calculation of the variance $\left(\sigma^{2}\right)$ and autocorrelation $\left(R\left(\Delta t\right)\right)$ of bus voltage magnitudes and line currents in a power system. Fluctuations of load and generation are well-known sources of stochasticity in power systems. While this section models only randomness in load, the method can be readily extended to the case of stochasticity in power injections.
System Model\[sub:System-Model\]
--------------------------------
Adding stochastic load to the set of general differential-algebraic equations (DAE) that model a power system gives: $$\begin{aligned}
\dot{\underline{x}} & = & f\left(\underline{x},\underline{y}\right)\label{eq:diff}\\
0 & = & g\left(\underline{x},\underline{y},\underline{u}\right)\label{eq:alg}\end{aligned}$$ where $f,g$ represent differential and algebraic equations, $\underline{x},\underline{y}$ are vectors of differential and algebraic variables (generator rotor angles, bus voltage magnitudes, etc.), and $\underline{u}$ is the vector of load fluctuations. The algebraic equations consist of nodal power flow equations and static equations for components such as generator, exciter, and turbine governor. The differential equations describe the dynamic behavior of the equipment. In this paper, for modeling load fluctuations, we take an approach similar to [@perninge2010risk], [@hauer2007] and assume that load fluctuations $\underline{u}$ follow the Ornstein–Uhlenbeck process: $$\dot{\underline{u}}=-E\underline{u}+\underline{\xi}\label{eq:load_corr}$$ where $E$ is a diagonal matrix whose diagonal entries equal $t_{\textnormal{{corr}}}^{-1}$, where $t_{\textnormal{{corr}}}$ is the correlation time of the load fluctuations, and $\underline{{\xi}}$ is a vector of independent Gaussian random variables: $$\begin{aligned}
\textnormal{{E}}\left[\underline{\xi}\left(t\right)\right] & = & 0\label{eq:xi1}\\
\textnormal{\textnormal{{E}}}\left[\xi_{i}\left(t\right)\xi_{j}\left(s\right)\right] & = & \delta_{ij}\sigma_{\xi}^{2}\delta_{I}(t-s)\label{eq:xi2}\end{aligned}$$ where $t,s$ are two arbitrary times, $\delta_{ij}$ is the Kronecker delta function, $\sigma_{\xi}^{2}$ is the intensity of noise and $\delta_{I}$ represents the unit impulse (delta) function. Equations (\[eq:diff\])–(\[eq:load\_corr\]) form the set of SDAEs that models a power system with stochastic load.
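As a quick self-contained illustration of the load model (3)-(5) (a Python sketch with arbitrary parameter values, unrelated to the test case considered later), a single Ornstein-Uhlenbeck fluctuation can be simulated by Euler-Maruyama and its sample variance and autocorrelation compared with the stationary values $\sigma_\xi^2 t_{\rm corr}/2$ and $e^{-\Delta t/t_{\rm corr}}$:

```python
# Sketch: Euler-Maruyama simulation of the Ornstein-Uhlenbeck load fluctuation
#   du/dt = -u/t_corr + xi,   E[xi(t) xi(s)] = sigma_xi^2 * delta(t - s),
# and comparison of sample statistics with the stationary values
#   Var[u] = sigma_xi^2 * t_corr / 2,   R(dt) = exp(-dt / t_corr).
# Parameter values are illustrative, not those of the 39-bus test case.
import numpy as np

t_corr, sigma_xi = 2.0, 0.05          # correlation time (s), noise intensity
dt, n_steps = 0.01, 400_000
rng = np.random.default_rng(0)

u = np.empty(n_steps)
u[0] = 0.0
for k in range(n_steps - 1):
    u[k + 1] = u[k] + (-u[k] / t_corr) * dt + sigma_xi * np.sqrt(dt) * rng.normal()

u = u[n_steps // 10:]                  # discard the initial transient
lag = int(1.0 / dt)                    # autocorrelation at a lag of 1 s
R = np.corrcoef(u[:-lag], u[lag:])[0, 1]

print("sample variance :", u.var(), " theory:", sigma_xi**2 * t_corr / 2)
print("sample R(1 s)   :", R,       " theory:", np.exp(-1.0 / t_corr))
```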
We also consider the frequency-dependence of loads, which can measurably impact the statistics of voltage magnitudes [@ghanavati2013understanding]. Loads are thus modeled as follows [@berg1973power; @Milano2008]: $$\begin{aligned}
\Delta\omega & = & \frac{1}{2\pi f_{n}}\frac{d\left(\theta-\theta^{0}\right)}{dt}\label{eq:freq1}\\
P & = & P^{0}\left(1+\Delta\omega\right)^{\beta_{P}}\label{eq:freq2}\\
Q & = & Q^{0}\left(1+\Delta\omega\right)^{\beta_{Q}}\label{eq:freq3}\end{aligned}$$ where $\Delta\omega$ is the frequency deviation at the load bus, $\theta^{0},P^{0},Q^{0}$ are the baseline voltage angle, active and reactive power of each load, $\beta_{P},\beta_{Q}$ are exponents that determine the level of frequency dependence, $f_{n}$ is the nominal frequency and $\theta$ is the bus voltage angle.
Using this model
---
abstract: 'We describe a practical procedure for extracting the spatial structure and the growth rates of slow eigenmodes of a spatially extended system, using a unique experimental capability both to impose and to perturb desired initial states. The procedure is used to construct experimentally the spectrum of linear modes near the secondary instability boundary in Rayleigh-Bénard convection. This technique suggests an approach to experimental characterization of more complex dynamical states such as periodic orbits or spatiotemporal chaos.'
author:
- Kapilanjan Krishan
- Andreas Handel
- 'Roman O. Grigoriev'
- 'Michael F. Schatz'
title: Modal Extraction in Spatially Extended Systems
---
Numerous nonlinear nonequilibrium systems in nature and in technology exhibit complex behavior in both space and time; understanding and characterizing such behavior (spatiotemporal chaos) is a key unsolved problem in nonlinear science [@cross]. Many such systems are modelled by partial differential equations; hence, in principle, their dynamics takes place in an infinite dimensional phase space. However, dissipation often acts to confine these systems’ asymptotic behavior to finite-dimensional subspaces known as invariant manifolds [@manneville]. Knowledge of the invariant manifolds provides a wealth of dynamical information; thus, devising methodologies to determine invariant manifolds from experimental data would significantly advance understanding of spatiotemporal chaos.
In this Letter, we describe experiments in Rayleigh-Bénard convection where several slow eigenmodes and their growth rates associated with instability of roll states are extracted quantitatively. Rayleigh-Bénard convection (RBC) serves well as a model spatially extended system; in particular, the spiral defect chaos (SDC) state in RBC is considered an outstanding example of spatiotemporal chaos. In SDC the spatial structure is primarily composed of curved but locally parallel rolls, punctuated by defects (Fig. \[eps:sdc\]) [@morris; @egolf1]. The recurrent formation and drift of defects in SDC is believed to play a key role in driving spatiotemporal chaos; moreover, many aspects of defect nucleation in SDC are related to defect formation observed at the onset of instability in patterns of straight, parallel rolls in RBC [@busse]. We obtain experimentally a low-dimensional description of the modes responsible for the nucleation of one important class of defects (dislocations), by first imposing reproducibly a linearly stable, straight roll state (stable fixed point) near instability onset. This state is subsequently subjected to a set of distinct, well-controlled perturbations, each of which initiates a relaxational trajectory from the disturbed state to the (same) fixed point. An ensemble of such trajectories is used to construct a suitable basis for describing the embedding space by means of a modified Karhunen-Loeve decomposition. The dynamical evolution of small disturbances is then characterized by computing both finite-time Lyapunov exponents and the spatial structure of the associated eigenmodes (a similar approach was carried out numerically by [*Egolf et al.*]{} [@egolf2]). This capability is an important step toward developing a systematic way of characterizing and, perhaps, controlling, spatiotemporally chaotic states like SDC where localized “pivotal” events like defect formation play a central role in driving complex behavior.
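In outline, this analysis amounts to a snapshot-based Karhunen-Loeve (proper orthogonal) decomposition followed by log-linear fits of the modal amplitudes. The sketch below (Python; the snapshot matrix is a random placeholder rather than experimental data, and all dimensions are arbitrary) indicates one way such a pipeline could look:

```python
# Sketch: Karhunen-Loeve (POD) decomposition of an ensemble of relaxation
# trajectories and estimation of finite-time decay rates of the leading modes.
# 'snapshots' stands in for the stacked, Fourier-filtered difference images;
# its contents here are random placeholders, not experimental data.
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_snap, dt = 31 * 31, 600, 1.0 / 41.0        # filtered image size, frames, frame time
snapshots = rng.normal(size=(n_pix, n_snap))        # columns = difference images vs. time

# KL/POD basis: left singular vectors of the mean-subtracted snapshot matrix
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                         full_matrices=False)

n_modes = 5
t = dt * np.arange(n_snap)
for j in range(n_modes):
    a_j = np.abs(Vt[j]) * s[j] + 1e-15              # modal amplitude |a_j(t)|
    rate = np.polyfit(t, np.log(a_j), 1)[0]         # finite-time exponent from log-linear fit
    print(f"mode {j}: estimated growth/decay rate {rate:+.3f} 1/s")
```

In the experiment the columns would be the Fourier-filtered difference images described below, and only the exponentially relaxing portion of each trajectory would enter the fit.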
![\[eps:sdc\] Shadowgraph visualization reveals spontaneous defect nucleation in the spiral defect chaos state of Rayleigh-Benard convection. Two convection rolls are compressed together (higher contrast region in left image). (b.) A short time later (right image), one of the rolls pinches off and two dislocations form.](sdc){width="8cm"}
The convection experiments are performed with gaseous CO$_2$ at a pressure of 3.2 MPa. A 0.697$\pm$0.06 mm-thick gas layer is contained in a 27 mm square cell, which is confined laterally by filter paper. The layer is bounded on top by a sapphire window and on the bottom by a sheet of 1 mm-thick glass neutral density filter (NDF). The neutral density filter is bonded to a heated metal plate with heat sink compound. The temperature of the sapphire window is held constant at 21.3 $^{\circ}$C by water cooling. The temperature difference between the top and bottom plates $\Delta T$ is held fixed at 5.50 $\pm$ 0.01 $^{\circ}$C by computer control of a thin film heater attached to the bottom metal plate. These conditions correspond to a dimensionless bifurcation parameter $\epsilon$=$(\Delta T - \Delta T_c)/\Delta T_c=0.41$, where $\Delta T_c$ is the temperature difference at the onset of convection. The vertical thermal diffusion time, computed to be 2.1 s at onset, represents the characteristic timescale for the system.
We use laser heating to alter the convective patterns that occur spontaneously. A focused beam from an Ar-ion laser is directed through the sapphire window at a spot on the NDF. Absorption of the laser light by the NDF increases the local temperature of the bottom boundary and hence that of the gas, thereby inducing locally a convective upflow. The convection pattern may be modified, either locally or globally, by rastering the hot spot created by the laser beam. The beam is steered using two galvanometric mirrors rotating about axes orthogonal to each other under computer control. The intensity of the beam is modulated using an acousto-optic modulator. This technique of optical actuation is used to impose convection patterns with desired properties, to perturb these convection patterns and to change the boundary conditions. Similar approaches for manipulating convective flows were explored earlier using a high intensity lamp and masks [@whitehead] in RBC and a rastered infrared laser in Bénard-Marangoni convection [@denis].
[![\[eps:rd\] Experimental images illustrate the flow response to two different perturbations applied, in turn, to the same state of straight convection rolls. Each image represents the difference between the perturbed and unperturbed convection states and therefore, each image highlights the effect of a given perturbation on the flow. In the two cases shown, the localized perturbation is applied directly on a region of either downflow (left image) or upflow (right image). In all cases, the disturbance created by the perturbation decays away and the flow returns to the original unperturbed state.](rd1_new_stable "fig:"){width="4cm"}]{} [![\[eps:rd\] Experimental images illustrate the flow response to two different perturbations applied, in turn, to the same state of straight convection rolls. Each image represents the difference between the perturbed and unperturbed convection states and therefore, each image highlights the effect of a given perturbation on the flow. In the two cases shown, the localized perturbation is applied directly on a region of either downflow (left image) or upflow (right image). In all cases, the disturbance created by the perturbation decays away and the flow returns to the original unperturbed state.](rd2_new_stable "fig:"){width="4cm"}]{}
The experiments begin by using laser heating to impose a well-specified basic state of stable straight rolls. The basic state is typically arranged to be near the onset of instability by imposing a sufficiently large pattern wavenumber such that at fixed $\epsilon$ the system’s parameters are near the skew-varicose stability boundary [@busse]. In this regime, the modes responsible for the instability are weakly damped and, therefore, can be easily excited.
The linear stability of the basic state is probed by applying brief pulses of spatially localized laser heating. For stable patterns, all small disturbances eventually relax. To excite all modes governing the disturbance evolution, we apply a set of localized perturbations consistent with symmetries of the (ideal) straight roll pattern – continuous translation symmetry in the direction along the rolls and discrete translation symmetry in the perpendicular direction plus the reflection symmetry in both directions. Therefore, localized perturbations applied across half a wavelength of the pattern form a “basis” for all such perturbations – any other localized perturbation at a different spatial location is related by symmetry. Localized perturbations are produced in the experiment by aiming the laser beam to create a “hot spot” whose location is stepped from the center of a (cold) downflow region to the center of an adjacent (hot) upflow region in different experimental runs. The perturbations typically last approximately 5 s and have a lateral extent of approximately 0.1 mm, which is less than 10 % of the pattern wavelength.
The evolution of the perturbed convective flow is monitored by shadowgraph visualization. A digital camera with a low-pass filter (to filter out the reflections from the Ar-ion laser) is used to capture a sequence of $256\times 256$ pixel images recorded with 12 bits of intensity resolution at a rate of 41 images per second. A background image of the unperturbed flow is subtracted from each data image; such sequences of difference images comprise the time series representing the evolution of the perturbation (Fig \[eps:rd\]).
The total power for each (difference) image in a time series is obtained from 2-D spatial Fourier transforms. The resulting time series of total power shows a strong transient excursion (corresponding to the initial response of the convective flow to a localized perturbation by laser heating) followed by exponential decay as the system relaxes back to the stable state of straight convection rolls. We restrict further analysis to the region of exponential decay, which typically represents about $3.5$ seconds of data for each applied perturbation.
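A sketch of this processing step (Python; a synthetic image stack with a decaying fixed pattern stands in for the shadowgraph difference images, and the decay constant is illustrative):

```python
# Sketch: total spectral power of each difference image via a 2-D FFT,
# followed by a log-linear fit of the exponentially decaying tail.
# The image stack is synthetic: a fixed pattern with decaying amplitude plus noise.
import numpy as np

rng = np.random.default_rng(7)
nx, n_frames, dt = 256, 144, 1.0 / 41.0
pattern = rng.normal(size=(nx, nx))
t = dt * np.arange(n_frames)
stack = pattern[None] * np.exp(-t / 1.2)[:, None, None] \
        + 0.01 * rng.normal(size=(n_frames, nx, nx))       # decaying disturbance + noise

power = np.array([np.sum(np.abs(np.fft.fft2(img)) ** 2) for img in stack])

# fit only the exponentially decaying tail (here: roughly the last 3.5 s of data)
tail = t > t[-1] - 3.5
rate = 0.5 * np.polyfit(t[tail], np.log(power[tail]), 1)[0]   # power ~ amplitude^2
print(f"estimated disturbance decay rate: {rate:.3f} 1/s")
```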
The dimensionality of the raw data is too high to permit direct analysis, so each difference image is first windowed (to avoid aliasing effects) and Fourier filtered by discarding the Fourier modes outside a $31\times 31$ window centered at the zero frequency. The discarded high-frequency modes are strongly damped and contain less than 1% of the
---
abstract: 'Frenkel and Reshetikhin [@Fre] introduced $q$-characters to study finite dimensional representations of the quantum affine algebra ${\mathcal{U}}_q(\hat{{\mathfrak{g}}})$. In the simply laced case Nakajima [@Naa][@Nab] defined deformations of $q$-characters called $q,t$-characters. The definition is combinatorial but the proof of the existence uses the geometric theory of quiver varieties which holds only in the simply laced case. In this article we propose a new algebraic approach to $q,t$-characters in the general (not necessarily simply laced) case, motivated by the deformed screening operators [@Her01]. The $t$-deformations are naturally deduced from the structure of ${\mathcal{U}}_q(\hat{{\mathfrak{g}}})$: the parameter $t$ is analogous to the central charge $c\in{\mathcal{U}}_q(\hat{{\mathfrak{g}}})$. The $q,t$-characters lead to the construction of a quantization of the Grothendieck ring and to general analogues of Kazhdan-Lusztig polynomials in the same spirit as Nakajima did for the simply laced case.'
address: 'David Hernandez: École Normale Supérieure - DMA, 45, Rue d’Ulm F-75230 PARIS, Cedex 05 FRANCE'
author:
- David Hernandez
title: 'Algebraic Approach to $q,t$-Characters'
---
Introduction
============
We suppose $q\in{\ensuremath{\mathbb{C}}}^*$ is not a root of unity. In the case of a semi-simple Lie algebra ${\mathfrak{g}}$, the structure of the Grothendieck ring $\text{Rep}({\mathcal{U}}_q({\mathfrak{g}}))$ of finite dimensional representations of the quantum algebra ${\mathcal{U}}_q({\mathfrak{g}})$ is well understood. It is analogous to the classical case $q=1$. In particular we have ring isomorphisms: $$\text{Rep}({\mathcal{U}}_q({\mathfrak{g}}))\simeq \text{Rep}({\mathfrak{g}})\simeq {\mathbb{Z}}[\Lambda]^W\simeq {\mathbb{Z}}[T_1,...,T_n]$$ deduced from the injective homomorphism of characters $\chi$: $$\chi(V)=\underset{\lambda\in\Lambda}{\sum}\text{dim}(V_{\lambda})\lambda$$ where $V_{\lambda}$ are weight spaces of a representation $V$ and $\Lambda$ is the weight lattice.
For the general case of Kac-Moody algebras the picture is less clear. In the affine case ${\mathcal{U}}_q(\hat{{\mathfrak{g}}})$, Frenkel and Reshetikhin [@Fre] introduced an injective ring homomorphism of $q$-characters: $$\chi_q:\text{Rep}({\mathcal{U}}_q(\hat{{\mathfrak{g}}}))\rightarrow {\mathbb{Z}}[Y_{i,a}^{\pm}]_{1\leq i\leq n,a\in{\ensuremath{\mathbb{C}}}^*}={\mathcal{Y}}$$
The homomorphism $\chi_q$ allows one to describe the ring $\text{Rep}({\mathcal{U}}_q(\hat{{\mathfrak{g}}}))\simeq{\mathbb{Z}}[X_{i,a}]_{i\in I,a\in{\ensuremath{\mathbb{C}}}^*}$, where the $X_{i,a}$ are fundamental representations. In particular $\text{Rep}({\mathcal{U}}_q(\hat{{\mathfrak{g}}}))$ is commutative.
The morphism of $q$-characters has a symmetry property analogous to the classical action of the Weyl group $\text{Im}(\chi)={\mathbb{Z}}[\Lambda]^W$: Frenkel and Reshetikhin defined $n$ screening operators $S_i$ such that $\text{Im}(\chi_q)=\underset{i\in I}{\bigcap}\text{Ker}(S_i)$ (the result was proved by Frenkel and Mukhin for the general case in [@Fre2]).
In the simply laced case Nakajima introduced $t$-analogues of $q$-characters ([@Naa], [@Nab]): it is a ${\mathbb{Z}}[t^{\pm}]$-linear map $$\chi_{q,t}:\text{Rep}({\mathcal{U}}_q(\hat{{\mathfrak{g}}}))\otimes_{{\mathbb{Z}}}{\mathbb{Z}}[t^{\pm}]\rightarrow{\mathcal{Y}}_t={\mathbb{Z}}[Y_{i,a}^{\pm},t^{\pm}]_{i\in I,a\in{\ensuremath{\mathbb{C}}}^*}$$ which is a deformation of $\chi_q$ and multiplicative in a certain sense. A combinatorial axiomatic definition of $q,t$-characters is given. But the existence is non-trivial and is proved with the geometric theory of quiver varieties which holds only in the simply laced case.
In [@Her01] we introduced $t$-analogues of screening operators $S_{i,t}$ such that in the simply laced case: $$\underset{i\in I}{\bigcap}\text{Ker}(S_{i,t})=\text{Im}(\chi_{q,t})$$ It is a first step in the algebraic approach to $q,t$-characters proposed in this article: we define and construct $q,t$-characters in the general (non necessarily simply laced) case. The motivation of the construction appears in the non-commutative structure of the Cartan subalgebra ${\mathcal{U}}_q(\hat{{\mathfrak{h}}})\subset{\mathcal{U}}_q(\hat{{\mathfrak{g}}})$, the study of screening currents and of deformed screening operators.
As an application we construct a deformed algebra structure and an involution of the Grothendieck ring, and analogues of Kazhdan-Lusztig polynomials in the general case in the same spirit as Nakajima did for the simply laced case. In particular this article proves a conjecture that Nakajima made for the simply laced case (remark 3.10 in [@Nab]): there exists a purely combinatorial proof of the existence of $q,t$-characters.
This article is organized as follows: after some backgrounds in section \[back\], we define a deformed non-commutative algebra structure on ${\mathcal{Y}}_t={\mathbb{Z}}[Y_{i,a}^{\pm},t^{\pm}]_{i\in I,a\in{\ensuremath{\mathbb{C}}}^*}$ (section \[defoal\]): it is naturally deduced from the relations of ${\mathcal{U}}_q(\hat{{\mathfrak{h}}})\subset{\mathcal{U}}_q(\hat{{\mathfrak{g}}})$ (theorem \[dessus\]) by using the quantization in the direction of the central element $c$. In particular in the simply laced case it can be used to construct the deformed multiplication of Nakajima [@Nab] (proposition \[form\]) and of Varagnolo-Vasserot [@Vas] (section \[varva\]).
This picture allows us to introduce the deformed screening operators of [@Her01] as commutators of Frenkel-Reshetikhin’s screening currents of [@Freb] (section \[scr\]). In [@Her01] we gave explicitly the kernel of each deformed screening operator (theorem \[her\]).
In analogy to the classic case where $\text{Im}(\chi_q)=\underset{i\in I}{\bigcap}\text{Ker}(S_i)$, we have to describe the intersection of the kernels of deformed screening operators. We introduce a completion of this intersection (section \[complesection\]) and give its structure in proposition \[thth\]. It is easy to see that it is not too big (lemma \[leasto\]); but the point is to prove that it contains enough elements: it is the main result of our construction in theorem \[con\] which is crucial for us. It is proved by induction on the rank $n$ of ${\mathfrak{g}}$.
We define a $t$-deformed algorithm (section \[defialgo\]) analog to the Frenkel-Mukhin’s algorithm [@Fre2] to construct $q,t$-characters in the completion of ${\mathcal{Y}}_t$. An algorithm was also used by Nakajima in the simply laced case in order to compute the $q,t$-characters for some examples ([@Naa]) assuming they exist (which was geometrically proved). Our aim is different : we do not know [*a priori*]{} the existence in the general case. That is why we have to show the algorithm is well defined, never fails (lemma \[nfail\]) and gives a convenient element (lemma \[conv\]).
This construction gives $q,t$-characters for fundamental representations; we deduce from them the injective morphism of $q,t$-characters $\chi_{q,t}$ (definition \[mqt\]). We study the properties of $\chi_{q,t}$ (theorem \[axiomes\]). Some of them are generalization of the axioms that Nakajima defined in the simply laced case ([@Nab]); in particular we have constructed the morphism of [@Nab].
We have some applications: the morphism gives a deformation of the Grothendieck ring because the image of $\chi_{q,t}$ is a subalgebra for the deformed multiplication (section \[quanta\]). Moreover we define an antimultiplicative involution of the deformed Grothendieck ring (section \[invo\]); the construction of this involution is motivated by the new point view adopted
---
abstract: 'We propose a list of conditions that consistency with thermodynamics imposes on linear and nonlinear generalizations of standard unitary quantum mechanics that assume a set of true quantum states without the restriction $\rho^2=\rho$ even for strictly isolated systems and that are to be considered in experimental tests of the existence of intrinsic (spontaneous) decoherence at the microscopic level.'
author:
- Gian Paolo Beretta
title: |
Nonlinear extensions of Schrödinger–vonNeumann quantum dynamics:\
a list of conditions for compatibility with thermodynamics
---
Understanding and predicting ’decoherence’ is important in future applications involving nanometric devices, fast switching times, clock synchronization, superdense coding, quantum computation, teleportation, quantum cryptography, etc. where entanglement structure and dynamics play a key role [@decoherence]. In the last three decades it has also been central in exploring possible limitations to the validity of standard unitary quantum mechanics (QM), by studying a variety of linear and nonlinear extensions that have been advocated by several authors on a variety of conceptual grounds [@extensions]. It has been suggested [@Domokos] that long-baseline neutrino oscillation experiments may provide means of testing the existence of spontaneous decoherence at the microscopic level and the validity of linear and nonlinear extensions of the Schrödinger–vonNeumann equation of motion of QM, thus prompting a renewed interest on such extensions [@Domokos; @Czachor; @Gheorghiu].
The aim of this Letter is to list the main conditions that must be imposed and checked on linear and nonlinear extensions of QM which assume an augmented set of true quantum states described by state operators $\rho$ without the restriction $\rho^2=\rho$. The reasoning and framework proposed here should provide useful guidance also to current efforts to define general measures of entanglement [@Yukalov].
The conditions proposed here form a very restrictive set. Yet, at least one possible extension has been proved to satisfy them all[@Beretta], with mathematics that has been partially rediscovered recently by researchers in different contexts and fields [@Gheorghiu; @Englman; @Aerts].
[**1. Causality. Forward and backward in time.\
**]{}\[causality\] We consider the set ${{\mathcal P}}$ of all linear, hermitian, nonnegative-definite, unit-trace operators $\rho$ (without the restriction $\rho^2=\rho$) on the standard QM Hilbert space ${{\mathcal H}}$ associated with a strictly isolated system[@isolated]. Every solution of the equation of motion, i.e., every trajectory $u(t,\rho)$ passing at time $t=0$ through state $\rho$ in ${{\mathcal P}}$, should lie entirely in ${{\mathcal P}}$ for all times $t$, $-\infty<t<+\infty$. This strong causality condition is nontrivial and demanding both from the conceptual and the technical mathematical points of view.
[**2. Conservation of energy and other invariants.**]{}\[energy\] The value of the energy functional $e(\rho)={{\rm Tr}}(\rho H)$, where $H$ is the standard QM Hamiltonian operator associated with the isolated system \[$H\ne H(t)$\], must remain invariant along every trajectory. If ${{\mathcal H}}$ is the Fock space of an isolated system consisting of $M$ types of elementary constituents (e.g., atoms and molecules if chemical and nuclear reactions are inhibited; or atomic nuclei and electrons for modelling chemical reactions) each with a number operator $N_i$ ($[H,N_i]=0$ and $[N_i,N_j]=0$), then also the value of each number-of-constituents functional $n_i(\rho)={{\rm Tr}}(\rho N_i)$ must remain invariant along every trajectory. Depending on the type of system, there may be other time-invariant functionals, e.g., the total momentum components $p_j (\rho)={{\rm Tr}}(\rho P_j)$, with $j=x,y,z$, for a free particle (in which case Galileian invariance must also be verified, for $[H,P_j]=0$ and $[P_i,P_j]=0$). In what follows, we denote by $g_i(\rho)={{\rm Tr}}(\rho G_i)$ the set of non-Hamiltonian time-invariant functionals, if any, with $[H,G_i]=0$ and $[G_i,G_j]=0$ (clearly, $H$ and the $G_i$’s have a common eigenbasis that we denote by $\{|\psi_\ell\rangle\}$).
[**3. Standard QM unitary evolution of $\rho^2=\rho$ states.\
**]{}\[standardQM\] Unitary time evolution of the states of QM according to the Schrödinger equation of motion must be compatible with the more general dynamical law. These trajectories, passing through any state $\rho$ such that $\rho^2=\rho$ and entirely contained in the state domain of QM, must be solutions also of the extended dynamical law. Because the states of QM are extreme points of the state domain ${{\mathcal P}}$, the trajectories of QM must be boundary solutions (limit cycles) of the extended dynamical law.
In general, any extended dynamical equation may be written in the form $$\begin{aligned}
\label{eqofmotion}
&&\frac{{\rm d}\rho}{{\rm d}t} = - \frac{i}{\hbar}[H,\rho] + D_M \\ &&{\rm with}\quad D_M={{\bf \hat D}}_M(\rho,H,G_i,\dots) \ ,\end{aligned}$$ where the operator $D_M$ represents the [*dissipative*]{} part of the equation of motion and may depend linearly and/or nonlinearly (through the superoperator ${{\bf \hat D}}_M$) on the state operator $\rho$, on the Hamiltonian $H$, on the linear operators $G_i$ associated with the other time invariants (if any), as well as on the structure and the number $M$ of elementary constituents of the system. Like the Schrödinger–von Neumann term, the dissipative term must not contribute to the rates of change of any of the invariant functionals ${{\rm Tr}}(\rho)$, $e(\rho)$, $g_i(\rho)$ and, therefore, $${{\rm Tr}}\,D_M=0\ ,\qquad{{\rm Tr}}\,D_M H=0\ ,\qquad{{\rm Tr}}\,D_M G_i=0\ .$$
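As a purely kinematic illustration of these trace constraints (and not of any particular physical form of ${{\bf \hat D}}_M$), the following sketch projects an arbitrary hermitian candidate onto the Hilbert–Schmidt orthogonal complement of $\{1,H,G_i\}$ and verifies numerically that the result satisfies the three vanishing-trace conditions; the dimension and matrices are randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # Hilbert-space dimension (illustrative)

def random_hermitian(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def hs_inner(A, B):
    # Hilbert-Schmidt inner product <A, B> = Tr(A^dagger B)
    return np.trace(A.conj().T @ B)

# Commuting invariants: H and one extra invariant G diagonal in H's eigenbasis
H = random_hermitian(d)
_, V = np.linalg.eigh(H)
G = V @ np.diag(rng.normal(size=d)) @ V.conj().T      # [H, G] = 0 by construction

# Orthonormalize {1, H, G} in the Hilbert-Schmidt sense (Gram-Schmidt)
basis = []
for M in [np.eye(d, dtype=complex), H, G]:
    for B in basis:
        M = M - hs_inner(B, M) * B
    basis.append(M / np.sqrt(hs_inner(M, M).real))

# Project an arbitrary hermitian candidate onto the orthogonal complement
K = random_hermitian(d)
D = K - sum(hs_inner(B, K) * B for B in basis)

print("Tr D   =", np.trace(D).real)       # ~ 0
print("Tr D H =", np.trace(D @ H).real)   # ~ 0
print("Tr D G =", np.trace(D @ G).real)   # ~ 0
```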
If the complete dynamics preserves the feature of uniqueness of solutions throughout the state domain ${{\mathcal P}}$, then pure states can only evolve according to the Schrödinger equation of motion and, therefore, ${{\bf \hat D}}_M(\rho,H,G_i,\dots)=0$ when $\rho^2=\rho$. This feature may be responsible for hiding the presence of deviations from QM in experiments where the isolated system is prepared in a pure state. It also implies that no trajectory can enter or leave the state domain of QM. Thus, by continuity, there must be trajectories that approach these boundary solutions indefinitely (of course, this can only happen backward in time, as $t\to -\infty$, for otherwise the entropy of the isolated system would decrease in forward time).
[**4. Conservation of effective Hilbert space dimensionality.**]{}
Unitary dynamics \[Eq. (\[eqofmotion\]) with $D_M=0$\] would leave all the eigenvalues of $\rho$ unchanged and therefore cannot satisfy Condition 5 below [@unitary]. Instead, we only require that the dynamical law maintains at zero the initially zero eigenvalues of $\rho$ and, therefore, conserves the cardinality of the set of zero eigenvalues, $ \dim{{\rm Ker}}(\rho)$. In other words, if the isolated system is prepared in a state that does not ’occupy’ the eigenvector $ |\psi_\ell\rangle $ of $H$ (and the $G_i$’s), i.e., if $\rho(0)|\psi_\ell\rangle =0$ (so that $|\psi_\ell\rangle $ is also an eigenvector of $\rho$ corresponding to a zero eigenvalue), then that energy eigenvector remains ’unoccupied’ at all times, i.e., $\rho(t)|\psi_\ell\rangle =0$.
This condition preserves an important feature that allows remarkable model simplifications within QM: the dynamics is fully equivalent to that of a model system with Hilbert space ${{\mathcal H}}'$ (a subspace of ${{\mathcal H}}$) defined by the linear span of all the $|\psi_\ell\rangle$’s such that $\rho(t)|\psi_\ell\rangle \ne 0$ at some time $t$ (and, hence, by our condition, at all times). The relevant operators $X'$ on ${{\mathcal H}}'$ ($\rho'$, $H'$, $G'_i$, …) are defined from the original $X$ on ${{\mathcal H}}$ ($\rho$, $H$, $G_i$, …) so that $\langle \alpha_k|X'|\alpha_\ell\rangle =\langle \alpha_k|X|\alpha_\ell\rangle$ with $|\alpha_k\rangle$ any basis of ${{\mathcal H}}'$.
It is also consistent with recent experimental tests [@exp1] that rule out, for pure states, deviations from linear and unitary dynamics and confirm that initially unoccupied eigenstates cannot spontaneously become occupied. This fact adds nontrivial experimental and conceptual difficulty to the problem of designing a fundamental test of QM, capable of ascertaining whether decoherence originates from uncontrolled
---
abstract: 'We present a comparison of Doppler-shifted H$\alpha$ line emission observed by the Global Jet Watch from freshly-launched jet ejecta at the nucleus of the Galactic microquasar SS433 with subsequent ALMA imaging at mm-wavelengths of [*the same*]{} jet ejecta. There is a remarkable similarity between the transversely-resolved synchrotron emission and the prediction of the jet trace from optical spectroscopy: this is an a priori prediction not an a posteriori fit, confirming the ballistic nature of the jet propagation. The mm-wavelength of the ALMA polarimetry is sufficiently short that the Faraday rotation is negligible and therefore that the observed ${\bf E}$-vector directions are accurately orthogonal to the projected local magnetic field. Close to the nucleus the ${\bf B}$-field vectors are perpendicular to the direction of propagation. Further out from the nucleus, the ${\bf B}$-field vectors that are coincident with the jet instead become parallel to the ridge line; this occurs at a distance where the jet bolides are expected to expand into one another. X-ray variability has also been observed at this location; this has a natural explanation if shocks from the expanding and colliding bolides cause particle acceleration. In regions distinctly separate from the jet ridge line, the fractional polarisation approaches the theoretical maximum for synchrotron emission.'
author:
- 'Katherine M. Blundell , Robert Laing, Steven Lee and Anita Richards,'
title: |
SS433’s jet trace from ALMA imaging and Global Jet Watch spectroscopy:\
evidence for post-launch particle acceleration
---
Introduction
============
Since shortly after its discovery four decades ago the prototypical Galactic microquasar SS433 has been known to eject oppositely-directed jets whose launch axis precesses with a cone angle of about 19 degrees approximately every 162 days and whose speeds average to about a quarter of the speed of light. Striking images of the emission at radio (cm) wavelengths reveal a zigzag/corkscrew structure that arises because of the above properties modulated by light-travel time effects arising from its orientation with respect to our line-of-sight [@Hjellming1981; @Stirling2002; @Blundell04; @Roberts2008; @Miller-Jones2008]. The optical spectra of this object are characterised by a strong Balmer H$\alpha$ emission line complex close to the rest-wavelength of this line, and also blue-shifted and red-shifted lines whose observed wavelengths change successively on a daily basis according to the instantaneous speed and angle of travel with respect to our line of sight. Fitted parameters to the kinematic model developed from the first few years of optical spectroscopy were presented by e.g. @Margon84 and @Eikenberry01. Hitherto the timing of optical spectroscopy and spatially resolved radio imaging has not permitted the observation of the same ejecta both at launch and after propagation. We present the first mm-wave image of SS433 from ALMA in combination with optical spectroscopy (Sec\[sec:SpeedsFromZeds\]) of the same ejecta observed during the year prior to the ALMA observations (Sec\[sec:ALMA\]). This allows us to distinguish ballistic motion post-launch from deceleration [e.g. @Stirling2004].
A long-standing question is why SS433’s jet ejecta are primarily line emitting at launch yet synchrotron emitting at the largest distances from the nucleus; the polarisation changes explored in Sec.\[sec:polar\] shed light on this. Inference of the magnetic-field structure in the jets is complicated by the combined effects of Faraday rotation and time-variable structure. Previous studies [@Stirling2004; @Roberts2008; @Miller-Jones2008] have been hampered by a lack of resolution and frequency coverage as well as the uncertain effects of spatial and temporal variations in Faraday rotation. The dependence of Faraday rotation on the square of the wavelength ($\lambda$) means that wide-band observations at mm wavelengths allow the projected field direction to be determined accurately in a single observation even close to the core, where Faraday rotation measures may be large [@Roberts2008]. We present our polarimetric 230GHz results in Sec.\[sec:polar\].
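To make the $\lambda^2$ scaling concrete, a back-of-the-envelope sketch follows; the rotation measure used is purely illustrative and not a measured value for SS433.

```python
import numpy as np

c = 2.998e8                      # speed of light [m/s]
RM = 1.0e4                       # illustrative rotation measure [rad m^-2]

for nu_GHz in [5.0, 43.0, 230.0]:
    lam = c / (nu_GHz * 1e9)     # wavelength [m]
    dchi = RM * lam**2           # Faraday rotation angle: Delta chi = RM * lambda^2
    print(f"{nu_GHz:6.1f} GHz : lambda = {lam*1e3:6.2f} mm, "
          f"rotation = {np.degrees(dchi):9.2f} deg")
# At 230 GHz the rotation is ~1 degree even for RM = 1e4 rad m^-2, whereas at
# 5 GHz the same RM corresponds to tens of radians of rotation.
```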
Optical spectroscopy and inference from Doppler shifts {#sec:SpeedsFromZeds}
======================================================
Spectra of SS433 spanning a wavelength range of approximately 5800 to 8500 Angstroms, and having a spectral resolution of $\sim4000$ were observed in the year prior to the ALMA observations whenever this target was a nighttime object. These were carried out with the multi-longitude Global Jet Watch telescopes each of which is equipped with an Aquila spectrograph; the design and testing of these high-throughput spectrographs are described by @Lee2018. The observatories, astronomical operations, processing and calibration of the spectroscopic data streams are described in @Blundell2018. Almost all of these spectra contain a pair of so-called “moving lines” arising from the most recently launched jet bolides in SS433. The wavelengths corresponding to the centroids of the blue-shifted and red-shifted H$\alpha$ emission were converted into redshift pairs with respect to H$\alpha$ in the rest frame of SS433 according to its systemic velocity with respect to Earth [@lockman2007]. From these redshift and blueshift pairs from a given spectrum were derived the launch speed of each pair of bolides [@Blundell05 equation 2]. This avoids the approximation of constant ejection speed, which has been shown to be inaccurate from archival spectroscopy [@Blundell05; @Blundell11]. Assuming that the subsequent motion is ballistic (this assumption is discussed in Sec\[sec:ballistic\]), and adopting the standard kinematic model [@Hjellming1981], the locations they attain by the Julian Date of the mid-point of the ALMA observations (2457294.4836) are calculated, and plotted in Fig\[fig:combine\]. The assumed parameters of the kinematic model using the notation of @Eikenberry01 and @Hjellming1981 are: cone angle $\theta = 19^\circ$ (Hjellming et al. use $\psi$), inclination $i = 79^\circ$, rotation on the sky $\chi = 10^\circ$ (position angle $+100^\circ$), period $P = 162.34$day (Blundell et al., in preparation) and distance $d =
5.5$kpc [@Blundell04; @lockman2007]. The ejection phase was determined by fitting to the observed redshift pairs from JD 2457000 to JD 2457293.5. The phase $\phi = (2\pi/P)(t - t_{\rm ref}) + \phi_0$ with $\phi_0 =
-0.241$rad for a reference Julian date of $t_{\rm ref} =
2456000$. $\phi$ is used as in equation 1 of @Eikenberry01; @Hjellming1981 denote the same quantity by $\Omega(t_0-t_{\rm ref})$.
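For orientation, the following is a schematic sketch of the kinematic model in its simplest form, using the parameter values quoted above; the jet speed is fixed at an illustrative $\beta=0.26$ (rather than fitted per bolide pair as in the analysis), and nutation and light-travel-time corrections are neglected.

```python
import numpy as np

# Kinematic-model parameters quoted in the text
theta = np.radians(19.0)      # precession cone angle
inc   = np.radians(79.0)      # inclination
P     = 162.34                # precession period [days]
phi0  = -0.241                # phase at the reference epoch [rad]
t_ref = 2456000.0             # reference Julian date
beta  = 0.26                  # illustrative mean jet speed in units of c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

def redshifts(jd):
    """Approximate Doppler shifts of the two jets at Julian date jd."""
    phi = (2.0 * np.pi / P) * (jd - t_ref) + phi0
    # cosine of the angle between the jet axis and the line of sight
    mu = np.cos(theta) * np.cos(inc) + np.sin(theta) * np.sin(inc) * np.cos(phi)
    return gamma * (1.0 - beta * mu) - 1.0, gamma * (1.0 + beta * mu) - 1.0

for jd in (2457000.0, 2457100.0, 2457294.4836):
    zb, zr = redshifts(jd)
    print(f"JD {jd:.1f}:  z_blue = {zb:+.4f}   z_red = {zr:+.4f}")
# The mean (z_blue + z_red)/2 = gamma - 1 ~ 0.035 is the familiar redshift offset.
```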
Millimetre polarimetric imaging {#sec:ALMA}
===============================
SS433 was observed using 27 ALMA antennas between 2015 September 28 21:26 and September 29 01:46 UT. Three execution blocks were run almost in sequence and under similar conditions. The precipitable water vapour column was around 1.4mm. The correlator was set up in Time Division Multiplex mode with a total bandwidth of 7.5GHz, in four 1.75-GHz spectral windows (spw) centred at 224, 226, 240 and 242GHz. Each spw was divided into 64 spectral channels and XX, YY, XY and YX correlations were recorded. The longest and shortest baselines were 2270 and 43m, sensitive to angular scales $\la 3.7$arcsec.
The quasar J1751+0939 was used as a bandpass, polarization and flux scale calibrator and J1832+0731 was used as the phase reference source on an approximately 8 min cycle. The total integration time on SS433 was $\approx$2hr. Initial data reduction followed standard ALMA scripts, executed in CASA [@Schnee2014]. The flux density of J1751+0939 during these observations was taken to be 3.7275Jy at 232.86GHz with a spectral index $\alpha = -$0.441 (defined in the sense $S(\nu) \propto \nu^{-\alpha}$) and the total flux scale uncertainty is about 10%. Polarization leakage was calibrated as described by @Nagai16. Several iterations of [clean]{} in multi-frequency synthesis mode [@Rau] followed by self-calibration were used to improve the imaging of SS433. The final iteration of amplitude and phase self-calibration was made by combining all four spectral windows using a model with two Taylor series terms. We show the zero-order Taylor series $I$ image after self-calibration, together with polarised intensity images derived from $Q$ and $U$ for the entire band (we demonstrate below that Faraday rotation is negligible for our frequency range). The off-source rms levels are 13, 11 and 12$\mu$Jybeam$^{-1}$ in $I$, $Q$ and $U$, respectively, consistent with the expectations for thermal noise alone. The restoring beam has FWHM $0.19 \times 0.16$arcsec$^2$.
The $I$ image (Fig\[fig:combine\], central panel greyscale) shows the familiar zigzag/corkscrew shape of SS433. The peak flux density at 233GHz is 86.0 mJy/beam. The in-band spectral index of the core is $-0.
---
abstract: 'We propose an efficient method for mapping and storage of a quantum state of propagating light in atoms. The quantum state of the light pulse is stored in two sublevels of the ground state of a macroscopic atomic ensemble by activating a synchronized Raman coupling between the light and atoms. We discuss applications of the proposal in quantum information processing and in atomic clocks operating beyond quantum limits of accuracy. The possibility of transferring the atomic state back on light via teleportation is also discussed.'
address: 'Institute of Physics and Astronomy, University of Aarhus, Ny Munkegade, DK-8000 Aarhus C, Denmark'
author:
- 'A. E. Kozhekin, K. M[ø]{}lmer and E. Polzik'
title: Quantum Memory for Light
---
Light is an ideal carrier of quantum information, but photons are difficult to store for a long time. In order to implement a storage device for quantum information transmitted as a light signal, it is necessary to faithfully map the quantum state of the light pulse onto a medium with low dissipation, allowing for storage of this quantum state. Depending on the particular application of the memory, the next step may be either a (delayed) measurement projecting the state onto a certain basis, or further processing of the stored quantum state, e.g., after a read-out via the teleportation process. The delayed projection measurement is relevant for the security of various quantum cryptography and bit commitment schemes [@Bras]. The teleportation read-out is relevant for full scale quantum computing.
In this Letter we propose a method that enables quantum state transfer between propagating light and atoms with an efficiency of up to 100% for certain classes of quantum states. The long-term storage of these quantum states is achieved by utilizing atomic ground states. At the end of the paper we propose an atom-back-to-light teleportation scheme as a read-out method for our quantum memory.
We consider the stimulated Raman absorption of propagating quantum light by a cloud of $\Lambda$ atoms. As shown in the inset of Fig.\[fig:var\], the weak quantum field and the strong classical field are both detuned from the upper intermediate atomic state(s) by $\Delta$ which is much greater than the strong field Rabi frequency $\Omega_{s}$, the width of an upper level $\gamma_{i}$ and the spectral width of the quantum light $\Gamma_{q}$. The Raman interaction “maps” the non-classical features of the quantum field onto the coherence of the lower atomic doublet, distributed over the atomic cloud.
In our analysis we eliminate the excited intermediate states, and we treat the atoms by an effective two-level approximation. We start with the quantum Maxwell-Bloch equations in the lowest order for the slowly varying operator $\hat{Q}$: $\hat{Q}=\hat{\sigma _{31}}
e^{-i(\omega_{q} - \omega_{s})t +i (k_{q}-k_{s})z}$ (it will be assumed that $(k_{q}-k_{s}) L \ll 1$, where $L$ is the length of the atomic cloud, $z$ is the propagation direction, and $\omega_{q,s}$ and $k_{q,s}$ are the frequencies and wavevectors of the “quantum” and “strong” fields, respectively) [@Raym81; @Raym85]
$$\begin{aligned}
&& \frac{d}{dt}\hat{Q}(z,t) =-i\kappa_{1}^{\ast} \hat{E}_{q}(z,t)
E_{s}^{\ast} (z,t) - \Gamma \hat{Q}(z,t) + \hat{F}(z,t) \label{Bloch} \\
&& \left( \frac{\partial}{\partial z} + \frac{1}{c} \frac{\partial}{\partial t}
\right) \hat{E}_{q}(z,t) = -i \kappa_{2} \hat{Q}(z,t) E_{s}(z,t)
\label{Max}\end{aligned}$$
$\Gamma$ is the dephasing rate of the $1\leftrightarrow 3$ coherence which also includes the strong field power broadening $\Gamma_{s}
\simeq \omega^{3} \hbar \kappa_{1}^{2} |E_{s}|^{2}/(3c^{3})$ due to spontaneous Raman scattering [@Raym81], $\hat{F}(z,t)$ is the associated quantum Langevin force with correlation function $\langle
\hat{F^{\ast}}(z,t) \hat{F}(z^{\prime}, t^{\prime })\rangle =2\Gamma
/n \delta (z-z^{\prime })\delta (t-t^{\prime })$, and $\kappa_{1} =
\sum_{i} \mu_{1i} \mu_{3i}/(\hbar^{2} \Delta_{i})$, $\kappa_{2} = 2\pi
n\hbar \omega \kappa_{1}/c$, where $\mu_{ji}$ are dipole moments of the atomic transitions and $n$ is the density of the atoms. A one-dimensional wave equation is sufficient to describe the spatial propagation of light in a pencil-shaped sample with a Fresnel number ${\cal F}= A / \lambda L$ near unity ($A$ is the cross-sectional area of the sample and $\lambda$ is the optical wavelength) [@Raym85].
If the strong field is not depleted in the process of quantum field absorption and if most of the atomic population stays in the initial level $1$, Eqs.(\[Bloch\]-\[Max\]) can be integrated to get
$$\begin{aligned}
\hat{Q}(z,\tau ) &=&e^{-\Gamma \tau }\hat{Q}(z,0)-e^{-\Gamma \tau
}\int_{0}^{z}dz^{\prime }\,\hat{Q}(z^{\prime },0)\sqrt{\frac{a(\tau )}
{z-z^{\prime }}}J_{1}(2\sqrt{a(\tau )(z-z^{\prime })}) \nonumber \\
&-&i\kappa _{1}\int_{0}^{\tau }d\tau ^{\prime }\,e^{-\Gamma (\tau -\tau
^{\prime })}\hat{E}_{q}(0,\tau ^{\prime })E_{s}(\tau ^{\prime })J_{0}(2
\sqrt{z(a(\tau )-a(\tau ^{\prime }))})+\int_{0}^{\tau }d\tau ^{\prime
}\,e^{-\Gamma (\tau -\tau ^{\prime })}\hat{F}(z,\tau ^{\prime }) \nonumber
\\
&-&\int_{0}^{\tau }d\tau ^{\prime }\,\int_{0}^{z}dz^{\prime }\,e^{-\Gamma
(\tau -\tau ^{\prime })}\hat{F}(z^{\prime },\tau ^{\prime })
\sqrt{\frac{a(\tau )-a(\tau ^{\prime })}{z-z^{\prime }}}
J_{1}(2\sqrt{(a(\tau )-a(\tau
^{\prime }))(z-z^{\prime })}) \label{Q} \\
\hat{E}_{q}(z,\tau ) &=&\hat{E}_{q}(0,\tau )-i\kappa _{2}E_{s}(\tau
)e^{-\Gamma \tau }\int_{0}^{z}dz^{\prime }\,\hat{Q}(z^{\prime },0)J_{0}(2
\sqrt{a(\tau )(z-z^{\prime })}) \nonumber \\
&-&\kappa _{1}^{\ast }\kappa _{2}E_{s}(\tau )\int_{0}^{\tau }d\tau ^{\prime
}\,e^{-\Gamma (\tau -\tau ^{\prime })}\hat{E}_{q}(0,\tau ^{\prime
})E_{s}^{\ast }(\tau ^{\prime })\sqrt{\frac{z}{a(\tau )-a(\tau ^{\prime })}}
J_{1}(2\sqrt{z(a(\tau )-a(\tau ^{\prime }))}) \nonumber \\
&-&i\kappa _{2}E_{s}(\tau )\int_{0}^{\tau }d\tau ^{\prime
}\,\int_{0}^{z}dz^{\prime }\,e^{-\Gamma (\tau -\tau ^{\prime })}\hat{F}
(z^{\prime },\tau ^{\prime })J_{0}(2\sqrt{(a(\tau )-a(\tau ^{\prime
}))(z-z^{\prime })}) \label{E}\end{aligned}$$
where $\tau =t-z/c$, and $a(\tau )=\kappa _{1}^{\ast }\kappa
_{2}\int_{0}^{\tau }d\tau ^{\prime \prime }\,|E_{s}(\tau ^{\prime
\prime })|^{2}$ and $\hat{Q}(z,0)$ is the initial atomic coherence.
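As a small numerical illustration of the structure of these solutions, the sketch below evaluates the Bessel-function kernels appearing in Eqs. (\[Q\])–(\[E\]) for a constant strong field, for which $a(\tau)$ grows linearly in $\tau$; the values of $a$ and $L$ are arbitrary illustrative choices, and decay ($\Gamma$) and Langevin noise are ignored.

```python
import numpy as np
from scipy.special import j0, j1

L = 1.0                        # sample length (arbitrary units)
z = np.linspace(0.0, L, 200)   # position inside the sample

# For a constant strong field, a(tau) grows linearly in tau; here we simply
# pick a few values of a(tau) (arbitrary units of inverse length).
for a in (0.5, 2.0, 8.0):
    kernel0 = j0(2.0 * np.sqrt(a * (L - z)))   # J0 kernel of Eqs. (Q) and (E)
    kernel1 = j1(2.0 * np.sqrt(a * (L - z)))
    print(f"a = {a:4.1f}: J0 kernel in [{kernel0.min():+.3f}, {kernel0.max():+.3f}],"
          f" max J1 = {kernel1.max():+.3f}")
# As a(tau) grows, the J0 kernel oscillates more rapidly across the sample, i.e.
# the mapping of the input pulse onto the collective coherence becomes more
# structured in z.
```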
Integrating Eq.(\[Q\]) over space we obtain the collective atomic spin operator, which is the atomic variable on which the quantum light field is mapped. $$\begin{aligned}
\hat{{\cal Q}}_{L}(\tau ) \equiv n\int_{0}^{L}\,dz\,\hat{Q}(z,\tau )
&=&ne^{-\Gamma \tau }\int_{0}^{L}\,dz^{\prime }J_{0}(2\sqrt{a(\tau
)(L-z^{\prime })})\hat{Q}(z^{\prime },0)
---
abstract: 'Let $\Gamma$ be a crystallographic group of dimension $n,$ i.e. a discrete, cocompact subgroup of $\operatorname{Isom}(\R^n)$ = $O(n)\ltimes\R^n.$ For any $n\geq 2,$ we construct a crystallographic group with a trivial center and a trivial outer automorphism group.'
author:
- 'R. Lutowski, A. Szczepański [^1]'
title: Crystallographic groups with trivial center and outer automorphism group
---
[^2] [^3]
Introduction
============
Let $\Gamma$ be a discrete, cocompact subgroup of $O(n)\ltimes\R^n$ = $\operatorname{Isom}(\R^n)$ i.e. a crystallographic group. If $\Gamma$ is a torsion free group, then $M = \R^n/\Gamma$ is a flat manifold (that is a compact Riemannian manifold without boundary with the sectional curvature $K_x = 0$ for any $x\in M$). Moreover $\pi_{1}(M) = \Gamma.$ In 2003 R. Waldmüller found a torsion free crystallographic group $\Gamma\subset O(141)\ltimes\R^{141}$ (a flat manifold $M = \R^{141}/\Gamma$) with the following properties: $(i)$ $Z(\Gamma) = \{e\},$ $(ii)$ $\operatorname{Out}(\Gamma) = \{e\},$ where $Z(\Gamma)$ is the center of the group $\Gamma,$ and $\operatorname{Out}(\Gamma) = \operatorname{Aut}(\Gamma)/\operatorname{Inn}(\Gamma)$ denotes the group of outer automorphisms of $\Gamma$ (see [@S Appendix C] and [@wald]). Equivalently, $(i)$ means that the abelianization of $\Gamma$ is finite (the first Betti number of $M$ is equal to zero). Moreover, if both conditions $(i)$ and $(ii)$ are satisfied, then the group of affine diffeomorphisms $\operatorname{Aff}(M)$ of the manifold $M$ is trivial (see [@charlap] and [@S]). We do not know if there exist such flat manifolds in dimensions less than $141.$ For example in dimensions up to six such Bieberbach groups do not exist. In this paper we are interested in the existence of not necessarily torsion free crystallographic groups with the above properties. We shall prove that for any $n\geq 2$ there exists a crystallographic group of dimension $n$ which satisfies conditions $(i)$ and $(ii).$
The main motivation for us is the article [@BL] of M. Belolipetsky and A. Lubotzky. For any $n\geq 3$ they found an infinite family of hyperbolic compact manifolds of dimension $n$ with the following property: for every manifold $M$ from this family, $\operatorname{Out}(\pi_1(M))$ = $\{e\}.$ Since the center of the fundamental group of a compact hyperbolic manifold is trivial, the above result gives us an infinite family of groups which satisfy conditions $(i)$ and $(ii).$ The construction of the above hyperbolic examples uses the properties of simple Lie groups of $\R$-rank one and, in particular, follows from the existence of non-arithmetic lattices. In our construction the most important ingredients are the Bieberbach theorems and specific properties of crystallographic groups.
Crystallographic groups with trivial center and outer automorphism group
========================================================================
In this part we shall prove our main result. Let $\Gamma$ be a torsion free crystallographic group. From Bieberbach’s theorems (see [@S Chapter 2]) we have a short exact sequence of groups $$0\to\Z^n\to\Gamma\stackrel{p}{\rightarrow} G\to 0,$$ where $\Z^n$ is a maximal abelian subgroup of $\Gamma$ and $G$ is a finite group. Moreover, let $h_{\Gamma}:G\to \operatorname{GL}(n,\Z)$ be the integral holonomy representation defined by the formula $$\forall_{g\in G} h_{\Gamma}(g)(e) = \bar{g}e\bar{g}^{-1},$$ where $\bar{g}\in\Gamma, p(\bar{g}) = g$ and $e\in\Z^n.$ Let $$N = N_{\operatorname{GL}(n,\Z)}(h_{\Gamma}(G)) = \{X\in \operatorname{GL}(n,\Z)\mid \forall_{f\in h_{\Gamma}(G)}\hskip 2mm XfX^{-1}\in h_{\Gamma}(G)\}.$$ In the case when $Z(\Gamma) = \{e\},$ we have the following commutative diagram ([@S p. 65-69]) with exact rows and columns: $$\begin{diagram}
\node{}\node{0}\arrow{s}\node{0}\arrow{s}\node{0}\arrow{s}\\
\node{0}\arrow{e}\node{\Z^n}\arrow{s}\arrow{e}\node{\Gamma}\arrow{s}\arrow{e}\node{G}\arrow{s,r}{h_\Gamma}\arrow{e}\node{0}\\
\node{0}\arrow{e}\node{Z^1(G,\Z^n)}\arrow{s}\arrow{e}\node{\operatorname{Aut}(\Gamma)}\arrow{s}\arrow{e,t}{F}\node{N_\alpha}\arrow{s}\arrow{e}\node{0}\\
\node{0}\arrow{e}\node{H^1(G,\Z^n)}\arrow{s}\arrow{e}\node{\operatorname{Out}(\Gamma)}\arrow{s}\arrow{e}\node{N_\alpha/G}\arrow{s}\arrow{e}\node{0}\\
\node{}\node{0}\node{0}\node{0}
\end{diagram}$$
Diagram 1
where $Z^1(G,\Z^n)$ is the group of 1-cocycles. Moreover $$N_{\alpha} = \{n\in N\mid n\ast\alpha =\alpha\},$$ and $\alpha\in H^2(G,\Z^n)$ is the cohomology class of the first row of the diagram. The action $\ast:N\times H^2(G,\Z^n)\to H^2(G,\Z^n)$ is defined by the formula $$n\ast [a] = [n\ast a],$$ where $n\in N, a\in Z^2(G,\Z^n),\hskip 2mm [a]$ is the cohomology class of $a$ and $$\forall_{g_1,g_2\in G} \; n\ast a(g_1,g_2) = n a(n^{-1}g_{1}n,n^{-1}g_{2}n).$$ We have the following proposition.
$\operatorname{Aut}(\Gamma)$ is a crystallographic group if and only if $\operatorname{Out}(\Gamma)$ is a finite group.
[**Proof:**]{} We start with an observation that $Z^1(G,\Z^n)$ is a free abelian group of rank $n$ which is a faithful $N_{\alpha}$ module. First, assume that $\operatorname{Aut}(\Gamma)$ is a crystallographic group with the maximal abelian subgroup $M.$ From [@charlap Proposition I.4.1], $M$ is the unique normal maximal abelian subgroup of $\operatorname{Aut}(\Gamma).$ Hence, $M = Z^1(G,\Z^n),$ and $\operatorname{Out}(\Gamma)$ is a finite group. The reverse implication is obvious. This finishes the proof of the proposition. $\Box$ Let us formulate our main result.
\[main\] For every $n\geq 2$ there exists a crystallographic group $\Gamma$ of dimension $n$ with $Z(\Gamma) = \operatorname{Out}(\Gamma) = \{e\}.$
[**Proof:**]{} We shall need the following lemma.
Let $G, H$ be finite groups and $H\subset G\subset \operatorname{GL}(n,\Z).$ If the group $N_{\operatorname{GL}(n,\Z)}(H)$ is finite, then $N_{\operatorname{GL}(n,\Z)}(G)$ is finite.
[**Proof of Lemma:**]{} From the assumption, $\operatorname{Aut}(H)$ and $\operatorname{Aut}(G)$ are finite. Moreover, we have monomorphisms: $$N_{\operatorname{GL}(n,\Z)}(H)/C_{\operatorname{GL}(n,\Z)}(H)\stackrel{\bar{\phi}}{\rightarrow}\operatorname{Aut}(H)$$ and $$N_{\operatorname{GL}(n,\Z)}(G)/C_{\operatorname{GL}(n,\Z)}(G)\stackrel{\bar{\phi}}{\rightarrow}\operatorname{Aut}(G),$$ where $\bar{\phi}$ is induced by $\phi(s)(g)=sgs^{-1},g\in G, s\in \operatorname{GL}(n,\Z).$ Since $C_{\operatorname{GL}(n,\Z)}(G)\subset C_{\operatorname{GL}(n,\Z)}(H),$ our Lemma is proved. $\Box$
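To make the normalizer condition concrete in the smallest case, the following brute-force sketch takes the order-4 cyclic subgroup of $\operatorname{GL}(2,\Z)$ generated by a rotation by $90^\circ$ as a stand-in for $h_{\Gamma}(G)$ and searches all integer matrices with entries in $\{-2,\dots,2\}$ and determinant $\pm1$ for elements normalizing it; the bounded entry range is only a heuristic for illustration and plays no role in the arguments of the text.

```python
import itertools
import numpy as np

J = np.array([[0, -1], [1, 0]])                          # rotation by 90 degrees
H = [np.linalg.matrix_power(J, k) for k in range(4)]     # cyclic group <J>, order 4

def in_H(M):
    return any(np.array_equal(M, h) for h in H)

normalizer = []
for a, b, c, d in itertools.product(range(-2, 3), repeat=4):
    if a * d - b * c not in (1, -1):
        continue                                         # not in GL(2, Z)
    X = np.array([[a, b], [c, d]])
    Xinv = np.round(np.linalg.inv(X)).astype(int)        # integer inverse since det = +-1
    if all(in_H(X @ h @ Xinv) for h in H):
        normalizer.append(X)

print("normalizing matrices with entries bounded by 2:", len(normalizer))
# The search returns the 8 elements of a dihedral group containing <J>, in line
# with the normalizer of this finite subgroup being finite.
```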
If $\mid\operatorname{Out}(\Gamma)\mid < \infty,$ then $\mid\operatorname{Out}(\operatorname{Aut}(\Gamma))\mid < \infty.$
$\Box$
Assume $Z(\Gamma) = \{e\},$ then
1. $H^1(G,\Z^n)\simeq
---
author:
- 'N. G. Guseva'
- 'P. Papaderos'
- 'H. T. Meyer'
- 'Y. I. Izotov'
- 'K. J. Fricke'
date: 'Received ; Accepted'
title: 'An investigation of the luminosity-metallicity relation for a large sample of low-metallicity emission-line galaxies [^1], [^2]'
---
[We present 8.2m VLT spectroscopic observations of 28 H [[ii]{}]{} regions in 16 emission-line galaxies and 3.6m ESO telescope spectroscopic observations of 38 H [[ii]{}]{} regions in 28 emission-line galaxies. These emission-line galaxies were selected mainly from the Data Release 6 (DR6) of the Sloan Digital Sky Survey (SDSS) as metal-deficient galaxy candidates. ]{} [We collect photometric and high-quality spectroscopic data for a large uniform sample of star forming galaxies including new observations. Our aim is to study the luminosity-metallicity ($L-Z$) relation for nearby galaxies, especially at its low-metallicity end and compare it with that for higher-redshift galaxies. ]{} [Physical conditions and element abundances in the new sample are derived with the $T_{\rm e}$-method, excluding six H [[ii]{}]{} regions from the VLT observations and nearly two third of the H [[ii]{}]{} regions from the 3.6m observations. Element abundances for the latter galaxies were derived with the semiempirical strong-line method. ]{} [ From our new observations we find that the oxygen abundance in 61 out of the 66 H [[ii]{}]{} regions of our sample ranges from 12 + log O/H = 7.05 to 8.22. Our sample includes 27 new galaxies with 12 + log O/H $<$ 7.6 which qualify as extremely metal-poor star-forming galaxies (XBCDs). Among them are 10 H [[ii]{}]{} regions with 12 + log O/H $<$ 7.3. The new sample is combined with a further 93 low-metallicity galaxies with accurate oxygen abundance determinations from our previous studies, yielding in total a high-quality spectroscopic data set of 154 H [[ii]{}]{} regions. 9000 more galaxies with oxygen abundances, based mainly on the $T_{\rm e}$-method, are compiled from the SDSS. Photometric data for all galaxies of our combined sample are taken from the SDSS database while distances are from the NED. Our data set spans a range of 8 mag with respect to its absolute magnitude in SDSS $g$ (–12 $\ga M_g \ga$ –20) and nearly 2 dex in its oxygen abundance (7.0$\la$12 + log O/H$\la$8.8), allowing us to probe the $L-Z$ relation in the nearby universe down to the lowest currently studied metallicity level. The $L-Z$ relation established on the basis of the present sample is consistent with previous ones obtained for emission-line galaxies. ]{}
Introduction \[intro\]
======================
It was shown more than 20 years ago that low-luminosity dwarf galaxies have systematically lower metallicities compared to more luminous galaxies [@Lequeux1979; @Skillman1989; @RicherMcC1995]. This dependence, initially obtained for irregular galaxies, was later confirmed for galaxies of different morphological types [e.g. @Vila1992; @KobylZarit1999; @MelbourneSalzer2002; @Lee2004; @Pil2004; @Lee45mu2006].
The differences between giant and dwarf galaxies are usually attributed to different chemical evolution of galaxies with different masses [e.g. @Lequeux1979; @Tremonti2004; @Lee45mu2006; @Ellison2008; @Gavilan2009]. Thus, more efficient mechanisms seem to be at work in massive galaxies converting gas into stars and/or less efficient ones ejecting enriched matter into the galactic halo or even into the intergalactic medium. While the mass of a galaxy is one of the key physical parameters governing galaxy evolution, its determination is not easy and somewhat uncertain. Therefore, very often the luminosity, which is directly derived from observations, is used instead of the mass. In addition, some authors also use other global characteristics of a galaxy such as Hubble morphological type, rotation velocity, the gas mass fraction, surface brightness of the galaxy, to study correlations between metallicity and macroscopic properties of a galaxy [e.g. @Tremonti2004; @Pil2004].
Metallicity reflects the level of the gas astration in the galaxy. Hence, the metallicity of a galaxy depends strongly on its evolutionary state, specifically, on the fraction of the gas converted into stars. The metallicity in emission-line galaxies is defined in terms of the relative abundance of oxygen to hydrogen (usually 12 + log O/H) in the interstellar medium (ISM). Different mechanisms were considered in chemical evolution models to account for the low metallicity of dwarf galaxies, mainly 1) enriched galactic wind outflow which expels the newly synthesized heavy elements from the galaxy, resulting in slower enrichment of the galaxy ISM; 2) inflow of metal-poor intergalactic gas and its mixing with the galaxy ISM which results in a decrease of the ISM metallicity; and 3) the burst character of star formation with a very low level of astration between the bursts. In principle, chemical evolution models could predict the slope and scatter of the mass-metallicity $M-Z$ (and luminosity-metallicity $L-Z$) relations over a large range in mass (luminosity) and metallicity invoking the mechanisms mentioned above.
Usually, $L-Z$ relations are based on optical observations of nearby galaxies. However, it was shown in recent studies that the near infrared (NIR) range could be more promising for such studies. @Saviane2008 collected abundances obtained by means of the temperature-sensitive method and NIR luminosities for a sample of dwarf irregular galaxies with –20 $<$ $M_H$ $<$ –13, located in nearby groups of galaxies. They obtained a tight $M-Z$ relation with a low scatter of 0.11 dex around its linear fit. @Salzer05 [see also @Vaduvescu07] noted that the NIR luminosities are more fundamental than the $B$-band ones, since they are largely free of absorption effects and are more directly related to the stellar mass of the galaxy than optical luminosities. Nevertheless, this statement is correct only for galaxies with low and moderate SF activity. In emission-line galaxies with high star formation rate (SFR), such as blue compact dwarf (BCD) galaxies, the young, low mass-to-light ($M/L$) ratio stellar component may provide up to $\sim$50% of the total $K$ band emission [@Noeske03]. Additionally, in such systems the contribution of ionized gas to the total luminosity could be high [see e.g. @I97b; @P98; @P02], especially in the NIR range [see e.g. @Vanzi00; @Smith2009], and should be taken into account.
Recently, studies of the $L-Z$ relation were extended to larger volumes by including moderate- and high-redshift galaxies [@KobylZarit1999; @Contini2002; @Maier2004]. Variations of the $L-Z$ relation with redshift can provide a means to study the galaxy evolution with look-back time [see, e.g., @Kobulniky2003]. It was established in this study that the slopes and zero points of the $L-Z$ relation evolve smoothly with redshift. Its large dispersion has been attributed to galaxy evolution effects. However, these results and their comparison with those for nearby galaxies should be considered with caution. The high-redshift samples are biased by different selection criteria and metallicity calibrations as compared to the local galaxies. They consist on average of more luminous and higher metallicity galaxies. Star-forming dwarf galaxies in the relatively high-redshift (up to $z$ $\sim$ 1) samples are rare because of their intrinsic faintness. Moreover, due to the weakness of the \[O [iii]{}\]$\lambda$4363 emission line in the spectra of these galaxies, their abundance determinations are more uncertain and could lead to a large scatter in the $L-Z$ diagrams. This fact could be the reason for a larger scatter of high-redshift dwarf galaxies if the direct $T_{\rm e}$-method is used instead of the empirical R$_{23}$ one [e.g., @Kakazu07].
In summary, it is difficult to obtain reliable metallicities over a large luminosity range in a homogeneous manner, i.e. employing a unique technique (e.g. the direct $T_{\rm e}$-method), even for nearby galaxies. Therefore, different methods for abundance determination are applied for galaxies of different types. The direct method is mainly used for nearby low-metallicity galaxies, while various empirical methods are used for nearby high-metallicity galaxies and for almost all high-redshift galaxies. The variety of methods results in significant differences in the $L-Z$ relations obtained with the direct $T_{\rm e}$-method and those based on strong emission line ratio calibrations, such as $R_{23}$, $P$-method, N2 and O3N2 methods. These differences were reported by many authors [e.g., @Pil2004; @Shi2005; @Hoyos2005; @Kakazu07].
Large surveys, such as the Two-Degree Field Galaxy Redshift Survey (2dFGRS) and Sloan Digital Sky Survey (SDSS), provide rich data sets for statistically improved
---
author:
- |
Nian-Sheng Tang[^1] Xiao-Dong Yan and Pu-Ying Zhao\
\
title: Exponentially tilted likelihood inference on growing dimensional unconditional moment models
---
[**Abstract**]{}: Growing-dimensional data with likelihood unavailable are often encountered in various fields. This paper presents a penalized exponentially tilted likelihood (PETL) for variable selection and parameter estimation for growing dimensional unconditional moment models in the presence of correlation among variables and model misspecification. Under some regularity conditions, we investigate the consistency and oracle properties of the PETL estimators of parameters, and show that the constrained PETL ratio statistic for testing a contrast hypothesis asymptotically follows the central chi-squared distribution. Theoretical results reveal that the PETL approach is robust to model misspecification. We also study high-order asymptotic properties of the proposed PETL estimators. Simulation studies are conducted to investigate the finite-sample performance of the proposed methodologies. An example from the Boston Housing Study is used as an illustration.\
[***Keywords***]{}: Growing-dimensional data analysis; Model misspecification; Unconditional moment models; Penalized exponentially tilted likelihood; Variable selection.
Introduction
============
Exponentially tilted (ET) likelihood (Imbens, Spady and Johnson, 1998) is a useful nonparametric approach to evaluate estimates and confidence regions of unknown parameters in unconditional moment models of the form $E\{g(x;\theta)\}=0$, which provides a unified approach for parameter estimation in a class of statistical models with likelihood function unavailable, where $g(x;\theta)$ is a vector-valued nonlinear function of a random vector $x$ and a parameter vector $\theta$. The merits of the ET likelihood include (i) it behaves better than empirical likelihood under model misspecification (Schennach, 2007), that is, the ET likelihood is robust to model misspecification, (ii) it allows a computationally convenient treatment of misspecified models (Kitamura, 2000), and (iii) it is flexible in incorporating auxiliary information. Hence, several authors, for example, Schennach (2005, 2007), Zhu et al. (2009) and Caner (2010), discussed its properties and applications when the number of parameters is fixed and less than or equal to sample size.
Growing-dimensional parametric or semiparametric models are widely used to make statistical inference on complicated data sets such as longitudinal and panel data in econometrics (Fan and Peng, 2004). It is commonly assumed that only a small number of covariates actually contribute to the considered models, which leads to the well-known sparse models for helping interpretation and improving prediction accuracy (Bradic, Fan and Wang, 2011). To this end, many penalized methods have been developed to simultaneously select the important covariates and estimate parameters in various statistical models when the number of parameters diverges. For example, Fan and Peng (2004) investigated the nonconcave penalized likelihood with a growing number of nuisance parameters in a linear regression model; Lam and Fan (2008) presented a profile-kernel likelihood inference in a linear regression model; Wang, Li and Leng (2009) studied shrinkage tuning parameter selection; Zou and Zhang (2009) proposed an adaptive elastic-net procedure for a linear regression model; Li, Peng and Zhu (2011) investigated asymptotic properties of a nonconcave penalized M-estimator in a sparse, diverging-dimensional, linear regression model; Caner and Zhang (2014) extended the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to generalized method of moments (GMMs); Caner, Han and Lee (2016) presented an adaptive elastic net GMM estimation for many invalid moment conditions. Recently, Leng and Tang (2012) presented a penalized empirical likelihood method in estimating equations, which can be used to improve the efficiency of parameter estimation by incorporating some auxiliary information when likelihood function is unavailable, with a diverging number of parameters, but their empirical likelihood method is sensitive to model misspecification. Also, to the best of our knowledge, there is little work done on extending the above mentioned approaches to unconditional moment models with a diverging number of parameters in the presence of model misspecification. More importantly, this extension is challenging in the presence of model misspecification and high correlation among variables because (i) the number of Lagrange multipliers used to obtain the solution to minimizing the ET likelihood function increases with sample size, (ii) the nonconvex optimization is involved (Leng and Tang, 2012), and (iii) there is a well-known ill-posed problem, i.e., the resulting estimator has very slow rate of convergence (see, e.g., Ai and Chen, 2003; Hall and Horowitz, 2005; Darolles, Fan, Florens and Renault, 2011; Chen and Pouzo, 2012).
In this paper, we develop a penalized ET (PET) likelihood procedure for parameter estimation, variable selection and statistical inference for unconditional moment models with a diverging number of parameters in the presence of model misspecification and high correlation among variables via the sieve method (Ai and Chen, 2003). With a proper penalty function and diverging rate of dimensionality, we demonstrate that (i) the resulting estimator possesses the advantages of the penalized likelihood approach, i.e., the PET method has the oracle properties (Fan and Li, 2001) that it identifies the true sparse structure of the considered model with probability tending to one and with the optimal efficiency; (ii) the resulting estimator has the advantages of the ET likelihood method, i.e., the PET method behaves better than the penalized empirical likelihood approach in the presence of model misspecification; (iii) the constrainedly profiled PET likelihood ratio statistic is asymptotically distributed as the chi-squared distribution indicating that the Wilks’ theorem holds, which can be used to test hypotheses and construct confidence regions of parameters of interest. In addition, we extend the high-order asymptotic properties of the ET estimator given in Schennach (2007) for a fixed number of parameters to the case that the number of parameters diverges; and we also establish selection consistency for NP dimension case.
The rest of this paper is organized as follows. Section 2 first introduces the PET likelihood, and then investigates the oracle properties of the proposed PET estimators, asymptotic chi-squared distribution, high-order asymptotic properties and selection consistency. Simulation studies are given in Section 3. An example from the Boston Housing Study is analyzed in Section 4. Some concluding remarks are given in Section 5. Proofs of Theorems are presented in Appendix.
Methods
=======
Exponentially tilted likelihood
-------------------------------
Suppose that $X_1,\ldots,X_n$ are independent and identically distributed (i.i.d.) random vectors from an unknown distribution $F(x)$ with $x\in \mathcal{X}\subset \mathcal{R}^{\iota}$. Without assuming a specific form of $F(x)$, we are interested in making inference on a $p\times 1$ vector of unknown parameters of interest, denoted by $\theta$, based on $r$ ($r\geq p$) functionally independent estimating functions $g(X_i;\theta)=(g_1(X_i;\theta),\ldots,g_r(X_i;\theta))^{{\!\top\!}}$ that satisfy the unconditional moment condition of the form: $E_{F_x}\{g(X_i;\theta_0)\}=0$ for $\theta_0\in\Theta\subset\mathcal{R}^p$ and $i=1,\ldots,n$, which is usually referred to as general estimating equations or unconditional moment models (Owen, 2001), where $\theta_0$ is the unique true value of $\theta$ and $E_{F_x}$ denotes the expectation taken with respect to $F(x)$. The selection of $g(X;\theta)$ is flexible; for details we refer to Leng and Tang (2012).
When $r=p$, one can obtain estimation of $\theta$ by solving the following unconditional moment conditions: $n^{-1}\sum_{i=1}^ng(X_i;\theta)=0$ (Leng and Tang, 2012). When $r>p$ and $p$ is fixed, one can employ empirical likelihood approach to obtain more efficient estimation of $\theta$ by combining available information (Qin and Lawless, 1994). However, when $r>p$ and $p$ is large, it is commonly assumed that only a small number of variables actually contribute to unconditional moment conditions, which leads to the sparsity pattern in unknown parameter vector $\theta$ and thus makes variable selection crucial (Bradic, Fan and Wang, 2011). To this end, Leng and Tang (2012) studied growing dimensional unconditional moment models via empirical likelihood approach, and presented a penalized empirical likelihood procedure for parameter estimation and variable selection. In what follows, we present an ET approach to investigate parameter estimation and variable selection for unconditional moment models with a growing number of parameters because the ET likelihood is a robust nonparametric tool to make statistical inference on unconditional moment models (Imbens et al., 1998; Owen, 2001) when unconditional moment models are misspecified.
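Before the formal definition, the following minimal sketch illustrates the inner (dual) problem behind exponential tilting for a toy moment model: for fixed $\theta$ the tilted weights take the form $w_i\propto\exp\{\lambda^{{\!\top\!}}g(X_i;\theta)\}$ with $\lambda$ minimizing the convex dual objective $n^{-1}\sum_i\exp\{\lambda^{{\!\top\!}}g(X_i;\theta)\}$, and at the minimizer the weighted moment condition $\sum_i w_i g(X_i;\theta)=0$ holds. The moment functions and data below are illustrative only; the full PET procedure additionally optimizes over $\theta$ and adds a penalty.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 200
x = rng.normal(loc=0.3, scale=1.0, size=n)        # toy data

def g(x, theta):
    # r = 2 over-identifying moment functions for a scalar theta (illustrative):
    # E[x - theta] = 0 and E[(x - theta)^2 - 1] = 0
    return np.column_stack([x - theta, (x - theta) ** 2 - 1.0])

def et_weights(theta):
    G = g(x, theta)                               # n x r matrix of moments
    dual = lambda lam: np.mean(np.exp(G @ lam))   # convex dual objective
    lam_hat = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x
    w = np.exp(G @ lam_hat)
    return w / w.sum(), lam_hat                   # tilted probabilities w_i

w, lam_hat = et_weights(theta=0.3)
print("lambda_hat            :", lam_hat)
print("sum_i w_i g(X_i;theta):", w @ g(x, 0.3))   # ~ 0: moment condition holds
print("max_i |n*w_i - 1|     :", np.abs(n * w - 1).max())
```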
For $i=1,\ldots,n$, let $w_i=dF(X_i)={\rm Pr}(\mathbb{X}_i=X_i)$, where $X_i$ is the observation of random vector $\mathbb{X}_i$. The ET likelihood can be defined as the Kullback-Leibler divergence between the empirical frequencies $1/n$ and $w_i$ subject to some restrictions. Following Imbens et al. (1998), the ET estimator $\hat
---
abstract: |
*Parameterized quantum circuits (PQC, aka, variational quantum circuits) are among the proposals for a computational advantage over classical computation of near-term (not error corrected) digital quantum computers. PQCs have to be “trained” — i.e., the expectation value function has to be maximized over the space of parameters.*
This paper deals with the number of samples (or “runs” of the quantum computer) which are required to train the PQC, and approaches it from an information-theoretic viewpoint. The main take-away is the disparity between the large amount of information contained in a single exact evaluation of the expectation value and the exponentially small amount contained in the random sample obtained from a single run of the quantum circuit.
**Keywords:** Near-term quantum computing; parameterized quantum circuits.
author:
- 'Evgenii Dolzhkov$^{a}$'
- 'Bahman Ghandchi$^{a,b}$'
- |
Dirk Oliver Theis$^{a,b}$\
$^a$ Theoretical Computer Science, University of Tartu, Estonia\
$^b$ Ketita Labs [OÜ]{}, Tartu, Estonia\
`ghandchi@`{`ketita.com`, `ut.ee`}, `dotheis@`{`ketita.com`, `ut.ee`}
date: |
Version: Fri Mar 29 17:49:52 CET 2019\
Compiled:
title: Information content of queries in training Parameterized Quantum Circuits
---
Introduction {#sec:intro}
============
Hybrid quantum-classical computing with parameterized (or variational) quantum circuits (PQCs) works by alternately running the parameterized quantum circuit on a digital, gate-based quantum computer, and updating parameters in classical hard- and software. The hybrid process aims to find parameter settings which optimize some objective function derived from the measurement results of the PQC, for example with the goal to find the ground state of the measurement Hamiltonian. This process has become known as “training” the PQC.
For the purpose of this paper, a Parameterized Quantum Circuit consists of a sequence of quantum operations, applied to a known initial state which we denote by $\ket0\bra0$, and followed by a measurement. Some of the quantum operations are unitaries of the form $$\label{eq:ham-unitary}
\rho \mapsto e^{-\pi i x_j H_j}\rho e^{\pi i x_j H_j}, \text{ $j=1,\dots,n$,}$$ where the $H_j$ are hermitian operators and $x\in\RR^n$ is the vector of parameters. For simplicity(!), we assume that the $H_j$ have $\pm1$ eigenvalues. (We allow more general dependence on the parameters in Section \[sec:eval-queries\].) We also assume that the observable in the final measurement has eigenvalues $\pm1$. Hence, a single run of the quantum circuit (with measurement) with parameters set to $x$ yields a random number in $\pm1$, whose expectation we denote by $f(x)$, and refer to as the *expectation value function* of the PQC. In this simplified setting, the training problem is this: $$\text{maximize } f(x) \text{ over $x\in\RR^n$.}$$ (Note that $n$ is the number of parameters, not the number of qubits.)
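To make the two access models defined below concrete, here is a minimal simulator sketch for a toy instance of this setting (one qubit, two parameters, $H_1=X$ and $H_2=Y$ Pauli operators, final measurement of $Z$; all of these choices are illustrative): an evaluation query returns $f(x)$ exactly, while a sample query returns a single $\pm1$ outcome with mean $f(x)$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H_list = [X, Y]                       # parameterized generators with +-1 eigenvalues

def f(x):
    """Expectation value function of the toy PQC (an 'evaluation query')."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)      # |0><0|
    for xj, Hj in zip(x, H_list):
        U = expm(-1j * np.pi * xj * Hj)
        rho = U @ rho @ U.conj().T
    return np.trace(Z @ rho).real                        # observable with +-1 eigenvalues

def sample_query(x):
    """A single run of the circuit: returns +-1 with mean f(x)."""
    return 1 if rng.random() < (1.0 + f(x)) / 2.0 else -1

x = np.array([0.13, 0.42])
runs = [sample_query(x) for _ in range(2000)]
print("exact f(x)          :", f(x))
print("mean of 2000 samples:", np.mean(runs))
```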
Even though, in applications, a good local maximum is often sufficient, training PQCs is known to be difficult for a variety of reasons. The least of it is that, as a non-concave maximization[^1] problem, the training problem is likely to be NP-hard; but classical neural network training has the same property, and it is not a huge problem there. More specific to the quantum case is the existence of “plateaus”: large regions of the parameter space where the gradient is close to 0 [@McClean-Boixo-Smelyanskiy-Babbush-Neven:barren:2018]. While training seems to work fine in practice with a small number of qubits, the exponential dependence on the number of qubits of indicators of “trouble” is worrisome. In this paper, we add one new worrying perspective to the discussion: The information content of the random output of a run of the PQC. For that we consider a setting which is very generous to the designer of a training algorithm: The algorithm is only ever used on a fixed $n$, and a fixed, finite number (depending on $n$) of functions $f_c$, $c\in\mathcal C$, all of which are known to the algorithm. The algorithm itself can be randomized. The algorithm has infinite computational resources; e.g., it can represent real numbers exactly, and make instantaneous computations on them (for parameters and expectation values).
Formally, we define the following. A *sample query* consists of the training algorithm setting the parameters to an $x\in\RR^n$ *ad libitum*, and then running once the quantum circuit with this setting, retrieving the resulting random number $F \in \{\pm1\}$ with $\Exp F=f(x)$. In contrast, in an *evaluation query* after setting the parameters *ad libitum,* the algorithm is given the real number $f(x)$ exactly.
The success of the algorithm is only measured in some definition of average-case — not worst-case — over $c\in \mathcal C$ and over its internal randomness. The following algorithm and theorem underline just how ridiculously generous the computational model is.
Pick a random parameter setting $x\in\RR^n$ \[step:random-x\]\
Query $f(x)$.\
Iterating over all $c\in \mathcal C$, find one with $f_c(x) = f(x)$\
Look up the parameter setting $x^*$ maximizing $f_c$ in a table\
output $x^*$.
\[thm:super\] Let $\mu$ be an arbitrary absolutely continuous probability measure on $\RR^n$. If in Step \[step:random-x\], $x$ is drawn according to $\mu$, then the Omnipotent Algorithm succeeds with probability 1.
The proof of this theorem, in Section \[sec:eval-queries\], will show that with probability 1 over the choice of $x$, the mapping $f_c \mapsto (x,f_c(x))$ is one-to-one.
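A toy, self-contained rendering of the Omnipotent Algorithm is sketched below, with an illustrative family of single-qubit circuits indexed by the choice of Pauli generators and a coarse grid search standing in for the precomputed table of maximizers; note that in this toy family distinct labels can define identical expectation-value functions, so identification is only up to such coincidences, which the families in the results below are taken to avoid.

```python
import itertools
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"X": X, "Y": Y, "Z": Z}
n = 2                                            # number of parameters

def f(c, x):
    """Expectation value of the circuit labelled by c = (P_1, ..., P_n)."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    for label, xj in zip(c, x):
        U = expm(-1j * np.pi * xj * paulis[label])
        rho = U @ rho @ U.conj().T
    return np.trace(Z @ rho).real

family = list(itertools.product("XYZ", repeat=n))        # 3^n known candidates

# "Table" of maximizers, precomputed here by a coarse grid search (stand-in only)
grid = np.linspace(0.0, 1.0, 41)
table = {c: max(itertools.product(grid, repeat=n), key=lambda x: f(c, x))
         for c in family}

def omnipotent(eval_query):
    x = rng.random(n)                        # Step 1: random parameter setting
    value = eval_query(x)                    # Step 2: one exact evaluation query
    for c in family:                         # Step 3: identify the circuit
        if np.isclose(f(c, x), value):
            return c, table[c]               # Steps 4-5: look up and output x*
    return None

hidden = ("X", "Y")                          # unknown circuit held by the "oracle"
c_found, x_star = omnipotent(lambda x: f(hidden, x))
print("identified circuit:", c_found, " proposed maximizer:", x_star)
```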
In information-theoretic terms, if $C$ is randomly chosen in $\mathcal C$, then a single evaluation query of $f_C$ at a random point contains all information about $f_C$: $$\label{eq:eval-q:cond-ent-0}
{\mathbb H}\bigl(f_C \mid (X,f_C(X)) \bigr) =0.$$
Our *Superman* algorithms with infinite memory and tables with worked-out solutions hit Kryptonite when we replace evaluation-query access by sample-query access — even when we allow the output to be off by a significant amount from the true maximum. We propose the following definition.
Consider a Las-Vegas (randomized) algorithm which is given sample-query access to one (unknown) element in a family $\mathcal C$ of PQCs with $n$ parameters. For $\alpha\in\RR_+$, we say that the algorithm *$\alpha$-succeeds* in the training problem if it outputs a parameter setting $x^*$ with $f(x^*) \ge \max_x f(x) -\alpha$, after performing a number of sample queries which depends on: the internal randomness of the algorithm; the random choice of $c$ uniformly in $\mathcal C$; and the randomness in the sample query results.
\[thm:sample\] There are constants $c>1$ and $\alpha \approx 1$ and a family of simple PQCs ($3^n$ for each number of qubits $n$) such that every $\alpha$-successful training algorithm, with probability $1-c^{-n}$, requires at least $c^n$ sample queries.
In terms of information content of queries: For any $m \ll c^n$, for the queries at the (random) parameter settings $X_1,\dots,X_m$ that a randomized training algorithm performs, and the corresponding sample query results $Q_1,\dots,Q_m$, if $C \in \mathcal C$ is chosen uniformly at random, we have $$\label{eq:sample-q:mutinf-exp-small}
{\mathbb I}\bigl(C : (X_1,Q_1,\dots,X_m,Q_m) \bigr) \le m 2^{-\Omega(n)}{\mathbb H}(C).$$
Clearly, the difference in performance between the evaluation-query and sample-query settings lies in the information content of the queries, (\[eq:eval-q:cond-ent-0\]) vs. (\[eq:sample-q:mutinf-exp-small\]). The contribution of this paper lies in bringing the consideration of the information content of queries to the table with regard to algorithms and lower bounds for PQC training.
#### This paper is organized as follows
In Section \[sec:eval-queries\], we prove Theorem \[thm:super\]; Section \[sec:sample-queries\] is dedicated to the proof
---
abstract: 'We give a geometrical construction of Connes spectral triples or noncommutative Dirac operators ${{ \slashed{D} }}$ starting with a bimodule connection on the proposed spinor bundle. The theory is applied to the example of $M_2({{\mathbb{C}}})$, and also applies to the standard $q$-sphere and the $q$-disk with the right classical limit and all properties holding except for ${{\mathcal{J}}}$ now being a twisted isometry. We also describe a noncommutative Chern construction from holomorphic bundles which in the $q$-sphere case provides the relevant bimodule connection.'
address: |
Dept of Mathematics, Swansea University\
Singleton Parc, Swansea SA2 8PP\
[ ]{}+\
Queen Mary, University of London\
School of Mathematics, Mile End Rd, London E1 4NS, UK
author:
- 'Edwin Beggs & Shahn Majid'
title: Spectral triples from bimodule connections and Chern connections
---
Introduction
============
A main difference between the well-known Connes approach to noncommutative geometry coming out of cyclic cohomology and the more constructive ‘quantum group’ approach to noncommutative geometry lies in the attitude towards the Dirac operator. In Connes’ approach this is defined axiomatically as an operator ${{ \slashed{D} }}$ on a Hilbert space which plays the role of Dirac operator on a spinor bundle and which is the starting point for Riemannian geometry, while in the quantum groups approach one builds up the geometry layer by layer starting with the differential algebra structure and often (but not necessarily) guided by quantum group symmetry, and arrives at ${{ \slashed{D} }}$ as an endpoint, normally after the Riemannian structure. This approach should also contain $q$-deformed and quantum group-related examples but it is known that these may take us beyond Connes axioms if we want to have the correct classical limit. For example, for the standard $q$-sphere the construction in [@DS-sphere] meets Connes’ axioms at some algebraic level but has spectral dimension 0 (the eigenvalues of the Dirac operator are distributed in a manner typical of a zero-dimensional manifold) and hence does not have the correct classical limit.
The present paper joins up these two approaches, namely we show how within the constructive approach we can naturally obtain spectral triples, at least up to issues of functional analysis, from a bimodule connection on a chosen vector bundle (thought of as a ‘spinor bundle’), having fixed a first order differential calculus for our space and a ‘Clifford action’ ${{\triangleright}}$ of its 1-forms on the bundle. The latter plays the role of the Clifford structure. Our construction is still quite general and we don’t assume that the bundle is associated to a quantum frame bundle and connection induced by a quantum ‘spin’ connection on it as per the classical case, although that will be the case in the $q$-sphere example.
An outline of the paper is as follows. In Section 2.1, we recall Connes’ axioms [@Con; @ConMar] for a real spectral triple. Then in Section 2.2 we provide our main result, Theorem \[sptripres\], which constructs examples of these from bimodule connections at an algebraic level, i.e. before worrying about adjoints. Section 2.3 establishes further constraints on the bimodule connection and inner product data to have ${{ \slashed{D} }}$ hermitian and ${{\mathcal{J}}}$ an (antilinear) isometry. Section 2.4 completes the general theory with an explanation of how varying the bimodule connection amounts to an inner fluctuation of the spectral triple in the sense of Connes[@ConMar].
One of the first ingredients in Section 2.2 is that the ‘commutativity condition’ in Connes’ axioms (see (4) in our recap below) can be seen as making the Hilbert space ${{\mathcal{S}}}$ a bimodule [@LordDirac], see also [@Barrmatrix]. However, our notion of bimodule connection means a single (say, left) connection $\nabla$ which admits a modified right-connection rule via a generalised braiding [@Mou; @DV1; @DV2; @MMM; @Sitarz; @BegMa3; @BegMa4; @BegMa5; @MaTao]. This allows for connections on tensor products of bimodules which will be critical for what follows and is very different from what is meant by ‘bimodule connection’ in [@LordDirac], which comes from [@CuntzQuillen] and uses two unrelated connections, one left and one right, on a bimodule. Classically, the latter reduces to defining two unrelated connections on the same bundle and is not what we need. Specifically, the lack of relation between the left and right structures means that the antilinear ${{\mathcal{J}}}$ operator for the reality condition for Connes’ definition of Dirac operator could not be studied. In the context of what we mean by bimodule connections, another main tool in Section 2.2 is a conjugate bimodule whereby the antilinear map ${{\mathcal{J}}}:{{\mathcal{S}}}\to {{\mathcal{S}}}$ is formulated in terms of a linear map $j:{{\mathcal{S}}}\to \overline{{{\mathcal{S}}}}$. We use our previous work [@BegMa3] for the conjugate bimodule connection and related matters. Although one could view the use of bar categories here as a bookkeeping device to keep explicit track of anti/linearity, it is essential for tensor product operations like ${{\rm id}}{\otimes}j$ to make sense. In the context of general monoidal categories, the idea of bar category can be less trivial [@BegMa2], but it is very useful even in the present case of complex vector spaces and some antilinear maps.
Section 3 shows how the theory works on three examples. Section 3.1 covers the finite geometry of $2\times 2$ matrices $M_2({{\mathbb{C}}})$ as ‘coordinate algebra’. This is of course very well studied and we refer to [@Barrmatrix] for a recent treatment of spectral triples here. In our approach we start with a natural $*$-differential calculus $\Omega^1$ which is 2-dimensional over the algebra. As it happens we take the same bimodule for ${{\mathcal{S}}}$, i.e. 2-spinors. We take a natural choice of ${{\triangleright}}$ in this context and fixing this data we find a unique bimodule connection that meets our requirements of Section 2. This results in a single spectral triple which we compute as ${{ \slashed{D} }}={1\over 2}\gamma^2{\otimes}[\gamma^2,\ ]-{1\over 2}\gamma^1{\otimes}[\gamma^1,\ ]$ where $\gamma^i={\mathrm{i}}\sigma^i$ in terms of Pauli matrices. The commutators are inner derivations or ‘vector fields’ on $M_2({{\mathbb{C}}})$ and uniqueness means that fluctuations of this would entail a change of either the differential structure or the Clifford structure.
Section 3.2 covers the $q$-sphere ${{\mathbb{C}}}_q[S^2]$ with the geometrically correct spin bundle ${{\mathcal{S}}}={{\mathcal{S}}}_+\oplus{{\mathcal{S}}}_-$ given by $q$-monopole sections of charges $\pm 1$ as used in [@Ma:rieq]. This uses the standard 2D differential calculus coming from the 3D one[@Wor] on ${{\mathbb{C}}}_q[SU_2]$, a Clifford action ${{\triangleright}}$ given by the holomorphic structure introduced in [@Ma:rieq] and a $q$-monopole principal connection [@BrzMaj:gau], all of which led to a $q$-deformed ${{ \slashed{D} }}$ in a quantum frame bundle approach. Our new result is that the relevant covariant derivative on ${{\mathcal{S}}}$ is in fact a bimodule connection and we find a ${{\mathcal{J}}}$ and inner product (given by the Haar integral) so that all the axioms (1)-(6) of a real spectral triple of dimension 2 are satisfied at the pre-functional analysis level except for one: we find that ${{\mathcal{J}}}$ is necessarily not an isometry but some kind of twisted $q$-isometry in the sense $${{\langle}}\!{{\langle}}{{\mathcal{J}}}(\phi),{{\mathcal{J}}}(\psi){{\rangle}}\!{{\rangle}}=q^{\pm 1} {{\langle}}\!{{\langle}}\varsigma^{-1}(\psi),\phi{{\rangle}}\!{{\rangle}},\quad\forall \phi,\psi\in {{\mathcal{S}}}_\pm$$ where the brackets are the Hilbert space inner product and $\varsigma$ is the automorphism that makes the Haar integral a twisted trace in the sense of [@Murphy]. We identified ${{\mathcal{S}}}_\pm$ with degree $\mp1$ subspaces of ${{\mathbb{C}}}_q[SU_2]$ under the $U(1)$ action of the quantum principal bundle. More precisely, we obtain a 1-parameter family of ${{ \slashed{D} }}$ where a parameter $\beta$ extends the Clifford action from the canonical choice $\beta=1$ in [@Ma:rieq]. Our construction is different from another attempt at the $q$-sphere Dirac operator with 2D spinor space [@DLPS-sphere], where the ‘first
---
abstract: 'This paper gives an explicit method for computing the resultant of any sparse unmixed bivariate system with given support. We construct square matrices whose determinant is exactly the resultant. The matrices constructed are of hybrid Sylvester and Bézout type. The results extend those in [@Khe] by giving a complete combinatorial description of the matrix. Previous work by D’Andrea [@D] gave pure Sylvester type matrices (in any dimension). In the bivariate case, D’Andrea and Emiris [@DE] constructed hybrid matrices with one Bézout row. These matrices are only guaranteed to have determinant some multiple of the resultant. The main contribution of this paper is the addition of new Bézout terms allowing us to achieve exact formulas. We make use of the exterior algebra techniques of Eisenbud, Fl[ø]{}ystad, and Schreyer [@ES; @EFS].'
address: 'Department of Mathematics, UC Berkeley, Berkeley CA, USA'
author:
- 'Amit Khetan [^1]'
bibliography:
- 'jsc02.bib'
title: The Resultant of an Unmixed Bivariate System
---
Introduction
============
Let $f_1, \dots, f_{n+1} \in \mathbb{C}[x_1, x_1^{-1}, \dots, x_n,
x_n^{-1}]$ be Laurent polynomials in $n$ variables with the same Newton polytope $Q \subset \mathbb{R}^n$. Let $A = Q \cap
\mathbb{Z}^n$. So we can write:
$$f_i = \sum_{\alpha \in A} C_{i\alpha}x^{\alpha}.$$
We will assume that $Q$ is actually $n$-dimensional, and furthermore that $A$ affinely spans $\mathbb{Z}^n$.
\[d:ares\] The $A$-[*resultant*]{} ${\rm Res}_A(f_1, \dots, f_{n+1})$ is the irreducible polynomial in the $C_{i\alpha}$, unique up to sign, which vanishes whenever $f_1, \dots, f_{n+1}$ have a common root in the algebraic torus $(\mathbb{C}^\ast)^n$.
The existence, uniqueness, and irreducibility of the $A$-resultant are proved in the book by Gelfand, Kapranov, and Zelevinsky [@GKZ]. The $A$-resultant, also called the sparse resultant, allows one to eliminate $n$ variables from $n+1$ unmixed equations. Hence, resultants can be quite useful in solving systems of polynomial equations [@CLO2]. It is an important problem to find efficiently computable, explicit formulas for the resultant.
When $n=1$, we are in the case of the classical resultant of two polynomials in one variable of the same degree. There are two formulas due to Sylvester and Bézout which represent the resultant as the determinant of an easily computable matrix. Sylvester’s matrix has entries that are either 0 or a coefficient of $f_1$ or $f_2$. The entries in Bézout’s matrix are linear in the coefficients of each of the $f_i$ hence quadratic overall.
Our work deals with the case $n=2$. We give a determinantal formula which is of hybrid Sylvester and Bézout type. A preliminary version of these results appeared in the ISSAC 2002 Proceedings [@Khe]. This paper makes the formula completely explicit and provides complete proofs. Our approach follows work by Jouanolou [@J] and Dickenstein and D’Andrea [@DD] who found formulas for the “dense” resultant, when the polytope $Q$ is a coordinate simplex of some degree. We make heavy use of new techniques by Eisenbud, Fl[ø]{}ystad and Schreyer [@ES; @EFS] relating resultants to complexes over an exterior algebra.
\[thm:blmtrx\] The resultant of a system $(f_1, f_2, f_3) \in \mathbb{C}[x_1, x_2,
x_1^{-1}, x_2^{-1}]$ with common Newton polygon $Q$ is the determinant of the block matrix: $$\label{e:blmtrx}
\begin{pmatrix}
B & L \\ \tilde{L} & 0 \end{pmatrix}.$$ The entries of $L$ and $\tilde{L}$ are linear forms, and the entries of $B$ are cubic forms in the coefficients $C_{i\alpha}$.
The columns of $B$ and $\tilde{L}$ are indexed by the lattice points in $Q$, the rows of $B$ and $L$ are indexed by the interior lattice points in $2 \cdot Q$, the matrix $\tilde{L}$ has three rows indexed by $\{ f_1, f_2, f_3 \}$, and the columns of the matrix $L$ are indexed by pairs $(f_i, a)$ where $i \in \{1,2,3\}$ and $a$ runs over the interior lattice points of $Q$. Each entry of $L$ and $\tilde L$ is either zero or is a coefficient of some $f_i$ and is determined in the following straightforward manner. The entry of $\tilde L$ in row $f_i$ and column $a$ is the coefficient of $x^a$ in $f_i$. The entry of $L$ in row $b$ and column $(f_i, a)$ is the coefficient of $x^{b-a}$ in $f_i$. The entries of the matrix $B$ are linear forms in [*bracket variables*]{}. A bracket variable is defined as
$$[abc] = \det \bmatrix C_{1a} & C_{1b} & C_{1c} \\
C_{2a} & C_{2b} & C_{2c} \\
C_{3a} & C_{3b} & C_{3c} \endbmatrix,$$
where $C_{ia}$ is the coefficient of $x^a$ in $f_i$. An explicit formula for $B$ is described in Section 3 below.
\[example\]
$$\begin{aligned}
f_1 &= C_{11} + C_{12}x + C_{13}y + C_{14}xy + C_{15}x^2y + C_{16}xy^2 \\
f_2 &= C_{21} + C_{22}x + C_{23}y + C_{24}xy + C_{25}x^2y + C_{26}xy^2\\
f_3 &= C_{31} + C_{32}x + C_{33}y + C_{34}xy + C_{35}x^2y + C_{36}xy^2\end{aligned}$$
The system above has Newton polygon as shown in Figure \[f:newt\]. We will show that the resultant of this system is the determinant of the matrix in Table \[tbl:matrix\].
---------- --------------- ---------- --------------- --------------- ---------- ---------- ---------- ----------
0 $[124]$ 0 $[126]-[234]$ $-[235]$ $-[236]$ $c_{11}$ $c_{21}$ $c_{31}$
0 0 0 0 0 0 $c_{12}$ $c_{22}$ $c_{32}$
0 $[126]-[135]$ 0 $[146]-[236]$ $[156]+[345]$ $[346]$ $c_{13}$ $c_{23}$ $c_{33}$
0 $-[145]$ 0 $[156]-[345]$ $[256]$ $[356]$ $c_{14}$ $c_{24}$ $c_{34}$
0 0 0 0 0 0 $c_{15}$ $c_{25}$ $c_{35}$
0 $[156]$ 0 $[356]$ $[456]$ 0 $c_{16}$ $c_{26}$ $c_{36}$
$c_{11}$ $c_{12}$ $c_{13}$ $c_{14}$ $c_{15}$ $c_{16}$ 0 0 0
$c_{21}$ $c_{22}$ $c_{23}$ $c_{24}$ $c_{25}$ $c_{26}$ 0 0 0
$c_{31}$ $c_{32}$ $c_{33}$ $c_{34}$ $c_{35}$ $c_{36}$ 0 0 0
---------- --------------- ---------- --------------- --------------- ---------- ---------- ---------- ----------
: Resultant matrix for Example \[example\][]{data-label="tbl:matrix"}
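The Sylvester-type blocks of this matrix can be generated mechanically from the description in Theorem \[thm:blmtrx\]. The following sketch does so for this example (using sympy; the lattice-point lists for $Q$ and $2Q$ are entered by hand for this hexagon rather than computed from a general polytope).

```python
import sympy as sp

# lattice points A of Q for Example [example], in the order used for c_{i1},...,c_{i6}
A = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
intQ = [(1, 1)]                                             # interior lattice points of Q
int2Q = [(1, 1), (2, 1), (1, 2), (2, 2), (3, 2), (2, 3)]    # interior lattice points of 2Q

C = {(i, a): sp.Symbol('c%d%d' % (i, k + 1)) for i in (1, 2, 3) for k, a in enumerate(A)}

def coeff(i, e):
    """Coefficient of x^e in f_i (zero if e lies outside the support A)."""
    return C.get((i, e), sp.Integer(0))

# entry of Ltilde in row f_i and column a is the coefficient of x^a in f_i
Ltilde = sp.Matrix(3, len(A), lambda r, c: coeff(r + 1, A[c]))

# entry of L in row b and column (f_i, a) is the coefficient of x^(b-a) in f_i
cols = [(i, a) for i in (1, 2, 3) for a in intQ]
L = sp.Matrix(len(int2Q), len(cols),
              lambda r, c: coeff(cols[c][0], (int2Q[r][0] - cols[c][1][0],
                                              int2Q[r][1] - cols[c][1][1])))

def bracket(a, b, c):
    """Bracket variable [abc]: 3x3 determinant of the coefficients of x^a, x^b, x^c."""
    return sp.Matrix(3, 3, lambda i, j: coeff(i + 1, (a, b, c)[j])).det()

print(Ltilde)                      # the bottom-left block of Table [tbl:matrix]
print(L)                           # the top-right block
print(bracket(A[0], A[1], A[3]))   # the bracket [124] appearing in B
```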
In Section \[s:toric\] we provide some preliminary results about toric varieties and their homogeneous coordinates which allow us to present our formula in Section \[s:formula\]. Section \[s:tate\] describes the exterior algebra techniques of Eisenbud, Schreyer, and Fl[ø]{}ystad. Section \[s:torictate\] applies these results to the toric setting, while Section \[s:proofs\]
---
abstract: 'An automorphism $\theta$ of a spherical building $\Delta$ is called *capped* if it satisfies the following property: if there exist both type $J_1$ and $J_2$ simplices of $\Delta$ mapped onto opposite simplices by $\theta$ then there exists a type $J_1\cup J_2$ simplex of $\Delta$ mapped onto an opposite simplex by $\theta$. In previous work we showed that if $\Delta$ is a thick irreducible spherical building of rank at least $3$ with no Fano plane residues then every automorphism of $\Delta$ is capped. In the present work we consider the spherical buildings with Fano plane residues (the *small buildings*). We show that uncapped automorphisms exist in these buildings and develop an enhanced notion of “opposition diagrams” to capture the structure of these automorphisms. Moreover we provide applications to the theory of “domesticity” in spherical buildings, including the complete classification of domestic automorphisms of small buildings of types $\sF_4$ and $\sE_6$.'
author:
- James Parkinson
- Hendrik Van Maldeghem
title: Opposition diagrams for automorphisms of small spherical buildings
---
Introduction {#introduction .unnumbered}
============
Let $\theta$ be an automorphism of a thick irreducible spherical building $\Delta$ of type $(W,S)$. The *opposite geometry* of $\theta$ is the set $\operatorname{\mathrm{Opp}}(\theta)$ of all simplices $\sigma$ of $\Delta$ such that $\sigma$ and $\sigma^{\theta}$ are opposite in $\Delta$. This geometry forms a natural counterpart to the more familiar fixed element geometry $\mathrm{Fix}(\theta)$; however, by comparison, very little is known about $\operatorname{\mathrm{Opp}}(\theta)$.
This paper is the continuation of [@PVM:17a], where we initiated a systematic study of $\operatorname{\mathrm{Opp}}(\theta)$ for automorphisms of spherical buildings. In particular in [@PVM:17a] we showed that if $\Delta$ is a thick irreducible spherical building of rank at least $3$ containing no Fano plane residues then $\operatorname{\mathrm{Opp}}(\theta)$ has the following weak closure property: if there exist both type $J_1$ and $J_2$ simplices in $\operatorname{\mathrm{Opp}}(\theta)$ then there exists a type $J_1\cup J_2$ simplex in $\operatorname{\mathrm{Opp}}(\theta)$. Automorphisms with this property are called *capped*, and the thick irreducible spherical buildings of rank at least $3$ with no Fano plane residues are called *large buildings*. Thus every automorphism of a large building is capped.
In the present paper we investigate $\operatorname{\mathrm{Opp}}(\theta)$ for the thick irreducible spherical buildings of rank at least $3$ containing a Fano plane residue. These are called the *small buildings*. In particular we show that, in contrast to the case of large buildings, uncapped automorphisms exist for all small buildings (with the possible exception of $\sE_8(2)$ where we provide conjectural examples).
A key tool in [@PVM:17a] was the notion of the *opposition diagram* of an automorphism $\theta$, consisting of the triple $(\Gamma,J,\pi)$, where $\Gamma$ is the Coxeter graph of $(W,S)$, $J$ is the union of all $J'\subseteq S$ such that there exists a type $J'$ simplex in $\operatorname{\mathrm{Opp}}(\theta)$, and $\pi$ is the automorphism of $\Gamma$ induced by $\theta$ (less formally, the opposition diagram is drawn by encircling the nodes $J$ of $\Gamma$). If $\theta$ is capped then this diagram turns out to encode a lot of information about the automorphism, essentially because it completely determines the partially ordered set $\mathcal{T}(\theta)$ of all types of simplices mapped onto opposite simplices by $\theta$. However for an uncapped automorphism the opposition diagram does not necessarily determine $\cT(\theta)$. For example in the polar space $\Delta=\sB_3(2)$ there are collineations $\theta_1$, $\theta_2$ and $\theta_3$ each with opposition diagram
[Diagram: the $\sB_3$ Coxeter graph with all three nodes encircled.]
(that is, each $\theta_i$ maps a vertex of each type to an opposite vertex) whose partially ordered sets $\mathcal{T}(\theta_i)$, for $i=1,2,3$, are the following (see Theorem \[thm:existenceBn(2)\] for explicit examples):
$\mathcal{T}(\theta_1)=\{\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}$ (all nonempty subsets of $\{1,2,3\}$, ordered by inclusion), $\mathcal{T}(\theta_2)=\mathcal{T}(\theta_1)\backslash\{\{1,2,3\}\}$, and $\mathcal{T}(\theta_3)=\mathcal{T}(\theta_2)\backslash\{\{1,2\}\}$.
Note that only $\theta_1$ is capped (hence, in particular, analogues of $\theta_2$ and $\theta_3$ cannot exist for polar spaces $\sB_3(\mathbb{F})$ with $|\mathbb{F}|>2$ by the main result of [@PVM:17a]).
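These closure properties can be checked mechanically. In the small sketch below the three families are transcribed from the posets described above (types are subsets of $\{1,2,3\}$), and cappedness is tested directly from its definition, namely closure of the set of types under unions; the output confirms that only $\theta_1$ is capped.

```python
from itertools import combinations

# the families T(theta_i) of types mapped onto opposite simplices, transcribed
# from the three posets described above
T1 = {frozenset(s) for r in (1, 2, 3) for s in combinations((1, 2, 3), r)}
T2 = T1 - {frozenset({1, 2, 3})}
T3 = T2 - {frozenset({1, 2})}

def is_capped(T):
    """An automorphism is capped iff its set of types is closed under unions."""
    return all(a | b in T for a in T for b in T)

for name, T in (("theta_1", T1), ("theta_2", T2), ("theta_3", T3)):
    J = frozenset().union(*T)
    print(name, "J =", sorted(J), "capped:", is_capped(T))
# only theta_1 is capped: e.g. {1,2} and {1,3} lie in T2, but their union {1,2,3} does not
```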
Thus the opposition diagram of an uncapped automorphism needs to be enhanced to properly understand these automorphisms. We achieve this by defining the *decorated opposition diagram* of an uncapped automorphism.
The full definition is given in Section \[sec:1\], however for the purpose of this introduction consider the following simplified situation. Suppose that $\theta$ is an automorphism with the property that the induced automorphism $\pi$ of the Coxeter graph $\Gamma$ is the opposition automorphism $w_0$. Then the *decorated opposition diagram* of $\theta$ is the quadruple $(\Gamma,J,K,\pi)$ where $(\Gamma,J,\pi)$ is the opposition diagram, and
$K=\{j\in J\mid \text{there exists a type $J\backslash\{j\}$ simplex mapped onto an opposite simplex by $\theta$}\}$.
Less formally, the decorated opposition diagram is drawn by encircling the nodes of $J$, and then shading those nodes of $K$. Thus, for example, the decorated opposition diagrams of the two uncapped automorphisms of $\sB_3(2)$ given above are
[the $\sB_3$ Coxeter graph with all three nodes encircled and the nodes of the respective set $K$ shaded, drawn once for $\theta_2$ and once for $\theta_3$.]
---
abstract: 'A set $X \subseteq {{\mathbb R}}$ is strongly meager if for every measure zero set $H$, $X+H \neq {{\mathbb R}}$. Let ${{\mathcal{SM}}}$ denote the collection of strongly meager sets. We show that assuming ${\operatorname{\mathsf {CH}}}$, ${{\mathcal{SM}}}$ is not an ideal.'
address:
- |
Department of Mathematics and Computer Science\
Boise State University\
Boise, Idaho 83725 U.S.A.
- |
Department of Mathematics\
Hebrew University\
Jerusalem, Israel
author:
- Tomek Bartoszynski
- Saharon Shelah
title: Strongly meager sets do not form an ideal
---
[^1]
[^2]
Introduction
============
In 1919 Borel wrote the paper [@Borel] in which he attempted to classify all measure zero subsets of the real line. In this paper he introduced a class of measure zero sets, which are now called strong measure zero sets. In the 1970s Galvin, Mycielski and Solovay found a characterization of strong measure zero sets that was formulated using only the concept of a first category set and of a translation. After replacing first category with measure zero, this allowed one to define the dual notion of a strongly meager set. It was expected that the global properties of the two families of sets would be similar. Several results listed below support this expectation. Nevertheless, the additive properties of the two families are different. It is well known that the family of strong measure zero sets forms an ideal, i.e. is closed under finite unions. The result of this paper is that, assuming the continuum hypothesis, the collection of strongly meager sets is not closed under finite unions.
In this paper we work exclusively in the space $2^\omega $ equipped with the standard product measure denoted as $\mu$. Let ${{\mathcal N}}$ and ${{\mathcal M}}$ denote the ideal of all $\mu$–measure zero sets, and meager subsets of $2^\omega $, respectively. For $x,y \in 2^\omega$, $x+y \in 2^\omega $ is defined as $(x+y)(n) = x(n)+y(n) \pmod 2$. In particular, $(2^\omega ,
\operatorname{+})$ is a group and $\mu$ is an invariant measure.
A set $X$ of real numbers, or more generally a metric space, has strong measure zero if for each sequence $\{\varepsilon_n: n \in \omega\}$ of positive real numbers there is a sequence $\{X_n: n \in \omega\}$ of subsets of $X$ whose union is $X$ and such that for each $n$ the diameter of $X_n$ is less than $\varepsilon_n$.
The family of strong measure zero subsets of $2^\omega $ is denoted by ${{\mathcal {SN}}}$.
The following characterization of strong measure zero is the starting point for our considerations.
\[solo\] The following are equivalent:
1. $X \in \mathcal {SN}$,
2. for every set $F \in {{\mathcal M}}$, $X+F \neq 2^\omega$. ${\hspace{0.1in} \square \vspace{0.1in}}$
This theorem indicates that the notion of strong measure zero should have its category analog. Indeed, we define after Prikry:
\[defstrmea\] Suppose that $X \subseteq 2^\omega $.
We say that $X$ is strongly meager if for every $H \in
{{\mathcal N}}$, $X+H \neq 2^\omega $. Let ${{\mathcal{SM}}}$ denote the collection of strongly meager sets.
Observe that if $z \not\in X+F=\{x+f: x \in X, f\in F\}$ then $X \cap
(F+z) = \emptyset$. In particular, a strong measure zero set can be covered by a translation of any dense $G_\delta $ set, and every strongly meager set can be covered by a translation of any measure one set.
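For instance, the observation follows in one line, using only that every element of $2^\omega$ is its own inverse: $$x \in X \cap (F+z) \;\Longrightarrow\; x = f+z \ \text{ for some } f\in F \;\Longrightarrow\; z = x+f \in X+F,$$ so $z \not\in X+F$ indeed forces $X \cap (F+z)=\emptyset$.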
If $X \subseteq 2^\omega $ is a group then the concepts of strong measure zero and strongly meager connect to the classical construction of a nonmeasurable set by Vitali (a selector of ${{\mathbb R}}/{{\mathbb Q}}$).
Suppose that $X \subseteq 2^\omega $ is a dense subgroup of $(2^\omega,+)$. Then
1. $X \in {{\mathcal{SM}}}$ if and only if every selector from $2^\omega / X$ is nonmeasurable.
2. $X \in {{\mathcal {SN}}}$ if and only if every selector from $2^\omega / X$ does not have the Baire property.
[[Proof]{}. ]{}The proof below requires the group $X$ to be infinite and the set $2^\omega
/ X$ to be infinite. A dense group will have these properties.
We will show only (1); the proof of (2) is analogous. Note that if $S$ is a selector from $2^\omega / X$ and $X$ is as above, then $S$ is nonmeasurable if and only if $S$ does not have measure zero.
$ \rightarrow $ Suppose that $X \in {{\mathcal{SM}}}$ and $H \in {{\mathcal N}}$. Let $x \not \in X+H$. It follows that $[x]_X \cap H =\emptyset$, hence no selector is contained in $H$.
$ \leftarrow$ Suppose that $X \not \in {{\mathcal{SM}}}$ and let $H \in {{\mathcal N}}$ be such that $X+H=2^\omega $. For each $x \in 2^\omega $, $[x]_X \cap H \neq
\emptyset$. It follows that we can choose a selector contained in $H$. ${\hspace{0.1in} \square \vspace{0.1in}}$
Note that $X \not \in {{\mathcal {SN}}}$ if there exists a meager set $F$ such that the family $\{F+x: x \in X\}$ covers $2^\omega $. Instead of the assignment $x {\mapsto}F+x$ we can consider a more general mapping $x {\mapsto}(H)_x$, where $H \subseteq 2^\omega \times
2^\omega $ is a Borel set such that $(H)_x = \{y:{\langle}x,y{\rangle}\in H\}\in {{\mathcal M}}$ for all $x \in 2^\omega $.
$X \in {\operatorname{\mathsf {COV}}}({{\mathcal M}})$ if for every Borel set $H \subseteq 2^\omega \times
2^\omega$ such that $(H)_x \in {{\mathcal M}}$ for all $x \in 2^\omega $, $$\bigcup_{x \in X} (H)_x \neq
2^\omega.$$ Similarly, $X \in {\operatorname{\mathsf {COV}}}({{\mathcal N}})$ if for every Borel set $H \subseteq 2^\omega \times
2^\omega$ such that $(H)_x \in {{\mathcal N}}$ for all $x \in 2^\omega $, $$\bigcup_{x \in X} (H)_x \neq
2^\omega.$$
Note that
${\operatorname{\mathsf {COV}}}({{\mathcal N}}) \subseteq {{\mathcal{SM}}}$ and ${\operatorname{\mathsf {COV}}}({{\mathcal M}}) \subseteq {{\mathcal {SN}}}$.
[[Proof]{}. ]{}Given $F \in {{\mathcal M}}$ let $H=\{(x,y): y \in F+x\}$. It is clear that $\bigcup_{x \in X} (H)_x = F+X$. ${\hspace{0.1in} \square \vspace{0.1in}}$
Families ${{\mathcal {SN}}}$ and ${{\mathcal{SM}}}$ as well as ${\operatorname{\mathsf {COV}}}({{\mathcal M}})$ and ${\operatorname{\mathsf {COV}}}({{\mathcal N}})$ are dual to each other and we are interested to what extent the properties of one family are shared by the dual one.
Below we present several results of that kind. The proofs of these results as well as quite a lot of additional material can be found in [@BJbook].
Let Borel Conjecture (${\operatorname{\mathsf {BC}}}$) be the assertion that there are no uncountable strong measure zero sets, and Dual Borel Conjecture (${\operatorname{\mathsf {DBC}}}$) be the assertion that there are no uncountable strongly meager sets.
Sierpinski showed that Borel Conjecture contradicts ${\operatorname{\mathsf {CH}}}$. His proof essentially yields the following:
Assume $ {\bf MA} $. Both ${\operatorname{\mathsf {COV}}}({{\mathcal M}})$ and ${\operatorname{\mathsf {COV}}}({{\mathcal N}})$ contain sets of size $
2^{\boldsymbol\aleph_0} $. In particular, both Borel Conjectures are false.
There are many weaker assumptions than $ {\bf MA} $ that contradict ${\operatorname{\mathsf {BC}}}$ or ${\operatorname{\mathsf {DBC}}}$. Nevertheless we have the following:
Borel Conjecture is consistent with ${{\operatorname{\mathsf {ZFC}}}}$.
Dual Borel Conjecture is consistent with ${{\operatorname{\mathsf {ZFC}}}}$.
---
abstract: 'We introduce a convolutional neural network for inferring a compact disentangled graphical description of objects from 2D images that can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled *graphics code*. The first decoder generates a volume, and the second decoder reconstructs the input image using a novel training regime that allows the *graphics code* to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.'
author:
- 'Edward Grant, Pushmeet Kohli, Marcel van Gerven'
bibliography:
- 'egbib.bib'
title: Deep Disentangled Representations for Volumetric Reconstruction
---
Introduction
============
Images depicting natural objects are 2D representations of an underlying 3D structure from a specific viewpoint in specific lighting conditions.
This work demonstrates a method for recovering the underlying 3D geometry of an object depicted in a single 2D image or video. To accomplish this we first encode the image as a separate description of the shape and transformation properties of the input such as lighting and pose. The shape description is used to generate a volumetric representation that is interpretable by modern rendering software.
State of the art computer vision models perform recognition by learning hierarchical layers of feature detectors across overlapping sub-regions of the input space. Invariance to small transformations to the input is created by sub-sampling the image at various stages in the hierarchy.
In contrast, computer graphics models represent visual entities in a canonical form that is disentangled with respect to various realistic transformations in 3D, such as pose, scale and lighting conditions. 2D images can be rendered from the graphics code with the desired transformation properties.
A long standing hypothesis in computer vision is that vision is better accomplished by inferring such a disentangled graphical representation from 2D images. This process is known as ‘de-rendering’ and the field is known as ‘vision as inverse graphics’ [@yuille2006vision].
One obstacle to realising this aim is that the de-rendering problem is ill-posed. The same 2D image can be rendered from a variety of 3D objects. This uncertainty means that there is normally no analytical solution to de-rendering. There are however, solutions that are more or less likely, given an object class or the class of all natural objects.
Recent work in the field of vision as inverse graphics has produced a number of convolutional neural network models that accomplish de-rendering [@kulkarni2015deep; @tatarchenko2015single; @yang2015weakly]. Typically these models follow an encoding / decoding architecture. The encoder predicts a compact 3D graphical representation of the input. A control signal is applied corresponding with a known transformation to the input and a decoder renders the transformed image. We use a similar architecture. However, rather than rendering an image from the graphics code, we generate a full volumetric representation.
Unlike the disentangled graphics code generated by existing models, which is only renderable using a custom trained decoder, the volumetric representation generated by our model is easily converted to a polygon mesh or other professional quality 3D graphical format. This allows the object to be rendered at any scale and with other rendering techniques available in modern rendering software.
Related work
============
Several models have been developed that generate a disentangled representation given a 2D input and output a new image subject to a transformation.
Kulkarni *et al*. proposed the Deep Convolutional Inverse Graphics Network (DC-IGN) trained using Stochastic Gradient Variational Bayes [@kulkarni2015deep]. This model encodes a factored latent representation of the input that is disentangled with respect to changes in azimuth, elevation and light source. A decoder renders the graphics code subject to the desired transformation as a 2D image. Training is performed with batches in which only a single transformation or the shape of the object is different. The activations of the graphics code layer chosen to represent the static parameters are clamped to the mean of the activations for that batch on the forward pass. On the backward pass the gradients for the corresponding nodes are set to their difference from this mean. The method is demonstrated by generating chairs and face images transformed with respect to azimuth, elevation and light source.
Tatarchenko *et al*. proposed a similar model that is trained in a fully supervised manner [@tatarchenko2015single]. The encoder takes a 2D image as input and generates a graphics code representing a canonical 3D object form. A signal is added to the code corresponding with a known transformation in 3D and the decoder renders a new image corresponding with that transformation. This method is also demonstrated by generating rotated images of cars and chairs.
Yang *et al*. demonstrated an encoder / decoder model similar to the above but utilize a recurrent structure to account for long-term dependencies in a sequence of transformations, allowing for realistic re-rendering of real face images from different azimuth angles [@yang2015weakly].
Spatial Transformer Networks (STN) allow for the spatial manipulation of images and data within a convolutional neural network [@jaderberg2015spatial]. The STN first generates a transformation matrix given an input, creates a grid of sampling points based on the transformation and outputs samples from the grid. The module is trained using back-propagation and transforms the input with an input dependent affine transformation. Since the output sample can be of arbitrary size, these modules have been used as an efficient down-sampling method in classification networks. STNs transform existing data by sampling but they are not generative, so cannot make predictions about occluded data, which is necessary when predicting 3D structure.
Girdhar *et al*. and Rezende *et al*. present methods for volumetric reconstruction from 2D images but do not generate disentangled representations [@girdhar2016learning; @rezende2016unsupervised].
The contribution of this work is an encoding / decoding model that generates a compact graphics code from 2D images and videos that is disentangled with respect to shape and the transformation parameters of the input, and that can also be used for volumetric reconstruction. To our knowledge this is the first work that generates a disentangled graphical representation that can be used to reconstruct volumes from 2D images. In addition, we show that Spatial Transformer Networks can be used to replace max-pooling in the encoder as an efficient sampling method. We demonstrate this approach by generating a compact disentangled graphical representation from single 2D images and videos of faces and chairs in a variety of viewpoint and lighting conditions. This code is used to generate volumetric representations which are rendered from a variety of viewpoints to show their 3D structure.
Model
=====
Architecture
------------
As shown in Figure \[fig:network\], the network has one encoder, a *graphics code* layer and two decoders. The *graphics code* layer is separated into a *shape code* and a *transformation code*. The encoder takes as input an 80 $\times$ 80 pixel color image and generates the *graphics code* following a series of convolutions, point-wise randomized rectified linear units (RReLU) [@xu2015empirical], down-sampling Spatial Transformer Networks and max pooling. Batch normalization layers are used after each convolutional layer to speed up training and avoid problems with exploding and vanishing gradients [@ioffe2015batch].
![**Network architecture:** The network consists of an encoder (A), a volume decoder (B) and an image decoder (C). The encoder takes as input a 2D image and generates a 3D *graphics code* through a series of spatial convolutions, down-sampling Spatial Transformer Networks and max pooling layers. This code is split into a *shape code* and a *transformation code*. The volume decoder takes the *shape code* as input and generates a prediction of the volumetric contents of the input. The image decoder takes the *shape code* and the *transformation code* as input and reconstructs the input image.[]{data-label="fig:network"}](2D23D2tail){width="\textwidth"}
The two decoders are connected to the *graphics code* by switches so that the message from the *graphics code* is passed to either one of the decoders. The first decoder is the volume decoder. The volume decoder takes the *shape code* as input and generates an $80 \times 80 \times 80$ voxel volumetric prediction of the encoded shape. This is accomplished by a series of volumetric convolutions, point-wise RReLU and volumetric up-sampling. A parametric rectified linear unit (PReLU) [@he2015delving] is substituted for the RReLU in the output layer. This is done to avoid the saturation problems with rectified linear units early in training but allows for learning an activation threshold later in training, corresponding with the positive-valued output targets.
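To make the wiring explicit, a minimal PyTorch-style sketch of this encoder / twin-tailed decoder layout is given below. The channel counts, code dimensions and the coarse $20^3$ voxel output are placeholders rather than the paper's settings, and the RReLU/PReLU activations, batch normalization and Spatial Transformer down-sampling described above are replaced by plain ReLU layers for brevity; only the split of the *graphics code* into a *shape code* and a *transformation code* and the two decoder heads follow the description.

```python
import torch
import torch.nn as nn

class TwinTailedNet(nn.Module):
    """Sketch of the encoder / twin-tailed decoder wiring described in the text;
    layer sizes are illustrative placeholders, not the paper's architecture."""
    def __init__(self, shape_dim=128, transf_dim=32):
        super().__init__()
        self.shape_dim = shape_dim
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),   # 80 -> 40
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),  # 40 -> 20
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 20 -> 10
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, shape_dim + transf_dim),       # graphics code
        )
        # volume decoder: shape code -> voxel grid (20^3 stand-in for the 80^3 output)
        self.volume_decoder = nn.Sequential(
            nn.Linear(shape_dim, 64 * 5 * 5 * 5), nn.ReLU(),
            nn.Unflatten(1, (64, 5, 5, 5)),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 5 -> 10
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),              # 10 -> 20
        )
        # image decoder: shape + transformation code -> reconstructed input image
        self.image_decoder = nn.Sequential(
            nn.Linear(shape_dim + transf_dim, 32 * 20 * 20), nn.ReLU(),
            nn.Unflatten(1, (32, 20, 20)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 20 -> 40
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),              # 40 -> 80
        )

    def forward(self, img):
        code = self.encoder(img)
        shape_code = code[:, :self.shape_dim]
        transf_code = code[:, self.shape_dim:]
        volume = self.volume_decoder(shape_code)
        recon = self.image_decoder(torch.cat([shape_code, transf_code], dim=1))
        return volume, recon

net = TwinTailedNet()
vol, recon = net(torch.randn(2, 3, 80, 80))
print(vol.shape, recon.shape)   # (2, 1, 20, 20, 20) and (2, 3, 80, 80)
```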
The second decoder reconstructs the input image with the correct pose and lighting, showing that pose and lighting parameters of the input are contained in the *graphics code*. The image decoder takes as input both the *shape code* and the *transformation code*, and generates a reconstruction of the original input image. This is accomplished by a series of spatial convolutions, point-wise RReLU, spatial up-sampling and point-wise PReLU in the final layer. During training, the backward pass from the image decoder to the *shape code* is blocked (see Figure
---
abstract: 'The rock-scissors-paper game, as the simplest model of an intransitive relation between competing agents, is a frequently quoted model to explain the stable diversity of competitors in the race for survival. When increasing the number of competitors we may face a novel situation, because besides the mentioned unidirectional predator-prey-like dominance a balanced or peer relation can emerge between some competitors. Utilizing this possibility, in the present work we generalize a four-state predator-prey type model where we establish two groups of species labeled by even and odd numbers. In particular, we introduce different invasion probabilities between and within these groups, which results in a tunable intensity of bidirectional invasion among peer species. Our study reveals an exceptional richness of pattern formation where five quantitatively different phases are observed by varying solely the strength of the mentioned inner invasion. The related transition points can be identified with the help of appropriate order parameters based on the spatial autocorrelation decay, on the fraction of empty sites, and on the variance of the species density. Furthermore, the application of diverse, alliance-specific inner invasion rates for different groups may result in the extinction of the pair of species where this inner invasion is moderate. These observations highlight that beyond the well-known and intensively studied cyclic dominance there is an additional source of complexity of pattern formation that has not been explored earlier.'
author:
- 'D. Bazeia'
- 'B.F. de Oliveira'
- 'A. Szolnoki'
title: |
Invasion controlled pattern formation\
in a generalized multi-species predator-prey system
---
INTRODUCTION
============
To explain the diversity among competing species or states is a fundamental problem not only in biology and ecology but also in the social sciences [@chesson_ares00; @hauert_s02; @aguiar_n09]. One of the possible mechanisms that explains the stable coexistence of unequal species is the presence of an intransitive relation, in other words cyclic dominance, between competitors [@laird_e08; @traulsen_jtb12]. In game theory this relation can be well described by the so-called rock-scissors-paper game [@szolnoki_jrsif14]. Paper is cut by scissors, scissors are crushed by rock, and finally rock is wrapped by paper. In this way the circle closes and establishes the above described relation. In the absence of a superior competitor all the mentioned members can survive and hence diversity is preserved [@bazeia_epl18].
Interestingly, this relation is not a merely abstract model, but can be directly detected in several real-life systems [@kirkup_n04; @kelsic_n15], including microbes [@paquin_n83; @kerr_n02], social amoebas [@shibasaki_prsb18], or even plant communities [@lankau_s07; @cameron_jecol09]. Significant scientific efforts have been made in the last decade which clarified the possible consequences of different variations of the basic model [@szabo_pr07; @wang_wx_pre11; @szczesny_pre14; @szolnoki_njp15; @frey_pa10; @szolnoki_pre16; @valyi_15]. In spatially structured populations the topology of interaction graph is proved to be a decisive factor which determines whether an oscillatory state emerges or not [@masuda_prsb07; @szabo_jpa04; @masuda_jtb08]. Furthermore, the mobility of competing species is identified as an important factor to maintain diversity [@reichenbach_n07; @bazeia_epl17; @mobilia_g16; @armano_srep17; @avelino_pre18], but some research groups also underline the nontrivial role of mutations [@mobilia_jtb10; @park_c18; @park_c18c; @nagatani_jtb19]. Additionally, a recent work, obtained from off-lattice simulations, revealed the critical role of density on the original problem of maintaining diversity [@avelino_epl18]. It is worth noting that cyclic dominance can also emerge in systems where the values of payoff matrix, which characterizes the basic relation of different microscopic states or strategies, do not necessarily predict such interaction. Instead, this relation could be the result of a collective behavior due to the limited interactions with neighbors in a spatial system where effective multi-point interactions emerge [@szolnoki_pre10b; @dobramysl_jpa18; @szolnoki_njp14; @gao_l_srep15b; @roman_jtb16; @szolnoki_epl15].
Naturally, the number of competing species is not necessarily limited to three, but can be extended to four, five [@roman_jsm12; @lutz_jtb13; @vukov_pre13; @avelino_pla14; @rulquin_pre14; @intoy_jsm13] or even more species [@szabo_jpa05; @avelino_pre14; @szabo_pre08b; @brown_pre17; @avelino_pla17; @esmaeili_pre18]. This makes the food-web more complex: the relation between two members is no longer restricted to a unidirectional predator-prey type, but a balanced, bidirectional relation can also emerge. This possibility allows new kinds of solutions, including alliances or associations, to emerge [@szabo_jpa05; @szabo_pre08]. Besides the topological complexity of the food-web, an additional freedom is the heterogeneity of invasion rates between species. In some cases the latter alone is capable of changing the final state significantly [@perc_pre07b; @masuda_jtb08; @he_q_pre10; @szolnoki_srep16b; @cazaubiel_jtb17; @liu_a_epl17].
In this work we follow this research avenue and generalize a previously introduced four-species model where every species has two prey in a cyclic manner [@avelino_pre12b]. As a result, some relations between species become unbiased or balanced because these peer species mutually invade each other. This fact allows us to distinguish the strengths of unidirectional and bidirectional invasions and to establish a tunable parameter that characterizes the inner relations of peer species. Our key observation is that the stationary pattern of the resulting evolutionary process can be changed substantially by tuning the inner invasion rate of peer species alone. The resulting phases can be distinguished quantitatively with the help of appropriate order parameters. These observations emphasize that not only the complex topology of a food-web, but also the varying invasion rates between related species, can be the source of diverse patterns of the stationary states.
THE MODEL
=========
In the following we generalize a previously introduced cyclically dominated May-Leonard-type model [@frey_pa10] of four species [@avelino_pre12b]. Initially, empty sites, labeled by 0, and all competing species, labeled by $i=1 \dots 4$, are distributed uniformly on an $L \times L$ square grid where periodic boundary conditions are applied. At each time step a randomly chosen active individual interacts with one of its four nearest-neighbor passive sites by executing one of the following elementary steps.
If the passive site is empty then the active individual reproduces by filling the empty site with probability $\mu$. When a motion step is applied, the active and passive individuals switch their positions with probability $m$. The last elementary step is the so-called predation step, in which the active predator kills the passive prey and leaves an empty site in the lattice.
Importantly, as an extension of the earlier introduced basic model [@avelino_pre12b], we distinguish different predation probabilities between species depending on whether their labels are odd or even. In particular, as Fig. \[def\] illustrates, an active species $i$ predates a passive species $i+1$ and generates an empty site with probability $p_1$. The predation between species $i$ and species $i+2$, however, happens with probability $p_2$. (Naturally, labels are always taken cyclically so that they remain in the range $i=1 \dots 4$.) In this way we can distinguish the predation strength between predator-prey pairs, where invasion is unidirectional, and between peer species, where bidirectional invasions can happen. The members of the latter pairs, like species 1 and 3, or species 2 and 4, are equally strong because they can mutually invade each other and keep a balanced relation, as stressed by the dashed arrows in Fig. \[def\]. Interestingly, such a peer pair can form a defensive alliance against an external predator species that would otherwise dominate one of the members of the pair. Just to give an example, the invasion of species 2 toward species 3 can be avoided if species 1 is present and protects its peer member, species 3.
![Invasions between competing species. Solid arrows indicate the unidirectional invasions between primary predator-prey species which happen with probability $p_1$, while dashed arrows indicate bidirectional invasions between peer species that happen with probability $p_2 \leq p_1$.[]{data-label="def"}](fig1)
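A minimal Monte Carlo sketch of one such elementary update is given below. It is our reading of the rules above: the way the three step types are weighted (motion with weight $m$, reproduction with weight $\mu$, predation otherwise) follows the relative weights quoted below, and the remaining details, such as the chosen value of $p_2$ and the initial state, are arbitrary placeholders.

```python
import numpy as np

def mc_step(lattice, p1, p2, mu=0.25, m=0.5, rng=np.random.default_rng()):
    """One elementary update of the four-species model: pick an active individual
    and a random nearest neighbour, then attempt motion, reproduction or predation.
    Sites hold 0 (empty) or a species label 1..4; boundaries are periodic."""
    L = lattice.shape[0]
    x, y = rng.integers(L, size=2)
    if lattice[x, y] == 0:                        # the active site must hold an individual
        return
    dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    active, passive = lattice[x, y], lattice[nx, ny]
    r = rng.random()
    if r < m:                                     # motion: swap active and passive sites
        lattice[x, y], lattice[nx, ny] = passive, active
    elif r < m + mu:                              # reproduction into an empty passive site
        if passive == 0:
            lattice[nx, ny] = active
    elif passive != 0:                            # predation
        diff = (passive - active) % 4
        if diff == 1 and rng.random() < p1:       # primary prey: i -> i+1
            lattice[nx, ny] = 0
        elif diff == 2 and rng.random() < p2:     # peer species: i <-> i+2
            lattice[nx, ny] = 0

rng = np.random.default_rng(0)
lat = rng.integers(0, 5, size=(100, 100))          # random initial state with empty sites
for _ in range(10 ** 5):
    mc_step(lat, p1=0.25, p2=0.10, rng=rng)
```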
Summing up our model definition, the simulation algorithm can be given as follows. At each time step an active site and a neighboring passive site are chosen randomly. Afterwards we decide whether a mobility, a reproduction, or a predation elementary step is executed. Their relative weights are: $m=0.5$, $\mu=0.25$
---
abstract: 'In this paper, we propose a new statistical inference method for massive data sets, which is very simple and efficient, combining the divide-and-conquer method and empirical likelihood. Compared with two popular methods (the bag of little bootstraps and the subsampled double bootstrap), we make full use of the data and reduce the computational burden. Extensive numerical studies and a real data analysis demonstrate the effectiveness and flexibility of our proposed method. Furthermore, the asymptotic properties of our method are derived.'
author:
- ' Xuejun MA [^1] Shaochen WANG [^2] Wang ZHOU [^3]'
title: ' **Statistical inference in massive datasets by empirical likelihood** '
---
> [*Keywords*]{}: Bootstrap; divide-and-conquer; hypothesis test; empirical likelihood.
> [*MSC2010 subject classifications*]{}: Primary 62G10; secondary 62G05.
Introduction
============
With the rapid development of science and technology, massive data can be collected at high speed, especially in the internet and financial fields. It is generally recognized that two major challenges in large-scale learning are estimation and inference, due to the large amount of computation involved.
For statistical inference on massive data sets, [@Kleiner2014] proposed the bag of little bootstraps (BLB) to assess the quality of estimators. However, they used only a small number of random subsets, and partial observations from each subset. This implies less efficiency in application. Therefore, [@Sengupta2016] developed the subsampled double bootstrap (SDB) method, which not only saves computation cost, but also uses more information from the full data than BLB. Compared with the traditional bootstrap (TB), BLB and SDB save computation cost. However, BLB and SDB have some disadvantages. Similarly to the traditional bootstrap, they still resample from the full dataset and repeat the whole process many times, so the computational cost remains high. On the other hand, they do not use the full data, since only about 63% of the data points are contained in each resample.
In addition, [@Wang2018] proposed a subsampling method to make inference for logistic regression. The subsampling method was first proposed by [@Ma2015] for linear regression. Generally speaking, it is a two-step subsampling algorithm. The first step is to get the weight of each data point. In the second step, the weighted estimator is obtained by combining the resampled subset with the subsampling weights. In order to get the optimal subsampling strategy, [@Wang2018] suggested two methods, minimum mean squared error (mMSE) and minimum variance-covariance (mVC). These methods make use of only part of the data and rely on weighted subsampling estimation. Although their estimation efficiency is high, their inference does not work well, since the subsampling method is by nature aimed at estimation. Furthermore, one has to estimate the variance-covariance matrix.
In this paper, we propose combining divide-and-conquer (DAC) and empirical likelihood (EL). As we know, DAC is a very effective estimation method for massive data. Firstly, it splits the entire dataset into $K$ subsets, and each subset is analyzed separately. Secondly, we combine all subset results via averaging. [@Chen2014] called it “split-and-conquer", and applied it to the generalized linear model with sparse structure. [@Shi2018] studied M-estimators with cubic rate of convergence by DAC, and proved that the resulting convergence rate is faster than that of the original M-estimator. We also refer to [@Zhang2013]. On the other hand, EL ([@Owen1988; @Owen1990; @Owen2001]) is a powerful nonparametric method to make inference on parameters of a population, such as the mean, quantiles and regression parameters, without assuming the form of the underlying distribution. We will take advantage of both DAC and EL. Compared with BLB and SDB, we not only use the full data information, but also save computation cost. Our method is very simple and efficient. It has two steps. In the first step, we split the sample into random subsets and obtain the estimate for each subset. In the second step, the estimates are regarded as one sample from a population, so that one can apply EL to this simplified sample.
The rest of this article is organized as follows. In Section \[sec2\], we explain our method in details, and establish its theoretical property. In Section \[sec3\], we assess the finite sample performance of proposed method via Monte Carlo simulations. A real data set is analyzed in Section \[sec4\]. All technical proofs of main results are postponed to Appendix.
Methodology {#sec2}
===========
Let $\mathcal{X}_{n}=\{X_{1}, \dots, X_{n}\}$ be a sample consisting of independent and identically distributed observations from some unknown $q$-dimensional distribution $F$. The parameter of interest is $\theta=\theta(F)\in {\mathbb{R}}^{p}$. Its estimator is ${\widehat}{\theta}_{n}={\widehat}{\theta}(\mathcal{X}_{n})$, which could be a maximum likelihood estimator, an M-estimator, a sample correlation coefficient, a U-statistic, among many others. In this paper, we mainly focus on inference for $\theta$. Here is our method.
We first divide the full data set into $K$ blocks randomly, say $\mathcal{X}_{1n_{1}},\dots, \mathcal{X}_{Kn_{K}}$, and then compute $\{ {\widehat}{\theta}_{1n_{1}}={\widehat}{\theta}(\mathcal{X}_{1n_{1}}), \dots {\widehat}{\theta}_{Kn_{K}}={\widehat}{\theta}(\mathcal{X}_{Kn_{K}})\}$. For simplicity, we assume $n_{j}=m$ for all $1\leq j\leq K$. The DAC estimator is defined by $$\widetilde{\theta}_{n}=\frac{1}{K}\sum_{j=1}^{K} {\widehat}{\theta}_{jm}.$$
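Because the procedure is this simple, the estimation step fits in a few lines of code. The sketch below (numpy; the mean as the block estimator and the simulated data are placeholders) returns both $\widetilde{\theta}_{n}$ and the block estimates; the latter are exactly the sample ${\widehat}{\theta}_{1m},\dots,{\widehat}{\theta}_{Km}$ to which the empirical likelihood ratio below is applied.

```python
import numpy as np

def dac_estimate(data, K, estimator, rng=np.random.default_rng()):
    """Divide-and-conquer: split the sample into K random blocks of equal size m,
    apply `estimator` to each block, and average the K block estimates."""
    n = len(data)
    m = n // K
    idx = rng.permutation(n)[: m * K].reshape(K, m)
    block_estimates = np.array([estimator(data[rows]) for rows in idx])
    return block_estimates.mean(axis=0), block_estimates

rng = np.random.default_rng(1)
x = rng.standard_normal((10 ** 6, 2)) + np.array([0.5, -1.0])   # toy data, true mean (0.5, -1)
theta_tilde, block_thetas = dac_estimate(x, K=1000, estimator=lambda b: b.mean(axis=0), rng=rng)
print(theta_tilde)
```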
Now, we discuss the asymptotic properties of $\widetilde{\theta}_n$. We assume that $p$ and $q$ are fixed and $K, m \to \infty$. Besides, we need the following assumptions.
\[assumption1\] $$\sqrt{m}({\widehat}{\theta}_{km}- \theta) = \frac{1}{\sqrt{m}}\sum_{i=1}^{m}\eta_{ki}+R_{k m},\quad k=1, \dots, K,$$ where $\eta_{ki}=(\eta_{ki1},\cdots,\eta_{kip})^\top$ and $R_{km}=(R_{km1},\cdots,R_{kmp})^\top$. Here $\eta_{k1}, \dots, \eta_{km}$ are independent and identically distributed vectors with zero mean, non-singular covariance matrix $\Sigma$ and ${\mathbb{E}}\|\eta_{k1}\|^4<\infty$. $R_{km}$ are the remainder terms, which satisfy $R_{km}=o_{p}(1)$.
\[assumption2\]
1. $R_n:=\frac{1}{\sqrt{K}}\sum_{k=1}^{K} R_{km} =o_{p}(1)$.
2. $\max_{1\leq k\leq K} \| R_{km}\|=o_{p}(m^{-\alpha})$ for some $\alpha>0$.
3. $K=O(m^{4\alpha})$.
Assumption \[assumption1\] is a commonly used condition. This is the Bahadur representation of ${\widehat}{\theta}_{n}$, on which there is a very rich literature. For example, [@He1996] studied the Bahadur representations for a general class of M-estimators. [@Arcones1996] explored the Bahadur representation of $L_{p}$ regression estimators. Assumption \[assumption2\] concerns the convergence rate of the remainder terms in the Bahadur representation. It implies that $$\sqrt{n}(\widetilde{\theta}_{n}- \theta) = \frac{1}{\sqrt{n}}\sum_{k=1}^K\sum_{i=1}^{m}\eta_{ki}+R_{n}.$$ This is a very mild condition.
\[theorem1\] Under Assumptions \[assumption1\]–\[assumption2\], we have $$\sqrt{n}\Big( \widetilde{\theta}_n -\theta \Big)\stackrel{d}{\longrightarrow}N(0,\Sigma),$$ as $m, K\to \infty$, where $\stackrel{d}{\longrightarrow}$ denotes convergence in distribution.
Theorem \[theorem1\] implies that if the usual estimator based on the whole sample has an asymptotically normal distribution, then the DAC estimator $ \widetilde{\theta}_n$ has the same asymptotic distribution. However, the covariance matrix $\Sigma$ is usually unknown. One has to estimate it first when applying Theorem \[theorem1\] to make further statistical inference, and sometimes its estimator is hard to obtain. So we propose to use EL instead, as follows.
Since the blocks are disjoint, ${\widehat}{\theta}_{1m}, \dots, {\widehat}{\theta}_{Km}$ are independent. We can regard them as one sample and apply EL to make inference on $\theta$. For notational convenience, let $Y_{km}={\sqrt{m}}{\widehat}{\theta}_{k m}$ and $\mu=\sqrt{m}\theta$. Hence, the empirical likelihood ratio for $\mu$ is given by $$\label{eq20}
\mathcal{R}(\mu)=\max\left\{ \prod_{k=1}^{K}K\omega_{k} ~\Big|~~\sum_{k=1}^{K}\omega_{k
---
abstract: 'We investigate the [*toroidal expanse*]{} of an embedded graph $G$, that is, the size of the largest toroidal grid contained in $G$ as a minor. In the course of this work we introduce a new embedding density parameter, the [*stretch*]{} of an embedded graph $G$, and use it to bound the toroidal expanse from above and from below within a constant factor depending only on the genus and the maximum degree. We also show that these parameters are tightly related to the planar [*crossing number*]{} of $G$. As a consequence of our bounds, we derive an efficient constant factor approximation algorithm for the toroidal expanse and for the crossing number of a surface-embedded graph with bounded maximum degree.'
author:
- 'Markus Chimani[^1]'
- 'Petr Hliněný [^2]'
- 'Gelasio Salazar[^3]'
title: 'Toroidal Grid Minors and Stretch in Embedded Graphs[^4] '
---
[**Keywords:**]{} Graph embeddings, compact surfaces, face-width, edge-width, toroidal grid, crossing number, stretch
[**AMS 2010 Subject Classification:**]{} 05C10, 05C62, 05C83, 05C85, 57M15, 68R10
Introduction {#sec:intro}
============
In their development of the Graph Minors theory towards the proof of Wagner’s Conjecture [@RoSeGMXX], Robertson and Seymour made extensive use of surface embeddings of graphs. Robertson and Seymour introduced parameters that measure the density of an embedding, and established results that are not only central to the Graph Minors theory, but are also of independent interest. We recall that the [*face-width*]{} $\fw(G)$ of a graph $G$ embedded in a surface $\Sigma$ is the smallest $r$ such that $\Sigma$ contains a noncontractible closed curve (a [*loop*]{}) that intersects $G$ in $r$ points.
\[thm:fw-minor\] For any graph $H$ embedded on a surface $\Sigma$, there exists a constant $c:=c(H)$ such that every graph $G$ that embeds in $\Sigma$ with face-width at least $c$ contains $H$ as a minor.
This theorem, and other related results, spurred great interest in understanding which structures are forced by imposing density conditions on graph embeddings. For instance, Thomassen [@Th94] and Yu [@Yu97] proved the existence of spanning trees with bounded degree for graphs embedded with large enough face-width. In the same paper, Yu showed that under strong enough connectivity conditions, $G$ is Hamiltonian if $G$ is a triangulation.
Large enough density, in the form of edge-width, also guarantees several nice coloring properties. We recall that the [*edge-width*]{} $\ew(G)$ of an embedded graph $G$ is the length of a shortest noncontractible cycle in $G$. Fisk and Mohar [@FM94] proved that there is a universal constant $c$ such that every graph $G$ embedded in a surface of Euler genus $g >0$ with edge-width at least $c\log{g}$ is $6$-colorable. Thomassen [@Th93] proved that larger (namely $2^{14g+6}$) edge-width guarantees $5$-colorability. More recently, DeVos, Kawarabayashi, and Mohar [@DKM08] proved that large enough edge-width actually guarantees $5$-choosability.
In a direction closer to our current interest, Fiedler et al. [@FHRR95] proved that if $G$ is embedded with face-width $r$, then it has $\floor{r/2}$ pairwise disjoint contractible cycles, all bounding discs containing a particular face. Brunet, Mohar, and Richter [@BMR96] showed that such a $G$ contains at least $\floor{(r-1)/2}$ pairwise disjoint, pairwise homotopic, non-separating (in $\Sigma$) cycles, and at least $\floor{(r-1)/8} -1$ pairwise disjoint, pairwise homotopic, separating, noncontractible cycles. We remark that throughout this paper, “homotopic” refers to “freely homotopic” (that is, not to “fixed point homotopic”).
For the particular case in which the host surface is the torus, Schrijver [@Sc93] unveiled a beautiful connection with the geometry of numbers and proved that $G$ has at least $\floor{3r/4}$ pairwise disjoint noncontractible cycles, and proved that the factor $3/4$ is best possible.
The [*toroidal $p \times q\>$-grid*]{} is the Cartesian product $C_p\Box C_q$ of the cycles of sizes $p$ and $q$. See Figure \[fig:torgrid\]. Using results and techniques from [@Sc93], de Graaf and Schrijver [@dS94] showed the following:
\[thm:deGS\] Let $G$ be a graph embedded in the torus with face-width $\fw(G)=r\ge 5$. Then $G$ contains the toroidal $\floor{2r/3} \times \floor{2r/3}\>$-grid as a minor.
De Graaf and Schrijver also proved that $\floor{2r/3}$ is best possible, by exhibiting (for each $r\ge 3$) a graph that embeds in the torus with face-width $r$ and that does not contain the toroidal $(\floor{2r/3}+1) \times (\floor{2r/3}+1)\>$-grid as a minor. As they observe, their result shows that $c=\ceil{3m/2}$ is the smallest value that applies in (Robertson-Seymour’s) Theorem \[thm:fw-minor\] for the case of $H=C_m\Box C_m$.
Our focus: toroidal expanse, stretch, and crossing number
---------------------------------------------------------
Along the lines of the aforementioned de Graaf-Schrijver result, our aim is to investigate the largest size (meaning the number of vertices) of a toroidal grid minor contained in a graph $G$ embedded in an arbitrary orientable surface of genus greater than zero. We do not restrict ourselves to square proportions of the grid and define this parameter as follows.
\[def:texpanse\] The [*toroidal expanse*]{} of a graph $G$, denoted by $\Tex(G)$, is the largest value of $p\cdot q$ over all integers $p,q\geq3$ such that $G$ contains a toroidal $p \times q\>$-grid as a minor. If $G$ does not contain $C_3\Box C_3$ as a minor, then let $\Tex(G)=0$.
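As a concrete illustration of the object in Definition \[def:texpanse\], the following minimal Python sketch (ours, not part of the paper; it assumes the `networkx` package) builds the toroidal $p \times q\>$-grid $C_p\Box C_q$ as the Cartesian product of two cycles and reports its size.

```python
import networkx as nx

def toroidal_grid(p: int, q: int) -> nx.Graph:
    """Toroidal p x q grid: the Cartesian product C_p [] C_q of two cycles."""
    if p < 3 or q < 3:
        raise ValueError("the nontrivial range in the definition is p, q >= 3")
    return nx.cartesian_product(nx.cycle_graph(p), nx.cycle_graph(q))

G = toroidal_grid(4, 5)
# C_p [] C_q is 4-regular on p*q vertices, hence has 2*p*q edges.
print(G.number_of_nodes(), G.number_of_edges())  # 20 40
```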
Our interest is both in the structural and the algorithmic aspects of the toroidal expanse.
The “bound of nontriviality” $p,q\geq3$ required by Definition \[def:texpanse\] is natural in view of toroidal embeddability: the degenerate cases $C_2\Box C_q$ are planar, while $C_p\Box C_q$ has orientable genus one for all $p,q\geq3$. It is not difficult to combine results from [@BMR96] and [@dS94] to show that for each integer $g>0$ there is a constant $c:=c(g)$ with the following property: if $G$ embeds in the orientable surface $\Sigma_g$ of genus $g$ with face-width $r$, then $G$ contains a toroidal $c\cdot r \times c\cdot r\>$-grid as a minor; that is, $\Tex(G) = \Omega(r^2)$.
On the other hand, it is very easy to come up with a sequence of graphs $G$ embedded in a fixed surface with face-width $r$ and arbitrarily large $\Tex(G)/ r^2$: it is achieved by a natural toroidal embedding of $C_r \Box C_{q}$ for arbitrarily large $q$. This inadequacy of face-width to estimate the toroidal expanse of an embedded graph is to be expected, due to the one-dimensional character of this parameter.
To remedy this, we introduce a new density parameter of embedded graphs that captures the truly two-dimensional character of our problem: the [*stretch of an embedded graph*]{} of Definition \[def:stretch\]. Using this tool, we establish our main result, a tight two-way relationship between the toroidal expanse of a graph $G$ in an orientable surface and its [*crossing number*]{} $\crg(G)$ in the plane. We furthermore provide an approximation algorithm for both these numbers under the assumption of a sufficiently dense embedding. A simplified summary of the main results follows:
\[thm:main-overview\] Let $\Sigma$ be an orientable surface of fixed genus $g>0$, and let $\Delta$ be an integer. There exist constants $r_0,c_0,c_1,c_2>0$, depending only on $g$ and $\Delta$, such that the following holds: If $G$ is a graph of maximum degree $\Delta$ embedded in $\Sigma$ with face-width at least $r_0$, then
- $c_0\cdot\crg(G)
---
abstract: 'In this paper, we propose a new primal-dual algorithm for minimizing $f(\vx)+g(\vx)+h(\vA\vx)$, where $f$, $g$, and $h$ are proper lower semi-continuous convex functions, $f$ is differentiable with a Lipschitz continuous gradient, and $\vA$ is a bounded linear operator. The proposed algorithm has some famous primal-dual algorithms for minimizing the sum of two functions as special cases. E.g., it reduces to the Chambolle-Pock algorithm when $f=0$ and the proximal alternating predictor-corrector when $g=0$. For the general convex case, we prove the convergence of this new algorithm in terms of the distance to a fixed point by showing that the iteration is a nonexpansive operator. In addition, we prove the $O(1/k)$ ergodic convergence rate in the primal-dual gap. With additional assumptions, we derive the linear convergence rate in terms of the distance to the fixed point. Compared to other primal-dual algorithms for solving the same problem, this algorithm extends the range of acceptable parameters to ensure its convergence and has a smaller per-iteration cost. The numerical experiments show the efficiency of this algorithm.'
author:
- Ming Yan
bibliography:
- 'PM3O.bib'
date: 'Received: date / Accepted: date'
title: 'A new primal-dual algorithm for minimizing the sum of three functions with a linear operator[^1] '
---
Introduction
============
This paper focuses on minimizing the sum of three proper lower semi-continuous convex functions in the form of $$\begin{aligned}
\label{for:main_problem}
\vx^* \in \argmin_{\vx\in\cX} f(\vx) + g(\vx) + h\square l(\vA\vx),\end{aligned}$$ where $\cX$ and $\cS$ are two real Hilbert spaces, $h\square l:\cS\mapsto (-\infty,+\infty]$ is the infimal convolution defined as $h\square l(\vs)=\inf_{\vt}h(\vt)+l(\vs-\vt)$, $\vA:\cX\mapsto \cS$ is a bounded linear operator. $f:\cX\mapsto \vR$ and the conjugate function[^2] $l^*:\cS\mapsto\vR$ are differentiable with Lipschitz continuous gradients, and both $g$ and $h$ are proximal, that is, the proximal mappings of $g$ and $h$ defined as $$\begin{aligned}
\prox_{\lambda g}(\widetilde \vx)=(\vI+\lambda\partial g)^{-1}(\widetilde\vx):=\argmin_\vx~ \lambda g(\vx)+{1\over 2}\|\vx-\widetilde\vx\|^2\end{aligned}$$ have analytical solutions or can be computed efficiently. When $l(\vs)=\iota_{\{\vzero\}}(\vs)$ is the indicator function that returns zero when $\vs=\vzero$ and $+\infty$ otherwise, the infimal convolution $h\square l=h$, and the problem becomes $$\begin{aligned}
\vx^* \in \argmin_{\vx\in\cX} f(\vx) + g(\vx) + h(\vA\vx).\end{aligned}$$ A wide range of problems in image and signal processing, statistics and machine learning can be formulated into this form. Here, we give some examples.
[**Elastic net regularization [@zou_regularization_2005]:**]{} The elastic net combines the $\ell_1$ and $\ell_2$ penalties to overcome the limitations of both penalties. The optimization problem is $$\begin{aligned}
\textstyle \vx^*\in\argmin\limits_{\vx\in\vR^p}~ \mu_2\|\vx\|_2^2 + \mu_1\|\vx\|_1 + l(\vA\vx,\vb),\end{aligned}$$ where $\vA\in \vR^{n\times p}$, $\vb\in\vR^n$, and $l$ is the loss function, which may be nondifferentiable. The $\ell_2$ regularization term $\mu_2\|\vx\|_2^2$ is differentiable and has a Lipschitz continuous gradient.
[**Fused lasso [@tibshirani2005sparsity]:**]{} The fused lasso was proposed for group variable selection. In addition to the $\ell_1$ penalty, it includes a penalty term that discourages large changes with respect to the temporal or spatial structure, so that the coefficients vary in a smooth fashion. The problem for fused lasso with the least squares loss is $$\begin{aligned}
\vx^*\in\argmin\limits_{\vx\in\vR^p} {1\over 2}\|\vA\vx-\vb\|_2^2 + \mu_1\|\vx\|_1 + \mu_2\|\vD\vx\|_1,\label{eqn:fusedlasso}\end{aligned}$$ where $\vA\in\vR^{n\times p}$, $\vb\in\vR^n$, and $$\begin{aligned}
\vD=\left[\begin{array}{ccccc}-1&1& & & \\ &-1&1& & \\ & & \dots & \dots & \\& & &-1&1\end{array}\right]\in\vR^{(p-1)\times p}.\end{aligned}$$
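For concreteness, here is a small NumPy sketch (ours, not from the paper) that builds the first-difference operator $\vD$ displayed above and evaluates the fused lasso objective; the data $\vA$, $\vb$ and the weights $\mu_1$, $\mu_2$ are random placeholders.

```python
import numpy as np

def difference_matrix(p: int) -> np.ndarray:
    """(p-1) x p first-difference operator D with rows (-1, 1, 0, ..., 0)."""
    D = np.zeros((p - 1, p))
    i = np.arange(p - 1)
    D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def fused_lasso_objective(x, A, b, mu1, mu2):
    """0.5*||Ax - b||_2^2 + mu1*||x||_1 + mu2*||Dx||_1."""
    D = difference_matrix(x.size)
    return (0.5 * np.sum((A @ x - b) ** 2)
            + mu1 * np.sum(np.abs(x))
            + mu2 * np.sum(np.abs(D @ x)))

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 8)), rng.standard_normal(20)
print(fused_lasso_objective(np.zeros(8), A, b, mu1=0.1, mu2=0.1))
```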
[**Image restoration with two regularizations:**]{} Many image processing problems have two or more regularizations. For instance, in computed tomography reconstruction, a nonnegativity constraint and total variation regularization are applied. The optimization problem can be formulated as $$\begin{aligned}
\vx^*\in\argmin\limits_{\vx\in\vR^n} {1\over 2}\|\vA\vx-\vb\|_2^2 + \iota_{C}(\vx) + \mu\|\vD\vx\|_1,\end{aligned}$$ where $\vx$ is the image to be reconstructed, $\vA\in \vR^{m\times n}$ is the linear forward projection operator that maps the image to the sinogram data, $\vb\in\vR^m$ is the measured sinogram data with noise, $\iota_{C}$ is the indicator function that returns zero if $\vx\in C$ (here, $C$ is the set of nonnegative vectors in $\vR^n$) and $+\infty$ otherwise, $\vD$ is a discrete gradient operator, and the last term is the (an)isotropic total variation regularization.
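In all three examples the proximal mappings involved are assumed to be cheap. As a simple illustration (ours, not from the paper), the mapping $\prox_{\lambda g}$ defined above reduces to componentwise soft-thresholding for $g=\|\cdot\|_1$ and to a projection for the indicator $\iota_{C}$ of the nonnegative orthant:

```python
import numpy as np

def prox_l1(x: np.ndarray, lam: float) -> np.ndarray:
    """prox_{lam*||.||_1}(x): componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_nonneg(x: np.ndarray) -> np.ndarray:
    """prox of the indicator of the nonnegative orthant C: projection onto C."""
    return np.maximum(x, 0.0)

x = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(prox_l1(x, 0.5))    # [-1. -0.  0.  0.  1.5]
print(prox_nonneg(x))     # [0.  0.  0.  0.3 2. ]
```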
Before introducing algorithms for solving the three-function problem, we first discuss special cases with only two functions. When either $f$ or $g$ is missing, the problem reduces to the sum of two functions, and many splitting and proximal algorithms have been proposed and studied in the literature. Two famous groups of methods are the Alternating Direction Method of Multipliers (ADMM) [@gabay1976dual; @glowinski1975approximation] and primal-dual algorithms [@PlayDual]. ADMM applied to a convex optimization problem was shown to be equivalent to Douglas-Rachford Splitting (DRS) applied to the dual problem by [@gabay1983chapter], and [@yan2014self] showed recently that it is also equivalent to DRS applied to the same primal problem. In fact, there are many different ways to reformulate a problem into a separable convex problem with linear constraints such that ADMM can be applied, and among these ways, some are equivalent. However, there will always be a subproblem involving $\vA$, and it may not be solvable analytically depending on the properties of $\vA$ and the way ADMM is applied. On the other hand, primal-dual algorithms only need the operator $\vA$ and its adjoint operator $\vA^\top$[^3]. Thus, they have been used in many applications because the subproblems are easy to solve when the proximal mappings of both $g$ and $h$ can be computed easily.
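To fix ideas about the primal-dual template just discussed, the following NumPy sketch (ours, not from the paper) implements the Chambolle-Pock iteration for $\min_{\vx} g(\vx)+h(\vA\vx)$, the special case $f=0$ mentioned in the abstract; the step-size condition $\tau\sigma\|\vA\|^2\le 1$ and the toy lasso-type instance are standard illustrative choices rather than anything taken from this paper.

```python
import numpy as np

def chambolle_pock(A, prox_g, prox_h, tau, sigma, n_iter=500):
    """Chambolle-Pock for min_x g(x) + h(Ax); requires tau*sigma*||A||^2 <= 1.

    prox_g(x, t) ~ prox_{t g}(x) and prox_h(y, t) ~ prox_{t h}(y); the prox of the
    conjugate h* is obtained from prox_h through the Moreau identity.
    """
    m, n = A.shape
    x, s, x_bar = np.zeros(n), np.zeros(m), np.zeros(n)
    for _ in range(n_iter):
        s = s + sigma * (A @ x_bar)
        s = s - sigma * prox_h(s / sigma, 1.0 / sigma)      # prox of sigma*h^*
        x_new = prox_g(x - tau * (A.T @ s), tau)
        x_bar = 2.0 * x_new - x                             # extrapolation (theta = 1)
        x = x_new
    return x

# toy instance: min_x ||x||_1 + 0.5*||A x - b||_2^2, i.e. g = ||.||_1, h = 0.5*||. - b||_2^2
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
prox_h = lambda v, t: (v + t * b) / (1.0 + t)
L = np.linalg.norm(A, 2)
print(np.round(chambolle_pock(A, prox_g, prox_h, tau=0.9 / L, sigma=0.9 / L), 3))
```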
The primal-dual algorithms for two and three functions are reviewed in [@PlayDual] and, specifically for image processing problems, in [@chambolle2016introduction]. When the differentiable function $f$ is missing, the primal-dual algorithm is Chambolle-Pock (see e.g., [@pock2009algorithm; @esser2010general; @chambolle2011first]), while the primal-dual algorithm with $g$ missing (Primal-Dual Fixed-Point algorithm based on the Proximity Operator (PDFP$^2$O) or Proximal Alternating Predictor-Corrector (PAPC)) is proposed in [@loris2011generalization; @chen2013primal; @Drori2015209]. In order to solve the problem with three functions, we can reformulate the problem and apply the primal-dual algorithms for two functions. E.g., we can let $\bar h([\vI;~\vA]\vx) = g(\vx)+h(\vA\vx)$ and apply PAPC or let $\bar g(\vx)=f(\vx)+g(\vx)$ and apply Chambolle-Pock. However, the first approach introduces more dual variables and may need more iterations to obtain the same accuracy. For the second approach, the proximal mapping of $\bar g$ may not be easy to compute, and the differentiability of $f$
---
abstract: |
We explore the behavior of collective nuclear excitations under a multi-parameter deformation of the Hamiltonian. The Hamiltonian matrix elements have the form $P(|H_{ij}|)\propto 1/\sqrt{|H_{ij}|}\exp(-|H_{ij}|/V)$, with a parametric correlation of the type $\log \langle H(x)H(y)\rangle\propto
-|x-y|$. The studies are done in both the regular and chaotic regimes of the Hamiltonian. Model independent predictions for a wide variety of correlation functions and distributions which depend on wavefunctions and energies are found from parametric random matrix theory and are compared to the nuclear excitations. We find that our universal predictions are observed in the nuclear states. As the theory has several parameters, we consider general paths in parameter space and find that universality can be affected by the topology of the parameter space. Specifically, Berry’s phase can modify short distance correlations, breaking certain universal predictions.
---
YCTP-N11-95\
June 1995
[Universal Predictions for Statistical\
Nuclear Correlations]{}
[Dimitri Kusnezov[^1] and David Mitchell]{}\
\
[*Center for Theoretical Physics, Sloan Physics Laboratory,*]{}\
[*Yale University, New Haven, CT 06520-8120 USA*]{}
[**PACS numbers:**]{} 21.60.Fw, 24.60.Lz, 21.10.Re
Introduction
============
The statistics of nuclear excitations has been explored from the shell model to collective models, with studies ranging from the relation of observed quantum fluctuations to those in random matrix models, to the connection with chaos using classical limits of the Hamiltonian[@general]-[@shriner]. The agreement of various spectral properties with random matrix predictions has shown that certain simplifying assumptions can be made concerning fluctuations in nuclei. Once random matrix theory can be justified, certain results follow immediately. These studies of chaos in nuclei stem from attempts to extract a simplified behavior from the complexity of nuclear excitations. In this respect, random matrix theory has provided invaluable assistance in developing simple methods to compute complex behaviors. In the past, aside from the studies of constant random matrices and the relation to chaos, these models have been given a parameter dependence to model correlations in various nuclear systems, from heavy ion collisions[@weid], high spin physics[@aberg] to large amplitude collective motion[@aurel]. Recently it has been shown that Hamiltonians which have a parametric dependence can exhibit universal behavior[@SA], that is, there can exist model independent quantities in a given theory, providing the Hamiltonian has certain random matrix properties. In this article we study a wide class of observables and develop universal predictions. We further show that parametric deformations of nuclear Hamiltonians can be readily modeled by a simple translationally invariant parametric random matrix theory, even though the Hamiltonian does not apriori look like a random matrix. We further justify the use of parametric random matrix theory for collective nuclear excitations.
Collective Nuclear States
=========================
We have chosen to model collective nuclear excitations in the framework of the Interacting Boson Model (IBM)[@franco] for several reasons. One of our main objectives is to explore and categorize types of model independent predictions that exist in parametric quantum theories which exhibit classical chaos. The IBM is ideally suited for this since the classical limit has been extensively studied in recent years using coherent states [@Joe], and the complete chaotic behavior is now known for every value of the parameters[@Niall]. Hence we can easily choose parametric variations in regions of strong or weak chaos, or in regular regimes of the parameter space. An additional advantage is that we can solve the quantum problem exactly. One might argue that collective states form only a subset of the real spectrum as the excitation energy increases, so that the use of the IBM is not reasonable. This is not crucial, however, since the IBM provides a solvable theory with known spectral properties, which can be compared to those of the Gaussian Orthogonal Ensemble (GOE) throughout its parameter range. Certainly a more realistic description of the spectrum would embody the same features. For example, when broken pair states are added to the IBM model space, the spectrum becomes more GOE, as the interactions in the Hamiltonian become more complicated[@francoa]. This is certainly the case as one attempts to construct more realistic Hamiltonians. And as we are showing how [*model independent*]{} quantities emerge, the model we use is really not so important. Hence we use a simple form of the IBM Hamiltonian, known as the consistent-Q form: $$\label{eq:qham}
\hat{H}= E_0 + c_1\hat{n}_{d} + c_2 {\bf \hat{Q}}^{\chi}\cdot
{\bf \hat{Q}}^{\chi} + c_3 {\bf \hat{L}}\cdot {\bf \hat{L}},$$ where $$\hat{n}_{d}=d^\dagger\cdot\tilde d\; , \qquad \hat L_\mu =
\sqrt{10}[d^\dagger\times\tilde d]^{(1)}_\mu\; ,\qquad
\hat{Q}^{\chi}_{\mu}=d_{\mu}^{\dagger}s+s^{\dagger}\tilde{d}_{\mu}+\chi
[d^{\dagger}\times \tilde{d}]^{(2)}_{\mu}.$$ The parameters $c_i$ are defined by $c_1=\eta/4$ and $c_2=(1-\eta)/4N_b$, where $N_b$ is the number of bosons. Since the Hamiltonian is diagonalized in a basis of fixed angular momentum $L$, the constant $c_3$ does not play any role, and is hence omitted. Except when stated otherwise, we will use $N_b=25$, which will give optimal statistics for the quantities we consider. The resulting dimensions for $J^\pi=0^+,2^+,4^+,10^+$ states are 65,117,165,211. In this parameterization, one has the following limits: (a) $\eta=1$ corresponds to vibrational or $U(5)$ nuclei, (b) $\eta=0$ and $\chi=-\sqrt{7}/2$ corresponds to rotational or $SU(3)$ nuclei, and (c) $\eta=\chi=0$ describes $\gamma-$soft or $O(6)$ nuclei.
The interpretation of the Hamiltonian in terms of shape variables $\beta$ and $\gamma$ is possible using coherent states, in the large $N$ limit of $H$. The energy surface for the Hamiltonian in Eq. (1) is [@Joe] $${\cal E}(\beta,\gamma;\eta,\chi) = \beta^2\frac{4-3\eta}{2} +
\beta^4(1-\eta)(\frac{\chi^2}{14}-1) + \beta^3\cos
3\gamma\sqrt{1-\frac{\beta^2}{2}}(1-\eta)\frac{2\chi}{\sqrt{7}}.$$ For a particular value of $\eta$ and $\chi$, the energy ${\cal E}$ can be minimized to determine the quantities $\beta$ and $\gamma$. $\beta$ and $\gamma$ in turn define a deformed nuclear mean field. This can be made explicit by re-expressing the Hamiltonian in terms of excitations in a deformed mean field using boson condensate techniques[@ami]. This allows the interpretation of correlations of observables at different values of $\eta$ and $\chi$ in terms of the shape variable $\beta$ and $\gamma$. Correlations in observables at different values of the parameters are then precisely the correlations between properties of the nucleus in the presence of different mean field configurations. We will consider the behavior of the properties of the Hamiltonian under very general parametric deformation $z=z(\eta,\chi)$. For paths which lie entirely within the chaotic regime of the parameter space, the universal predictions we explore are path independent (up to effects due to Berry’s phase which we explore in Sec. 5); correlations in a nucleus changing from rotational to vibrational or vibrational to $\gamma-$soft are the same when properly interpreted.
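To make the preceding paragraph concrete, a short NumPy sketch (ours, not from the paper) locates the minimum of the energy surface above on a $(\beta,\gamma)$ grid; the parameter values are purely illustrative.

```python
import numpy as np

def energy_surface(beta, gamma, eta, chi):
    """Coherent-state energy surface E(beta, gamma; eta, chi) written above."""
    return (beta**2 * (4.0 - 3.0 * eta) / 2.0
            + beta**4 * (1.0 - eta) * (chi**2 / 14.0 - 1.0)
            + beta**3 * np.cos(3.0 * gamma)
              * np.sqrt(np.clip(1.0 - beta**2 / 2.0, 0.0, None))
              * (1.0 - eta) * 2.0 * chi / np.sqrt(7.0))

def equilibrium_shape(eta, chi, n=400):
    """Grid search for the minimizing (beta, gamma) on 0 <= beta <= sqrt(2), 0 <= gamma <= pi/3."""
    beta = np.linspace(0.0, np.sqrt(2.0), n)
    gamma = np.linspace(0.0, np.pi / 3.0, n)
    B, G = np.meshgrid(beta, gamma, indexing="ij")
    E = energy_surface(B, G, eta, chi)
    i, j = np.unravel_index(np.argmin(E), E.shape)
    return beta[i], gamma[j], E[i, j]

# illustrative point in the (eta, chi) parameter space (not a value used in the text)
print(equilibrium_shape(eta=0.3, chi=-1.0))
```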
Distributions and Correlations of Nuclear Matrix Elements
---------------------------------------------------------
One of the results presented in this article is that parametric nuclear Hamiltonians can be modelled by correlated, parametric gaussian random matrices. Recall that a gaussian random matrix has a distribution of matrix elements of the gaussian form $P(H_{ij})\propto
\exp(-H_{ij}^2/2\gamma(1+\delta_{ij}))$, where $\gamma$ is a constant related to the level density. Implementing random matrix theory does not require that the actual nuclear Hamiltonian (1) have gaussian matrix elements. We note that the distributions of matrix elements of the interacting boson model Hamiltonian are not gaussian. At any given value of $(\eta,\chi)$, we find that the distribution of matrix elements roughly obeys [@flam] $$P_{ibm}(|H_{ij}|) \propto \frac{1}{\sqrt{|H_{ij}|}} e^{-|H_{ij}|/V}$$ where the strength $V$ depends on whether one is in a chaotic or regular regime. Typical results are shown in Fig. 1 for both regular (crosses) and chaotic (boxes) choices of the parameters, together with the behavior (4) (solid curves). In the chaotic parameter regimes of the model, $V$ is of order unity, while in the regular regions, it is much smaller. But both regular and chaotic regimes display the same functional form of the distribution, suggesting that the functional form is due to the structure of the Hamiltonian, rather than to the presence of chaos. Similar distribution functions have been seen in parity non-conservation studies of the compound
---
abstract: 'The mass of the axion and its decay rate are known to depend only on the scale of Peccei-Quinn symmetry breaking, which is constrained by astrophysics and cosmology to be between $10^9$ and $10^{12}$ GeV. We propose a new mechanism such that this effective scale is preserved and yet the fundamental breaking scale of $U(1)_{PQ}$ is very small (a kind of inverse seesaw) in the context of large extra dimensions with an anomalous U(1) gauge symmetry in our brane. Unlike any other (invisible) axion model, there are now possible collider signatures in this scenario.'
---
UCRHEP-T283\
July 2000
[**Low-Scale Axion from Large Extra Dimensions\
**]{}
Although CP violation has been observed in weak interactions [@cp1; @cp2] and it is required for an explanation of the baryon asymmetry of the universe [@asym], it becomes a problem in strong interactions. The reason is that the multiple vacua of quantum chromodynamics (QCD) connected by instantons [@insta] require the existence of the CP violating $\theta$ term [@theta] $${\cal L}_\theta = \theta_{QCD} {g_s^2 \over 32 \pi^2}
G_{\mu \nu}^a \widetilde G^{a \mu \nu} ,$$ where $g_s$ is the strong coupling constant, $G^a_{\mu \nu}$ is the gluonic field strength and $\tilde G^a_{\mu \nu}$ is its dual. Nonobservation of the electric dipole moment of the neutron [@edm] implies that $$\bar \theta = \theta_{QCD} - Arg ~Det ~M_u ~M_d < 10^{-10},$$ instead of the theoretically expected order of unity. In the above, $M_u$ and $M_d$ are the respective mass matrices of the charge 2/3 and $-1/3$ quarks of the standard model of particle interactions. This is commonly known as the strong CP problem.
The first and best motivated solution to the strong CP problem was proposed by Peccei and Quinn [@pq], in which the quarks acquire a dynamical phase from the spontaneous breaking of a new global symmetry \[$U(1)_{PQ}$\], which relaxes $\bar \theta$ to its natural minimum value of zero. As a result, there appears a Goldstone boson called the axion, but it is not strictly massless [@ww] because it couples to two gluons (like the neutral pion) through the axial triangle anomaly [@anomal].
The scale of $U(1)_{PQ}$ breaking (which is conventionally identified with the axion decay constant $f_a$) determines the axion coupling to gluons, which is proportional to $1/f_a$. If $f_a$ is the electroweak symmetry breaking scale as originally proposed [@pq], then the model is already ruled out by laboratory experiments [@expt]. In fact, $f_a$ is now known to be constrained by astrophysical and cosmological arguments [@astro] to be between $10^9$ and $10^{12}$ GeV. Hence the axion must be an electroweak singlet or predominantly so. It may couple to the usual quarks and leptons through a suppressed mixing with the standard Higgs doublet [@dfsz], or it may couple only to other unknown colored fermions [@ksvz], or it may couple to gluinos [@dms] as well as all other supersymmetric particles.
Because the axion must necessarily mix with the $\pi$ and $\eta$ mesons, it must have a two-photon decay mode. This is the basis of all experimental attempts [@expt] to discover its existence. On the other hand, the accompanying new particles in all viable axion models to date are very heavy, i.e. of order $f_a$; hence they are completely inaccessible to experimental verification.
In the following we consider instead the possibility that the $U(1)_{PQ}$ breaking scale is actually very small, but that $f_a$ is large because of a kind of inverse seesaw mechanism. We show how this scenario may be realized in the context of large extra dimensions with an anomalous U(1) gauge symmetry in our brane. The associated new physics now exists at around 1 TeV, with a number of interesting observable consequences at future colliders.
We assume a singlet scalar field $\chi$ with a nonzero PQ charge existing in the bulk of large extra dimensions [@extra]. The $shining$ [@distant] of this field in our brane is the source of spontaneous $U(1)_{PQ}$ breaking in our world (called a 3-brane). The idea is that $\chi$ gets a large vacuum expectation value (VEV) in a distant brane, but its effect on our brane is small because we are far away from it. (In the case of lepton number, this mechanism has been used recently to obtain small Majorana neutrino masses [@extnu].) To convert this small $\langle \chi \rangle$ to a large $f_a$, we need to assume an anomalous U(1) gauge symmetry in our brane at the TeV energy scale, as explained below.
In a theory of large extra dimensions with quantum gravity at the TeV scale, there is no large scale available for the axion. Since the behavior of Goldstone bosons depends not on the coupling but only on the scale of symmetry breaking in general, it is a problem which is not easily resolved [@others]. Here we find a new and novel solution to this apparent contradiction in the case where there is an anomalous U(1) gauge symmetry, which is of course well studied [@u1] as a possible manifestation of string theory near the string scale (now considered also at around a few TeV) and has well-known applications in quark and lepton Yukawa textures and supersymmetry breaking.
We extend the standard model of particle interactions to include an extra $U(1)_A$ gauge symmetry and an extra $U(1)_{PQ}$ global symmetry. All standard-model particles are trivial under these two new symmetries. We then introduce a new heavy quark singlet $\psi$ and two scalar singlets $\sigma$ and $\eta$ with $U(1)_A$ and $U(1)_{PQ}$ charges as shown in Table 1. All fields except $\chi$ are confined to our brane.
-------------------- ---------------------------------------- ---------- -------------
Fields $SU(3)_C \times SU(2)_L \times U(1)_Y$ $U(1)_A$ $U(1)_{PQ}$
$(u_i, d_i)_L$ (3,2,1/6) 0 0
$u_{iR}$ (3,1,2/3) 0 0
$d_{iR}$ (3,1,$-$1/3) 0 0
$(\nu_i, e_i)_L$ (1,2,$-$1/2) 0 0
$e_{iR} $ (1,1,$-$1) 0 0
$\psi_L$ (3,1,$-$1/3) 1 $k$
$\psi_R$ (3,1,$-$1/3) $-$1 $-k$
$(\phi^+, \phi^0)$ (1,2,1/2) 0 0
$\sigma$ (1,1,0) 2 $2k$
$\eta $ (1,1,0) 2 $2k-2$
$\chi$ (1,1,0) 0 2
-------------------- ---------------------------------------- ---------- -------------
: Peccei-Quinn charges of the fermions and scalars
Because of our chosen charge assignments, only the field $\sigma$ couples to the colored fermion $\psi$, i.e. $${\cal L}_Y = f \sigma \bar \psi_L \psi_R + h.c.$$ Hence it also couples to two gluons through the usual triangular loop. As $\sigma$ acquires a VEV, say $u$, of order 1 TeV, both $U(1)_A$ and $U(1)_{PQ}$ are broken, whereas the latter is broken by $\langle \chi \rangle = z$, and it induces a VEV also for $\eta$, i.e. $\langle \eta \rangle = w$. We will show in the following that, since $z$ is small because of its origin in the bulk, $w$ is also small. Now the longitudinal component of the $Z_A$ boson is mostly given by Im$\sigma$, so the axion is left to be mostly a linear combination of Im$\eta$ and Im$\chi$, but the latter two fields do not couple to the colored fermion $\psi$. As a result, the axion’s coupling to two gluons is now effectively $${1 \over f_a} = {w^2 \over u^2 \sqrt {w^2 + z^2}},$$ which can be thought of as a kind of inverse seesaw, i.e. the largeness of $
---
abstract: 'We analyze 15,000 spectra of 29 stellar-mass black hole candidates collected over the 16-year mission lifetime of [[*RXTE*]{}]{} using a simple phenomenological model. As these black holes vary widely in luminosity and progress through a sequence of spectral states, which we broadly refer to as hard and soft, we focus on two spectral components: The Compton power law and the reflection spectrum it generates by illuminating the accretion disk. Our proxy for the strength of reflection is the equivalent width of the Fe-K line as measured with respect to the power law. A key distinction of our work is that for [*all*]{} states we estimate the continuum under the line by excluding the thermal disk component and using only the component that is responsible for fluorescing the Fe-K line, namely the Compton power law. We find that reflection is several times more pronounced ($\sim 3$) in soft compared to hard spectral states. This is most readily caused by the dilution of the Fe line amplitude from Compton scattering in the corona, which has a higher optical depth in hard states. Alternatively, this could be explained by a more compact corona in soft (compared to hard) states, which would result in a higher reflection fraction.'
author:
- 'James F. Steiner, Ronald A. Remillard, Javier A. García, Jeffrey E. McClintock'
title: |
Stronger Reflection from Black Hole Accretion Disks\
in Soft X-ray States
---
Introduction {#section:intro}
============
During the course of its 16-year mission, the [*Rossi X-ray Timing Explorer*]{} ([[*RXTE*]{}]{}) detected far more photons (30 billion in PCU-2 alone) from accreting black holes than any other X-ray observatory. The sample of black holes (BHs) targeted by [*RXTE*]{} is chiefly comprised of nearby stellar-mass systems. While the total Galactic population of stellar BHs is believed to be many millions, only a tiny subset of approximately 50 are known to us, namely those located in X-ray binaries. A wondrous property of BHs, their utter simplicity, is the essence of the famous no-hair theorem: Each BH in nature is fully described by just its mass and spin. Roughly half of the known stellar BHs have a dynamically-determined mass. The measured masses range from $\sim5-20~{M_\odot}$ [@Ozel_2010; @Reid_2014; @Laycock_2015; @Wu_2016]. Meanwhile, estimates of spin have been obtained for many of them during the past decade, principally by modeling either the thermal continuum emission of the accretion disk [e.g.; @Zhang; @MNS14], or the relativistically-broadened reflection spectrum [e.g.; @Fabian_1989; @Reynolds_2014]. Our focus is primarily on transient BH systems that cycle between a minuscule fraction of the Eddington limit upward to the limit itself. During an outburst, a transient BH progresses through a sequence of spectral-timing states, which are broadly termed “hard” or “soft,” based on a measure of X-ray hardness [@Fender_2004]. As a source evolves over the course of months and its hardness varies, sweeping changes occur in many properties of the system including the composition of its spectrum, the intensity of Fourier flicker noise, and the presence or absence of quasi-periodic oscillations and jets [e.g.; @Homan_2005; @RM06; @Heil_2015]. Stellar BHs emit a complex multicomponent X-ray spectrum. A [ *thermal*]{} blackbody-like component is produced in the very inner accretion disk. The disk is truncated at a radius ${R_{\rm in}}$ before reaching the event horizon. A hard [*power-law*]{} component results from Compton scattering of the thermal disk photons in hot coronal gas that veils the disk. The third principal component is a [*reflection*]{} spectrum generated by illumination of the cold disk ($kT\sim0.1-1$keV) by the power-law component. The reflection component is a rich mix of radiative recombination continua, absorption edges and fluorescent lines [@Ross_1993; @Garcia_Kallman_2010]. An analysis of these three interacting spectral components provides constraints on the source properties including geometry (e.g., on ${R_{\rm in}}$ and the scale of the corona). The relationships between these components across the full range of behavior displayed by accreting stellar BHs is the focus of this paper. Our results are based on an analysis for 29 stellar BHs (10 dynamically-confirmed BHs and 19 BH candidates) of all the data collected using [[*RXTE*]{}]{}’s prime detector unit (PCU-2), some 15,000 spectra in all, with a total net exposure time of 30Ms. Importantly, we recalibrate the data using our tool [pcacorr]{}, which greatly reduces the level of systematic error [@pcacorr]. Given the scope of our study, relativistic reflection models are too complex and computationally slow for our purposes [e.g.; [reflionx, xillver, relxill;]{} @reflionx; @relxill2]. 
We therefore employ a simplistic, phenomenological model and estimate the strength of the reflection spectrum by determining the equivalent width with respect to the Compton continuum of its most prominent reflection feature, namely the $6.4-7.0$ keV Fe-K line. The paper is organized as follows: In Section \[section:data\] we describe the data sample and our approach to modeling the data. Our results are presented in Section \[section:results\], followed by a discussion in Section \[section:disc\] and our conclusions in Section \[section:conc\].
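For readers who want a concrete picture of this proxy, the following sketch (ours, not from the paper; all numbers are toy values) computes the equivalent width of a line with respect to a continuum on a tabulated spectrum.

```python
import numpy as np

def equivalent_width(energy, total_flux, continuum_flux, band=(6.4, 7.0)):
    """Equivalent width (keV): integral of (total - continuum)/continuum over the band.

    `continuum_flux` is the fluorescing continuum only (here the Compton power law),
    evaluated on the same energy grid as `total_flux`.
    """
    m = (energy >= band[0]) & (energy <= band[1])
    excess = (total_flux[m] - continuum_flux[m]) / continuum_flux[m]
    # trapezoidal rule on the (possibly nonuniform) energy grid
    return np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(energy[m]))

# toy spectrum: a power law plus a Gaussian Fe-K line (illustrative values only)
E = np.linspace(3.0, 45.0, 4000)
powerlaw = E ** -1.7
line = 0.05 * np.exp(-0.5 * ((E - 6.5) / 0.3) ** 2)
print(equivalent_width(E, powerlaw + line, powerlaw))   # EW of the toy line, in keV
```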
![([*top*]{}): Hardness-intensity diagrams for all data and ([ *bottom:*]{}) for six well-known BHs with abundant data (where for reference the gray background shows all data). For reference, the count rate of the Crab Nebula is $\approx
2600$ s$^{-1}$. Note that a HID does not allow one to compare the luminosities of sources because the intensity is in detector units.[]{data-label="fig:qdiag"}](fig_tmp_testplot_f1master1){width="1\columnwidth"}
Data {#section:data}
====
The [[*RXTE*]{}]{} archive provides the premier database for the synoptic study of stellar BHs. We exclusively use the data collected by PCU-2, one of the five proportional counter detectors that comprise [[*RXTE*]{}]{}’s principal instrument, the Proportional Counter Array (PCA). Throughout the mission, PCU-2 was the unit that was most often active, and it had the most reliable and stable calibration [@Jahoda_2006; @Shaposhnikov_2012]. Its area and energy resolution were 1300cm$^2$ and $\approx18$% at 6 keV. Table 1 summarizes our data sample. During an outburst, a BH was typically observed daily over a period of months as it systematically brightened and subsequently dimmed by orders of magnitude. We homogenized the data by segmenting it into continuous 300–5000s intervals, each of which was used to produce an energy spectrum and a power-density spectrum (PDS). Energy spectra were analyzed ignoring the lowest 4 channels, an effective lower bound $\approx
2.8$ keV, and an upper bound of 45keV was adopted. The effects of detector dead time were corrected as described in @McClintock_2006. We obtained an absolute calibration of the flux using the standard @Toor_Seward spectrum of the Crab Nebula; our slope and normalization corrections are $\Delta\Gamma = 0.01$ and $f_{\rm TS} = 1.097$ [@Steiner_2010]. We computed the rms power, a measure of the flicker noise, by integrating the PDS over the band 0.1–10Hz. An unprecedented sensitivity to faint spectral features is achieved by employing the calibration tool [pcacorr]{} [@pcacorr], which improves the quality of the PCA’s spectral calibration by roughly an order of magnitude and results in a data precision of $\sim 0.1\%$. We include this small systematic uncertainty as a fractional error on each channel when conducting our analysis using [XSPEC]{} [@XSPEC]. The considerable increase in sensitivity [pcacorr]{} delivers is crucial for estimating the strength of line features. All PCU-2 data for 29 black holes are plotted in a hardness-intensity diagram (HID) [@Fender_2004; @RM06] in the top panel of Figure \[fig:qdiag\]. The normalized hard color (or hardness ratio ${\rm HR}$) is the ratio of count rates in the energy bands indicated in the upper panel, and is described in @Peris_2016. The data are color-coded to show the level of rms flicker noise. As is well-known and is evident here from the vertical striation, rms noise correlates with spectral state [e.g.; @Heil_2015; @RM06], with hard states showing several-times stronger rms than soft states. The six small panels are HIDs for selected sources. Note that transient sources characteristically trace a loop in the HID, but that the persistent source Cyg X–1 is confined to a relatively narrow region. The other selected source showing stunted HID evolution is GRS 1915+105, which is an unusual transient system that has been in a protracted state of outburst since 1992.
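As an aside, the rms measure quoted above can be illustrated with a short sketch (ours; the power spectrum below is a toy model, not [[*RXTE*]{}]{} data) that integrates an rms-normalized PDS over the 0.1–10Hz band.

```python
import numpy as np

def band_rms(freq, pds, fmin=0.1, fmax=10.0):
    """Fractional rms from an rms-normalized PDS (units rms^2/Hz), integrated over [fmin, fmax]."""
    m = (freq >= fmin) & (freq <= fmax)
    power = np.sum(0.5 * (pds[m][1:] + pds[m][:-1]) * np.diff(freq[m]))
    return np.sqrt(power)

# toy PDS: band-limited noise with an arbitrary normalization (illustrative only)
f = np.logspace(-2, 2, 2000)
pds = 0.01 / (1.0 + (f / 3.0) ** 2)
print(band_rms(f, pds))   # fractional rms in the 0.1-10 Hz band
```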
Spectral Modeling {#subsec:model}
-----------------
---
abstract: 'The purpose of this paper is to develop a general existence theory for constrained minimization problems for functionals defined on function spaces on metric measure spaces $(\mathcal M, d, \mu)$. We apply this theory to functionals defined on metric graphs $\mathcal G$, in particular $L^2$-constrained minimization problems of the form $$E(u) = \frac{1}{2} a(u,u) - \frac{1}{q}\int_{\mathcal K} |u|^q \, \mathrm dx,$$ where $q>2$ and $a(\cdot, \cdot)$ is a suitable symmetric, sesquilinear form on some function space on $\mathcal G$ and $\mathcal K \subseteq \mathcal G$ is given. We show how the existence of solutions can be obtained via decomposition methods using spectral properties of the operator $A$ associated with the form $a(\cdot, \cdot)$ and discuss the spectral quantities involved. An example that we consider is the higher-order variant of the stationary NLS (nonlinear Schrödinger) energy functional with potential $V\in L^2+ L^\infty(\mathcal G)$ $$E^{(k)}(u)= \frac{1}{2} \int_{\mathcal G} |u^{(k)}|^2+ V(x) |u|^2 \, \mathrm dx - \frac{1}{p} \int_{\mathcal K} |u|^q \, \mathrm dx$$ defined on a class of higher-order Sobolev spaces $H^k(\mathcal G)$ that we introduce. When $\mathcal K$ is a bounded subgraph, one has localized nonlinearities, which we treat as a special case. When $k=1$ we also consider metric graphs with infinite edge set as well as magnetic potentials. Then the operator $A$ associated to the linear form is a Schrödinger operator, and in the $L^2$-subcritical case $2<q<6$, we obtain generalizations of existence results for the NLS functional as for instance obtained by Adami, Serra and Tilli \[JFA 271 (2016), 201-223\], and Cacciapuoti, Finco and Noja \[Nonlinearity 30 (2017), 3271–3303\], among others.'
address:
- 'University of Lisbon, Portugal'
- |
Grupo de Física Matemática\
Faculdade de Ciências da Universidade de Lisboa\
Campo Grande, Edifício C6\
P-1749-016 Lisboa, Portugal
author:
- Matthias Hofmann
title: An existence theory for nonlinear equations on metric graphs via energy methods
---
Introduction
============
In recent years, there has been growing interest in the stationary NLS (nonlinear Schrödinger) energy functional on metric graphs $\mathcal G=(\mathcal V, \mathcal E)$, $$\label{eq:introminprobfunc}
E_{\text{NLS}}(u, \mathcal G)= \frac{1}{2} \int_{\mathcal G} |u'|^2\, \mathrm dx - \frac{\mu}{q} \int_{\mathcal G} |u|^q \, \mathrm dx, \qquad u\in H^1(\mathcal G),\; \|u\|_{L^2}^2 =1,\; q>2, \; \mu >0$$ and associated ground states of the stationary NLS energy functional, i.e. minimizers for the constrained minimization problem $$\label{eq:introminprob}
E_{\text{NLS}}(\mathcal G):= \inf_{\substack{u\in H^1(\mathcal G)\\ \|u\|_{L^2}^2=1}} E_{\text{NLS}}(u, \mathcal G), \qquad 2<q< 6.$$ Such minimizers are solutions to the stationary nonlinear Schrödinger equation on $\mathcal G$ given by $$\begin{cases}
-u'' + \lambda u = \mu |u|^{q-2} u \qquad \text{edgewise,}\vspace{.5em}\\
\begin{gathered}u \text{ is continuous on }\mathcal G \text{ and satisfies the Kirchhoff condition}\\
\sum_{e\in \mathcal E:e\succ \mathsf v} \frac{\partial u}{\partial \nu} \Big |_e(\mathsf v)=0, \qquad \forall \mathsf v\in \mathcal V,
\end{gathered}
\end{cases}$$ where we recall that $e\succ \mathsf v$ denotes the relation that the edge $e$ is adjacent to the vertex $\mathsf v\in \mathcal V$ and $\frac{\partial u}{\partial \nu}|_e(\mathsf v)$ denotes the inward pointing derivative at $\mathsf v$ towards the interior of the edge $e$. While in the simplest case of the real line the existence of minimizers in \[eq:introminprob\] can be deduced by standard techniques, on general noncompact graphs existence results are not as easy to obtain due to the lack of a concept of translation invariance. In [@adami2015nls] it was shown on the one hand that under certain topological configurations the problem does not admit a minimizer; on the other, in a later paper the same authors derive an existence principle based on a comparison inequality:
\[thm:introast2016\] Let $\mathcal G$ be a noncompact metric graph with finitely many edges and $2<q<6$. Assume $$\label{eq:introadamiestimate}
E_{\text{NLS}}(\mathcal G) < E_{\text{NLS}}(\mathbb R),$$ then there exists a minimizer for $E_{\text{NLS}}(\mathcal G)$.
This result can be used to obtain existence results on concrete graphs $\mathcal G$ via construction of so called competitors, i.e. test functions $u\in H^1(\mathcal G)$ for which $E_{\text{NLS}}(u, \mathcal G) < E_{\text{NLS}}(\mathbb R)$. This allows to deduce existence of minimizers in certain situations as shown in [@tentarelli2016nls] and [@adami2017negative].
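To illustrate how such competitors are compared against the threshold numerically, here is a small sketch (ours, not from the papers cited) that evaluates $E_{\text{NLS}}(u,\mathbb R)$ for $L^2$-normalized Gaussian test functions on a truncated line; the grid, the width values, and the choice $q=4$, $\mu=1$ are illustrative only.

```python
import numpy as np

def nls_energy(u, x, mu=1.0, q=4):
    """E_NLS(u) = 0.5 * int |u'|^2 dx - (mu/q) * int |u|^q dx on a uniform grid x."""
    dx = x[1] - x[0]
    du = np.gradient(u, dx)
    return 0.5 * np.sum(du**2) * dx - (mu / q) * np.sum(np.abs(u) ** q) * dx

# L^2-normalized Gaussian test functions on a large interval approximating the real line
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
for width in (0.5, 1.0, 2.0, 4.0):
    u = np.exp(-x**2 / (2.0 * width**2))
    u /= np.sqrt(np.sum(u**2) * dx)          # enforce ||u||_{L^2} = 1
    print(width, nls_energy(u, x, mu=1.0, q=4))
```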
A variant of this problem with potential was considered in [@cacciapuoti2017ground] and [@cacciapuoti2018existence], where the energy functional was given by $$\label{eq:introfunctionaltocons}
E_{\text{NLS}}^{V}(u) = \frac{1}{2} \int_{\mathcal G} \left | u'\right |^2+ V|u|^2 \, \mathrm dx -\frac{\mu}{q} \int_{\mathcal G} |u|^q \, \mathrm dx, \qquad \|u\|_{L^2}^2=1.$$ In [@cacciapuoti2017ground] the existence of minimizers of \[eq:introfunctionaltocons\] was related to the existence of eigenvalues of the Schrödinger operator $-\Delta+V$ below the essential spectrum:
\[thm:introcfn2017\] Let $\mathcal G$ be a noncompact metric graph with finitely many edges and $V\in L^1+ L^\infty(\mathcal G)$ with $V_- \in L^r(\mathcal G)$ for $r\in[1,1+ \frac{2}{q-2}]$ and $2<q\le 6$. Assume $$\label{eq:introJameswants}
\inf \sigma(-\Delta+V) < \inf \sigma_{\text{ess}}(-\Delta+V).$$ Then there exists $\mu^*>0$ such that for $\mu \in (0, \mu^*)$ the functional is bounded below and the associated constrained minimization problem $$E_{\text{NLS}}^{V}:= \inf_{\substack{u\in H^1(\mathcal G)\\ \|u\|_{L^2}^2=1}} E_{\text{NLS}}^V(u)$$ admits a minimizer.
\[thm:introcac2018\] Let $\mathcal G$ be a noncompact metric graph with finitely many edges and $V\in L^1+L^\infty(\mathcal G)$ satisfying the assumptions in Theorem \[thm:introcfn2017\]. Let $$\Sigma_0 := \inf \sigma(-\Delta+V)<0, \qquad \gamma_q := \inf_{\substack{u\in H^1(\mathcal G)\\ \|u\|_{L^2}^2=1}}\frac{1}{2} \int_{\mathbb R} \left | u'\right |^2+ V|u|^2 \, \mathrm dx -\frac{1}{q} \int_{\mathcal G} |u|^q \, \mathrm dx<0.$$ Then we have existence of minimizers of $$E_{\text{NLS}}^{V}= \inf_{\substack{u\in H^1(\mathcal G)\\ \|u\|_{L^2}^2=1}} E_{\text{NLS}}^V(u)$$ for $0<\mu\le (\Sigma_0/\gamma_q)^{\frac{3}{2}-\frac{q}{4}}$.
Our goal in this paper is threefold. Firstly, we develop a general existence theory in a far more abstract setting which can be applied to a variety of problems, such as $E_{\text{NLS}}$ and $E_{\text{NLS}}^{V}$, but which is not limited to metric graphs. For example, the existence theory may also be applied to functionals defined on function spaces on combinatorial graphs or general domains in $\mathbb R^n$. We then use this existence theory to obtain generalizations of the results in [@adami2016threshold] and [@cacciapuoti2017ground] by considering
---
abstract: |
The charmless bottom meson decays are systematically investigated based on an approximate six quark operator effective Hamiltonian from perturbative QCD. It is shown that within this framework the naive QCD factorization method provides a simple way to evaluate the hadronic matrix elements of two body mesonic decays. The singularities caused by the on-mass-shell quark propagator and the gluon-exchange interaction are treated appropriately. Such a simple framework allows us to make theoretical predictions for the decay amplitudes with reasonable input parameters. The resulting theoretical predictions for all the branching ratios and $CP$ asymmetries in the charmless $B^0,\ B^+,\ B_s\to \pi\pi,\ \pi K,\
KK$ decays are found to be consistent with the current experimental data except for a few decay modes. The observed large branching ratio in $B\to \pi^0\pi^0$ decay remains a puzzle though the predicted branching ratio may be significantly improved by considering the large vertex corrections in the effective Wilson coefficients. More precise measurements of charmless bottom meson decays, especially on CP-violations in $B\to K K$ and $B_s\to
\pi\pi, \pi K, KK$ decay modes, will provide a useful test and guide us to a better understanding of perturbative and nonperturbative QCD.
author:
- 'Fang Su$^{\ast \dagger}$, Yue-Liang Wu$^{\ast}$, Yi-Bo Yang$^{\ast\ddagger}$ and Ci Zhuang$^{\ast}$'
title: |
QCD Factorization Based on Six-Quark Operator Effective Hamiltonian from Perturbative QCD and Charmless Bottom Meson Decays $B_{(s)}\to
\pi\pi,\pi K, KK$
---
Introduction
============
Hadronic B-meson decays play an important role not only for understanding the dynamical scheme of hadronic decays and testing the flavor structure of the Standard Model (SM), but also for probing the origin of CP violation and new physics signals beyond the SM. In particular, the precise measurement and systematic study of hadronic charmless B decays may provide a window for such purposes. The branching ratios of $B \rightarrow \pi \pi$ and $\pi K$ modes have been measured with good accuracy [@HFAG] and a large direct CP violation has been established in the $\pi^+ K^-$ mode [@HFAG]. The most severe discrepancies between the experimental data and theoretical predictions come from the unexpectedly large branching ratio of $B \to \pi^0\pi^0$ and some unclear CP violations in $B\to \pi^0 K$ decays, which are called the $\pi\pi, \pi K$ puzzles[@pikpuzzle; @WZ]. Theoretically, to predict those decays consistently, one needs to deal with the short-distance contributions in a complete and systematic way from the high energy scale to a proper low energy scale at which the perturbative calculations remain reliable, and to treat the long-distance contributions which contain the non-perturbative strong interactions involved in those decays. The main task is to reliably compute the hadronic matrix elements between the initial and final hadron states. Several novel methods based on the naive factorization approach (FA) and the four quark operator effective Hamiltonian have been developed to evaluate the hadronic matrix elements, such as the QCD factorization approach (QCDF)[@Beneke:1999br], the perturbative QCD method (pQCD) [@Keum:2000ph], and the soft-collinear effective theory (SCET)[@Bauer:2000ew]. These methods have been widely used in analyzing hadronic B-meson decays and have made great progress in understanding the hadronic structure and properties of strong interactions. To understand whether the puzzles are due to unknown new physics or to the lack of our knowledge of the hadronic properties of strong interactions, one still needs to investigate further the various approaches within the framework of QCD and to check the validity of the assumptions and approximations made in the practical calculations.
The widely used theoretical framework of weak decays is based on the current-current four fermion operator effective Hamiltonian derived via operator product expansion and renormalization group evolution. In hadronic weak decays, the short-distance contributions of QCD are characterized by the Wilson coefficient functions of four quark operators and the long-distance contributions are in principle obtained by evaluating the hadronic matrix elements of four quark operators. The Wilson coefficient functions are in general calculated by perturbative QCD which is well developed, while the evaluation of hadronic matrix elements remains a hard task as it involves non-perturbative effects of QCD. To deepen our insights into the hadronic decays, we shall first reinvestigate the four quark operator effective Hamiltonian whether it is always suitable as a basic framework for all hadronic weak decays. In fact, for the mesonic two body decays of B meson, it concerns three quark-antiquark pairs once each meson is regarded as the quark-antiquark bound state at the quark level structure. This fact then naturally motivates us to consider six-quark operator effective Hamiltonian instead of four-quark operator effective Hamiltonian. Namely, we shall begin with six quark diagrams of weak decays with both W-boson exchange and gluon exchange, and derive formally the six-quark operator effective Hamiltonian based on operator product expansion and renormalization group evolution when including loop corrections of six quark diagrams. We shall show how this approach allows us to figure out what are the assumptions and approximations made in effective four quark operator approach, and how the simple QCD factorization scheme can reliably be applied to evaluate the hadronic matrix elements with the six quark operator effective Hamiltonian. For the infrared singularity caused by the gluon exchanging interaction when evaluating the hadronic matrix elements of effective six quark operators, it is shown to be simply treated by the introduction of a mass scale motivated from the gauge invariant loop regularization method [@LRC], where the energy scale $\mu_g$ is introduced to play the role of infrared cut-off energy scale without violating gauge invariance.
The paper is organized as follows. In section \[sec:sqeh\], after briefly reviewing the four quark operator effective Hamiltonian, we begin with the primary six quark diagrams with a single W-boson exchange and a single gluon exchange, and the corresponding initial six-quark operator. It is shown that a complete six quark operator effective Hamiltonian is in general necessary to include all contributions from both perturbative and non-perturbative QCD corrections, especially the non-pertubative QCD corrections at low energy scale $\mu < m_c\sim 1.5$ GeV could be sizable. To demonstrate how the six quark operator effective Hamiltonian provides a reliable framework for hadronic two body decays of B meson, we will focus, as a good approximation, on the dominant QCD loop diagrams of six quarks so as to avoid the tedious calculations. In section \[sec:QCDF\], it is demonstrated how the QCD factorization approach becomes a simple and natural tool to evaluate the hadronic matrix elements of mesonic two body decays based on the six quark operator effective Hamiltonian. In particular, the so-called factorizable and non-factorizable, emission and annihilation diagram contributions are automatically the consequences of QCD factorization for the hadronic matrix elements of effective six quark operators. The treatment on the singularities caused by the gluon exchanging interactions and the on mass-shell fermion propagator is presented in Section \[sec:TOD\]. In Section \[sec:Amplitude\], all the amplitudes of charmless bottom meson decays are completely obtained by using the QCD factorization approach based on the approximate six quark operator effective Hamiltonian. Our numerical results with appropriate input parameters are presented in section \[sec:nrcpe\], as a good approximation, the resulting predictions on branching ratios and CP violations of charmless bottom meson decays are much improved and also more closed to the current experimental data. The conclusions and remarks are given in last section. The detailed calculations involved in the evaluation of various decay amplitudes are presented in the Appendix.
Effective Hamiltonian of Six Quark Operators {#sec:sqeh}
============================================
Four Quark Operator Effective Hamiltonian
-----------------------------------------
Let us start from the four-quark effective operators in the effective weak Hamiltonian. The initial four quark operator due to weak interaction via W-boson exchange is given as follows for B decays $$O_{1}=(\bar{q}^u_{i}b_{i})_{V-A}(\bar{q}^d_{j}u_{j})_{V-A}, \qquad
q^u=u,\ c, \quad q^d = d,\ s$$ The complete set of four quark operators are obtained from QCD and QED corrections which contain the gluon exchange diagrams, strong penguin diagrams and electroweak penguin diagrams. The resulting effective Hamiltonian(for $b\to s$ transition) with four quark operators is known to be as follows $$\begin{aligned}
H_{\rm eff}\, =\, {G_F\over\sqrt{2}} \sum_{q=u,c}
\lambda_q^{s} \left[C_1(\mu)O_1^{(q)}(\mu) +C_2(\mu)O_2^{(q)}(\mu)+
\sum_{i=3}^{10}C_i(\mu)O_i(\mu)\right]+h.c.\;,\label{eq:hpk}\end{aligned}$$ with $\lambda_q^{s} = V_{qb}V^*_{qs}$ and $V_{ij}$ the CKM matrix elements, $C_i(\mu)$ the Wilson coefficient functions[@4qham] and $O_i(\mu)$ the four quark operators $$\begin{aligned}
\begin{array}{ll}
\displaystyle O_1^{(q)}\, =\,
(\bar{q}_ib_i)_{V-A}(\bar{s}_jq_j)_{V-A}\;, & \displaystyle
O_2^{(q)}\, =\,(\bar{s}_ib_i)_{V-A}(\bar
---
author:
- |
Alain Chenciner & Hugo Jiménez-Pérez\
Observatoire de Paris, IMCCE (UMR 8028), ASD\
77, avenue Denfert-Rochereau, 75014 Paris, France\
`chenciner@imcce.fr, jimenez@imcce.fr`
title: 'Angular momentum and Horn’s problem'
---
We prove a conjecture made in [@C1]: given an $n$-body central configuration $X_0$ in the euclidean space $E$ of dimension $2p$, let $Im{\cal F}$ be the set of ordered real $p$-tuples $\{\nu_1,\nu_2,\cdots,\nu_p\}$ such that $\{\pm i\nu_1,\pm i\nu_2,\cdots,\pm i\nu_p\}$ is the spectrum of the angular momentum of some (periodic) relative equilibrium motion of $X_0$ in $E$. Then $Im {\cal F}$ is a convex polytope. The proof consists in showing that there exist two, generically $(p-1)$-dimensional, convex polytopes ${\cal P}_1$ and ${\cal P}_2$ in ${\ensuremath{\mathbb{R}}}^{p}$ such that ${\cal P}_1\subset Im{\cal F}\subset {\cal P}_2$ and that these two polytopes coincide.
${\cal P}_1$, introduced in [@C1], is the set of spectra corresponding to the hermitian structures $J$ on $E$ which are “adapted" to the symmetries of the inertia matrix $S_0$; it is associated with Horn’s problem for the sum of $p\times p$ real symmetric matrices with spectra $\sigma_-$ and $\sigma_+$ whose union is the spectrum of $S_0$.
${\cal P}_2$ is the orthogonal projection onto the set of “hermitian spectra" of the polytope ${\cal P}$ associated with Horn’s problem for the sum of $2p\times 2p$ real symmetric matrices having each the same spectrum as $S_0$.
The equality ${\cal P}_1={\cal P}_2$ follows directly from a deep combinatorial lemma, proved in [@FFLP], which characterizes those of the sums $C=A+B$ of two $2p\times 2p$ real symmetric matrices $A$ and $B$ with the same spectrum, which are hermitian for some hermitian structure.
Origin of the problem: $N$-body relative equilibria and their angular momenta
=============================================================================
We recall here the results of [@AC; @C1; @C2] which are needed in order to understand the mechanical origin of the purely algebraic conjecture solved in the present paper: given a configuration $x_0=(\vec r_1,\cdots,\vec r_N)\in E^N$ of $N$ punctual positive masses in the euclidean space $E$, a [*rigid motion*]{} of the configuration under Newton’s attraction is a motion in which the mutual distances $||\vec r_i-\vec r_j||$ between the bodies stay constant. It is proved in [@AC] (see also [@C2]) that such a motion is necessarily a [*relative equilibrium*]{}. This implies that the motion takes place in a space of even dimension $2p$, which can be supposed to coincide with $E$, and that, in a galilean frame fixing the center of mass at the origin, it is of the form $x(t)=(e^{\Omega t}\vec r_1,e^{\Omega t}\vec r_2,\cdots,e^{\Omega t}\vec r_N)$, where $\Omega$ is a $2p\times 2p$-antisymmetric endomorphism of the euclidean space $E$ which is non degenerate. Choosing an orthonormal basis where $\Omega$ is normalized, this amounts to saying that there exists a hermitian structure on the space $E$ of motion and an orthogonal decomposition $E\equiv{\ensuremath{\mathbb{C}}}^p={\ensuremath{\mathbb{C}}}^{k_1}\times\cdots\times{\ensuremath{\mathbb{C}}}^{k_r}$ such that $$x(t)=(x_1(t),\cdots,x_r(t))=(e^{i\omega_1t}x_1,\cdots,e^{i\omega_rt}x_r),$$ where $x_m$ is the orthogonal projection on ${\ensuremath{\mathbb{C}}}^{k_m}$ of the $N$-body configuration $x$ and the action of $e^{i\omega_mt}$ on $x_m$ is the diagonal action on each body of the projected configuration. Such quasi-periodic motions exist only for very special configurations, called [*balanced configurations*]{} (see [@AC; @C2] for their characterization). The most degenerate balanced configurations are the [*central configurations*]{} for which all the frequencies $\omega_i$ are the same; this means that $\Omega=\omega J$, with $J$ a hermitian structure on $E$, and the motion is $$x(t)=(\vec r_1(t),\cdots,\vec r_N(t))=e^{i\omega t}x_0=(e^{i\omega t}\vec r_1,\cdots,e^{i\omega t}\vec r_N)$$ in the hermitian space $E\equiv{\ensuremath{\mathbb{C}}}^{p}$; in particular, it is periodic. In a space of dimension at most 3, $E$ is necessarily of dimension 2 and the configuration of any relative equilibrium is central.
Given a configuration $x=(\vec r_1,\cdots,\vec r_N)$ and a configuration of velocities $y=\dot x=(\vec v_1,\cdots, \vec v_N)$, both with center of mass at the origin: $\sum_{k=1}^Nm_k\vec r_k=\sum_{k=1}^Nm_k\vec v_k=0$, the [*angular momentum*]{} of $(x,y)$ is the bivector ${\mathcal C}=\sum_{k=1}^Nm_k\vec r_k\wedge\vec v_k$. If we represent $x$ and $y$ by the $2p\times N$ matrices $X$ and $Y$ whose $i$th columns consist respectively of the components $(r_{1i},\cdots,r_{2pi})$ and $(v_{1i},\cdots,v_{2pi})$ of $\vec r_i$ and $\vec v_i$ in an orthonormal basis of $E$ and if $M=\hbox{diag}(m_1,\cdots,m_N)$, this bivector is represented by the antisymmetric matrix [*(we use the French convention $^{t\!}X$ for the transpose of $X$)*]{} $$C=-XM^{t\!}Y+YM^{t\!\!}X\;\;\hbox{with coefficients}\;\; c_{ij}=\sum_{k=1}^Nm_k(-r_{ik}v_{jk}+r_{jk}v_{ik}).$$ The dynamics of a solid body is determined by its [*inertia tensor*]{} (with respect to its center of mass), represented in the case of a point-mass configuration $X$ by the symmetric matrix $$S=XM^{t\!\!}X\;\;\hbox{with coefficients}\;\; s_{ij}=\sum_{k=1}^Nm_kr_{ik}r_{jk},$$ whose trace is the [*moment of inertia of the configuration $x$ with respect to its center of mass*]{}. In particular, the angular momentum of a relative equilibrium solution $X(t)=e^{t\Omega}X_0$ is represented by the antisymmetric matrix $C=S_0\Omega+\Omega S_0$, where $S_0=X_0M^{t\!\!}X_0$. Restricting to the case of central configurations, that is $\Omega=\omega J$, and setting $\omega=1$, we consider in what follows the spectrum of $J$-skew-hermitian matrices of the form $S_0J+JS_0$ or, what amounts to the same, the spectrum of $J$-hermitian matrices[^1] of the form $J^{-1}S_0J+S_0$.
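As a small numerical illustration of these objects (a sketch with made-up masses and positions, not a configuration from the paper), one can build $S_0=X_0M^{t\!\!}X_0$ for a sample configuration in dimension $2p=4$, take the standard hermitian structure $J$, and check that the spectrum of $C=S_0J+JS_0$ is purely imaginary and comes in pairs $\pm i\nu_k$:

```python
import numpy as np

# Toy data: masses and positions are made up; they need not form a central configuration.
p, N = 2, 4                                      # E = R^{2p}, N bodies
rng = np.random.default_rng(0)
masses = np.array([1.0, 2.0, 1.5, 0.5])
X0 = rng.standard_normal((2*p, N))
X0 -= (X0 @ masses)[:, None] / masses.sum()      # put the center of mass at the origin
M = np.diag(masses)

S0 = X0 @ M @ X0.T                               # inertia matrix S_0 = X_0 M ^tX_0
J = np.kron(np.eye(p), np.array([[0.0, -1.0],    # standard hermitian structure, J^2 = -Id
                                 [1.0,  0.0]]))
C = S0 @ J + J @ S0                              # angular momentum for Omega = J (omega = 1)

print(np.allclose(C, -C.T))                              # C is antisymmetric
print(np.round(np.sort(np.linalg.eigvals(C).imag), 6))   # eigenvalues are +/- i nu_1, +/- i nu_2
```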
[*In the following, we identify $E$ with ${\ensuremath{\mathbb{R}}}^{2p}$ by the choice of some orthonormal basis. ${\ensuremath{\mathbb{R}}}^{2p}$ is endowed with its canonical basis $e_i=(0,\cdots,1,\cdots,0)$ and its canonical euclidean scalar product $x\cdot y=\sum_{i=1}^{2p}x_iy_i$; this allows identifying linear endomorphisms of $E={\ensuremath{\mathbb{R}}}^{2p}$ and $2p\times 2p$ matrices with real coefficients. When we say that $J$ is a hermitian structure, we mean that the standard euclidean structure is given and that $J$ is a complex structure which is orthogonal.*]{}
The frequency map
=================
We recall the definition, given in [@C1], of the [*frequency map*]{} ${\cal F}$ from the set of hermitian structures on ${\ensuremath{\mathbb{R}}}^{2p}$ to the positive Weyl chamber $W_p^+\subset {\ensuremath{\mathbb{R}}}^p$: given some $2p\times 2p$
---
author:
- 'Jan-Peter Calliess$^{1}$ [^1] [^2]'
title: '**Lipschitz Optimisation for Lipschitz Interpolation$^*$** '
---
[^1]: \*This paper is an extended version of a conference paper that will appear in the Proceedings of the American Control Conference (ACC 2017).
[^2]: $^{1}$Jan-Peter Calliess is with the Engineering Department, University of Cambridge, UK. [jpc73@cam.ac.uk]{}
---
abstract: |
Let $h=\prod_{i=1}^{t}p_i^{s_i}$ be a positive integer written as a product of powers of distinct primes, and let $\mathbb{Z}_{h}$ be the residue class ring modulo $h$. Let $\mathbb{Z}_{h}^{n}$ be the $n$-dimensional row vector space over $\mathbb{Z}_{h}$. A generalized Grassmann graph for $\mathbb{Z}_{h}^n$, denoted by $G_r(m,n,\mathbb{Z}_{h})$ ($G_r$ for short), has all $m$-subspaces of $\mathbb{Z}_{h}^n$ as its vertices, and two distinct vertices are adjacent if their intersection is of dimension $>m-r$, where $2\leq r\leq m+1\leq n$. In this paper, we determine the clique number and the geometric structures of maximum cliques of $G_r$. As a result, we obtain the Erdős-Ko-Rado theorem for $\mathbb{Z}_{h}^{n}$ and some bounds on the independence number of $G_r$.
[*AMS classification*]{}: 05C50, 05D05
[*Key words*]{}: Erdős-Ko-Rado theorem, Residue class ring, Grassmann graph, Clique number, Independence number
author:
- |
Jun Guo[^1]\
[Department of Mathematics, Langfang Normal University, Langfang 065000, China]{}
date:
title: '**Erdős-Ko-Rado theorem and generalized Grassmann graphs for vector spaces over residue class rings**'
---
Introduction
============
Let $\mathbb{Z}$ denote the integer ring. For $a,b,h\in \mathbb{Z}$, integers $a$ and $b$ are said to be [*congruent modulo*]{} $h$ if $h$ divides $a-b$, denoted by $a\equiv b \mod h$. Let $h$ be a positive integer and let $h=\prod_{i=1}^{t}p_i^{s_i}$ be its decomposition into a product of powers of distinct primes. Let $\mathbb{Z}_{h}$ denote the residue class ring modulo $h$ and $\mathbb{Z}_{h}^\ast$ denote its unit group. Then $\mathbb{Z}_{h}$ is a principal ideal ring and $|\mathbb{Z}_{h}^\ast|=h\prod_{i=1}^t(1-p_i^{-1}).$ By [@Ireland], $\mathbb{Z}_{h}\cong\mathbb{Z}_{p_1^{s_1}}\oplus\mathbb{Z}_{p_2^{s_2}}\oplus\cdots\oplus\mathbb{Z}_{p_t^{s_t}}$ and $\mathbb{Z}_{h}^\ast\cong\mathbb{Z}_{p_1^{s_1}}^\ast\times\mathbb{Z}_{p_2^{s_2}}^\ast\times\cdots\times\mathbb{Z}_{p_t^{s_t}}^\ast$. Note that $(p_i),i=1,2,\ldots,t$, are all the maximal ideals of $\mathbb{Z}_{h}$. Write $J_{(\alpha_1,\alpha_2,\ldots,\alpha_t)}=(\prod_{i=1}^{t}p_i^{\alpha_i})$, where $0\leq \alpha_i\leq s_i$ for $i=1,2,\ldots,t$. For brevity, we write $J_{(\alpha_1)}$ as $J_{\alpha_1}$ if $t=1$. For $a\in\mathbb{Z}$, we also denote by $a$ the congruence class of $a$ modulo $h$.
For a subset $S$ of $\mathbb{Z}_{h}$, let $S^{m\times n}$ be the set of all $m\times n$ matrices over $S$, and $S^{n}=S^{1\times n}$. A matrix in $S^n$ is also called an $n$-dimensional row vector over $S$. Let $I_r$ ($I$ for short) be the $r\times r$ identity matrix, and $0_{m,n}$ ($0$ for short) the $m\times n$ zero matrix. Let $\hbox{diag}(A_1,A_2,\ldots, A_k)$ denote the block diagonal matrix whose blocks along the main diagonal are matrices $A_1,A_2,\ldots, A_k$. The set of $n\times n$ invertible matrices forms a group under matrix multiplication, called the [*general linear group*]{} of degree $n$ over $\mathbb{Z}_{h}$ and denoted by $G\!L_n(\mathbb{Z}_{h})$. Let ${}^t\!A$ denote the transpose matrix of a matrix $A$ and $\det(X)$ the determinant of a square matrix $X$ over $\mathbb{Z}_{h}$. For $X\in \mathbb{Z}_{h}^{n\times n}$, by Corollary 2.21 in [@Brown], $X\in G\!L_n(\mathbb{Z}_{h})$ if and only if $\det(X)\in\mathbb{Z}_h^\ast$.
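As a toy illustration of this criterion (the modulus and the matrix below are sample values, not taken from the paper), invertibility over $\mathbb{Z}_{h}$ can be tested by checking that $\det(X)$ is coprime to $h$, in which case the inverse modulo $h$ exists:

```python
from sympy import Matrix, gcd

h = 12                                    # sample modulus, h = 2^2 * 3
X = Matrix([[5, 2], [3, 7]])              # entries read modulo h

d = int(X.det()) % h
print("det(X) mod h =", d, "; unit:", gcd(d, h) == 1)    # X invertible iff det(X) is a unit
if gcd(d, h) == 1:
    Xinv = X.inv_mod(h)                   # inverse of X over Z_h
    print(Xinv)
    print((X * Xinv).applyfunc(lambda e: e % h))         # the identity matrix modulo h
```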
For $A\in\mathbb{Z}_{h}^{m\times n}$ and $B\in\mathbb{Z}_{h}^{n\times m}$, if $AB=I_m$, we say that $A$ has a [*right inverse*]{} and $B$ is a right inverse of $A$. Similarly, if $AB=I_m$, then $B$ has a [*left inverse*]{} and $A$ is a left inverse of $B$. $\mathbb{Z}_{h}^{n}$ is called the $n$-dimensional row vector space over $\mathbb{Z}_{h}$. Let $\alpha_i\in\mathbb{Z}_{h}^{n}$ for $i=1,2,\ldots,m$. The vector subset $\{\alpha_1,\alpha_2,\ldots,\alpha_m\}$ is called [*unimodular*]{} if the matrix ${}^t({}^t\alpha_1, {}^t\alpha_2, \ldots, {}^t\alpha_m)$ has a right inverse. By Lemma \[lem2.9\] below, a matrix $A\in\mathbb{Z}_{h}^{m\times n}$ has a right inverse if and only if all row vectors of $A$ are linearly independent in $\mathbb{Z}_{h}^{n}$.
Let $V\subseteq\mathbb{Z}_{h}^{n}$ be a [*linear subset*]{} (i.e., a $\mathbb{Z}_{h}$-module). A [*largest unimodular vector subset*]{} of $V$ is a unimodular vector subset of $V$ which has the maximum number of vectors. The dimension of $V$, denoted by $\dim(V)$, is the number of vectors in a largest unimodular vector subset of $V$. Clearly, $\dim(V)=0$ if and only if $V$ does not contain a unimodular vector. If a linear subset $X$ of $\mathbb{Z}_{h}^{n}$ has a unimodular basis with $m$ vectors, then $X$ is called an $m$-[*dimensional vector subspace*]{} ($m$-[*subspace*]{} for short) of $\mathbb{Z}_{h}^{n}$. Every $m$-subspace of $\mathbb{Z}_{h}^{n}$ is isomorphic to $\mathbb{Z}_{h}^{m}$. Applying Lemma \[lem2.9\] below, it is easy to prove that every basis of a subspace of $\mathbb{Z}_{h}^{n}$ can be extended to a basis of $\mathbb{Z}_{h}^{n}$. We define the $0$-subspace to be $\{0\}$.
The Erdős-Ko-Rado theorem [@Erdos; @Wilson] is a classical result in extremal set theory which gives an upper bound on the cardinality of a family of $m$-subsets of a set such that every pairwise intersection has cardinality at least $r$, and describes exactly which families meet this bound. The results on the Erdős-Ko-Rado theorem have inspired much research [@Frankl; @Godsi2; @Huang-T; @Tanaka; @Vanhove]. Let $0\leq r\leq m\leq n$ and ${\mathbb{Z}_{h}^{n}\brack m}$ be the set of all $m$-subspaces of $\mathbb{Z}_{h}^{n}$. A family ${\cal F}\subseteq{\mathbb{Z}_{h}^{n}\brack m}$ is called $r$-[*intersecting*]{} if $\dim(A\cap B)\geq r$ for all $A,B\in{\cal F}$. When $t=1$, Huang et al. [@Huang3] obtained an upper bound on the cardinality of an $r$-intersecting family in ${\mathbb{Z}_{h}^{n}\brack m}$ and described exactly which families meet this bound.
Let $0\leq2r\leq2m=n$ and $I\subseteq[t]:=\{1,2,\ldots,t\}$. Suppose that $\alpha_i=s_i,\pi_i(x)=1$ if $i\in I$, and $\alpha_i=0,\pi_i(x)=0$ if $i\in[t]\setminus I$. Define $$\label{equanew1}
{\cal F}_{(\alpha
---
author:
- |
(BES Collaboration)\
\
M. Ablikim
- 'J. Z. Bai'
- 'Y. Ban'
- 'X. Cai'
- 'H. F. Chen'
- 'H. S. Chen'
- 'H. X. Chen'
- 'J. C. Chen'
- Jin Chen
- 'Y. B. Chen'
- 'Y. P. Chu'
- 'Y. S. Dai'
- 'L. Y. Diao'
- 'Z. Y. Deng'
- 'Q. F. Dong'
- 'S. X. Du'
- 'J. Fang'
- 'S. S. Fang[^1]'
- 'C. D. Fu'
- 'C. S. Gao'
- 'Y. N. Gao'
- 'S. D. Gu'
- 'Y. T. Gu'
- 'Y. N. Guo'
- 'Z. J. Guo[^2]'
- 'F. A. Harris'
- 'K. L. He'
- 'M. He'
- 'Y. K. Heng'
- 'J. Hou'
- 'H. M. Hu'
- 'J. H. Hu'
- 'T. Hu'
- 'G. S. Huang[^3]'
- 'X. T. Huang'
- 'X. B. Ji'
- 'X. S. Jiang'
- 'X. Y. Jiang'
- 'J. B. Jiao'
- 'D. P. Jin'
- 'S. Jin'
- 'Y. F. Lai'
- 'G. Li[^4]'
- 'H. B. Li'
- 'J. Li'
- 'R. Y. Li'
- 'S. M. Li'
- 'W. D. Li'
- 'W. G. Li'
- 'X. L. Li'
- 'X. N. Li'
- 'X. Q. Li'
- 'Y. F. Liang'
- 'H. B. Liao'
- 'B. J. Liu'
- 'C. X. Liu'
- 'F. Liu'
- Fang Liu
- 'H. H. Liu'
- 'H. M. Liu'
- 'J. Liu[^5]'
- 'J. B. Liu'
- 'J. P. Liu'
- Jian Liu
- 'Q. Liu'
- 'R. G. Liu'
- 'Z. A. Liu'
- 'Y. C. Lou'
- 'F. Lu'
- 'G. R. Lu'
- 'J. G. Lu'
- 'C. L. Luo'
- 'F. C. Ma'
- 'H. L. Ma'
- 'L. L. Ma[^6]'
- 'Q. M. Ma'
- 'Z. P. Mao'
- 'X. H. Mo'
- 'J. Nie'
- 'S. L. Olsen'
- 'R. G. Ping'
- 'N. D. Qi'
- 'H. Qin'
- 'J. F. Qiu'
- 'Z. Y. Ren'
- 'G. Rong'
- 'X. D. Ruan'
- 'L. Y. Shan'
- 'L. Shang'
- 'C. P. Shen'
- 'D. L. Shen'
- 'X. Y. Shen'
- 'H. Y. Sheng'
- 'H. S. Sun'
- 'S. S. Sun'
- 'Y. Z. Sun'
- 'Z. J. Sun'
- 'X. Tang'
- 'G. L. Tong'
- 'G. S. Varner'
- 'D. Y. Wang[^7]'
- 'L. Wang'
- 'L. L. Wang'
- 'L. S. Wang'
- 'M. Wang'
- 'P. Wang'
- 'P. L. Wang'
- 'W. F. Wang[^8]'
- 'Y. F. Wang'
- 'Z. Wang'
- 'Z. Y. Wang'
- Zheng Wang
- 'C. L. Wei'
- 'D. H. Wei'
- 'Y. Weng'
- 'N. Wu'
- 'X. M. Xia'
- 'X. X. Xie'
- 'G. F. Xu'
- 'X. P. Xu'
- 'Y. Xu'
- 'M. L. Yan'
- 'H. X. Yang'
- 'Y. X. Yang'
- 'M. H. Ye'
- 'Y. X. Ye'
- 'G. W. Yu'
- 'C. Z. Yuan'
- 'Y. Yuan'
- 'S. L. Zang'
- 'Y. Zeng'
- 'B. X. Zhang'
- 'B. Y. Zhang'
- 'C. C. Zhang'
- 'D. H. Zhang'
- 'H. Q. Zhang'
- 'H. Y. Zhang'
- 'J. W. Zhang'
- 'J. Y. Zhang'
- 'S. H. Zhang'
- 'X. Y. Zhang'
- Yiyun Zhang
- 'Z. X. Zhang'
- 'Z. P. Zhang'
- 'D. X. Zhao'
- 'J. W. Zhao'
- 'M. G. Zhao'
- 'P. P. Zhao'
- 'W. R. Zhao'
- 'Z. G. Zhao[^9]'
- 'H. Q. Zheng'
- 'J. P. Zheng'
- 'Z. P. Zheng'
- 'L. Zhou'
- 'K. J. Zhu'
- 'Q. M. Zhu'
- 'Y. C. Zhu'
- 'Y. S. Zhu'
- 'Z. A. Zhu'
- 'B. A. Zhuang'
- 'X. A. Zhuang'
- 'B. S. Zou'
date: 'Received: date / Revised version: date'
title: '**Study of ${J/\psi}$ decaying into $\omega{p\bar{p}}$**'
---
Introduction
============
Decays of the $J/\psi$ meson are regarded as being well suited for searches for new types of hadrons and for systematic studies of light hadron spectroscopy. Recently, a number of new structures have been observed in $J/\psi$ decays. These include strong near-threshold mass enhancements in the $p{\bar{p}}$ invariant mass spectrum from ${J/\psi}\rightarrow\gamma p{\bar{p}}$ decays [@bes1860], the $p \bar \Lambda$ and $K^-\bar \Lambda$ threshold enhancements in the $p \bar \Lambda$ and $K^-\bar \Lambda$ mass spectra in $J/\psi \rightarrow p K^- \bar \Lambda$ decays [@pkl], the $\omega\phi$ resonance in the $\omega\phi$ mass spectrum in the double-OZI suppressed decay $J/\psi\to\gamma \omega\phi$ [@goph], and a new resonance, the $X(1835)$, in $J/\psi\to\gamma \pi
---
author:
- 'Matthew Buican$^{\diamondsuit, 1}$ and Takahiro Nishinaka$^{\clubsuit, 2,3}$'
bibliography:
- 'chetdocbib.bib'
date: May 2017
title: |
On Irregular Singularity Wave Functions\
and Superconformal Indices
---
Introduction
============
Inspired by constructions of certain four-dimensional (4D) superconformal indices as correlators in 2D topological field theory (TFT) on a (punctured) Riemann surface ${\mathcal{C}}$ [@Gadde:2011ik], we proposed a generalization in [@Buican:2015ina] that leads to closed-form expressions for the Schur limit of the superconformal indices of two infinite sets of Argyres-Douglas (AD) theories that arise from twisted compactifications of the 6D $A_1$ $(2,0)$ theory on ${\mathcal{C}}$—the so-called $(A_1, A_{2n-3})$ and $(A_1, D_{2n})$ superconformal field theories (SCFTs).[^1] In addition to giving exact information about non-trivial sectors of these theories (the so-called Schur operators [@Gadde:2011uv; @Beem:2013sza]) and characterizing new states in 2D $SU(2)$ $q$-deformed Yang-Mills (see [@Cordes:1994fc] for a review and, e.g., [@deHaro:2006uvl; @Kimura:2008gs; @Szabo:2013vva] for other recent developments), these indices contain a surprise: they encode information about the ${\mathcal{N}}=2$ chiral operators[^2] parameterizing the Coulomb branch even though ${\mathcal{N}}=2$ chiral operators do not contribute directly to the Schur index [@Buican:2015hsa]. These results may point to the existence of a deeper structure at work in 4D ${\mathcal{N}}=2$ SCFTs (see [@Fredrickson:2017yka] for interesting recent progress in this direction). In fact, Coulomb branch physics is at the heart of a complementary approach to computing these indices via BPS state counting [@Cordova:2015nma] (building on results in [@Iqbal:2012xm]).
More recently, many papers have appeared that include generalizations to other classes of Argyres-Douglas theories[^3] and other limits of the superconformal index [@Buican:2015tda; @Song:2015wta; @Cecotti:2015lab; @Xie:2016evu] as well as to the full superconformal index [@Maruyoshi:2016tqk; @Maruyoshi:2016aim; @Agarwal:2016pjo] (and also to minimal interacting deformations of AD theories [@Xie:2016hny; @Buican:2016hnq]). However, many interesting Argyres-Douglas theories remain to be explored, and various aspects of the structure underlying these theories remain to be uncovered (see [@Cordova:2016uwk; @Cordova:2017ohl; @Cordova:2017mhb] for interesting recent progress).
In this paper, we generalize our discussion in [@Buican:2015ina] and propose the following simple wave functions for certain irregular punctures in $SU(N)$ $q$-deformed Yang-Mills theory (thus generalizing our earlier results from $N=2$ to all $N\ge2$) $$\begin{aligned}
{\widetilde}{f}_{R}^{(n)}(q;{\bf x}) &= \prod_{k=1}^\infty\left(\frac{1}{1-q^k}\right)^{N-1}q^{nC_2(R)}\text{Tr}_R\left[q^{-\frac{n}{2}F^{ij}h_ih_j}\prod_{i=1}^{N-1}(x_1\cdots x_i)^{h_i}\right]~,
\label{eq:wf1}\end{aligned}$$ where $n\ge2$ is an integer, $q$ is a fugacity, $R$ is an irreducible representation of $A_{N-1}$ with quadratic Casimir $C_2(R)$ and Cartans $h_i$ (in the Chevalley basis[^4]), ${\bf x}=(x_1,\cdots,x_{N-1})$ are flavor fugacities, and the factor $F^{ij}$ is the quadratic form matrix, i.e., the inverse of the Cartan matrix $$\begin{aligned}
\label{Cartinv}
F
= \frac{1}{N}\left[
\begin{array}{ccccc}
N-1 & N-2 & N-3 & \cdots & 1\\
N-2 & 2(N-2)& 2(N-3)& \cdots & 2\\
N-3 & 2(N-3) & 3(N-3) & \cdots & 3\\
\vdots &\vdots& \vdots& \ddots & \vdots \\
1 & 2& 3&\cdots& N-1\\
\end{array}
\right]~.\end{aligned}$$ In particular, this wave function can be used to construct Schur indices for the $(A_{N-1}, A_{N(n-1)-1})$ Argyres-Douglas theories $$\begin{aligned}
\mathcal{I}_{(A_{N-1},A_{N(n-1)-1})}(q;{\bf x}) = \sum_{R}C_R(q){\widetilde}{f}^{(n)}_R(q;{\bf x})~,
\label{eq:general} \end{aligned}$$ where the sum is taken over all irreducible representations of $A_{N-1}$, and the coefficients, $C_R$, take the form $$\begin{aligned}
\label{topological}
C_{R}(q) = \frac{\prod_{k=1}^{N-1}(1-q^k)^{N-k}}{(q;q)_\infty^{N-1}}\text{dim}_q R = \frac{\prod_{k=1}^{N-1}(1-q^k)^{N-k}}{(q;q)_\infty^{N-1}}\chi_R^{su(N)}(q^{-\frac{N-1}{2}},q^{-\frac{N-3}{2}},\cdots,q^{\frac{N-1}{2}}) ~,\end{aligned}$$ as conjectured in [@Gadde:2011ik].[^5] Here, our convention for the character is such that $\chi^{su(N)}_R(x_1,\cdots,x_{N}) \equiv \text{Tr}_{R} \left[\prod_{i=1}^{N-1}(x_1\cdots x_i)^{h_i}\right]$ with $x_{N} \equiv (x_1\cdots x_{N-1})^{-1}$.[^6]
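As a quick numerical sanity check (not part of the original text), the entries of the quadratic form matrix can be written compactly as $F_{ij}=\min(i,j)-ij/N$; a few lines of numpy confirm that this reproduces the displayed matrix and inverts the $A_{N-1}$ Cartan matrix:

```python
import numpy as np

N = 5
cartan = 2*np.eye(N-1) - np.eye(N-1, k=1) - np.eye(N-1, k=-1)   # A_{N-1} Cartan matrix
i, j = np.indices((N-1, N-1)) + 1
F = np.minimum(i, j) - i*j/N                                     # quadratic form matrix
print(np.allclose(F @ cartan, np.eye(N-1)))                      # True: F is the inverse
print(F*N)   # first row: N-1, N-2, ..., 1, matching the displayed matrix
```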
Therefore, in addition to providing a description of new states in $SU(N)$ $q$-deformed Yang-Mills theory, our expression in (\[eq:wf1\]) can be used to construct closed-form expressions for Schur indices of a doubly infinite set of strongly interacting SCFTs. This proposal completes the construction of all such indices for Argyres-Douglas theories of type $(A_N, A_M)$. Indeed, these indices have not been previously constructed for theories of type $(A_{N-1}, A_{N(n-1)-1})$ with $N>2$ (with the exception of the $(A_3, A_3)$ and $(A_2, A_5)$ cases for which expressions involving integrals over gauge groups exist [@Buican:2015ina; @Buican:2015tda], but no simple sum of the type in (\[eq:general\]) has been found[^7]).
One interesting aspect of the $(A_{N-1}, A_{N(n-1)-1})$ theories with $N>2$ is that they typically have exactly marginal deformations (if $n=2$, there are $N-3$ such deformations, and, if $n>2$, there are $N-2$ exactly marginal deformations). While the index is an invariant of the resulting conformal manifolds, the $S$-duality groups (see the interesting recent discussion in [@Caorsi:2016ebt]) act on the index through discrete symmetries. Our compact expressions for the Schur indices make it possible to explore the discrete symmetries of the index efficiently.
Moreover, as we will see in detail, our formulae encode a highly non-trivial set of renormalization group (RG) flows that typically start from conformal manifolds in the ultraviolet (UV) and often map them to products of conformal manifolds in the IR along with various isolated factors. While we leave a deeper exploration of such RG flows and the laws they obey to future work, we develop a simple “monopole vev RG flow” formalism to study these flows in the theories related by mirror symmetry to the $S^1$ reductions of our AD theories of interest (we explain why the reduction along the circle commutes with the RG flow).
Another aspect of our proposal is that it immediately gives us an infinite set of new superconformal indices for free. Indeed, simply by including already-existing expressions for wave functions corresponding to an additional regular puncture in the $SU(N)$ $q$-deformed Yang-Mills theory, we generate Schur indices for infinitely many so-called type
Introduction
============
The Standard Model of particle physics accounts successfully for all subatomic observational data. The gauge charges of the Standard Model matter states suggest its embedding in an $SO(10)$ Grand Unified Theory, which is broken to the Standard Model at the GUT or string scale. The $SO(10)$ unification picture is further supported by: the logarithmic evolution of the Standard Model gauge parameters; the proton longevity; and the suppression of left–handed neutrino masses. The heterotic–string [@gross] produces chiral $SO(10)$ representations in its perturbative spectrum, and is therefore the one suited to explore the $SO(10)$ GUT structure underlying the Standard Model. Phenomenological studies of the heterotic–string have been pursued since the mid–eighties [@candelas], using a variety of world–sheet [@fff; @gepner; @bert] and target space techniques [@cy; @orbifolds].
The free fermionic construction of the heterotic–string in four dimensions produced a rich space of phenomenological three generation models. These models admit the underlying $SO(10)$ GUT embedding of the Standard Model spectrum. However, the $SO(10)$ symmetry is broken directly at the string level. The early studies of these models consisted of isolated examples that shared an underlying NAHE–base structure [@nahe]. Examples in which the $SO(10)$ symmetry is broken to the: flipped $SU(5)$ (FSU5) [@revamp]; $SO(6)\times SO(4)$ Heterotic String Pati–Salam Models (HSPSM) [@alr]; $SU(3)\times SU(2)\times U(1)^2$ Standard–like Models (SLM) [@slm]; $SU(3)\times SU(2)^2\times U(1)$ left–right symmetric (LRS) [@lrs]; and $SU(4)\times SU(2)\times U(1)$ (SU421) [@su421]; subgroups were studied. Among those, the FSU5, SLM, HSPSM, and LRS cases produced quasi–realistic three generation models, whereas the SU421 case did not produce any viable three generation model. The advantage of the SU421 models compared to the FSU5 and HSPSM is that they admit both the doublet–triplet and the doublet–doublet splitting mechanisms [@su421]. We also note the recent interest in SU421 models from purely phenomenological considerations [@wise].
The phenomenological free fermionic heterotic–string models are ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ orbifolds that are constructed at enhanced symmetry points in the moduli space [@Z2Z2Faraggi1994; @Z2Z2Kounnas1997]. Many of the phenomenological properties of the models are rooted in their underlying ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ structure [@recentreview]. In recent years systematic methods for the classification of symmetric ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ free fermionic orbifolds were developed in [@typeIIclassi] for type II superstrings and in refs. [@fknr; @fkr] for symmetric ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ heterotic–string orbifolds with $SO(10)$ GUT symmetry. The classification was extended in refs. [@acfkr; @cfr; @rizos] and [@frs] to string vacua in which the $SO(10)$ symmetry is broken to the $SO(6)\times SO(4)$ Pati–Salam and to the flipped $SU(5)$ subgroups, respectively. The Pati–Salam class of free fermionic vacua produced examples of three generation exophobic models in which exotic fractionally charged states only appear in the massive string spectrum [@acfkr; @cfr], whereas the flipped $SU(5)$ class of models did not produce exophobic models with an odd number of generations [@frs].
In this paper we discuss the classification for the class of SU421 heterotic–string models. We provide a general argument that breaking the $SO(10)$ symmetry to this subgroup cannot produce three chiral generations in the prevalent free fermionic construction, which is based on a symmetric ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ toroidal compactification with ${\mathbb{Z}}_2 \times {\mathbb{Z}}_4$ fermionic boundary conditions that break the $SO(10)$ symmetry to $SU(4)\times SU(2)\times U(1)$.
$SU(4) \times SU(2) \times U(1)$ Phenomenology {#analysis}
==============================================
The field theory content of the $N=1$ supersymmetric $SU(4)_C\times SU(2)_L\times U(1)_R$ model[^1] was discussed in ref. [@su421]. The SU421 class of heterotic–string models differs from the HSPSM models in the breaking of $SU(2)_R\rightarrow U(1)_R$ directly at the string level. Similar to the HSPSM, the SU421 heterotic–string models admit the $SO(10)$ embedding and the chiral states are obtained from the spinorial [**16**]{} representation of $SO(10)$, which decomposes in the following way: $$\begin{aligned}
F_L^{i} &=& \left(4,2,0\right) = \left(3,2,\tfrac{1}{3},0\right) + \left(1,2,-1,0\right) = {ud}^{i}+{e\nu}^{i}, \label{SU421fl}\\
U_R^{i} &=& \left(\bar{4},1,-\tfrac{1}{2}\right) = \left(\bar{3},1,-\tfrac{1}{3},-\tfrac{1}{2}\right) + \left(1,1,+1,-\tfrac{1}{2}\right) = {u^{c}}^{i}+{N^{c}}^{i}, \label{SU421ur}\\
D_R^{i} &=& \left(\bar{4},1,+\tfrac{1}{2}\right) = \left(\bar{3},1,-\tfrac{1}{3},+\tfrac{1}{2}\right) + \left(1,1,+1,+\tfrac{1}{2}\right) = {d^{c}}^{i}+{e^{c}}^{i}. \label{SU421dr}\end{aligned}$$ The first and second equalities show the decomposition under $SU(4)_C\times SU(2)_L\times U(1)_R$ and $SU(3)_C\times SU(2)_L\times U(1)_{B-L}\times U(1)_R$, respectively. The electroweak $U(1)_Y$ current is given by $$U(1)_Y=\tfrac{1}{2}U(1)_{B-L}+U(1)_R. \label{ewu1current}$$ From eq. (\[SU421fl\]) we note that $F_L$ produces the quark and lepton weak doublets, and that $U_R$ and $D_R$ produce the right–handed weak singlets. The two Higgs multiplets of the Minimal Supersymmetric Standard Model, $h^u$ and $h^d$, are given by $$\begin{aligned}
h^d &=& \left(1,2,-\tfrac{1}{2}\right),\\
h^u &=& \left(1,2,+\tfrac{1}{2}\right). \label{SU421mssmhigss}\end{aligned}$$ The heavy Higgs states that are responsible for breaking the $SU(4)_C\times U(1)_{R}$ gauge symmetry to the Standard Model group $SU(3)_C\times U(1)_Y$ are
---
abstract: 'An efficient first principles method was developed to calculate spin transfer torques in layered systems with noncollinear magnetization. The complete scattering wave function is determined by matching the wave function in the scattering region with the Bloch states in the leads. The spin transfer torques are obtained with the aid of the scattering wave function. We applied our method to ferromagnetic spin valves and found that the material (Co, Ni and Ni$_{80}$Fe$_{20}$) dependence of the spin transfer torques can be well understood in terms of the Fermi surface. Ni has a much longer spin injection penetration length than Co. Interfacial disorder is also considered. It is found that the spin transfer torques can be enhanced by interfacial disorder in some systems.'
author:
- Shuai Wang
- Yuan Xu
- Ke Xia
title: First principles study on the spin transfer torques
---
Introduction
============
Spin angular momentum can be transferred by flowing electrons from one ferromagnetic (FM) material to another, an effect known as the spin transfer torque (STT), introduced by Slonczewski[@J.Slonc96] and Berger[@Berger96]. Those two seminal studies have shown that the dynamics of the magnetization in a FM material can be dominated by the spin torques carried by an electric current. The excitation of coherent precession of the magnetization and of spin waves was predicted. The STT was soon identified in experiments[@experiment] by the clear observation of magnetization switching in FM spin valves, which excited great interest in experiment and theory[@sun00; @zhangsc98; @Waintal; @Brataas_circuit; @MDstiles02; @stiles02; @Edwards05; @PMHaney].
The theories[@Waintal; @Brataas_circuit; @MDstiles02; @stiles02] combining the quantum treatment of the interface scattering and the Boltzmann-like treatment of the bulk scattering work reasonably well for experiments on metallic systems. However, recent experiments on tunnelling systems[@Fuchs] and magnetic domain walls[@exp_domainwall] call for a full quantum treatment of the whole system. Edwards *et al.*[@Edwards05] obtained the torques of a spin valve in the empirical tight-binding framework, and Haney *et al.*[@PMHaney] calculated the torques in a similar structure with the nonequilibrium Green’s function (NEGF) method based on an LCAO basis.
Both semiclassical and quantum mechanical studies show that the STT is most significant near the nonmagnet (NM)$|$FM interfaces in the spin valve. Up to now, only a few studies have addressed the material dependence of the spin torque, which could be an important issue as the spin dependent transport is greatly affected by the electronic structure of the FM[@Maciej_decay; @mixing_G_Turek]. Furthermore, previous studies focused on ideal structures without considering the disorder at the FM$|$NM interface, which should exist in realistic spin valves[@flip06].
The main aim of this paper is to formulate a method to calculate the STT of a noncollinearly magnetized system within the first principles framework. Differing from the previous Green function based work[@PMHaney], we obtain the complete scattering wave functions of the whole system[@Xia06]. The STT[@MDstiles02] is formulated in the tight-binding representation. Large systems such as domain walls can be well treated in this framework[@tang06]. We apply our formalism to the Co$|$Cu$|$FM$|$Cu spin valve system with impurity scattering at the FM$|$NM interface. Our study shows that the STT can penetrate deep into the ferromagnetic material for Ni, which is quite different from Co. It is also found that the average torques are enhanced in the presence of interfacial disorder.
This paper is organized as follows. In Sec. II, we present the details of the formalism for constructing the eigenmodes of the leads and computing the STT in a spin valve. Note that not only the transmission and reflection coefficients but also the wave function in the scattering region is obtained explicitly. In Section III, the method is used to calculate the conductance and STT in Co$|$Cu$|$FM$|$Cu(111) systems, where FM is Co, Ni, or Ni$_{80}$Fe$_{20}$. The effect of interfacial disorder is discussed. In Sec. IV, we summarize our results.
Theoretical model
=================
Let us focus on the spin transport and STT in the layered systems sketched in Fig.\[config\]. The scattering region $\mathbf{S}$, which is denoted by the layer index $1\leq I\leq N$, is sandwiched by left$\left( \mathbf{L}%
\right) $ and right$\left( \mathbf{R} \right) $ leads. For this device, there exists perfect lattice periodicity in the $X$-$Z$ plane. The particle current flows along the $Y$ axis. In the scattering region no periodicity is assumed along the current direction. Here the atomic potentials were determined by the tight-binding linearized muffin-tin-orbital (TB-LMTO) surface Green’s function (SGF) method[@I.Turek97book]. When combined with the coherent potential approximation (CPA), this method allows the electronic structure, charge, and spin densities of layered materials with substitutional disorder to be calculated self-consistently with high efficiency. To model the noncollinear system in the spin valve, the rigid potential approximation is used. In this approximation, we rotate the potential of the fixed magnet in spin space to construct the relative angle between the polarization directions of the fixed and free magnets, which is a good approximation as the two magnets are spaced far enough apart by a Cu layer.
![(color online) Sketch of the configuration used for current-induced switching. A scattering region is sandwiched by left-hand ($\mathbf{L}$) and right-hand ($\mathbf{R}$) leads which have translational symmetry and are partitioned into principal layers perpendicular to the transport direction. The scattering region contains $N$ principal layers but the structure and chemical composition are in principle arbitrary. The switching layer FM can be Co, Ni, or Ni$_{80}$Fe$_{20}$.[]{data-label="config"}](config_1){width="8.6cm"}
Following previous work[@Xia06], we describe the theoretical frame developed with wave-function matching (WFM) based on TB-LMTO basis for studying the STT. In Sec. II A, we review the Hamiltonian and KKR equation for a device with noncollinear magnetization. The equation of motion (EOM) for layered system is extracted from KKR equation. In the Sec.II B, the boundary conditions of the EOM are formulated in terms of the Bloch states in the leads. In the Sec. II C, by solving the EOM in the scattering region with embedding potentials of the two leads, we obtain the complete scattering wave function of the scattering region. In the Sec. II D and E, the particle current and spin current are formulated with those obtained scattering wave function expanded in TB-LMTO basis.
Hamiltonian and KKR equation
----------------------------
For layered systems, atoms can always be grouped into principal layers defined to be so thick that the interactions between layers $I$ and $I\pm 2$ are negligible, as shown in Fig.\[config\].
The EOM for the $I$th principal layer can be written as $$\mathbf{H}_{I,I-1}\mathbf{a}_{I-1}+\left( \mathbf{H}-E\right) _{II}\mathbf{a}_{I}+\mathbf{H}_{I,I+1}\mathbf{a}_{I+1}=0, \label{eom}$$ where $E$ is always set to the Fermi energy $E_{F}$ for the transport problem. Here, $\mathbf{a}_{I}$ is a vector describing the amplitudes of the $I$th layer in terms of the localized orbital basis $\left\vert \mathbf{R}L\zeta \right\rangle $, where $\mathbf{R}$ is the site index and $L$ is defined by $L\equiv (l,m)$. $l$ and $m$ are the azimuthal and magnetic quantum numbers, respectively. $\zeta =\uparrow \left( \downarrow \right) $ denotes that the basis is an eigenstate in spin space, parallel (antiparallel) to the spin quantization axis.
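Since the EOM (\[eom\]) is a three-term recursion in the layer index, the amplitudes can be propagated layer by layer once two neighboring vectors are known, $\mathbf{a}_{I+1}=-\mathbf{H}_{I,I+1}^{-1}\left[\left(\mathbf{H}-E\right)_{II}\mathbf{a}_{I}+\mathbf{H}_{I,I-1}\mathbf{a}_{I-1}\right]$. The following schematic sketch uses randomly generated blocks and toy parameters (it is not the actual TB-LMTO Hamiltonian nor the full wave-function-matching procedure with lead boundary conditions) just to illustrate that propagation step:

```python
import numpy as np

# Toy on-site and hopping blocks; real symmetric matrices stand in for Hermitian ones here.
rng = np.random.default_rng(0)
n_orb, n_layers, E = 4, 6, 0.3
H_on = []
for _ in range(n_layers):
    a = rng.standard_normal((n_orb, n_orb))
    H_on.append((a + a.T) / 2)                            # (H)_{I,I}
H_hop = [rng.standard_normal((n_orb, n_orb)) for _ in range(n_layers - 1)]   # H_{I,I+1}

a_prev = np.zeros(n_orb)                                  # a_0, e.g. set by the left lead
a_curr = rng.standard_normal(n_orb)                       # a_1
amps = [a_prev, a_curr]
for I in range(1, n_layers - 1):
    rhs = (H_on[I] - E*np.eye(n_orb)) @ a_curr + H_hop[I-1].T @ a_prev   # H_{I,I-1} = H_{I-1,I}^T
    a_prev, a_curr = a_curr, -np.linalg.solve(H_hop[I], rhs)
    amps.append(a_curr)

# every interior layer now satisfies the EOM up to round-off
for I in range(1, n_layers - 1):
    res = H_hop[I-1].T @ amps[I-1] + (H_on[I] - E*np.eye(n_orb)) @ amps[I] + H_hop[I] @ amps[I+1]
    print(I, np.linalg.norm(res))
```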
To the first order approximation of the full LMTO Hamiltonian, a short-range TB-LMTO Hamiltonian in the $\alpha $ representation[@J.Kudron00; @Andersen85book] in the global coordinate system can be written as $$\begin{aligned}
\mathbf{H}_{\mathbf{R}L,\mathbf{R}^{\prime }L^{\prime }}^{\alpha } &=&U_{%
\mathbf{R}}\mathcal{\overline{C}}_{\mathbf{R}L}^{\alpha }U_{\mathbf{R}%
^{\prime }}^{\dagger }\delta _{\mathbf{R}^{\prime }L^{\prime }\mathbf{R}L}
\notag \label{hamiltonian} \\
&&+[U_{\mathbf{R}}\left( \overline{\Delta }_{\mathbf{R}L}^{\alpha }\right) ^{%
\frac{1}{2}}U_{\mathbf{R}}^{\dag }S_{\mathbf{R}L,\mathbf{R}^{\prime
}L^{\prime }}^{\alpha } \notag \\
&&\times U_{\mathbf{R}^{\prime }}\left( \overline{\Delta }_{\mathbf{R}%
---
abstract: 'The coherence of electron spin qubits in semiconductor quantum dots suffers mostly from low-frequency noise. During the last decade, efforts have been devoted to mitigate such noise by material engineering, leading to substantial enhancement of the spin dephasing time for an idling qubit. However, the role of the environmental noise during spin manipulation, which determines the control fidelity, is less understood. We demonstrate an electron spin qubit whose coherence in the driven evolution is limited by high-frequency charge noise rather than the quasi-static noise inherent to any semiconductor device. We employed a feedback control technique to actively suppress the latter, demonstrating a $\pi$-flip gate fidelity as high as $99.04\pm 0.23\,\%$ in a gallium arsenide quantum dot. We show that the driven-evolution coherence is limited by the longitudinal noise at the Rabi frequency, whose spectrum resembles the $1/f$ noise observed in isotopically purified silicon qubits.'
author:
- Takashi Nakajima
- Akito Noiri
- Kento Kawasaki
- Jun Yoneda
- Peter Stano
- Shinichi Amaha
- Tomohiro Otsuka
- Kenta Takeda
- 'Matthieu R. Delbecq'
- Giles Allison
- Arne Ludwig
- 'Andreas D. Wieck'
- Daniel Loss
- Seigo Tarucha
title: 'Coherence of a driven electron spin qubit actively decoupled from quasi-static noise'
---
[^1]
[^2]
Introduction: Noise in Spin Qubits
==================================
Since electrical manipulation of a single spin was demonstrated in semiconductor quantum dots[@Koppens2006], enormous efforts have been devoted to improve spin coherence by controlling[@Foletti2009; @Bluhm2010] or eliminating[@Veldhorst2014; @Eng2015; @Yoneda2017] nuclear spins, a magnetic noise source inherent to the host material[@Coish2004; @Merkulov:2002ft; @Khaetskii:2002jw]. The progress is impressive: for example, dephasing times of $120\,\mu\text{s}$ in $^{28}$Si and $2\,\mu\text{s}$ in GaAs have been demonstrated[@Veldhorst2014; @Shulman2014]. It is natural to expect that prolonging the spin coherence also improves the qubit control fidelity. However, while the spin coherence is dominated by low-frequency (quasi-static) noise, control fidelity of a qubit is often impeded by noise at higher frequencies[@Martinis2003; @Ithier2005; @Lisenfeld2010; @Yoshihara2014]. The underlying relationship between the control fidelity and spin coherence remains elusive because there are different noise sources that could dominate in different frequency ranges, such as nuclear spin diffusion and charge fluctuators (see Fig. \[fig:noise\]). The former shows a $1/f^{\beta}$ spectrum with $3 > \beta > 1$ in GaAs[@Medford2012; @Malinowski2017a] and possibly in natural Si devices[@Kawakami2016], while the latter with $\beta \sim 1$ can dominate in $^{28}$Si devices[@Yoneda2017]. In general, the dominant noise source depends on the material and structure of the quantum dot device as well as the frequency range of interest.
![Example of noise power spectra for spin qubits with and without feedback. A typical noise spectrum composed of $1/f^2$ and $1/f$ noise is shown in a log-log plot (black). The feedback control acts like an active filter suppressing the low-frequency noise (red). Shown on the bottom are relevant frequencies with $\Delta t$ the feedback latency, $t$ the qubit evolution time at which the coherence is evaluated, and $f_\text{rabi}$ the Rabi frequency. \[fig:noise\]](Fig0){width="40.00000%"}
To understand the limits on the qubit control fidelity imposed by those different mechanisms, we build a feedback-controlled circuit which implements realtime Hamiltonian estimation[@Shulman2014]. It allows us to suppress the low-frequency noise[@Yang2019] and resolve the $1/f$ charge and nuclear spin noise at high frequencies. We analyze how the low-frequency and high-frequency parts of the noise compete with each other and discuss the limitations of the high-fidelity control.
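To make the probe/update/target cycle concrete, the following toy simulation (all numbers are made up for illustration and are not the experimental parameters) lets the qubit frequency drift as a random walk, estimates the detuning $\delta f$ from a simulated Ramsey fringe with projection noise, and feeds the estimate back; the residual detuning is then compared with the open-loop drift.

```python
import numpy as np
rng = np.random.default_rng(1)

# --- toy parameters, for illustration only ---
n_cycles = 1000                      # probe/update/target feedback cycles
sigma_rw = 0.05e6                    # random-walk step of the qubit frequency per cycle [Hz]
delta_p  = 20e6                      # deliberate probe detuning, resolves the sign of df [Hz]
shots    = 100                       # single-shot readouts per Ramsey point
t_ramsey = np.arange(1, 65) * 1e-9                     # Ramsey evolution times [s]
f_grid   = np.linspace(10e6, 30e6, 801)                # search grid for the fringe frequency
kernel   = np.exp(-2j*np.pi*np.outer(f_grid, t_ramsey))

def estimate_detuning(df_true):
    """Simulate one Ramsey probe and return the estimated detuning."""
    p = 0.5*(1.0 + np.cos(2*np.pi*(delta_p + df_true)*t_ramsey))
    p_meas = rng.binomial(shots, p)/shots              # projection (shot) noise
    spec = np.abs(kernel @ (p_meas - 0.5))             # simple periodogram
    return f_grid[np.argmax(spec)] - delta_p

f_qubit = f_est = 0.0                # frequencies relative to an arbitrary reference
res_fb, res_open = [], []
for _ in range(n_cycles):
    f_qubit += rng.normal(0.0, sigma_rw)               # quasi-static drift
    f_est   += estimate_detuning(f_qubit - f_est)      # "probe" and "update" steps
    res_fb.append(f_qubit - f_est)                     # detuning seen by the "target" step
    res_open.append(f_qubit)                           # open loop: estimate never updated

print("rms detuning with feedback   : %.1f kHz" % (np.std(res_fb)/1e3))
print("rms detuning without feedback: %.1f kHz" % (np.std(res_open)/1e3))
```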
Device and experimental setup
=============================
We use a triple quantum dot (TQD) device fabricated on a GaAs/AlGaAs heterostructure wafer. An electron is confined in each quantum dot (QD) by the electrostatic potentials induced by Ti/Au gate electrodes. A Co micromagnet is placed on the surface and magnetized by a magnetic field of $B_\text{ext}=1.01\,\text{T}$ applied in the $z$-direction (see Fig. \[fig:ramsey\]a), creating an inhomogeneous magnetic field over the QD array. The single electron spin qubit reported in this work is located in the middle QD and manipulated by electric-dipole spin resonance (EDSR)[@Pioro-Ladriere:2008kx; @Tokura:2006ir; @Yoneda2014]. It is initialized and measured using the ancilla electron spin in the right QD[@Noiri2016], see Fig. \[fig:ramsey\]b. An up-spin state of the qubit is prepared by initializing a doubly-occupied singlet ground state in the right QD and loading one of the electrons to the middle QD. The voltage ramp is chosen to be adiabatic with respect to the inter-dot tunnel gap and the local magnetic field difference between the two dots but non-adiabatic with respect to the hyperfine gap. The final state is read out by unloading an up-spin state to the right QD in the reverse process, while leaving a down-spin state blocked in the middle QD. The experiment is conducted in a dilution refrigerator with an electron temperature of $120\,\text{mK}$.
![Ramsey meausurement and feedback-control scheme of an electron spin qubit. (a) False-colored scanning electron micrograph image of the TQD device. An electron spin qubit in the middle QD (red arrow with a circle) is controlled by the EDSR where the spin is coupled to a microwave (MW) electric field via a stray magnetic field of the micromagnet deposited on the wafer surface[@Pioro-Ladriere:2008kx]. The right QD hosts an electron spin (blue arrow with a circle) used as a readout ancilla while the left QD hosts another electron which is unused and decoupled from the two spins. The energy detuning between the middle and the right QDs ($\varepsilon$) is gate-tunable and the QD electron occupancies are probed by a proximal single-electron transistor (SET)[@Barthel:2010fk]. (b) Schematic of the Ramsey measurement. Two electrons (qubit and ancilla) are initialized to a doubly-occupied singlet state in the right QD and an up-spin qubit is prepared by adiabatically loading one of the electrons to the middle QD[@Noiri2016]. Two $\pi/2$ microwave bursts, separated by time $t_\text{R}$, are applied (before and during these, off-resonant microwave bursts are optionally applied). The ancilla-spin state is not affected by the microwave bursts. The final state is read out by unloading an up-spin (anti-parallel to the ancilla) state from the middle QD while a down-spin (parallel to the ancilla) state remains blocked. (c) Up-spin probability $P_\uparrow$ as a function of $t_\text{R}$. The lower panel shows the Ramsey oscillations whose frequency varies with the laboratory time due to Overhauser field fluctuations. Each data point of $P_\uparrow$ is calculated from one hundred single-shot readout outcomes. The upper panel shows the trace obtained by averaging all the oscillations in the lower panel. The decay envelope gives the dephasing time of $T_{2}^{*}=28.4\,\text{ns}$, a value typical for electron spins in GaAs heterostructures. (d) Schematic of the feedback control loop for a spin qubit. Data of a Ramsey oscillation as shown in (c) are processed in a digital signal processing (DSP) hardware with programmable logic (FPGA) to estimate the frequency detuning $\delta f = f_\text{qubit} - {f^\text{est}_\text{qubit}}$ between the current qubit frequency $f_\text{qubit}$ and its previous estimate ${f^\text{est}_\text{qubit}}$ (“probe” step). The value of ${f^\text{est}_\text{qubit}}$ is updated to ${f^\text{est}_\text{qubit}}\mapsto {f^\text{est}_\text{qubit}}+ \delta f$ (“update” step), after which the target experiment follows (“target” step). In the ideal case, the subsequent qubit algorithms can be executed with a microwave frequency $f_\text{MW}$ matching $f_\text{qubit}$ exactly (by choosing $\Delta=0$). \[fig:ramsey\]](Fig1){width="80.00000%"}
We first perform a standard Rabi measurement[@Noiri2016] to roughly identify the Rabi frequency $f_\text{rabi}$ and the qubit resonance frequency $f_\text{qubit}=|g\mu_\text{B}B_\text{total}|/h$. Here $g$ is the electron $g$-factor
---
abstract: 'This paper focuses on interior penalty discontinuous Galerkin methods for second order elliptic equations on very general polygonal or polyhedral meshes. The mesh can be composed of any polygons or polyhedra which satisfy certain shape regularity conditions characterized in a recent paper by two of the authors in [@WangYe2012]. Such general meshes have important applications in computational sciences. The usual $H^1$ conforming finite element methods on such meshes are either very complicated or impossible to implement in practical computation. However, the interior penalty discontinuous Galerkin method provides a simple and effective alternative approach which is efficient and robust. This article provides a mathematical foundation for the use of interior penalty discontinuous Galerkin methods on general meshes.'
author:
- 'Mu Lin[^1]'
- 'Junping Wang[^2]'
- 'Yanqiu Wang[^3]'
- 'Xiu Ye[^4]'
title: Interior penalty discontinuous Galerkin method on very general polygonal and polyhedral meshes
---
discontinuous Galerkin, finite element, interior penalty, second-order elliptic equations, hybrid mesh.
65N15, 65N30.
Introduction
============
Most finite element methods are constructed on triangular and quadrilateral meshes, or on tetrahedral, hexahedral, prismatic, and pyramidal meshes. To extend the idea of the finite element method into meshes employing general polygonal and polyhedral elements, one immediately faces the problem of choosing suitable discrete spaces on general polygons and polyhedrons. This issue has rarely been addressed in the past, partly because it can usually be circumvented by dividing the polygon or polyhedron into sub-elements using only one or two basic shapes. However, allowing the use of general polygonal and polyhedral elements does provide more flexibility, especially for complex geometries or problems with certain physical constraints. One such example is the modeling of composite microstructures in material sciences. A well-known solution to this problem is the Voronoi cell finite element method [@Ghosh94; @Ghosh95; @Ghosh04; @Moorthy98], in which the mesh is composed of polygons or polyhedrons representing the grained microstructure of the given material. The main difficulty of constructing conforming finite element methods on Voronoi meshes is that the finite element space has to be carefully chosen so that it is continuous along interfaces. Although the constructions on triangles, quadrilaterals, or three-dimensional simplexes are straightforward, it is not easy for general polygons and polyhedrons. Probably the only practically used solution is the rational polynomial interpolants proposed by Wachspress [@Wachspress75], in which rational basis functions are defined using distances from several “nodes”. An important constraint in the construction of the Wachspress basis is that the rational basis functions need to be piecewise linear along the boundary of every element, in order to ensure $H^1$ conformity of the finite element space. This not only limits the approximation order of the entire Wachspress finite element space, but also complicates the construction. The Wachspress element has gained a renewed interest recently [@Dasgupta03; @Dasgupta03b; @Sukumar06]. However, as we have pointed out above, its construction is complicated and usually requires the aid of computational algebraic systems such as Maple.
Another practically important issue is to define finite element methods on hybrid meshes. Hybrid meshes are frequently used nowadays. They can handle complicated geometries and can sometimes reduce the total number of unknowns. Another possible reason for using hybrid meshes is that some engineers argue that, in three dimensions, a hexahedral mesh yields a more accurate solution than a tetrahedral mesh for the same geometry [@Yamakawa03; @Yamakawa09], as partly verified by numerical experiments. However, pure hexahedral meshes lack the ability to handle complicated geometries. Hence a hybrid mesh becomes a welcome compromise between accuracy and flexibility. For conforming finite element methods based on hybrid meshes, continuity requirements on interfaces must be satisfied. Such a coupling is straightforward for the $H^1$-conforming finite elements on a triangular-quadrilateral hybrid mesh. However, for three-dimensional meshes, high order finite elements, or other complicated finite element spaces, it usually requires special treatment.
An alternative solution, which can address both issues mentioned above, is to use the weak Galerkin method proposed in [@WangYe2012]. The weak Galerkin method uses discontinuous piecewise polynomials inside each element and on the interfaces to approximate the variational solution. In [@WangYe2012], the authors have proved optimal convergence of the weak Galerkin method for the mixed formulation of second order elliptic equations on very general polygonal and polyhedral meshes. Most of the existing error analyses of finite element methods assume triangular, quadrilateral, or some commonly-seen three-dimensional meshes. To our knowledge, it is the first time that optimal convergence for the finite element solutions has been rigorously proved in [@WangYe2012] for general meshes of arbitrary polygons and polyhedrons.
The discontinuous Galerkin method imposes the interface continuity weakly, and is known to be able to handle non-conformal, hybrid meshes as well as a variety of basis functions. There have been many research works in this direction, for example, nodal discontinuous Galerkin methods [@Bergot10; @Cohen00; @Hesthaven00] for hyperbolic conservation laws. However, we would like to point out that so far there has been no theoretical analysis on the convergence rate of discontinuous Galerkin method, on very general polygonal or polyhedral meshes yet. Motivated by the work in [@WangYe2012], here we would like to fill the gap. The objective of this paper is to establish the theoretical analysis of the interior penalty discontinuous Galerkin method [@Arnold02] for elliptic equations on very general meshes and discrete spaces.
The paper is organized as follows. In Section 2, we briefly describe the interior penalty discontinuous Galerkin method in an abstract setting. In Section 3, several assumptions on the discrete spaces are listed, which form a minimum requirement for the well-posedness and the approximation property of the discrete formulation. Abstract error estimations are given. In Section 4, we discuss choices of meshes and discrete spaces that satisfy the assumptions given in Section 3. Finally, numerical results are presented in Section 5.
The model problem and the interior penalty method
=================================================
Consider the model problem $$\label{eq:ellipticeq}
\begin{cases}
-\Delta u=f\qquad &\mbox{in }\Omega,\\
u=0 &\mbox{on }\partial\Omega,
\end{cases}$$ where $\Omega\subset\mathbb{R}^d$ $(d=2,3)$ is a closed domain with Lipschitz continuous boundary, and $f\in L^2(\Omega)$.
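Before turning to the abstract setting, the following minimal sketch shows a symmetric interior penalty discretization of the one-dimensional analogue $-u''=f$ on $(0,1)$ with homogeneous Dirichlet data, using fully discontinuous piecewise linear elements (a standard SIPG construction given here only for orientation; it is not the general polygonal and polyhedral formulation analyzed in this paper):

```python
import numpy as np

def sipg_poisson_1d(N=32, sigma=10.0):
    """Minimal SIPG discretization of -u'' = f on (0,1), u(0)=u(1)=0,
    with discontinuous piecewise-linear elements on a uniform mesh."""
    h = 1.0 / N
    ndof = 2 * N                       # two local basis functions per element
    A = np.zeros((ndof, ndof))
    b = np.zeros(ndof)
    f = lambda x: np.pi**2 * np.sin(np.pi * x)      # manufactured solution u = sin(pi x)

    # element (volume) contributions: stiffness and load (2-point Gauss rule)
    gauss = ((-1/np.sqrt(3.0), 1.0), (1/np.sqrt(3.0), 1.0))
    for i in range(N):
        xl, dofs = i*h, [2*i, 2*i+1]
        A[np.ix_(dofs, dofs)] += (1.0/h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for xi, w in gauss:
            x = xl + 0.5*h*(xi + 1.0)
            phi = np.array([(xl + h - x)/h, (x - xl)/h])
            b[dofs] += 0.5*h*w*f(x)*phi

    # traces of the local basis (values and derivatives) at an element's endpoints
    v_left, v_right = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    dphi = np.array([-1.0/h, 1.0/h])

    # interior faces: -{u'}[v] - [u]{v'} + (sigma/h)[u][v], with [v] = (left trace) - (right trace)
    for k in range(1, N):
        L, R = k-1, k
        dofs = [2*L, 2*L+1, 2*R, 2*R+1]
        jump = np.concatenate([v_right, -v_left])
        avg = 0.5*np.concatenate([dphi, dphi])
        A[np.ix_(dofs, dofs)] += (-np.outer(jump, avg) - np.outer(avg, jump)
                                  + (sigma/h)*np.outer(jump, jump))

    # boundary faces (Nitsche-type terms enforcing u = 0)
    for elem, vals, n in ((0, v_left, -1.0), (N-1, v_right, 1.0)):
        dofs = [2*elem, 2*elem+1]
        dn = n*dphi                                  # outward normal derivative
        A[np.ix_(dofs, dofs)] += (-np.outer(vals, dn) - np.outer(dn, vals)
                                  + (sigma/h)*np.outer(vals, vals))

    u = np.linalg.solve(A, b)
    mid = (np.arange(N) + 0.5)*h                     # rough L2 error at element midpoints
    return np.sqrt(h*np.sum((0.5*(u[0::2] + u[1::2]) - np.sin(np.pi*mid))**2))

for N in (8, 16, 32, 64):
    print(N, sipg_poisson_1d(N))     # errors should decrease at roughly O(h^2)
```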
For any subdomain $K\subset \Omega$ with Lipschitz continuous boundary, we use the standard definition of Sobolev spaces $H^s(K)$ with $s\ge 0$ (e.g., see [@adams; @ciarlet] for details). The associated inner product, norm, and seminorms in $H^s(K)$ are denoted by $(\cdot,\cdot)_{s,K}$, $\|\cdot\|_{s,K}$, and $|\cdot|_{s,K}$, respectively. When $s=0$, $H^0(K)$ coincides with the space of square integrable functions $L^2(K)$. In this case, the subscript $s$ is suppressed from the notation of norm, semi-norm, and inner products. Furthermore, the subscript $K$ is also suppressed when $K=\Omega$. Finally, all the above notations can easily be extended to any $e\subset \partial K$. For the $L^2$ inner product on $e$, we usually denote it as $\langle\cdot,\cdot\rangle_{e}$ instead of $(\cdot,\cdot)_{e}$, as it can be replaced by the duality pair when needed.
For simplicity, we assume that $\Omega$ satisfy certain conditions such that Equation (\[eq:ellipticeq\]) has at least $H^{r}$ regularity with $r>3/2$, that is, the solution to Equation (\[eq:ellipticeq\]) satisfies $u\in H^{r}(\Omega)$ and $$\label{eq:regularity}
\|u\|_r \le C_R \|f\|.$$ This assumption is standard in the practice of interior penalty discontinuous Galerkin methods, as it ensures that the exact solution $u$ also satisfies the discontinuous Galerkin formulation, and thus the a priori error estimation can be easily derived in a Lax-Milgram framework. However, such a regularity assumption is not necessary in the practice of interior penalty methods. A well-known technique, which was first proposed by Gudi [@Gudi10], is to use a posteriori error estimation to derive an a priori error estimation for the interior penalty method, with only minimum regularity requirement $u\in H^1(\Omega)$. We believe that the same technique applies for the general polygonal and polyhedral meshes, as long as a working a posteriori error estimation is available. However, here we choose to completely skip this issue, as it is not the main purpose of this paper.
Assume that for all sets $K$ discussed in this paper, including $\Omega$ itself, the unit outward normal vector $\bn$ is defined almost everywhere on $\partial K$. Note that this is true for all polygonal and polyhedral elements with Lipschitz continuous boundaries. Since the exact
---
abstract: 'In the simplified setting of the Schwinger model we present a systematic study on the simulation of dynamical fermions by global accept/reject steps that take into account the fermion determinant. A family of exact algorithms is developed, which combine stochastic estimates of the determinant ratio with the exploitation of some exact extremal eigenvalues of the generalized problem defined by the ‘old’ and the ‘new’ Dirac operator. In this way an acceptable acceptance rate is achieved with large proposed steps and over a wide range of couplings and masses.'
author:
- |
Francesco Knechtli and Ulli Wolff[^1]\
Institut für Physik, Humboldt Universität\
Newtonstr. 15\
12489 Berlin, Germany
title: Dynamical fermions as a global correction
---
0.5 cm
HU-EP-03/12\
SFB/CCP-03-07
[^1]: e-mail: knechtli@physik.hu-berlin.de, uwolff@physik.hu-berlin.de
---
abstract: 'Interface states in a 1-D photonic crystal heterostructure with multiple interfaces are examined. The heterostructure is a periodic network consisting of two different photonic crystals. In addition, the two crystals themselves are periodic, with one being made of alternating binary layers and the other being a quaternary crystal with a tunable layer. The second crystal can thus be smoothly transformed from one binary crystal to another. All individual photonic crystals in the superstructure have symmetric unit cells, as well as identical periods and optical path lengths. Therefore, as the tunable layer in the quaternary crystal expands, other layers will shrink. It is found that the behavior of the localized modes in the band gaps is dependent on whether there is an even or odd number of interfaces in the heterostructure. With certain sequences of all dielectric photonic crystals, topological states are shown to split in two, whereas for other heterostructures they are shown to vanish. Additional resonant modes appear depending on how many crystals are in the heterostructure. If the tunable layer is frequency dependent, the band gap can still support topological/resonant modes with some band gaps even supporting two separate groups.'
author:
- 'Nicholas J. Bianchi'
- 'Leonard M. Kahn'
title: 'Optical States in a 1-D Superlattice with Multiple Photonic Crystal Interfaces'
---
Introduction
============
A photonic crystal (PC) is a periodic array of dielectrics and/or conductors used to scatter light [@Yablonovitch; @John]. In a similar manner to how semiconductors control the passage of electrons, PCs possess passbands which allow photons in certain frequency ranges to propagate through the crystal and photonic band gaps (PBGs), which inhibit photon flow, producing regions of suppressed transmission. The existence of these pass and stop bands is governed by Bloch’s Theorem. Photonic heterostructure devices are composed of multiple periodic components that can produce transmission properties and field localization not seen in isolated crystals [@Istrate1; @Istrate2]. Heterostructures with a single PC interface have been extensively studied. Examples of localized behavior are the surface or interface modes, also known as optical Tamm [@Tamm] states (OTSs). These modes can exist at a boundary only if their field amplitudes decay away as the distance from the boundary increases in either direction. This means the wavevectors must be imaginary. In the case of a PC, this occurs if the mode is trying to travel through a PBG. These modes have been found in a variety of photonic structures including 1-D [@Kavokin; @Vinogradov1; @Vinogradov2; @Gao] and 2-D [@Lin] PC interfaces, air-PC surfaces at oblique angles [@Feng], and PCs bordering media with a graded refractive index [@Zheng]. Tamm states have also been investigated in various systems containing a PC with a tunable cap layer adjacent to a uniform medium. Examples include PCs containing superconducting layers [@Abouti], systems containing metamaterials, both in the PC layers [@Wang; @Barvestani] and in the uniform medium [@Namdar], and systems with liquid crystal [@Hajian] and chiral [@Bashiri] cap layers. Note that in Ref. [@Feng], despite the PC being adjacent to a uniform medium with positive dielectric constant, surface modes can still form due to total internal reflection. The component of the wavevector parallel to the boundary, $k_\parallel$, is large enough to cause the normal component, $k_\bot$, to become imaginary. $$k_\bot = \sqrt{k^2 - k_\parallel^2} \label{wavevector}$$ A variant of OTSs is the Tamm plasmon-polariton (TPP) formed at a boundary between a metal and a PC [@Kaliteevski; @Brand; @Zhou]. In order for a TPP to form, the condition $$r_{\text{metal}}r_{\text{PC}}=1 \label{reflection}$$ must be satisfied. The reflection coefficient $r_{\text{metal}}$ describes the amplitude of the electric field, incident from the PC side of the interface, reflecting off the metallic surface. In the same manner, $r_{\text{PC}}$ describes the electric field amplitude from a wave incident from the metallic side reflecting off the PC surface of the interface. In the case described in Ref. [@Kaliteevski], the TPP is excited at a frequency below the plasma frequency of the metal, implying that $r_{\text{metal}}=-1$. Therefore, to ensure that Eq. \[reflection\] remains satisfied, $r_{\text{PC}}=-1$, implying that the higher index material in the PC should be adjacent to the metal. In Ref. [@Brand], the plasmon is produced above the plasma frequency. Since the permittivity of the metal is now positive, $r_{\text{metal}}$ flips sign. For the state to exist now, the sign of $r_{\text{PC}}$ must also flip, meaning that, in the PC, the low index material is adjacent to the metal. Similar to Ref. [@Feng], the state is supported on the metallic side by total internal reflection.
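A band gap of this kind is easy to reproduce with the transfer (characteristic) matrix of a finite binary stack; the sketch below uses made-up refractive indices and a quarter-wave design (not the parameters used later in this paper) and shows the strong suppression of the transmittance around the design wavelength:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of a homogeneous layer (index n, thickness d) at normal incidence."""
    delta = 2.0*np.pi*n*d/lam
    return np.array([[np.cos(delta), 1j*np.sin(delta)/n],
                     [1j*n*np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    """Intensity transmission through a stack of (index, thickness) layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    t = 2.0*n_in/(n_in*M[0, 0] + n_in*n_out*M[0, 1] + M[1, 0] + n_out*M[1, 1])
    return (n_out/n_in)*abs(t)**2

# hypothetical binary PC: quarter-wave layers at a reference wavelength lam0 = 1
nA, nB, lam0, periods = 1.5, 2.5, 1.0, 12
stack = [(nA, lam0/(4*nA)), (nB, lam0/(4*nB))]*periods
for lam in np.linspace(0.7, 1.5, 9):
    print("lambda = %.2f   T = %.3e" % (lam, transmittance(stack, lam)))   # dip near lam0
```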
If an interface is generated between two PCs with symmetric unit cells, localized states at the boundary can form that are governed by the bulk band structure of the two crystals. These states are referred to as topological interface states. Xiao *et al.* [@Xiao] showed that their existence in a PBG can be predicted by ensuring that the imaginary parts of the surface impedances for the two crystals sum to zero in the selected gap. Their work established a relation between the sign of the impedance $Z$ for a PBG and the sum of all Zak [@Zak] phases, $\theta^{\text{zak}}_m$, below the gap, where $m$ denotes the (isolated) bands, $$\text{sign}(\text{Im}(Z^{(n)})) = (-1)^{n+l}\exp \left(i \sum_{m=1}^{n} \theta^{\text{zak}}_m \right) \label{Zak}$$ In Eq. \[Zak\], $n$ is the PBG where the impedance is calculated and $l$ denotes the number of points where two bands cross below band gap $n$. Due to the PC unit cells possessing inversion symmetry, all Zak phases can only take on the values of $\pi$ or $0$ [@Zak], and thus provide a useful measure for identifying topological states. Band gap $n$ contains a topological state if $Z_L+Z_R=0$, where the subscripts indicate the PCs to the left/right of the interface. Through control of $\theta^{\text{zak}}_m$, topological states have been demonstrated in both 1-D [@Choi; @Cai] and 2-D [@Yang] systems.
Heterostructures with multiple PC/PC or PC/metallic interfaces have more degrees of freedom due to the increased number of tunable parameters, as compared to a single interface system, leading to a much richer display of resonant states. Through the control of parameters within the heterostructure, several examples of coupling between resonant states have been demonstrated [@Zhou; @Fei; @Iorsh; @Durach; @Cox; @Hu]. As an extension to the work in Ref. [@Bianchi], the behavior of interface states is investigated in a heterostructure consisting of alternating binary and quaternary PCs. If the number of binary and quaternary crystals in the structure is the same, then there is an odd number of interfaces. In this case, if the first PC in the heterostructure is binary (quaternary), then, after the alternating pattern, the last will be quaternary (binary). The original topological state from the two-crystal heterostructure remains but is now accompanied by a sequence of resonant states on either side. The total number of states, including the original, is equal to the number of interfaces. For an even number of interfaces, there are two possible configurations. One possibility is to have the first and last PC of the structure be binary. In this case, it is found that the original topological state vanishes while the resonant states remain. The other case is to have the first and last crystals be quaternary. With only a single binary PC sandwiched between two quaternary PCs, the topological state splits. If more layers are added in this scenario, keeping the two ends quaternary, the split state is joined by resonant states.
Methods
=======
Our work was conducted using the transfer matrix method (TMM) [@Yariv]. In keeping with Ref. [@Bianchi], all variables are made dimensionless for convenience. The lengths of the individual PC layers, $l_i$, are scaled to the unit cell period, $\Lambda$: $d_i=l_i/\Lambda$ and are such that $\Lambda$ and the optical path, $\Gamma$, for a unit cell are constant. In the heterostructure, shown as the middle image in Fig. \[PCM\], the periods for all the individual PCs are equal, as are the optical paths. The binary PCs are the gray regions and the quaternary PCs are the light blue regions. Since there is no fixed length scale, we set $\gamma=\Gamma/\Lambda$. For the quaternary PC, shown at the top of Fig. \[PCM\], the widths of layers $A$, in green, and $B$, in blue, can be expressed in terms of a free parameter, the width of the introduced layer, $d_C$, in orange, [@Bianchi],
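For readers who want to reproduce qualitative transmission spectra, a minimal normal-incidence transfer-matrix sketch is given below. It uses a textbook characteristic-matrix formulation rather than the exact conventions of Ref. [@Yariv], and the refractive indices, layer widths and wavelengths are placeholders, not the parameters of the heterostructure studied in this work.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam          # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    """Intensity transmittance of a stack of (index, thickness) layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    (m11, m12), (m21, m22) = M
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    return (n_out / n_in) * abs(2 * n_in / denom) ** 2

# Hypothetical binary unit cell repeated 10 times; lengths share the unit of lam.
cell = [(2.0, 0.25), (1.5, 0.50)]
stack = cell * 10
for lam in np.linspace(1.0, 3.0, 5):
    print(f"lambda = {lam:.2f}  T = {transmittance(stack, lam):.3e}")
```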
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We demonstrate the absence of a DC Stark shift in an ytterbium optical lattice clock. Stray electric fields are suppressed through the introduction of an in-vacuum Faraday shield. Still, the effectiveness of the shielding must be experimentally assessed. Such diagnostics are accomplished by applying high voltage to six electrodes, which are grounded in normal operation to form part of the Faraday shield. Our measurements place a constraint on the DC Stark shift at the $10^{-20}$ level, in units of the clock frequency. Moreover, we discuss a potential source of error in strategies to precisely measure or cancel non-zero DC Stark shifts, attributed to field gradients coupled with the finite spatial extent of the lattice-trapped atoms. With this consideration, we find that Faraday shielding, complemented with experimental validation, provides both a practically appealing and effective solution to the problem of DC Stark shifts in optical lattice clocks.'
author:
- 'K. Beloy'
- 'X. Zhang'
- 'W. F. McGrew'
- 'N. Hinkley'
- 'T. H. Yoon'
- 'D. Nicolodi'
- 'R. J. Fasano'
- 'S. A. Schäffer'
- 'R. C. Brown'
- 'A. D. Ludlow'
title: 'Faraday-shielded, DC Stark-free optical lattice clock'
---
In the nearly seven decade-old quest to push the boundaries of atomic clock performance, and thus metrological capabilities in general, the elimination or precise evaluation of frequency shifts caused by external electromagnetic fields has been a persistent challenge [@Har52; @EssPar55]. In modern-day optical lattice clocks, AC Stark shifts due to lattice light and blackbody radiation are two prominent examples [@KatTakPal03; @LudBoyYe15]. DC Stark shifts, attributed to nearby electronics or patch charges on the clock apparatus, have been observed as large as $10^{-13}$ [@LodZawLor12] and pose a legitimate threat to state-of-the-art $10^{-18}$ clock performance (throughout, quoted shifts are understood to be in units of the clock frequency). Strategies to mitigate this threat include I) applying electric fields to measure and, if desired, cancel the stray-field shift [@LodZawLor12; @FalLemGre14; @BloNicWil14; @NicCamHut15; @NorCliMun17arXiv], or II) enclosing the atoms by equipotential conductive surfaces, furnishing them with a field-free environment [@BelHinPhi14; @UshTakDas14; @NemOhkTak16; @KolGroVog17]. DC Stark shifts have also been estimated from apparatus geometry and material properties [@PizThoRau17; @KimHeoLee17]. Recently, Rydberg atoms were demonstrated as an [*in situ*]{} probe of the stray field in an optical lattice clock [@BowHobHui17].
Here we identify a mechanism capable of compromising a Method I analysis, for which uncertainties at the $10^{-19}$ level have been reported. Using a simple model, we demonstrate how field gradients coupled with finite spatial extent of the lattice-trapped atoms can lead to appreciable clock error. Generally, the error scales with the measured stray-field shift. In principle, such error can be reduced by minimizing the stray field itself, which is precisely the objective of Method II. Unfortunately, practical constraints preclude surrounding the atoms with an ideal, continuous Faraday cage. Moreover, even conductive surfaces can acquire patch charges, a known concern for electrodes in ion clocks [@BerMilBer98]. Consequently, a residual shift may remain, and quantifying an upper bound may be challenging. Seemingly, an optimal solution combines the attributes of Methods I and II. We demonstrate this combined approach in an ytterbium optical lattice clock, with measurements confirming the absence of a stray-field shift at the $10^{-20}$ level.
Given a uniform static electric field $\mathbf{E}$, the clock acquires a frequency shift $\delta\nu=kE^2$, where $E=\left|\mathbf{E}\right|$ and $k$ is specific to the clock transition. Namely, $k\equiv-(\alpha_e-\alpha_g)/2h$, where $h$ is Planck’s constant and $\alpha_{g,e}$ are the static polarizabilities of the ground and excited clock states. To characterize blackbody radiation shifts, the coefficient $k$ has been accurately measured for both Yb and Sr clock transitions [@SheLemHin12; @MidFalLis12]. In practice, the lattice-trapped atoms have finite spatial extent, and the electric field may be nonuniform over this extent. Thus, a more complete representation of the clock shift is $\delta\nu=k\left\langle E^2\right\rangle$, where $\langle\cdots\rangle$ denotes an average over the atoms. Generally, $\mathbf{E}$ is composed of both stray and applied fields. Given some nonzero stray field, it is evident that a true null shift can only be achieved if the applied field identically cancels the stray field across the entire atomic extent.
To illustrate the role field gradients can play in Method I, we introduce a simple model that affords an analytical solution. The model is illustrated in Fig. \[Fig:modelquadcurve\](a) and amounts to a cylindrically symmetric boundary value problem for the fields. The vacuum apparatus is taken to be a hollow metallic cylinder sealed with glass windows. The cylinder is electrically grounded, while the windows carry uniformly distributed static charges $q_1$ and $q_2$ across their respective internal surfaces. The external surfaces are spanned by electrodes, to which opposite voltages $+V$ and $-V$ are applied. With the electrodes grounded ($V=0$), a stray field exists due to the charges. For $V\neq0$, the electrodes further introduce an applied field. A one-dimensional optical lattice aligned with the symmetry axis confines the atoms with negligible radial extent and Gaussian axial distribution $(2\pi s^2)^{-1/2}\exp\left(-z^2/2s^2\right)$, with $z$ being the distance from the center of the vacuum apparatus. The windows are separated by a distance $\ell$ and have diameter $d$, thickness $t$, and dielectric constant $\epsilon$. Expressions for the electric potential within the vacuum region can be found in the Supplemental Material (SM) [@SM].
![a) Section view of the clock model described in the text. b) Corresponding clock shift $\delta\nu(V)$, with the quantities $\delta\nu_0$, $\delta\nu^*$, and $\Delta\nu$ introduced in the text. For $k>0$ ($k<0$), the extremum is a minimum (maximum) and all three quantities are positive (negative). []{data-label="Fig:modelquadcurve"}](modelX.pdf){width="\linewidth"}
As demonstrated on a more general basis below, the clock shift has the functional form $$\delta\nu\left(V\right)=\delta\nu_0+aV+bV^2.
\label{Eq:dnuoneV}$$ The coefficients $a$ and $b$ are experimentally accessible parameters whose values may be determined by modulating $V$ and observing the clock response. Specifying the clock shift for any $V$ requires further knowledge of the stray-field shift $\delta\nu_0$. Towards this goal, we consider the extremum value of $\delta\nu\left(V\right)$, denoted $\delta\nu^*$. The stray-field shift $\delta\nu_0$, the extremum shift $\delta\nu^*$, and the difference between them $\Delta\nu\equiv \delta\nu_0-\delta\nu^*$ are depicted in Fig. \[Fig:modelquadcurve\](b). In contrast to $\delta\nu_0$ and $\delta\nu^*$, $\Delta\nu$ is accessible through modulation of $V$. Invoking elementary calculus with Eq. (\[Eq:dnuoneV\]), we find $\Delta\nu=a^2/4b$. Let us initially neglect the atomic extent, taking the limit $s\rightarrow0$. In this case, there exists a $V$ for which the applied field identically cancels the stray field at the atoms, resulting in a null clock shift. This necessarily coincides with the extremum of $\delta\nu\left(V\right)$, as any other $V$ yields a nonzero clock shift of definite sign (determined by $k$). This implies $\delta\nu^*=0$, and it follows that $\delta\nu_0$ may be inferred from $\Delta\nu$ according to $\delta\nu_0=\Delta\nu$.
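In practice the coefficients of Eq. (\[Eq:dnuoneV\]) are obtained by stepping the electrode voltage and fitting the measured shifts; the following sketch mimics this with synthetic data (all coefficients and the noise level are invented for illustration).

```python
import numpy as np

# Synthetic Method-I analysis: fit delta_nu(V) = dnu0 + a*V + b*V**2 and
# recover the modulation-accessible quantity Delta_nu = a**2 / (4*b).
rng = np.random.default_rng(0)
dnu0_true, a_true, b_true = 0.12, -0.030, 0.0050        # arbitrary units

V = np.linspace(-20.0, 20.0, 21)
shift = dnu0_true + a_true * V + b_true * V**2
shift += rng.normal(scale=1e-3, size=V.size)            # measurement noise

b_fit, a_fit, _ = np.polyfit(V, shift, 2)                # highest power first
delta_nu = a_fit**2 / (4 * b_fit)
print(f"Delta_nu (fit)  = {delta_nu:.4f}")
print(f"Delta_nu (true) = {a_true**2 / (4 * b_true):.4f}")
# In the point-like limit s -> 0 discussed above, dnu0 = Delta_nu.
```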
The above reasoning breaks down for nonzero $s$, as we can no longer expect there to be a $V$ such that the applied field identically cancels the stray field over the entire atomic extent. Consequently, $\delta\nu^*$ plays the role of a frequency correction for the field gradients. We write $\delta\nu^*=\eta\Delta\nu$, motivated by the fact that $\delta\nu^*$ and $\Delta\nu$ scale similarly with the stray field. Namely, a uniform scaling of the stray charge leaves $\eta$ unchanged. The stray-field shift subsequently reads $\delta\nu_0=\left(1+\eta\right)\Delta\nu$. To leading order in $s$, we find $\eta=\zeta^2s^2/\mathcal{R}^2$, where $\mathcal{R}$
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The INTEGRAL Burst Alert System (IBAS) is the software for real time detection of Gamma Ray Bursts (GRBs) and the rapid distribution of their coordinates. IBAS has been running almost continuously at the INTEGRAL Science Data Center since the beginning of the INTEGRAL mission, yielding up to now accurate localizations for 12 GRBs detected in the IBIS field of view. IBAS is able to provide error regions with radii as small as 3 arcminutes (90% c.l.) within a few tens of seconds of the GRB start. We present the current status of IBAS, review the results obtained for the GRBs localized so far, and briefly discuss future prospects for using the IBAS real time information on other classes of variable sources.'
author:
- 'S. Mereghetti'
- 'D. Götz'
- 'J. Borkowski'
- 'M. Beck'
- 'A. von Kienlin'
- 'N. Lund'
title: 'The INTEGRAL Burst Alert System: Results and Future Perspectives'
---
Introduction
============
A new era in the study of Gamma-ray Bursts (GRBs) started with the *BeppoSAX* observations leading to the discovery of their X–ray, optical and radio afterglows [@costa; @vanpa; @frail]. The great progress which occurred in the last few years in our understanding of GRBs has been possible thanks to extensive multi-wavelength observations of these unpredictable and rapidly fading events. In this respect, a quick derivation and distribution of accurate sky positions for GRBs is crucial. Here we review the contribution in this field obtained during the first 18 months of the INTEGRAL mission. We concentrate on the GRBs observed within the field of view of the IBIS instrument [@ibis]. Bursts observed with the SPI Anticoincidence Shield (ACS) are described elsewhere in these proceedings [@acs].
Thanks to its 72-hour orbit, the INTEGRAL satellite is in continuous contact with the ground stations during the observations. This has allowed us to implement ground-based software, the INTEGRAL Burst Alert System (IBAS), for the near-real-time search for GRBs [@ibas]. The IBAS software and its current performance are briefly described in Section 2. In Section 3 we summarize the main results on the twelve GRBs observed to date in the field of view of the INTEGRAL instruments. Finally, in Section 4 we describe the IBAS capability to provide real-time information also on other classes of transient sources.
IBAS description and performances
=================================
A detailed description of IBAS is given in @ibas. Here we briefly recall the most salient features of the system.
As mentioned above, the search for GRBs is done on the ground, at the INTEGRAL Science Data Centre [@isdc]. In fact, no on-board triggering system is present on INTEGRAL and the operating modes of the instruments do not change when a GRB occurs. Since, under nominal conditions, the telemetry data reach the ISDC without important delays, the IBAS programs can run in near real time. Such a ground-based system offers some advantages with respect to systems operating on board satellites, e.g., larger computing power and more flexibility for software and hardware upgrades. In fact, in the course of the first year after the launch of INTEGRAL several changes and additions have been made to the IBAS programs. The current configuration is based on two different methods to look for GRBs in the data from the IBIS lower energy detector ISGRI [@isgri].
In the first method, the overall counting rate is monitored to look for significant excesses with respect to a running average of the background, in a way similar to traditional triggering algorithms used on-board previous satellites. Several different energy ranges and integration times (from 2 ms to 5.12 s) are sampled in parallel. A rapid imaging analysis is performed only when a significant counting rate excess is detected. Imaging makes it possible to eliminate many false triggers caused, e.g., by instrumental effects or background variations that do not produce a point source in the reconstructed sky images. The second method is entirely based on imaging. Images of the sky are continuously produced (integration times of 10, 20, 40 and 100 s) and compared with the previous ones to search for new sources.
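The first method can be caricatured in a few lines: keep a running average of the background rate and flag bins whose excess is statistically significant. The sketch below is only a toy version of that idea (single energy band, single timescale, invented numbers), not the IBAS implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
bkg_rate, n_bins = 100.0, 600               # mean counts per bin, number of bins
counts = rng.poisson(bkg_rate, n_bins).astype(float)
counts[300:310] += 80.0                     # injected "burst"

window, threshold = 50, 6.0                 # running-average window, sigma cut
for i in range(window, n_bins):
    bkg = counts[i - window:i].mean()
    significance = (counts[i] - bkg) / np.sqrt(bkg)
    if significance > threshold:
        print(f"trigger at bin {i}: {significance:.1f} sigma above running average")
        break
```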
The GRB positions derived by IBAS are delivered via Internet to all the interested users. For the GRBs detected with high significance, this is done immediately by the software which sends *Alert Packets* using the UDP transport protocol. In case of events with lower statistical significance, the alerts are sent only to the members of the IBAS Localization Team, who perform further analysis and, if the GRB is confirmed, can distribute its position with an *Off-line Alert Packet*.
-------------- ------------- ----------------------- ------------------- ------------
GRB   Approximate duration \[s\]   Delay$^{a}$ in position distribution (internal/public)   External delivery of IBAS *Alert Packets*   References
021125 25 –$^{b}$ / 0.9 days OFF @021125D
021219 6 10 s / 5 hr OFF @021219D
030131 150 21 s / 2 hr ON @030131D
030227 20 35 s / 48 min OFF @030227D
030320 50 12 s / 6 hr ON @030320D
030501 40 30 s / 30 s ON @030501D
031203 30 18 s / 18 s ON @031203D
040106 60 12 s / 12 s ON @040106D
040223 250 210 s / 210 s ON @040223D
040323 20 30 s / 30 s ON @040323D
040403 35 21 s / 21 s ON @040403D
040422 8 17 s / 17 s ON @040422D
\[tab:spec\]
-------------- ------------- ----------------------- ------------------- ------------
$^{a}$ Computed from the GRB start time.
$^{b}$ The IBAS *Detector Programs* were in idle mode owing to the limited telemetry allocation for IBIS/ISGRI during this observation.
The first two months of operations after the INTEGRAL launch were devoted to the optimization of the IBAS parameters. Some changes in the algorithms were also required to adapt them to the in-flight data characteristics. Delivery of the *Alert Packets* to the external clients started on January 17, 2003. Since then it has always been enabled, except during the first calibration campaign on the Crab Nebula (12-28 February 2003) and a few very short interruptions (a few hours each) for maintenance reasons.
Up to now (April 2004), twelve GRBs have been discovered in the field of view of IBIS. Figure \[fov\] shows their positions in the fields of view of the INTEGRAL instruments. All of them were at off-axis angles too large to be seen with the OMC and JEM-X instruments.
The time and accuracy performances of the IBAS localizations for these bursts are summarized in Table 1 and illustrated in Figs. \[delays\] and \[erboxall\]. Note that at the beginning of the mission the in-flight instrument misalignment was not calibrated yet. Therefore, error radii as large as 20$'$ or 30$'$ were given. The systematic uncertainties could be reduced in the following months, leading to smaller error regions.
The time delay in the distribution of coordinates results from the sum of several factors. First of all there is a delay on board the satellite, which is variable and depends on the instrument. For IBIS/ISGRI data the average delay is about 5 s. Signal propagation to the ground station is negligible (maximum $\sim$0.6 s), but some time is required before the data are received at the ISDC. This is on average 3 s when the ESA ground station in Redu (Belgium) is used, or 6 s when the NASA Goldstone ground station is used. The time to detect the GRB depends on the algorithm which triggers. The delay between the trigger time and the GRB onset is of course dependent on the intensity and time profile of the event. The IBAS simultaneous sampling on different timescales should ensure a small delay in most cases; however, in practice a minimum of $\sim$3 s is required to accumulate an image with enough statistics. Finally, the conversion to sky coordinates, comparison with the list of known variable sources, *Alert Packet* construction and delivery require less than about 2 s. Of course, the above numbers assume nominal conditions, i.e., no telemetry gaps, no saturation of the allocated telemetry, no missing auxiliary data files, etc.
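Summing the nominal figures quoted above gives a back-of-the-envelope minimum delay, consistent with the fastest localizations in Table 1; this is orientation only, not an IBAS specification.

```python
# Nominal-conditions delay budget, in seconds (numbers quoted in the text).
on_board       = 5.0   # average on-board delay for IBIS/ISGRI data
propagation    = 0.6   # maximum signal propagation to the ground station
ground_to_isdc = 3.0   # via Redu (about 6 s via Goldstone)
imaging        = 3.0   # minimum integration for a significant image
alerting       = 2.0   # coordinates, source checks, Alert Packet delivery

total = on_board + propagation + ground_to_isdc + imaging + alerting
print(f"nominal minimum delay ~ {total:.1f} s")
```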
As can be seen in Fig. \[delays\], all the bursts detected by IBAS after April 2003 had very small error regions, distributed within a few tens of seconds, often while the gamma-ray emission was still visible. Such a combination of high speed and small error region was never achieved before. Note that the 210 s delay in the localization of GRB 040223 was due to the particular light curve shape of this burst, which lasted about 4 minutes with the brightest peak at the end.
|
{
"pile_set_name": "ArXiv"
}
|
---
bibliography:
- 'cite.bib'
---
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Consider a discrete-time linear time-invariant descriptor system $Ex(k+1)=Ax(k)$ for $k \in \mathbb Z_{+}$. In this paper, we tackle for the first time the problem of stabilizing such systems by computing a nearby regular index one stable system $\hat E x(k+1)= \hat A x(k)$ with $\text{rank}(\hat E)=r$. We reformulate this highly nonconvex problem into an equivalent optimization problem with a relatively simple feasible set onto which it is easy to project. This allows us to employ a block coordinate descent method to obtain a nearby regular index one stable system. We illustrate the effectiveness of the algorithm on several examples.'
author:
- 'Nicolas Gillis[^1] Michael Karow[^2] Punit Sharma[^3]'
bibliography:
- 'GilKS18b.bib'
title: 'A note on approximating the nearest stable discrete-time descriptor system with fixed rank'
---
**Keywords.** stability radius, linear discrete-time descriptor system, stability
Introduction
============
In [@OrbNV13; @GilKS18a], authors have tackled the problem of computing the nearest stable matrix in the discrete case, that is, given an unstable matrix $A$, find the smallest perturbation $\Delta_A$ with respect to Frobenius norm such that $\hat A = A+\Delta_A$ has all its eigenvalues inside the unit ball centred at the origin. In this paper, we aim to generalize the results in [@GilKS18a] for matrix pairs $(E,A)$, where $E,A\in \R^{n,n}$. The matrix pair $(E,A)$ is called *regular* if $\operatorname{det}(\lambda E-A)\neq 0$ for some $\lambda \in \mathbb C$, which we denote $\operatorname{det}(\lambda E-A) \not\equiv 0$, otherwise it is called *singular*. For a regular matrix pair $(E,A)$, the roots of the polynomial $\operatorname{det}(z E-A)$ are called *finite eigenvalues* of the pencil $zE-A$ or of the pair $(E,A)$. A regular pair $(E,A)$ has *$\infty$ as an eigenvalue* if $E$ is singular. A regular real matrix pair $(E,A)$ can be transformed to *Weierstraß canonical form* [@Gan59a], that is, there exist nonsingular matrices $W, T \in \C^{n,n}$ such that $$E=W{\left[\begin{array}{cc}}I_q& 0\\0&N{\end{array}\right]}T \quad \text{and}\quad A=W {\left[\begin{array}{cc}}J &0\\0&I_{n-q}{\end{array}\right]}T,$$ where $J \in \C^{q,q}$ is a matrix in *Jordan canonical form* associated with the $q$ finite eigenvalues of the pencil $z E-A$ and $N \in \C^{n-q,n-q}$ is a nilpotent matrix in Jordan canonical form corresponding to $n-q$ times the eigenvalue $\infty$. If $q < n$ and $N$ has degree of nilpotency $\nu \in \{1,2,\ldots\}$, that is, $N^{\nu}=0$ and $N^i \neq 0$ for $i=1,\ldots,\nu-1$, then $\nu$ is called the *index of the pair* $(E,A)$. If $E$ is nonsingular, then by convention the index is $\nu=0$; see for example [@Meh91; @Var95]. The matrix pair $(E,A) \in (\R^{n,n})^2$ is said to be *stable* (resp. *asymptotically stable*) if all the finite eigenvalues of $zE-A$ are in the closed (resp. open) unit ball and those on the unit circle are semisimple. The matrix pair $(E,A)$ is said to be *admissible* if it is regular, of index at most one, and stable.
The various distance problems for linear control systems are an important research topic in the numerical linear algebra community; for example, the distance to bounded realness [@AlaBKMM11], the robust stability problem [@Zho11], the stability radius problem for standard systems [@Bye88; @HinP86] and for descriptor systems [@ByeN93; @DuLM13], the nearest stable matrix problem for continuous-time systems [@OrbNV13; @GilS17; @MehMS17; @GugL17] and for discrete-time systems [@OrbNV13; @NesP17; @GP2018; @GilKS18a], the nearest continuous-time admissible descriptor system problem [@GilMS17], and the nearest positive real system problem [@GilS17b]. For a given unstable matrix pair $(E,A)$, the discrete-time nearest stable matrix pair problem is to solve the following optimization problem $$\label{mainprob}
\inf_{(\hat E,\hat A)\in\mathcal S^{n,n}} {\|E-\hat E\|}_F^2+{\|A-\hat A\|}_F^2,\tag{$\mathcal{P}$}$$ where $\mathcal S^{n,n}$ is the set of admissible pairs of size $n \times n$. This problem is the converse of the stability radius problem for descriptor systems [@ByeN93; @DuLM13] and the discrete-time counterpart of the continuous-time nearest stable matrix pair problem [@GilMS17]. Such problems arise in systems identification where one needs to identify a stable matrix pair depending on observations [@OrbNV13; @GilS17]. This is a highly nonconvex optimization problem because the set $\mathcal S^{n,n}$ is unbounded, nonconvex and neither open nor closed. In fact, consider the matrix pair $$\label{eq:ex1}
(E,A)=\Bigg(
{\left[\begin{array}{ccc}}1&0&0\\0&0&0\\0&0&0 {\end{array}\right]},~{\left[\begin{array}{ccc}}1/2&0&2\\0&1&0\\0&0&1 {\end{array}\right]}\Bigg).$$ The pair $(E,A)$ is regular since $\text{det}(\lambda E-A)=\text{det}(\lambda -1/2)\not\equiv 0$, of index one, and stable with the only finite eigenvalue $\lambda_1=1/2$. Thus $(E,A) \in \mathcal S^{3,3}$. Let $$\label{eq:ex1perturb}
(\Delta_E,\Delta_A)=\Bigg(
{\left[\begin{array}{ccc}}0&0&0\\0&\epsilon_1&\epsilon_2\\0&0&0 {\end{array}\right]},{\left[\begin{array}{ccc}}0&0&0\\0&0&0\\0&0&-\delta {\end{array}\right]}\Bigg),$$ and consider the perturbed pair $(E+\Delta_E,A+\Delta_A)$. If we let $\delta=\epsilon_1=0$ and $\epsilon_2>0$, then the perturbed pair is still regular and stable as the only finite eigenvalue $\lambda_1=1/2$ belongs to the unit ball, but it is of index two. For $\epsilon_2=\delta=0$ and $0<\epsilon_1<1$, the perturbed pair is regular, of index one but has two finite eigenvalues $\lambda_1=1/2$ and $\lambda_2=1/\epsilon_1 >1$. This implies that the perturbed pair is unstable. This shows that $\mathcal S^{3,3}$ is not open. Similarly, if we let $\epsilon_1=\epsilon_2=0$ and $\delta >0$, then as $\delta \rightarrow 1$ the perturbed pair becomes non-regular. This shows that $\mathcal S^{3,3}$ is not closed. The nonconvexity of $\mathcal S^{n,n}$ follows by considering for example $$\label{eq:nonconvex}
\Sigma_1=\Big(I_2,\underbrace{{\left[\begin{array}{cc}}0.5 & 2\\ 0& 1 {\end{array}\right]}}_{A}\Big), \quad \Sigma_2=\Big(I_2,\underbrace{{\left[\begin{array}{cc}}0.5 & 0\\ -2& 1{\end{array}\right]}}_{B}\Big),$$ where $\Sigma_1,\Sigma_2 \in \mathcal S^{2,2}$, while $\gamma \Sigma_1 + (1-\gamma)\Sigma_2 \notin \mathcal S^{2,2}$ for $\gamma=\frac{1}{2}$, since $\frac{1}{2} \Sigma_1+\frac{1}{2} \Sigma_2$ has two eigenvalues 0.75$\pm$0.96$i$ outside the unit ball. Therefore it is in general difficult to work directly with the set $\mathcal S^{n,n}$. We explain in Section \[reform\] the difficulty in generalizing the results in [@GilKS18a] for problem .
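These properties are easy to check numerically. The sketch below (assuming NumPy and SciPy are available) verifies the eigenvalues quoted for the nonconvexity example and inspects the finite eigenvalue of the index-one pair $(E,A)$ above via a generalized eigenvalue solver.

```python
import numpy as np
from scipy.linalg import eigvals

# Nonconvexity check: (I, A) and (I, B) are stable, but the midpoint pair
# has eigenvalues 0.75 +/- 0.97i, i.e. outside the closed unit ball.
A = np.array([[0.5, 2.0], [0.0, 1.0]])
B = np.array([[0.5, 0.0], [-2.0, 1.0]])
print(np.linalg.eigvals(A))            # 0.5 and 1 (1 is semisimple)
print(np.linalg.eigvals(B))            # 0.5 and 1
print(np.linalg.eigvals((A + B) / 2))  # 0.75 +/- 0.97i, modulus > 1

# Finite eigenvalues of the pencil zE - A for the index-one pair above:
# the generalized solver reports the eigenvalue 0.5 plus two infinite ones.
E1 = np.diag([1.0, 0.0, 0.0])
A1 = np.array([[0.5, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(eigvals(A1, E1))
```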
In this paper, we consider instead a *rank-constrained nearest stable matrix pair problem*. For this, let $r (<n) \in \mathbb Z_{+
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The role played by a kinetic barrier originating from out-of-plane step edge diffusion, introduced in \[Leal *et al.*, [J. Phys. Condens. Matter **23**, 292201 (2011)](https://doi.org/10.1088/0953-8984/23/29/292201)\], is investigated in the Wolf-Villain and Das Sarma-Tamborenea models with short range diffusion. Using large-scale simulations, we observe that this barrier is sufficient to produce growth instability, forming quasiregular mounds in one and two dimensions. The characteristic surface length saturates quickly, indicating an uncorrelated growth of the three-dimensional structures, which is also confirmed by a growth exponent $\beta=1/2$. The out-of-plane particle current shows a large reduction of the downward flux in the presence of the kinetic barrier, consequently enhancing the net upward diffusion and the formation of three-dimensional self-assembled structures.'
address: 'Departamento de Estatística, Física e Matemática, Campus Alto Paraopeba, Universidade Federal de São João Del-Rei, 36420-000, Ouro Branco, MG, Brazil'
author:
- 'Anderson J. Pereira'
- 'Sidiney G. Alves'
- 'Silvio C. Ferreira'
title: 'Effects of a kinetic barrier on limited-mobility interface growth models'
---
Introduction
============
A rich variety of morphologies can be observed during far-from-equilibrium growth processes, many of them with potential for technological applications [@michely2004islands; @Evans2006; @barabasi; @meakin]. Growth instability can induce three-dimensional mound-like patterns in different types of films such as metals [@Jorritsma; @Caspersen; @Han], inorganic [@Johnson; @Tadayyon] and organic [@Zorba; @Hlawacek] semiconductor materials, to cite only a few examples. Such a growth instability has been mainly attributed to the presence of Ehrlich-Schwoebel (ES) step barriers [@Ehrlich; @Schwoebel] that reduce the rate with which atoms move downward at the edges of terraces, leading to net uphill flows. Growth instabilities can also emerge from topologically induced uphill currents which depend on the crystalline structure [@Kanjanaput2010] or from fast diffusion on terrace edges [@Murty2003; @Pierre-Louis1999] among other mechanisms [@Evans2006; @michely2004islands]. The existence of ES barriers is supported by molecular dynamics simulations [@Yang].
Discrete solid-on-solid (SOS) growth models constitute an important approach to investigating the dynamics of kinetic roughening and morphological properties of interfaces. The rules are easily implemented on a discrete space (lattices) free of overhangs and bulk voids. The role played by ES barriers has been investigated in models with thermally activated diffusion [@Evans2006; @michely2004islands], the Clarke-Vvedensky (CV) model [@CV_PRL; @Clarke1988] being one of the simplest examples, in which any surface adatom can move according to an Arrhenius diffusion coefficient $D\sim
\exp(-E/k_B T)$ [@barabasi] where $E$ is an energy activation barrier to be overcome in a diffusion hopping. An ES barrier can be included as an additional activation energy for diffusion at the edges of terraces [@Evans2006]. The effects of a step barrier of purely kinetic origin, namely simple diffusion, were investigated in an epitaxial growth model with thermally activated diffusion [@Leal_JPCM]. In this model, a particle performing an interlayer movement through steps with more than one monolayer has to diffuse along the columns, perpendicularly to the substrate, instead of attaching directly at the bottom or top of a terrace. This kinetic barrier reduces downhill currents and three-dimensional structures in the form of mounds are obtained at short-time scales even in the case of weak ES barriers where the conventional rule would not lead to mound formation.
Simple models with limited mobility can be used to investigate kinetic roughening [@barabasi; @meakin]. Wolf-Villain (WV) [@WV] and Das Sarma-Tamborenea (DT) [@DT] models, introduced to investigate molecular-beam-epitaxy (MBE) growth, are benchmarks of this class and have been intensively investigated [@Smilauer; @Milan; @Huang; @HaselwandterPRL; @HaselwandterPRE; @Sarma; @Punyindu; @wvbogo; @Xun; @Luis2019]. A variation of the CV model with limited mobility has been considered [@Aarao2010; @Aarao2013] and many features of the original model have been reproduced with this simplified version [@To2018]. Effects of a step barrier were investigated in both WV [@Rangdee] and DT [@DasSarma_SC] models by introducing two additional probabilities for downward and upward interlayer diffusion, with the former larger than the latter, and mound formation was observed in both models. WV and DT models without a step barrier were investigated on several lattices [@Chatraphorn2001; @Kanjanaput2010] and it was found that the WV model can present topologically induced mound morphologies on some lattices but not on others, while no clear evidence for three-dimensional structures was observed for DT. In one dimension, it is widely accepted that both DT and WV models asymptotically produce self-affine surfaces belonging to nonlinear MBE [@Luis2019] and Edwards-Wilkinson [@Vvedensky] universality classes, respectively.
It was reported that a kinetic barrier alone does not induce mound morphologies in thermally activated CV-like models [@Leal_JPCM] but, instead, they exhibit kinetic roughening with exponents consistent with the nonlinear MBE universality class [@DT; @Villain; @LSarma]. Therefore, given the simplicity of limited-mobility growth models and the non-trivial effects of topologically induced uphill currents in DT and WV models, one would wonder how they respond to a barrier of purely kinetic origin. In order to fill this gap, we investigate WV and DT models with the introduction of the kinetic barrier proposed in Ref. [@Leal_JPCM]. We observed mounds in both models in 1+1 and 2+1 dimensions, the mounds being much more evident for the WV model. The surface coarsening ceases quickly with the saturation of the characteristic surface length, and regimes of uncorrelated mound growth are asymptotically observed. Analysis of the out-of-plane currents shows a large reduction of the downhill flux of particles, enhancing surface instabilities and mound formation.
The remainder of the paper is organized as follows. The model implementation details are presented in section \[sec:model\]. In section \[results\], we discuss the results obtained in the simulations. Our conclusions and some perspectives are drawn in section \[conclusion\].
Models {#sec:model}
======
In all investigated models, the particles are randomly deposited on a $d$-dimensional lattice of linear size $L$ with periodic boundary conditions under the SOS condition. Results presented in this work correspond to regular chains in $d=1$ and square lattices in $d=2$. Other lattices were tested and the central conclusions remain unaltered. The height of the interface at site $i$ and time $t$ is represented by $h_i(t)$ and the initial condition is given by $h_i(0)=0$ such that the initial interface is flat.
In the WV model with a kinetic barrier investigated in the present work, the growth rule is implemented as follows. At each time step, a position $i$ is randomly chosen. A location $i'$ with the largest number of bonds that a newly deposited adatom would have is determined within a set containing $i$ and its nearest neighbors. If the initial position corresponds to the largest number of bonds ($i'\equiv i$), it is chosen as the deposition place and the simulation runs to the next step. In case of multiple options, one is chosen at random. Otherwise, the particle tries to diffuse to the neighbor $i'$ with a probability given by [@Leal_JPCM] $$\label{prob}
P_{\delta h}(i,i')=
\left\{
\begin{array}{cl}
1, & \textrm{if } ~ |\delta h|<2\\
\frac {1}{|\delta h|}, & \textrm{if } ~ |\delta h|\geq 2
\end{array}
\right.$$ where $\delta h = h_i-h_{i'}$. With probability $1 - P_{\delta h}(i,i')$ the particle remains at the site $i$. It is important to mention that Eq. \[prob\] is obtained assuming that the adatom first moves to the top kink of the terrace and then starts an unbiased one-dimensional random walk normal to the initial substrate, stopping the movement if it either arrives at the bottom or returns to the top of the terrace. The result is the solution of a non-directed one-dimensional random walk with
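A toy one-dimensional transcription of this growth rule is sketched below; it is only an illustration of Eq. \[prob\] with simplified tie-breaking, not the simulation code used in this work.

```python
import numpy as np

def lateral_bonds(h, j):
    """Lateral bonds a newly deposited adatom would have on top of column j."""
    L = h.size
    return sum(h[(j + s) % L] >= h[j] + 1 for s in (-1, 1))

def deposit(h, rng):
    """One deposition of the 1-D WV model with the kinetic barrier of Eq. [prob]."""
    L = h.size
    i = rng.integers(L)
    neighbors = [(i - 1) % L, (i + 1) % L]
    best = max(lateral_bonds(h, j) for j in [i] + neighbors)
    if lateral_bonds(h, i) == best:
        target = i                                  # initial site already maximal
    else:
        options = [j for j in neighbors if lateral_bonds(h, j) == best]
        target = rng.choice(options)
        dh = abs(h[i] - h[target])
        p = 1.0 if dh < 2 else 1.0 / dh             # kinetic barrier, Eq. [prob]
        if rng.random() >= p:
            target = i                              # interlayer move rejected
    h[target] += 1

rng = np.random.default_rng(2)
h = np.zeros(64, dtype=int)
for _ in range(64 * 200):                           # 200 monolayers on L = 64
    deposit(h, rng)
print("interface width:", np.std(h))
```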
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'One of the most important properties influencing the chemical behavior of an element is the energy released with the addition of an extra electron to the neutral atom, referred to as the electron affinity (EA). Among the remaining elements with unknown EA is astatine, the purely radioactive element 85. Astatine is the heaviest naturally occurring halogen and its isotope $^{211}$At is remarkably well suited for targeted radionuclide therapy of cancer. With the At$^-$ anion being involved in many aspects of current astatine labelling protocols, the knowledge of the electron affinity of this element is of prime importance. In addition, the EA can be used to deduce other concepts such as the electronegativity, thereby further improving the understanding of astatine’s chemistry. Here, we report the first measurement of the EA for astatine to be **[2.41578(7)]{}** eV. This result is compared to state-of-the-art relativistic quantum mechanical calculations, which require incorporation of the electron-electron correlation effects on the highest possible level. The developed technique of laser-photodetachment spectroscopy of radioisotopes opens the path for future EA measurements of other radioelements such as polonium, and eventually super-heavy elements, which are produced at a one-atom-at-a-time rate.'
author:
- 'David Leimbach$^{1,2,3*}$, Julia Sundberg$^2$, Yangyang Guo$^4$, Rizwan Ahmed$^{5}$, Jochen Ballof$^{1,6}$, Lars Bengtsson$^2$, Ferran Boix Pamies$^1$, Anastasia Borschevsky$^4$, Katerina Chrysalidis$^{1,3}$, Ephraim Eliav$^{11}$, Dmitry Fedorov$^{7}$, Valentin Fedosseev$^1$, Oliver Forstner$^{8,9}$, Nicolas Galland$^{10}$, Ronald Fernando Garcia Ruiz$^1$, Camilo Granados$^1$, Reinhard Heinke$^3$, Karl Johnston$^1$, Agota Koszorus$^1$, Ulli Köster$^{13}$, Moa K. Kristiansson$^{14}$, Yuan Liu$^{15}$, Bruce Marsh$^1$, Pavel Molkanov$^{7}$, Lukáš F. Pašteka$^{12}$, Joao Pedro Ramos$^1$, Eric Renault$^{10}$, Mikael Reponen$^{16}$, Annie Ringvall-Moberg$^{1,2}$, Ralf Erik Rossel$^1$, Dominik Studer$^3$, Adam Vernon$^{17}$, Jessica Warbinek$^{2,3}$, Jakob Welander$^2$, Klaus Wendt$^3$, Shane Wilkins$^1$, Dag Hanstorp$^2$ and Sebastian Rothe$^1$'
bibliography:
- 'bib.bib'
title: The electron affinity of astatine
---
CERN, Geneva, Switzerland
Department of Physics, University of Gothenburg, Gothenburg, Sweden
Institut für Physik, Johannes Gutenberg-Universität, Mainz, Germany
Van Swinderen Institute for Particle Physics and Gravity, University of Groningen, Groningen, The Netherlands
National Centre for Physics (NCP), Islamabad, Pakistan
Institut für Kernchemie, Johannes Gutenberg-Universität, Mainz, Germany
Petersburg Nuclear Physics Institute - NRC KI, Gatchina, Russia
Institut für Optik und Quantenelektronik, Friedrich-Schiller-Universität Jena, Germany
Helmholtz-Institut Jena, Jena, Germany
CEISAM, Université de Nantes, CNRS, Nantes, France
School of Chemistry, Tel Aviv University, Tel Aviv, Israel
Department of Physical and Theoretical Chemistry & Laboratory for Advanced Materials, Faculty of Natural Sciences, Comenius University, Bratislava, Slovakia
Institut Laue-Langevin, Grenoble, France
Department of Physics, Stockholm University, Stockholm, Sweden
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA
Department of Physics, University of Jyväskylä, Jyväskylä, Finland
School of Physics and Astronomy, The University of Manchester, Manchester, UK
Introduction {#introduction .unnumbered}
============
Chemistry is all about molecule formation through the creation or destruction of chemical bonds between atoms and relies on an in-depth understanding of the stability and properties of these molecules. Most of these properties can be traced back to the molecule’s constituents, the atoms. Thus, the intrinsic characteristics of chemical elements are of crucial importance in the formation of chemical bonds. The electron affinity (EA), one of the most fundamental atomic properties, is defined as the amount of energy released when an electron is added to a neutral atom in the gas phase. Large EA values characterize electronegative atoms, i.e. atoms that tend to attract shared electrons in chemical bonds. Hence, the EA informs about the subtle mechanisms in bond making between atoms, and it also reveals information about molecular properties such as the dipole moment or the molecular stability. Since the attraction from the nucleus is efficiently screened by the core electrons, the value of the EA is mainly determined by electron-electron correlation. Hence, negative ions are excellent systems to benchmark theoretical predictions that go beyond the independent particle model.\
The EA also enters into the definition of several concepts, notably the chemical potential within the purview of conceptual density functional theory (DFT), promoted by Robert G. Parr[@Parr1978], and the chemical hardness which is the core of the hard and soft acids and bases (HSAB) theory, introduced by Ralph G. Pearson in the early 1960s[@Pearson1963]. Robert S. Mulliken used the EA in combination with the ionization energy (IE), the minimum amount of energy required to remove an electron from an isolated neutral gaseous atom, to develop a scale for quantifying the electronegativity of the elements[@Mulliken1934]. The usefulness of these concepts for chemists, especially in the field of reactivity, has been amply demonstrated in recent decades[@Geerlings2003; @Chattaraj2006].\
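As a simple illustration of Mulliken's definition, the electronegativity of astatine follows from the average of its IE and EA. In the sketch below the EA is the value measured in this work, while the IE is an assumed input (roughly the 9.3 eV reported by the ISOLDE laser-ionization measurement) and should be replaced by the exact reference value when quoting results.

```python
# Mulliken electronegativity: chi = (IE + EA) / 2, here in eV.
EA_At = 2.41578   # eV, electron affinity measured in this work
IE_At = 9.3       # eV, assumed ionization energy (placeholder for the ISOLDE value)
print(f"Mulliken electronegativity of At ~ {(IE_At + EA_At) / 2:.2f} eV")
```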
The atomic IEs, which essentially are determined by the Coulomb attraction between the electrons and the nucleus, show a specific and well-understood variation along the periodic table of elements. Starting from the lowest values in the lower left corner at the heaviest alkali metals, a mostly steady trend towards higher values is observed both towards lighter elements with similar chemical behaviour in one column and along rows to the right side of the chart with halogens and noble gases, with only a few exceptions. Conversely, the EAs display comparably strong irregularities and variations across the periodic table, as shown in Fig. \[fig: PT\].
A number of elements such as all the noble gases do not form stable negative ions at all, and thus have negative EAs. The group of elements with the largest EAs are the halogens. As in most other groups of elements, no monotonic trend is observed here when progressing along the rows of the periodic table, with chlorine exhibiting the largest EA () of all elements[@AndHauHot99; @Thorium].\
The EA of the heaviest naturally occurring element in the halogen group, astatine, has not been measured to date. Indeed, little is known of the chemistry of this rare element: not only is it one of the rarest of all naturally occurring elements[@asimov], but the minute amounts that can be produced artificially also prevent the use of conventional spectroscopic tools. For instance, while astatine was discovered in the 1940s[@Corson1940; @Thornton2019], it is only recently that the IE of astatine was measured through a sophisticated on-line laser-ionization spectroscopy experiment at CERN-ISOLDE[@Rothe_2013].\
However, the EA(At) has been predicted with various quantum mechanical methods[@SiFis18; @FinPet19; @Mitin2006Two-componentMethods; @LiZhaAnd12; @Borschevsky2015IonizationAt; @Sergentu2016; @ChaLiDon10]. Hence, an experimental determination of EA(At) is of fundamental interest, both to test sophisticated atomic theories and to gain precise knowledge about the chemical properties of this element. The measurement of the EA(At) is also of practical interest regarding the envisaged medical applications of astatine, since its chemical compounds are currently studied for use in cancer treatment: $^{211}$At, available in nanogram quantities only through synthetic production methods, is a most promising candidate for radiopharmaceutical applications via targeted alpha therapy (TAT)[@Zalutsky2011; @MulfordTAT; @teze:in2p3-01529705], due to its favorable half-life of about and its cumulative $\alpha$-particle emission yield of . However, in order to successfully develop efficient radiopharmaceuticals, a better understanding of astatine’s basic chemical properties is required[@Wilbur2013].\
The interest in the experimental determination of the EA notably lies in current labelling protocols that aim at binding astatine to tumor-targeting biomolecules: in many cases, the chemical reactions involve an aqueous astatine solution in which the astatide anion (At$^-$) readily forms. In addition, a current problem for the investigated $^{211}$At-radiopharmaceuticals is the significant *in vivo* de-labelling, releasing At$^-$ that could damage healthy tissues and organs of the patient[@teze:in2p3-01529705; @Vaidyanathan2008; @Wilbur2008]. In order to describe these reaction
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The size estimates approach for Electrical Impedance Tomography (EIT) allows for estimating the size (area or volume) of an unknown inclusion in an electrical conductor by means of one pair of boundary measurements of voltage and current. In this paper we show by numerical simulations how to obtain such bounds for practical application of the method. The computations are carried out both in a 2–D and a 3–D setting.'
address:
- '$^\vartriangle$ Dipartimento di Matematica e Informatica, Università degli Studi di Trieste, Trieste, Italy.'
- '$^\circ$ Dipartimento di Strutture, Università della Calabria, Rende (CS), Italy.'
- '$^\triangledown$ Dipartimento di Georisorse e Territorio, Università degli Studi di Udine, Udine, Italy.'
- '$^\star$ Dipartimento di Architettura e Pianificazione, Università degli Studi di Sassari, Alghero, Italy.'
date: 'December 22, 2006'
title: |
Computing volume bounds of inclusions\
by EIT measurements$^\ast$
---
Introduction {#sec:introduction}
============
EIT is aimed at imaging the internal conductivity of a body from current and voltage measurements taken at the boundary. It is well known, [@l:a88], [@l:m01], that, even in the ideal situation in which all possible boundary measurements are available, the correspondence *boundary data* $\rightarrow$ *conductivity* is highly (exponentially) unstable. As a consequence it is evident that, in practice, it is impossible to distinguish high resolution features of the interior from limited and noisy boundary data, [@l:av].
Motivated by applications, a line of investigation pursued by many authors, [@l:fr], [@l:frg], [@l:fri], [@l:ai], [@l:fks], [@l:aip], [@l:isak], [@l:isak-libro], has been that of limiting the analysis to cases in which one seeks an unknown interior inclusion embedded in an otherwise known (possibly even homogeneous) conductor, whose conductivity is assumed to differ from the background.
Even in this restricted case, and even when full boundary data are available, the instability remains of exponential type [@l:dcr].
It is therefore reasonable to further restrict the goal and attempt to evaluate some parameters expressing the size (area, volume) of the inclusion, disregarding its precise location and shape, having at our disposal one pair of boundary measurements of voltage and current. This approach, which can be traced back to [@l:fr], has been well developed theoretically, [@l:ar98], [@l:kss], [@l:ars], [@l:amr03], see also [@l:ike] and [@l:amr04] for the analogous treatment in the linear elasticity framework. In order to describe such type of results we need first to introduce some notation.
We denote by $\Omega$ a bounded domain in ${\mathbb{R}}^n$, $n=2,3$, representing an electrical conductor. The boundary $\partial
\Omega$ of $\Omega$ is assumed to be of Lipschitz class, with constants $r_0$, $M_0$, that is, the boundary can be locally represented as a graph of a Lipschitz continuous function with Lipschitz constant $M_0$ in some ball of radius $r_0$. When no inclusion is present in the conductor we assume that it is homogeneous and we set its conductivity $\sigma(x)\equiv 1$. When the conductor contains an unknown inclusion $D$ of different conductivity, say $k>0$, $k
\neq 1$ the overall conductivity in the conductor will be given by $\sigma(x)=1+(k-1)\chi_D(x)$. Here and in what follows it is assumed that $D$ is strictly contained in $\Omega$. More precisely, for a given $d_0> 0$, $$\label{eq:2.condition_d0}
\textrm{dist}(D, \partial \Omega) \geq d_0.$$ Let $\varphi \in H^{- \frac{1}{2}}(\partial \Omega)$, $\int_{\partial \Omega} \varphi =0$, be an applied current density on $\partial \Omega$. The induced electrostatic potential $u \in
H^1(\Omega)$ is the solution of the Neumann problem $$\label{eq:2.Neumann_pbm_with_incl}
\left\{ \begin{array}{ll}
{\textrm{div}\,}((1+(k-1) \chi_D) \nabla u)=0, &
\mathrm{in}\ \Omega ,\\
& \\
\nabla u \cdot \nu= \varphi, &
\mathrm{on}\ \partial \Omega,
\end{array}\right.$$ where $\nu$ denotes the outer unit normal to $\partial \Omega$.
When $D$ is the empty set, that is when the inclusion is absent, the reference electrostatic potential $u_0 \in H^1(\Omega)$ satisfies the Neumann problem $$\label{eq:2.Neumann_pbm_without_incl}
\left\{ \begin{array}{ll}
\Delta u_0=0, &
\mathrm{in}\ \Omega ,\\
& \\
\nabla u_0 \cdot \nu= \varphi, &
\mathrm{on}\ \partial \Omega.
\end{array}\right.$$
In both cases and , the solutions $u$ and $u_0$ are determined up to an additive constant.
Let us denote by $W$, $W_0$ the powers required to maintain the current density $\varphi$ on $\partial \Omega$ when $D$ is present or it is absent, respectively. Namely $$\label{eq:2.def_W}
W=\int_{\partial \Omega} u \varphi = \int_{\Omega}(1+(k-1)\chi_D)|
\nabla u|^2,$$ $$\label{eq:2.def_W0}
W_0=\int_{\partial \Omega} u_0 \varphi = \int_{\Omega}|\nabla u_0|^2.$$ The size estimate approach developed in [@l:ar98], [@l:kss], [@l:ars], [@l:amr03], tells us that the measure $|D|$ of $D$ can be bounded from above and below in terms of the quantity $\left|\frac{W_0-W}{W_0}\right|$ which we call the normalized power gap. More precisely the following bounds hold, see [@l:amr03 Theorem 2.3].
\[theo:size-estim-EIT-general\] Let $D$ be any measurable subset of $\Omega$ satisfying . Under the above assumptions, if $k > 1$ we have $$\label{eq:2.size-estim-EIT-more-conduct}
\frac {1} {k-1} C^{+}_{1}
\frac{W_0-W}{W_0}
\leq
|D|
\leq
\left (
\frac{k}{k-1}
\right )^{ \frac{1}{p} }
C^{+}_{2}
\left (
\frac{W_0-W}{W_0}
\right )^{ \frac{1}{p} }.$$ If, conversely, $k < 1$, then we have $$\label{eq:2.size-estim-EIT-less-conduct}
\frac {k} {1-k} C^{-}_{1}
\frac{W-W_0}{W_0}
\leq
|D|
\leq
\left (
\frac{1}{1-k}
\right )^{ \frac{1}{p} }
C^{-}_{2}
\left (
\frac{W-W_0}{W_0}
\right )^{ \frac{1}{p} },$$ where $C^{+}_{1}$, $C^{-}_{1}$ only depend on $d_0$, $|\Omega|$, $r_0$, $M_0$, whereas $p>1$, $C^{+}_{2}$, $C^{-}_{2}$ only depend on the same quantities and, in addition, on the *frequency of $\varphi$* $$\label{eq:2.frequency}
F[\varphi] = \frac{\|\varphi \|_{H^{ -\frac{1}{2} }(\partial
\Omega)}}{\|\varphi \|_{H^{-1}(\partial \Omega)}}.$$
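The constants $C^{\pm}_{1}$, $C^{\pm}_{2}$ and the exponent $p$ are not evaluated here; the sketch below only encodes the algebraic form of the bounds for the case $k>1$, with placeholder constants and made-up powers, to show how the normalized power gap enters the estimate.

```python
def volume_bounds(W0, W, k, C1=1.0, C2=1.0, p=2.0):
    """Two-sided bound on |D| for k > 1; C1, C2, p are placeholders, not the
    constants of the theorem (which depend on d0, |Omega|, r0, M0 and F[phi])."""
    gap = (W0 - W) / W0                              # normalized power gap
    lower = C1 * gap / (k - 1.0)
    upper = C2 * (k / (k - 1.0) * gap) ** (1.0 / p)
    return lower, upper

lo, up = volume_bounds(W0=1.00, W=0.95, k=2.0)       # made-up measurements
print(f"{lo:.3f} <= |D| <= {up:.3f}")
```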
When it is a priori known that the inclusion $D$ is not too small (if it is at all present), a situation which often occurs in practical applications, stronger bounds apply.
\[theo:size-estim-EIT-fat-incl\] Under the above hypotheses, let us assume, in addition, that $$\label{eq:2.fat-inclusion}
|D| \geq m_0,$$ for a given positive constant $m_0$. If $k > 1$ we have $$\label{eq:2.size-est
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Consider a continuous word embedding model. Usually, the cosines between word vectors are used as a measure of similarity of words. These cosines do not change under orthogonal transformations of the embedding space. We demonstrate that, using some canonical orthogonal transformations from SVD, it is possible both to increase the meaning of some components and to make the components more stable under re-learning. We study the interpretability of components for publicly available models for the Russian language (RusVectōrēs, fastText, RDT).'
author:
- |
Alexey Zobnin\
National Research University Higher School of Economics,\
Faculty of Computer Science,\
azobnin@hse.ru
bibliography:
- 'paper.bib'
title: 'Rotations and Interpretability of Word Embeddings: the Case of the Russian Language'
---
Introduction
============
Word embeddings are frequently used in NLP tasks. In vector space models every word from the source corpus is represented by a dense vector in $\mathbb{R}^d$, where the typical dimension $d$ varies from tens to hundreds. Such an embedding maps similar (in some sense) words to close vectors. These models are based on the so-called distributional hypothesis: similar words tend to occur in similar contexts [@harris1954distributional]. Some models also use letter trigrams or additional word properties such as morphological tags.
There are two basic approaches to the construction of word embeddings. The first is count-based, or explicit [@levy2014linguistic; @dhillon2015eigenwords]. For every word-context pair some measure of their proximity (such as frequency or PMI) is calculated. Thus, every word obtains a sparse vector of high dimension. Further, the dimension is reduced using singular value decomposition (SVD) or non-negative sparse embedding (NNSE). It was shown that truncated SVD or NNSE captures latent meaning in such models [@landauer1997solution; @murphy2012learning]. That is why the components of embeddings in such models are already in some sense canonical. The second approach is predict-based, or implicit. Here the embeddings are constructed by a neural network. Popular models of this kind include word2vec [@mikolov2013efficient; @mikolov2013distributed] and fastText [@bojanowski2016enriching].
Consider a predict-based word embedding model. Usually in such models two kinds of vectors, both for words and contexts, are constructed. Let $N$ be the vocabulary size and $d$ be the dimension of embeddings. Let $W$ and $C$ be $N \times d$-matrices whose rows are word and context vectors. As a rule, the objectives of such models depend on the dot products of word and context vectors, i. e., on the elements of $WC^T$. In some models the optimization can be directly rewritten as a matrix factorization problem [@levy2014neural; @cotterell2017explaining]. This matrix remains unchanged under substitutions $W \mapsto W S, \quad C \mapsto C {S^{-1}}^T$ for any invertible $S$. Thus, when no other constraints are specified, there are infinitely many equivalent solutions [@fonarev2017riemannian].
Choosing a good, not necessarily orthogonal, post-processing transformation $S$ that improves quality in applied problems is itself interesting enough [@mu2017all]. However, only word vectors are typically used in practice, and context vectors are ignored. The cosine distance between word vectors is used as a similarity measure between words. These cosines will not change if and only if the transformation $S$ is orthogonal. Such transformations do not affect the quality of the model, but may elucidate the meaning of vectors’ components. Thus, the following problem arises: *what orthogonal transformation is the best one for describing the meaning of some (or all) components?*
It is believed that the meaning of the components of word vectors is hidden [@gladkova2016intrinsic]. But even if we determine the “meaning” of some component, we may lose it after re-training because of random initialization, thread synchronization issues, etc. Many researchers [@luo2015online; @ruseti2016using; @andrews2016compressing; @jang2017elucidating] ignore this fact and, say, work with vector components directly, and only some of them take basis rotations into account [@tsvetkov2016correlation]. We show that, generally, a re-trained model differs from the source model by an almost orthogonal transformation. This leads us to the following problem: *how can one choose canonical coordinates for embeddings that are (almost) invariant with respect to re-training?*
We suggest using a well-known, plain old technique, namely, the singular value decomposition of the word matrix $W$. We study the principal components of different models for the Russian language (RusVectōrēs, RDT, fastText, etc.), although the results are applicable to any other language as well.
Related Work
============
Interpretability of the components have been extensively studied for topic models. In [@chang2009reading; @lau2014machine] two methods for estimating the coherence of topic models with manual tagging have been proposed: namely, word intrusion and topic intrusion. Automatic measures of coherence based on different similarities of words were proposed in [@aletras2013evaluating; @nikolenko2016topic]. But unlike topic models, these methods cannot be applied directly to word vectors. There are lots of new models where interpretability is either taken into account by design [@luo2015online] (modified skip-gram that produces non-negative entries), or is obtained automagically [@andrews2016compressing] (sparse autoencoding).
Many authors try to extract predefined significant properties from vectors: [@jang2017elucidating] (for non-negative sparse embeddings), [@tsvetkov2016correlation] (using a CCA-based alignment between word vectors and a manually annotated linguistic resource), [@rothe2016word] (ultradense projections).
Singular value decomposition is at the core of count-based models. To our knowledge, the only paper where SVD was applied to predict-based word embedding matrices is [@mu2017all]. In [@arora2017simple] the first principal component is constructed for the sentence embedding matrix (this component is then removed as the common one).
Word embeddings for the Russian language were studied in [@kutuzov2015texts; @Kutuzov2015; @panchenko2015russe; @arefyev2015evaluating].
Theoretical Considerations
==========================
Singular value decomposition
----------------------------
Let $m \ge n$. Recall [@jolliffe2002principal] that a singular value decomposition (SVD) of an $m\times n$-matrix $M$ is a decomposition $M = U \Sigma V^T$, where $U$ is an $m \times n$ matrix with $U^T U = I_{n}$, $\Sigma$ is a diagonal $n \times n$-matrix, and $V$ is an $n \times n$ orthogonal matrix. Diagonal elements of $\Sigma$ are non-negative and are called singular values. Columns of $U$ are eigenvectors of $M M^T$, and columns of $V$ are eigenvectors of $M^T M$. Squares of singular values are eigenvalues of these matrices. If all singular values are different and positive, then the SVD is unique up to a permutation of the singular values and the choice of direction of the singular vectors. But if some singular values coincide or equal zero, new degrees of freedom arise.
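A small numpy sketch of these properties (toy data, thin SVD):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20                        # m >= n
M = rng.normal(size=(m, n))

U, s, Vt = np.linalg.svd(M, full_matrices=False)   # thin SVD: U is m x n
Sigma, V = np.diag(s), Vt.T

print(np.allclose(M, U @ Sigma @ Vt))              # M = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(n)))             # U^T U = I_n
print(np.allclose(V.T @ V, np.eye(n)))             # V is orthogonal
# Squares of singular values are eigenvalues of M^T M (sorted descending).
eigvals = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
print(np.allclose(s**2, eigvals))
```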
Invariance under re-training
----------------------------
Learning methods are usually not deterministic. The model re-trained with similar hyperparameters may have completely different components. Let ${M_1}$ and ${M_2}$ be the word matrices obtained after two separate trainings of the model. Let these embeddings be similar in the sense that cosine distances between words are almost the same, i. e., ${M_1}{M_1}^T \approx {M_2}{M_2}^T$. Suppose also that the singular values of each ${M_i}$ are different and non-zero. Then one can show that ${M_1}$ and ${M_2}$ differ only by an (almost) orthogonal factor. Indeed, the left singular vectors in the SVD of ${M_i}$ are eigenvectors of ${M_i}{M_i}^T$. Hence, the matrices $U$ and $\Sigma$ in the SVDs of ${M_1}$ and ${M_2}$ can be chosen to be the same. Thus, ${M_2}\approx {M_1}Q$, where $Q Q^T = I_d$. Here $Q$ can be chosen as $V_1 V_2^T$, where $V_i$ are the matrices of right singular vectors in the SVD of ${M_i}$.
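The following sketch (synthetic data: a random matrix and an exact orthogonal re-parametrisation standing in for the re-trained model) recovers $Q = V_1 V_2^T$; the explicit sign-fixing step implements the freedom of choosing the direction of the singular vectors mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 500, 20
M1 = rng.normal(size=(N, d))                       # word matrix from the first training run

# Simulate a "re-trained" model: same Gram matrix, different basis.
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))  # a random orthogonal matrix
M2 = M1 @ Q_true                                   # M2 M2^T = M1 M1^T exactly

U1, s1, V1t = np.linalg.svd(M1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(M2, full_matrices=False)

# Singular vectors are only defined up to a sign; pick the signs of the second
# decomposition so that U2 matches U1 ("U and Sigma can be chosen the same").
signs = np.sign(np.sum(U1 * U2, axis=0))
U2, V2t = U2 * signs, V2t * signs[:, None]

Q = V1t.T @ V2t                                    # Q = V1 V2^T
print(np.allclose(M1 @ Q, M2))                     # True: M2 = M1 Q
print(np.allclose(Q @ Q.T, np.eye(d)))             # Q is orthogonal
```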
Interpretability measures
-------------------------
One of the traditional measures of interpretability in topic modeling is as follows [@newman2010automatic; @lau2014machine]. For each component, the $n$ most probable words are selected. Then for each pair of selected words some co-occurrence measure such as PMI is calculated. These values are averaged over all pairs of selected words and all components. Other approaches use human markup. Such measures need additional data, and it is difficult to study them algebraically. Also, unlike topic modeling, word embeddings are not probabilistic: both positive and negative values of the coordinates should be considered.
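A sketch of such a coherence score adapted to word embeddings is given below. The selection of the top-$n$ words by the absolute value of their coordinate and the add-one smoothing of co-occurrence counts are our own illustrative choices, not part of the cited measures; the co-occurrence statistics are assumed to be supplied by the caller:

```python
import itertools
import numpy as np

def component_coherence(W, cooc, totals, n_total, top_n=10):
    """Average pairwise PMI of the top-n words of every embedding component.

    W       : (N, d) word-embedding matrix
    cooc    : dict mapping frozenset({i, j}) -> co-occurrence count of words i and j
    totals  : (N,) array of occurrence counts of each word
    n_total : total number of (word, context) observations
    """
    d = W.shape[1]
    scores = []
    for c in range(d):
        top = np.argsort(-np.abs(W[:, c]))[:top_n]      # words with the largest |coordinate|
        pmis = []
        for i, j in itertools.combinations(top, 2):
            joint = cooc.get(frozenset((i, j)), 0) + 1  # add-one smoothing (our choice)
            pmis.append(np.log(joint * n_total / (totals[i] * totals[j])))
        scores.append(np.mean(pmis))
    return float(np.mean(scores))
```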
---
abstract: 'We prove that the semantics of intuitionistic linear logic in vector spaces which uses cofree coalgebras to model the exponential is a model of differential linear logic. Thus, in this semantics, proof denotations have natural derivatives. We give several examples of these derivatives.'
author:
- 'James Clift, Daniel Murfet'
title: Cofree coalgebras and differential linear logic
---
Introduction
============
The idea of taking derivatives of programs is an old one [@paige §2] with many manifestations, including automatic differentiation of algorithms computing real-valued functions [@autodiff] and incremental computation [@incdiff]. However, these approaches are limited to restricted classes of computations, and it is only recently with the development of the differential $\lambda$-calculus by Ehrhard-Regnier [@difflambda] and its refinement differential linear logic [@blutecs; @ehrhard-survey] that derivatives have been defined for general higher-order programs. As with ordinary calculus, the aim of these theories is to assign to a program $P$ another program $\partial P$ (the derivative) which computes the change in the output of $P$ resulting from an infinitesimal change to its input. Here we give ourselves arbitrary $\mathbb{C}$-linear combinations of programs (meaning $\lambda$-terms or proofs in linear logic) as a starting point so that “small” changes to the input make sense.
This paper is about the semantics of differential linear logic, following [@blutecs]. The aim is to explain how the natural semantics of intuitionistic linear logic in vector spaces [@hyland; @murfet_ll] is already a model of differential linear logic. The key point is that tangent vectors and derivatives appear as soon as we introduce cofree coalgebras to model the exponential, which shows that the differential structure is intrinsic to the algebra of linear logic.
To see this, let ${\llbracket - \rrbracket}$ denote semantics in vector spaces and suppose we are given a proof $\pi$ in linear logic computing a function from inputs of type $A$ to outputs of type $B$: $$\varwidth{.9\textwidth}\centering\leavevmode
\AxiomC{$\pi$}
\noLine\UnaryInfC{$\vdots$}
\def\extraVskip{5pt}
\noLine\UnaryInfC{${!} A \vdash B$\,.}
\DisplayProof\endvarwidth$$ The space of inputs to ${\llbracket \pi \rrbracket}$ is ${\llbracket A \rrbracket}$, and a small change in the input starting from $P \in {\llbracket A \rrbracket}$ is a tangent vector $\nu$ at $P$, viewing ${\llbracket A \rrbracket}$ as a smooth manifold or a scheme. This is equivalent to the data of a linear map $$(\mathbb{C}[\varepsilon]/\varepsilon^2)^* {\longrightarrow}{\llbracket A \rrbracket}$$ where $\mathbb{C}[\varepsilon]/\varepsilon^2$ is the ring of dual numbers (this bijection is reviewed in Appendix \[section:tangent\_vectors\]). If ${\llbracket {!} A \rrbracket}$ is the universal cocommutative counital coalgebra mapping to ${\llbracket A \rrbracket}$ then there is a unique lifting of this linear map to a morphism of coalgebras $$\label{eq:toucan}
(\mathbb{C}[\varepsilon]/\varepsilon^2)^* {\longrightarrow}{\llbracket {!} A \rrbracket}\,.$$ Similarly the linear map ${\llbracket \pi \rrbracket}: {\llbracket {!} A \rrbracket} {\longrightarrow}{\llbracket B \rrbracket}$ lifts to a morphism of coalgebras ${\llbracket {!} A \rrbracket} {\longrightarrow}{\llbracket {!} B \rrbracket}$ which may be composed with \[eq:toucan\] to give a morphism of coalgebras $$\label{eq:toucan2}
(\mathbb{C}[\varepsilon]/\varepsilon^2)^* {\longrightarrow}{\llbracket {!} A \rrbracket} {\longrightarrow}{\llbracket {!} B \rrbracket}$$ which, in turn, defines a tangent vector at the point ${\llbracket \pi \rrbracket}\ket{\emptyset}_P \in {\llbracket B \rrbracket}$, where $\ket{\emptyset}_P$ is the point of ${\llbracket {!} A \rrbracket}$ corresponding to $P$. The tangent vector gives the infinitesimal variation of the output of $\pi$ on the input $P$, when the input is varied in the direction of $\nu$.
The formal statement is that for any algebraically closed field $k$ of characteristic zero the semantics of intuitionistic linear logic in $k$-vector spaces defined using cofree coalgebras is a model of differential linear logic (Theorem \[main\_theorem\]). We refer to this as the *Sweedler semantics*, since the explicit description of this universal coalgebra is due to him [@sweedler; @murfet_ll]. The proof is elementary and we make no claim here to technical novelty; the link between the symmetric coalgebra and differential calculus is well-known. Perhaps our main contribution is to give several detailed examples showing how to compute these derivatives. We do this with the aim of reinforcing the fact that differentiating programs, even higher-order ones, is a natural thing to do.\
We conclude this introduction with a sketch of one such example and a comparison of our work to other semantics of differential linear logic. To elaborate a little more on the notation: for any type $A$ of linear logic (which for us has only connectives $\otimes, \multimap, !$) there is a vector space ${\llbracket A \rrbracket}$, and for any proof $\pi$ of $A \vdash B$ there is a linear map ${\llbracket \pi \rrbracket}: {\llbracket A \rrbracket} {\longrightarrow}{\llbracket B \rrbracket}$. In particular every proof $\xi$ of type $A$ has a denotation ${\llbracket \xi \rrbracket} \in {\llbracket A \rrbracket}$, and the promotion of $\xi$ has for its denotation a vector $\ket{\emptyset}_{{\llbracket \xi \rrbracket}} \in {\llbracket !A \rrbracket}$, see [@murfet_ll §5.3].
For any binary sequence $S \in \{0,1\}^*$ there is an encoding of $S$ as a proof $\underline{S}$ of type $$\textbf{bint}_A = {!}(A \multimap A) \multimap \big({!}(A \multimap A) \multimap (A \multimap A)\big)\,.$$ Repetition of sequences can be encoded as a proof $$\varwidth{.9\textwidth}\centering\leavevmode
\AxiomC{${\underline{\mathrm{repeat}}}$}
\noLine\UnaryInfC{$\vdots$}
\def\extraVskip{5pt}
\noLine\UnaryInfC{${!} \textbf{bint}_A \vdash \textbf{bint}_A$\,.}
\RightLabel{\scriptsize $\multimap R$}
\DisplayProof\endvarwidth$$ The denotation is a linear map ${\llbracket {!}\textbf{bint}_A \rrbracket} {\longrightarrow}{\llbracket \textbf{bint}_A \rrbracket}$ sending $\ket{\emptyset}_{{\llbracket \underline{S} \rrbracket}}$ to ${\llbracket \underline{SS} \rrbracket}$. The derivative of ${\underline{\mathrm{repeat}}}$ according to the theory of differential linear logic is another proof $$\varwidth{.9\textwidth}\centering\leavevmode
\AxiomC{$\partial\, {\underline{\mathrm{repeat}}}$}
\noLine\UnaryInfC{$\vdots$}
\def\extraVskip{5pt}
\noLine\UnaryInfC{${!} \textbf{bint}_A, \textbf{bint}_A \vdash \textbf{bint}_A$\,}
\RightLabel{\scriptsize $\multimap R$}
\DisplayProof\endvarwidth$$ which can be derived from ${\underline{\mathrm{repeat}}}$ by new deduction rules called codereliction, cocontraction and coweakening (see Section \[section:coder\]). We prove in Section \[section:bint\] that the denotation of this derivative in the Sweedler semantics is the linear map $$\begin{gathered}
{\llbracket \partial\, {\underline{\mathrm{repeat}}} \rrbracket}: {\llbracket {!}\textbf{bint}_A \rrbracket} \otimes {\llbracket \textbf{bint}_A \rrbracket} {\longrightarrow}{\llbracket \textbf{bint}_A \rrbracket}\,,\\
\ket{\emptyset}_{{\llbracket \underline{S} \rrbracket}} \otimes {\llbracket \underline{T} \rrbracket} \longmapsto {\llbracket \underline{ST} \rrbracket} + {\llbracket \underline{TS} \rrbracket}\end{gathered}$$ whose value on the tensor $\ket{\emptyset}_{{\llbracket \underline{S} \rrbracket}} \otimes {\llbracket \underline{T} \rrbracket}$ we interpret as the derivative of the repeat program at the sequence $S$ in the direction of the sequence $T$. This can be justified informally by the following calculation using an infinitesimal $\varepsilon$ $$\begin{aligned}
(S + \varepsilon T)( S + \varepsilon T) = SS + \varepsilon( ST + TS ) + \varepsilon^2 TT,\end{aligned}$$ which says that varying the sequence $S$ infinitesimally in the direction of $T$ changes the repeated sequence, to first order in $\varepsilon$, by $ST + TS$.
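The informal calculation can be carried out mechanically. Below is a small Python sketch (ours, not part of the paper) that represents elements $a + \varepsilon b$ of the dual numbers whose coefficients are formal linear combinations of binary sequences, with concatenation as the (non-commutative) product; applying the squaring map to $S + \varepsilon T$ returns $SS$ in degree zero and $ST + TS$ in the $\varepsilon$-component, matching the denotation of $\partial\,\underline{\mathrm{repeat}}$ computed above:

```python
from collections import Counter

class Dual:
    """a + eps*b, where a and b are formal linear combinations of strings
    (Counter: string -> coefficient) and eps^2 = 0."""
    def __init__(self, a, b=None):
        self.a = Counter(a)
        self.b = Counter(b or {})

    def __mul__(self, other):
        # (a1 + eps b1)(a2 + eps b2) = a1 a2 + eps (a1 b2 + b1 a2);
        # the product of basis elements is concatenation of sequences.
        def conc(p, q):
            out = Counter()
            for s1, c1 in p.items():
                for s2, c2 in q.items():
                    out[s1 + s2] += c1 * c2
            return out
        return Dual(conc(self.a, other.a), conc(self.a, other.b) + conc(self.b, other.a))

def repeat(x):
    return x * x

S, T = "01", "10"
result = repeat(Dual({S: 1}, {T: 1}))    # repeat(S + eps*T)
print(dict(result.a))   # {'0101': 1}            -> SS
print(dict(result.b))   # {'0110': 1, '1001': 1} -> ST + TS
```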
---
abstract: 'We present a model for the origin of the extended law of star formation in which the surface density of star formation ($\Sigma_{\rm SFR}$) depends not only on the local surface density of the gas ($\Sigma_{g}$), but also on the stellar surface density ($\Sigma_{*}$), the velocity dispersion of the stars, and on the scaling laws of turbulence in the gas. We compare our model with the spiral, face-on galaxy NGC 628 and show that the dependence of the star formation rate on the entire set of physical quantities for both gas and stars can help explain not only the observed general trends in the $\Sigma_{g}-\Sigma_{\rm SFR}$ and $\Sigma_{*}-\Sigma_{\rm SFR}$ relations, but also, and equally importantly, the scatter in these relations at any value of $\Sigma_{g}$ and $\Sigma_{*}$. Our results point to the crucial role played by existing stars along with the gaseous component in setting the conditions for large scale gravitational instabilities and star formation in galactic disks.'
author:
- |
Sami Dib$^{1}$[^1], Sacha Hony$^{2}$ Guillermo Blanc$^{3,4,5}$\
$^{1}$Universidad de Atacama, Copayapu 485, Copiapó, Chile\
$^{2}$Institut für Theoretische Astrophysik, Zentrum für Astronomie der Universität Heidelberg, Albert-Überle-Stra[ß]{}e 2, 69120 Heidelberg, Germany.\
$^{3}$Observatories of the Carnegie Institution for Science, 813 Santa Barbara St, Pasadena, CA, 91101, USA\
$^{4}$Departamento de Astronomía, Universidad de Chile, Camino del Observatorio 1515, Las Condes, Santiago, Chile\
$^{5}$Centro de Astrofísica y Tecnologías Afines (CATA), Camino del Observatorio 1515, Las Condes, Santiago, Chile\
date: 'Accepted XXX. Received XXX'
title: 'The extended law of star formation: the combined role of gas and stars'
---
\[firstpage\]
galaxies: star formation - galaxies: kinematics and dynamics - galaxies: stellar content - ISM: structure - galaxies: ISM - galaxies: evolution
INTRODUCTION {#motiv}
============
The star formation rate (SFR) is the quantity that describes how galaxies convert their gas reservoirs into stars per unit time. Quantifying the dependence of the SFR on the global properties of galaxies as well as on the local conditions within galaxies is essential towards understanding their observed properties and their dynamical and chemical evolution across cosmic time. Traditionally, observational studies have sought the correlation between the surface density of star formation ($\Sigma_{\rm SFR}$) and the surface density of the gas $\Sigma_{g}=\Sigma_{\ion{H}{i}}+\Sigma_{{\rm H_{2}}}$, where $\Sigma_{\ion{H}{i}}$ and $\Sigma_{{\rm H_{2}}}$ are the surface densities of the neutral and molecular hydrogen, respectively. The emerging picture from all of these works is that $\Sigma_{\rm SFR} \propto \Sigma_{g}^{n}$ with $n \approx 1.4$ (e.g., Schmidt 1959; Kennicutt 1998; Bigiel et al. 2008; Blanc et al. 2009). Other studies found that the surface density of star formation scales linearly or sub-linearly ($n \lesssim 1$) with the surface density of molecular hydrogen traced by CO lines or with the surface density of molecules that trace higher density gas such as HCN (e.g., Gao & Solomon 2004; Shetty et al. 2013; Liu et al. 2016). Several ideas have been proposed in order to explain the origin of the star formation scaling relations. The earliest scenarios proposed that stars form as a result of gravitational instabilities (GI) in the gaseous component of galactic disks over a timescale which is the local free-fall time of the gas and which is given by $t_{ff,g} \propto \rho_{g}^{-0.5}$, where $\rho_{g}$ is the local gas volume density. For a constant scale height of the disk, $\rho_{g} \propto \Sigma_{g}$ and thus $\Sigma_{\rm SFR} \propto \Sigma_{g}/t_{ff,g} \propto \Sigma_{g}^{1.5}$ (e.g., Madore 1977). Wong & Blitz (2002) argued that the value of the star formation law slope is related to the value of the molecular fraction $f_{{\rm H_{2}}}=\Sigma_{{\rm H_{2}}}/\Sigma_{g}$ and Blitz & Rosolowsky (2006) showed that $f_{{\rm H_{2}}}$ can be related to the pressure of the interstellar medium. It was also suggested that the value of $n$ is related to the width of the density probability distribution function of the interstellar gas and to the threshold density that is associated with the gas tracer (Tassis 2007; Wada & Norman 2007). Escala (2011) argued that a correlation exists between the largest mass-scale for structures not stabilised by rotation and the SFR. Other groups (e.g., Krumholz & McKee 2005; Padoan & Nordlund 2011; Hennebelle & Chabrier 2011; Federrath 2013; Kraljic et al. 2014) explored ideas based on the role of turbulent fragmentation in GMCs and in which the SFR is a function of the dynamical properties of the clouds. Meidt et al. (2013) argued that the star formation rate in molecular clouds in M51 may correlate with the intensity of the dynamical pressure the clouds are subjected to. The role of feedback coupled to turbulent fragmentation and its effects on the regulation of the SFR on galactic scales have been included in a number of models (e.g., Dopita 1985; Dopita & Ryder 1994; Dib et al. 2011a,b; Dib 2011a,b; Renaud et al. 2012; Dib et al. 2013; Orr et al. 2017).
It is however necessary to include stars in the treatment of GI on large scales in galactic disks, since in most disk galaxies, the stellar surface density is observed to be a factor $\approx 10-100$ larger than the gas surface density (e.g., Leroy et al. 2008). The role of existing stars in determining the development of gravitational instabilities has been investigated in a limited number of studies. Jog & Solomon (1984a,b) explored the characteristics of the gravitational instability in a two fluid medium (gas and stars) in which both components interact gravitationally with each other and are each treated as an isothermal gas with specific velocity dispersions. One of their main conclusions is that even when each fluid component is gravitationally stable, the joint fluid system may be gravitationally unstable. Rafikov (2001) expanded the study of Jog & Solomon to the case where the stars are treated as a collisionless component. Setting stars aside, Romeo et al. (2010) investigated the role of turbulent motions on the stability of galactic disks. They described interstellar turbulence using scaling laws that relate the size of a region to the gas surface density ($\Sigma_{g}$) and gas velocity dispersion ($\sigma_{g}$). Romeo & Agertz (2014) investigated the development of GI for various regimes of turbulence (i.e., different dependence of $\Sigma_{g}$ and $\sigma_{g}$ on the physical scale). In parallel, Romeo & Wiegert (2011) and Romeo & Falstad (2013) proposed a derivation of the effective Toomre $Q$ parameter (Toomre 1964) for multicomponent disks of stars and gas, taking into account the effects of disk thickness. Shadmehri & Khajenabi (2012) and Hoffman & Romeo (2012) coupled aspects of the analysis of Jog & Solomon (1984a) to that of Romeo et al. (2010) and investigated the linear growth rate of the GI in a gas+star galactic disk while at the same time accounting for the turbulent nature of the gas. On the observational side, Shi et al. (2011) showed that the scatter in the $\Sigma_{g}-\Sigma_{\rm SFR}$ relation may be reduced if $\Sigma_{\rm SFR}$ is a function that depends on both $\Sigma_{g}$ and $\Sigma_{*}$. Describing $\Sigma_{\rm SFR}$ as the product of two power-law functions of the gas and stellar surface densities ($\Sigma_{\rm SFR} \propto \Sigma_{g}^{\alpha}~\Sigma_{*}^{\beta}$), they obtained $\alpha=0.8\pm0.01$ and $\beta=0.63\pm0.01$ from the combined measurements on sub-galactic scales (scales of $\approx 750$ pc) of 12 nearby galaxies, with a non-negligible galaxy-to-galaxy scatter when the data of each galaxy is fitted individually (see also Westfall et al. 2014). Rahmani et al. (2016) performed a similar study for the Andromeda galaxy, and showed that these exponents may well depend on the distance from the centre of the galaxy. It is important to mention that the description of the extended law of star formation as being the product of two power-laws (for gas and stars) is an empirical one, and possibly is an over-simplification of the physical processes that may be connecting the gas and stellar properties to the star formation rate.
However, in all of the above-mentioned works, the origin of the dependence of the surface density of star formation on the local properties of the gas and stars has not been explicitly quantified.
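For reference, the empirical double power-law parametrisation discussed above can be evaluated as in the following sketch; the exponents are those of Shi et al. (2011), while the normalisation $A$ and the input values are placeholders rather than fitted quantities:

```python
import numpy as np

def sigma_sfr(sigma_gas, sigma_star, alpha=0.8, beta=0.63, A=1e-4):
    """Extended star-formation law Sigma_SFR = A * Sigma_gas^alpha * Sigma_*^beta.

    alpha, beta : exponents from Shi et al. (2011); A is a placeholder normalisation.
    Surface densities in Msun/pc^2; the returned Sigma_SFR is in arbitrary units here.
    """
    return A * sigma_gas**alpha * sigma_star**beta

# Example: two regions with the same gas surface density but different stellar content.
print(sigma_sfr(10.0, 50.0))
print(sigma_sfr(10.0, 500.0))  # higher stellar surface density -> higher predicted SFR
```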
---
abstract: 'In the thermal dark matter (DM) paradigm, primordial interactions between DM and Standard Model particles are responsible for the observed DM relic density. In [@boehm:2014MNRAS], we showed that weak-strength interactions between DM and radiation (photons or neutrinos) can erase small-scale density fluctuations, leading to a suppression of the matter power spectrum compared to the collisionless cold DM (CDM) model. This results in fewer DM subhaloes within Milky Way-like DM haloes, implying a reduction in the abundance of satellite galaxies. Here we use very high resolution $N$-body simulations to measure the dynamics of these subhaloes. We find that when interactions are included, the largest subhaloes are less concentrated than their counterparts in the collisionless CDM model and have rotation curves that match observational data, providing a new solution to the “too big to fail” problem.'
author:
- |
J. A. Schewtschenko,$^{1,2}$[^1] C. M. Baugh,$^{1}$ R. J. Wilkinson,$^{2}$ C. Bœhm,$^{2,3}$ S. Pascoli,$^{2}$ T. Sawala$^{4}$\
$^1$Institute for Computational Cosmology, Durham University, Durham DH1 3LE, UK\
$^2$Institute for Particle Physics Phenomenology, Durham University, Durham DH1 3LE, UK\
$^3$LAPTH, U. de Savoie, CNRS, BP 110, 74941 Annecy-Le-Vieux, France\
$^4$Department of Physics, University of Helsinki, Gustaf Hällströmin katu 2a, FI-00014 Helsinki, Finland
bibliography:
- 'IDM\_TBTF.bib'
title: 'Dark matter–radiation interactions: the structure of Milky Way satellite galaxies'
---
\[firstpage\]
astroparticle physics – dark matter – galaxies: haloes – large-scale structure of Universe.
Introduction {#sec:intro}
============
The cold dark matter (CDM) model has been remarkably successful at explaining measurements of the cosmic microwave background radiation and the large-scale structure of the Universe. However, in its simplest form, the model faces challenges on small scales; the most pressing of which are the “missing satellite” (@moore_dark_1999 [@Klypin:1999uc]) and “too big to fail” (@BoylanKolchin:2011de) problems. These discrepancies may indicate the need to consider a richer physics phenomenology in the dark sector, although they were first stated without the inclusion of baryonic physics.
The “missing satellite” problem refers to the overabundance of DM subhaloes in Milky Way (MW)-like DM haloes, compared to the observed number of MW satellite galaxies. This comparison between theory and observation requires a connection to be made between subhaloes and galaxies; in the absence of a good model for galaxy formation, this is most readily done using the halo circular velocity. Subsequent simulations that have taken into account baryonic physics suggest that a reduction in the efficiency of galaxy formation in low-mass DM haloes results in many of the excess subhaloes containing either no galaxy at all or a galaxy that is too faint to be observed (@Benson:2002 [@Somerville:2002; @Sawala:2014; @Sawala:2015]).
As the resolution of $N$-body simulations continued to improve, the “too big to fail” problem emerged (@BoylanKolchin:2011de). This concerns the largest subhaloes, which should be sufficiently massive that their ability to form a galaxy is not hampered by heating of the intergalactic medium by photo-ionising photons or heating of the interstellar medium by supernovae. Simulations of vanilla CDM showed that the largest subhaloes are more massive and denser than is inferred from measurements of the MW satellite rotation curves.
The severity of the small-scale problems can be reduced if one considers the mass of the MW, which impacts the selection of MW-like haloes in the simulations but remains difficult to determine (@Wang:2012 [@Cautun:2014dda; @Piffl:2014; @Wang:2015]). A range of alternatives to vanilla CDM have also been proposed e.g. warm DM (@schaeffer_silk), interacting DM (@Boehm:2000gq [@Boehm:2004th; @CyrRacine:2012fz; @Chu:2014lja]), self-interacting DM (@Spergel:1999mh [@Rocha:2012jg; @Vogelsberger:2014pda; @Buckley:2014PhRvD]), decaying DM (@Wang:2014ina) and late-forming DM (@Agarwal:2015). These “beyond CDM” models generally exhibit a cut-off in the linear matter power spectrum at small scales (high wavenumbers) that translates into a reduced number of low-mass DM haloes compared to collisionless CDM at late times.
Most numerical efforts so far to check whether such models could solve the small-scale problems have focussed on either warm DM or self-interacting DM. However, some works have studied the impact of DM scattering elastically with Standard Model particles in the early Universe; for example, with photons ($\boldsymbol{\gamma}$**CDM**) (@Boehm:2000gq [@boehm_interacting_2001; @Sigurdson:2004zp; @Boehm:2004th; @Dolgov:2013una; @Wilkinson:2013kia]), neutrinos ($\boldsymbol{\nu}$**CDM**) (@Boehm:2000gq [@boehm_interacting_2001; @Boehm:2004th; @Mangano:2006mp; @Serra:2009uu; @Wilkinson:2014ksa; @Escudero:2015yka]) and baryons (@Chen:2002yh [@Dvorkin:2013cea; @Aviles:2011ak]).
Such elastic scattering processes are intimately related to the DM annihilation mechanism in the early Universe and are thus directly connected to the DM relic abundance in scenarios where DM is a thermal weakly-interacting massive particle (WIMP). Therefore, rather than being viewed as exotica, interactions between DM and Standard Model particles should be considered as a more realistic realisation of the CDM model. Indeed, instead of assuming that CDM has no interactions beyond gravity, one can actually test this assumption by determining their impact on the linear matter power spectrum and ruling out values of the cross section that are in contradiction with observations. However, it should be noted that the strength of the scattering and annihilation cross sections can differ by several orders of magnitude, depending on the particle physics model.
The $\gamma$CDM and $\nu$CDM scenarios are characterised by the collisional damping of primordial fluctuations, which can lead to a suppression of small-scale power at late times. The collisional damping scale is determined by a single model-independent parameter: the ratio of the scattering cross section to the DM mass. The larger the ratio, the larger the suppression of the matter power spectrum. For simplicity, we assume that the scattering cross section is constant (i.e. temperature-independent), bearing in mind that temperature-dependence would give rise to the same effect but with a different value of the cross section today (@Wilkinson:2013kia [@Wilkinson:2014ksa]). In [@boehm:2014MNRAS], we confirmed that such models can provide an alternative solution to the missing satellite problem in the MW. Here we show that interacting DM could also solve the too big to fail problem[^2].
The paper is organised as follows. In Section \[sec:idmssp:sim\], we describe the setup of the $N$-body simulations that we use to study small structures. In Section \[sec:tbtf\], we investigate whether interacting DM can alleviate the too big to fail problem, using MW observations. Finally, we give our conclusions in Section \[sec:conc\].
Simulations {#sec:idmssp:sim}
===========
{width="90.00000%"}
While the CDM matter power spectrum predicts the existence of structures at all scales (down to earth mass haloes (@Diemand:2005Nature [@Springel:2008cc; @Angulo:2009hf])), interacting DM models predict a suppression of power below a characteristic damping scale that is determined by the ratio of the DM interaction cross section to the DM mass (@boehm_interacting_2001). For allowed $\gamma$CDM and $\nu$CDM models (@boehm:2014MNRAS), the suppression occurs for haloes with masses below $10^8-10^9~M_{\odot}$. Therefore, to study the distribution and properties of structures beyond the linear regime, it is essential to carry out high-resolution $N$-body simulations.
To reach the resolution required to model the dynamics of DM subhaloes within MW-mass DM haloes, we first identify Local Group (LG) candidates in an $N$-body simulation of a large cosmological volume, and then resimulate the region containing these haloes at much higher mass resolution in a “zoom” resimulation. We use the `DOVE` cosmological simulation to identify haloes for resimulation (the criteria used to select the haloes are listed below) [@Sawala:2014arXiv]. The `DOVE` simulation follows the hierarchical clustering of the mass
---
abstract: 'We study the hydrodynamic forces acting on a finite-size impurity moving in a two-dimensional Bose-Einstein condensate at non-zero temperature. The condensate is modeled by the damped-Gross Pitaevskii (dGPE) equation and the impurity by a Gaussian repulsive potential giving the coupling to the condensate. The width of the Gaussian potential is equal to the coherence length, thus the impurity can only emit waves. Using linear perturbation analysis, we obtain analytical expressions corresponding to different hydrodynamic regimes which are then compared with direct numerical simulations of the dGPE equation and with the corresponding expressions for classical forces. For a non-steady flow, the impurity experiences a time-dependent force that, for small coupling, is dominated by the inertial effects from the condensate and can be expressed in terms of the local material derivative of the fluid velocity, in direct correspondence with the Maxey and Riley theory for the motion of a solid particle in a classical fluid. In the steady-state regime, the force is dominated by a self-induced drag. Unlike at zero temperature, where the drag force vanishes below a critical velocity, at finite temperatures, the drag force has a net contribution from the energy dissipated in the condensate through the thermal drag at all velocities of the impurity. At low velocities this term is similar to the Stokes’ drag in classical fluids. There is still a critical velocity above which the main drag pertains to energy dissipation by acoustic emissions. Above this speed, the drag behaves non-monotonically with impurity speed, reflecting the reorganization of fringes and wake around the particle.'
author:
- 'Jonas Rønning$^1$, Audun Skaugen$^2$, Emilio Hernández-García$^3$, Cristóbal López$^3$, Luiza Angheluta$^1$'
bibliography:
- 'ref-2.bib'
title: 'Classical analogies for the force acting on an impurity in a Bose-Einstein condensate'
---
Introduction
============
The motion of an impurity suspended in a quantum fluid depends on several key factors such as the superfluid nature and flow regime, as well as the size of the impurity and its interaction with the surrounding fluid [@winiecki2000motion; @wouters2010superfluidity; @astrakharchik2004motion; @shukla2016sticking; @pinsker2017gaussian]. Therefore, it is disputable whether the forces acting on an impurity in a quantum fluid should bear any resemblance to classical hydrodynamic forces. In the case of an impurity immersed in superfluid liquid helium, classical equations of motion and hydrodynamic forces are assumed a priori [@poole2005motion], since impurities are typically much larger than the coherence length and then quantum hydrodynamic effects like the quantum pressure can be neglected. For Bose-Einstein condensates (BEC) in dilute atomic gases, impurities can be neutral atoms [@chikkatur2000suppression], ion impurities [@zipkes2010trapped; @balewski2013coupling] or quasiparticles [@jorgensen2016observation]. The size of an impurity in a BEC is typically of the same order of magnitude or smaller than the coherence length, and quantum hydrodynamic effects cannot be ignored.
There are several theoretical and computational studies of the interaction force between an impurity and a BEC at zero absolute temperature, using different approaches depending on the nature of the particle and its interaction with the condensate. A microscopic approach is used to analyse the interaction of a rigid particle with a BEC by solving the Gross-Pitaevskii equation (GPE) for the condensate macroscopic wavefunction and using boundary conditions such that the condensate density vanishes at the particle boundary [@pham2005boundary]. This methodology allows to study complex phenomena such as vortex nucleation and flow instabilities, but it is more oriented to find the effects of an obstacle on the flow rather than the coupled particle-flow dynamics. In addition, the boundary condition introduces severe nonlinearities which can only be addressed numerically. At a more fundamental level of description, the impurity is treated as a quantum particle with its own wavefunction described by the Schrödinger equation and that is coupled with the GPE for the macroscopic wavefunction of the BEC [@berloff2000capture]. A more versatile model for the interaction of impurities with the BEC has been explored in several papers [@astrakharchik2004motion; @shukla2016sticking; @griffin2017vortex; @shukla2018particles; @pinsker2017gaussian]. Here, an additional repulsive interaction (a Gaussian or delta-function potential) is added to model scattering of the condensate particles with the impurity. The hydrodynamic force on the impurity is determined by this repulsion potential and the superfluid density through the Ehrenfest theorem. The strong-coupling limit of this repulsive potential would be equivalent to the rigid boundary-condition approach. Within this modeling approach, some works have studied the complex motion of particles interacting with vortices in the flow, and the indirect interactions between them arising from the presence of the fluid [@shukla2016sticking; @shukla2018particles]. Another line of research using this type of modeling focused mainly on the superfluidity criterion of a uniform BEC at zero temperature regime. Within the Bogoliubov perturbation analysis for a small impurity and weak interaction, analytical expressions can be derived for the steady-state force exerted by the superfluid as function of the constant velocity of the impurity [@astrakharchik2004motion; @roberts2006force; @sykes2009drag; @pinsker2017gaussian]. At zero temperature, this force vanishes below a critical velocity, the speed of long-wavelength sound waves, at least when we ignore the quantum fluctuations [@roberts2006force], and corresponds to the dissipationless motion. Above this velocity, there is a finite drag force and the motion of the impurity is damped by acoustic excitations. While this is a form of drag, in that the force opposes motion by dissipating energy, it is not the same as the classical Stokes’ drag in viscous fluids. Recent experiments probing superfluidity in a BEC are able to indirectly estimate the drag force by measuring the local heating rate in the vicinity of the moving laser beam and show that there is still a critical velocity even at non-zero temperatures and that the critical velocity is lower for a repulsive potential than for an attractive one [@singh2016probing].
In this paper, we study the forces exerted on an impurity moving in a two-dimensional BEC at finite temperature, using an approach similar to [@astrakharchik2004motion; @shukla2016sticking; @griffin2017vortex; @shukla2018particles; @pinsker2017gaussian], in which a repulsive Gaussian potential is used to describe the interaction of the particle with the BEC, but using a dissipative version of the GPE to model the fluid. Our aim is to bridge this microscopic approach with the phenomenological descriptions [@poole2005motion] that assume that the forces from the superfluid are the same as those from a classical fluid in the inviscid and irrotational case. As in the classical-fluid case, we find that the force is made of two contributions: One of them, dominant for very weak fluid-particle interaction, bears a rather complete analogy with the corresponding force in classical fluids (inertial or pressure-gradient force), which depends on local fluid acceleration and includes the so-called Faxén corrections arising from velocity inhomogeneities close to the particle position [@maxey1983equation]. The difference is that, in a classical fluid, these corrections arise from the finite size of the particle and vanish when the particle size becomes zero. In the BEC, Faxén-type corrections arise both from the particle size (modeled by the range of the particle repulsion potential) and from the BEC coherence length. As fluid-particle interaction becomes more important, a second contribution to the force becomes noticeable, which takes into account the drag on the particle arising from the perturbation of the flow produced by the presence of the particle. Thus it can be called a particle *self-induced* force. We are able to obtain explicit formulae for it in the case of constant-velocity motion of the particle in an otherwise homogeneous and steady BEC. This drag is a dissipative (damping) force due to viscous-like drag of the perturbed BEC with the thermal cloud. It occurs in addition to the drag due to acoustic excitations in the condensate that in the absence of dissipation occurs only above a critical velocity for the particle. Here, it can be compared with the corresponding force in classical fluids, namely the viscous Stokes drag. We find that, as the Stokes force, the self-induced dissipative drag is linear in the particle velocity for small velocities, and we obtain an expression for it also at arbitrary velocities.
The rest of the paper is structured as follows. In Sect. \[sec:model\], we discuss the general modeling setup and in Sect. \[sec:perturbation\] a perturbation analysis is used to derive the linearized equations for the perturbations in the wavefunction related to non-steady condensate flow and the particle repulsive potential. Subsections \[sec:inertial\] and \[sec:drag\] derive analytical expressions within perturbation theory for the two contributions to the force experienced by the particle. In Section \[sec:numerics\], we compare our theoretical predictions with numerical simulations of the dissipative GPE coupled to the impurity, and the final section summarizes our conclusions.
Modeling approach {#sec:model}
=================
We model the interaction between the impurity and a two-dimensional BEC through a Gaussian repulsive potential which can be reduced to a delta-function limit similar to previous studies [@astrakharchik2004motion; @pinsker2017gaussian]. The BEC itself, which is
---
address: |
Institute of Field Physics, Department of Physics and Astronomy,\
University of North Carolina, Chapel Hill, NC 27599-3255, USA\
E-mail: ng@physics.unc.edu
author:
- 'Y. JACK NG'
title: MAGNETIC CATALYSIS OF CHIRAL SYMMETRY BREAKING AND THE PAULI PROBLEM
---
Let me begin with a joke which some of you may have heard before. One of Wolfgang Pauli’s life-long dreams was to understand why the fine structure constant in electrodynamics is 1/137 (in the infrared regime). Pauli was also known to be a difficult person, very hard to please. As the joke goes, the first thing Pauli asked God after his death was to explain why $\alpha$ = 1/137. As God went on with His explanation, Pauli grew more and more dissatisfied. After five minutes, Pauli was seen storming out of Heaven’s Gates mumbling, “Ridiculous!”
Like Pauli, I also would like to understand why $\alpha$ = 1/137. To dignify this problem, I will call it the Pauli problem. It is possible that chiral symmetry breaking by an external field in QED may provide some insight on this old problem by giving a critical value of $\alpha$ close to 1/137.[@JBW] Admittedly, nothing close to that magic value has arisen in the results we have obtained so far [@Lee], but our analysis is not yet complete.
My interest in chiral symmetry breaking by an external field dates back a dozen years, to the time when multiple correlated and narrow-peak structures in electron and positron spectra [@GSI] were observed in heavy-ion experiments at GSI. Kikuchi and I [@newphase] interpreted the $e^+e^-$ peaks as decay products of a new type of positronium, which is formed in a new QED phase induced by the electromagnetic fields of the colliding heavy ions. The theoretical underpinning of this scenario was provided by earlier works [@Mir] which indicated that QED might have a non-perturbative strong-coupling phase, characterized by spontaneous chiral symmetry breaking, in addition to the familiar weak-coupling phase. The negative results in recent heavy-ion collision experiments at Argonne [@Argonne] have rendered our interpretation moot. Nevertheless, the problem of chiral symmetry breaking by an external field is still interesting as it may shed light on the Pauli problem, and as it provides an example of vacuum engineering by manipulating external fields to alter the symmetry properties of the vacuum. But more concretely, our study provides a new non-perturbative phenomenon in (3+1)-dimensional quantum field theories and a new method to study it.
First, what kind of external fields can induce chiral symmetry breaking in gauge theories?[@Ng] Lessons gained from studying the Nambu-Jona-Lasinio model [@KL] lead us to believe that uniform magnetic fields are prime candidates. To put our problem in as general a setting as possible, we want an approach that treats both the coupling and the external field non-perturbatively. The former criterion is met by using the Schwinger-Dyson equations (or equivalently, the Nambu-Bethe-Salpeter equations [@GMS]); the latter condition is satisfied by applying the strong-field techniques introduced by Schwinger and others.
Let us start with the motion of a massless fermion of charge $e$ in an external electromagnetic field. It is described by the Green’s function that satisfies the modified Dirac equation proposed by Schwinger: $$\gamma \cdot \Pi(x) G_A(x,y) + \int d^4x' M(x,x') G_A(x',y) =
\delta^{(4)}(x-y),
\label{Greeneq}$$ where $\Pi_\mu(x) = - i \partial_\mu - e A_\mu(x)$, and $M(x,x')$ is the mass operator $M$ in the coordinate representation. For a constant magnetic field of strength $H$, we may take $A_2 = Hx_1$ to be the only nonzero component of $A_\mu$. We adopt the method due to Ritus[@Ritus], which is based on the use of the eigenfunctions of the mass operator and the diagonalization of the latter. As shown by Ritus, $M$ is diagonal in the representation of the eigenfunctions $E_p(x)$ of the operator $(\gamma \cdot \Pi)^2$: $$- (\gamma \cdot \Pi)^2 E_p(x) = p^2 E_p(x).
\label{eigeneq}$$ The advantage of using this representation is obvious: $M$ can now be put in terms of its eigenvalues, so the problems arising from its dependence on the operator $\Pi$ can be avoided. In the chiral representation in which $\sigma_3$ and $\gamma_5$ are diagonal with eigenvalues $\sigma = \pm 1$ and $\chi = \pm 1$, respectively, the eigenfunctions $E_{p\sigma\chi}(x)$ take the form $$E_{p\sigma\chi}(x) = N {\rm e}^{i (p_0x^0 + p_2x^2 + p_3x^3)} D_n(\rho)
\omega_{\sigma\chi} \equiv \tilde{E}_{p\sigma\chi} \omega_{\sigma\chi},
\label{eigenfcn}$$ where $D_n(\rho)$ are the parabolic cylinder functions with indices $$n = n(k,\sigma) \equiv k + \frac{e H \sigma}{2 |e H|} - \frac{1}{2},
~~~~k = 0, 1, 2, ...,
\label{index}$$ and argument $\rho = \sqrt{2 |e H|} (x_1 - \frac{p_2}{e H})$. Note that $n = 0,~1,~2,~...~$. The normalization factor is $N = (4 \pi |eH|)^{1/4}/\sqrt{n!}$; $p$ stands for the set $(p_0, p_2, p_3, k)$; and $\omega_{\sigma\chi}$ are the bispinors of $\sigma_3$ and $\gamma_5$.
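For concreteness, a small numerical sketch (not from the original analysis; illustrative parameter values in natural units) of the scalar factor $\tilde{E}_{p\sigma\chi}$, using `scipy.special.pbdv` to evaluate the parabolic cylinder function $D_n$ with the index, argument and normalization given above:

```python
import numpy as np
from math import factorial
from scipy.special import pbdv

def ritus_scalar(x1, t, y, z, p0, p2, p3, k, sigma, eH):
    """Scalar factor E~_{p,sigma,chi} = N exp(i(p0 x^0 + p2 x^2 + p3 x^3)) D_n(rho)."""
    n = k + eH * sigma / (2 * abs(eH)) - 0.5         # index n(k, sigma)
    assert float(n).is_integer() and n >= 0          # only n = 0, 1, 2, ... occur
    n = int(n)
    rho = np.sqrt(2 * abs(eH)) * (x1 - p2 / eH)      # argument of D_n
    N = (4 * np.pi * abs(eH)) ** 0.25 / np.sqrt(factorial(n))
    Dn, _ = pbdv(n, rho)                             # parabolic cylinder function D_n(rho)
    return N * np.exp(1j * (p0 * t + p2 * y + p3 * z)) * Dn

# Illustrative values: lowest Landau level, spin projection aligned with the field.
print(ritus_scalar(x1=0.3, t=0.0, y=0.0, z=0.0, p0=1.0, p2=0.2, p3=0.0, k=0, sigma=+1, eH=1.0))
```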
Following Ritus, we form the orthonormal and complete eigenfunction-matrices $E_p = {\rm diag}(\tilde{E}_{p11},~
\tilde{E}_{p-11},~\tilde{E}_{p1-1},~\tilde{E}_{p-1-1})$. They satisfy $$\gamma \cdot \Pi~E_p(x) = E_p(x)~\gamma \cdot \bar{p}$$ and $$M(x,x') E_p(x') = E_p(x) \delta^{(4)}(x-x') {\Sigma}_A(\bar{p}),
\label{masseigeneq}$$ where ${\Sigma}_A(\bar{p})$ represents the eigenvalues of the mass operator, and $\bar{p}_0 = p_0,~\bar{p}_1 = 0,~\bar{p}_2
= - {\rm sgn}(eH) \sqrt{2|eH|k},~\bar{p}_3 = p_3$. These properties of the $E_p(x)$ allow us to express the Green’s function in the $E_p$-representation as $(\bar{E}_p \equiv \gamma^0 E_p^\dagger \gamma^0)$ $$G_A(x,y) = \Sigma \!\!\!\!\!\! \int \frac{d^4p}{(2 \pi)^4} E_p(x) \frac{1}
{\gamma \cdot \bar{p} + {\Sigma}_A(\bar{p})} \bar{E}_p(y),
~~~\Sigma \!\!\!\!\!\! \int d^4p \equiv \sum_{k} \int dp_0 dp_2 dp_3.
\label{Greenfcn}$$
We work in the ladder quenched approximation. In terms of the notations $\bar{p}''_{0} = p_0 - q_0$, $\bar{p}''_{1} = 0$, $\bar{p}''_{2} = -~{\rm sgn}(eH) \sqrt{2|eH|k''}$, $\bar{p}''_{3} = p_3 - q_3$, the Schwinger-Dyson equation takes the form $$\Sigma_A(\bar{p}) \simeq \frac{i e^2}{(2 \pi)^3} |eH| \int dq_0 dq_3
\int_0^\infty dr^2 {\rm e}^{- r^2} \frac{-2}{q^2} \frac
{\Sigma_A(\bar{p}'')}{\bar{p}''^2 + \Sigma_A(\bar{p}'')}
\label{fermass}$$ where $q^2 = - q_0^2 + q_3^2 + 2 |eH| r^2$ and $\bar{p}''^2 =
- (
---
author:
- |
Sok Jérémy\
Ceremade, UMR 7534, Université Paris-Dauphine,\
Place du Maréchal de Lattre de Tassigny,\
75775 Paris Cedex 16, France.\
\
bibliography:
- 'bibliothese.bib'
title: '**The positronium and the dipositronium in a Hartee-Fock approximation of quantum electrodynamics**'
---
Introduction and main results
=============================
The Dirac operator
------------------
Relativistic quantum mechanics is based on the *Dirac operator* $D_0$, which is the Hamiltonian of the free electron. Its expression is [@Th]: $$\label{di_dirac_op}
D_0:=m_ec^2\beta-i\hbar c{\ensuremath{\displaystyle\sum}}_{j=1}^3\alpha_j \partial_{x_j}$$ where $m_e$ is the (bare) mass of the electron, $c$ the speed of light and $\hbar$ the reduced Planck constant and $\beta$ and the $\alpha_j$’s are $4\times 4$ matrices defined as follows: $$\label{di_beta_alpha}
\beta:=\begin{pmatrix}
\mathrm{Id}_{\mathbb{C}^2} & 0\\ 0 & -\mathrm{Id}_{\mathbb{C}^2}
\end{pmatrix},\ \alpha_j:= \begin{pmatrix}
0 & \sigma_j \\ \sigma_j & 0
\end{pmatrix},\ j\in\{1,2,3\}$$ $$\sigma_1:=\begin{pmatrix}
0 & 1\\ 1 & 0
\end{pmatrix},\ \sigma_2:=\begin{pmatrix}
0 & -i\\ i & 0
\end{pmatrix},\ \sigma_3:=\begin{pmatrix}
1 & 0 \\ 0 & -1
\end{pmatrix}.$$ The operator $D_0$ acts on the Hilbert space $ \mathfrak{H}$: $$\label{di_space_one_electron}
\mathfrak{H}:=L^2\big({\ensuremath{\mathbb{R}^3}},{\ensuremath{\mathbb{C}^4}}\big);$$ it is self-adjoint on $\mathfrak{H}$ with domain $H^1({\ensuremath{\mathbb{R}^3}},{\ensuremath{\mathbb{C}^4}})$. Its spectrum is $\sigma(D_0)=(-\infty,-m_ec^2]\cup[m_e c^2,+\infty)$, which leads to the existence of states with arbitrarily negative energy. Dirac postulated that all the negative energy states are already occupied by “virtual electrons”, with one electron in each state: by Pauli’s principle real electrons can only have positive energy. In this interpretation the Dirac sea, composed of those negatively charged virtual electrons, constitutes a polarizable medium that reacts to the presence of an external field. This phenomenon is called the *vacuum polarization*.
After the transition of an electron of the Dirac sea from a negative energy state to a positive one, there is a real electron with positive energy plus the absence of an electron in the Dirac sea. This hole can be interpreted as the addition of a particle with the same mass, but opposite charge: the so-called positron. The existence of this particle was predicted by Dirac in 1931. Although first observed in 1929 independently by Skobeltsyn and Chung-Yao Chao, it was recognized in an experiment led by Anderson in 1932.
Positronium and dipositronium
-----------------------------
The positronium is the bound state of an electron and a positron. This system was independently predicted by Anderson and Mohorovičić in 1932 and 1934 and was experimentally observed for the first time in 1951 by Martin Deutsch.
It is unstable: depending on the relative spin states of the positron and electron, its average lifetime in vacuum is 125 ps (para-positronium) or 142 ns (ortho-positronium) [@karsh].
Here we are interested in positronium states in the Bogoliubov-Dirac-Fock (BDF) model.
In a previous paper we have proved the existence of a state that can be interpreted as the ortho-positronium. Our aim in this paper is to find another one that can be interpreted as the para-positronium and to find another state that can be interpreted as the dipositronium, the bound state of two electrons and two positrons. To find these states, we use symmetry properties of the Dirac operator.
Symmetries
----------
– Following Dirac’s ideas, the free vacuum is described by the negative part of the spectrum $\sigma(D_0)$: $$P^0_-=\chi_{(-\infty,0)}(D_0).$$ A correspondence between negative energy states and positron states is given by the *charge conjugation* ${\ensuremath{\mathrm{C}}}$ [@Th]. This is an antiunitary operator that maps $\mathrm{Ran}\,P^0_{-}$ onto $\mathrm{Ran}(1-P^0_{-})$. In our convention [@Th] it is defined by the formula: $$\label{di_chargeconj}
\forall\,\psi\in L^2({\ensuremath{\mathbb{R}^3}}),\ {\ensuremath{\mathrm{C}}}\psi(x)=i\beta\alpha_2\overline{\psi}(x),$$ where $\overline{\psi}$ denotes the usual complex conjugation. More precisely: $$\label{di_chargeconjprec}
{\ensuremath{\mathrm{C}}}\cdot \begin{pmatrix}\psi_1\\ \psi_2\\ \psi_3\\\psi_4\end{pmatrix}=\begin{pmatrix}\overline{\psi}_4\\ -\overline{\psi}_3\\ -\overline{\psi}_2\\\overline{\psi}_1\end{pmatrix}.$$ In our convention it is also an *involution*: ${\ensuremath{\mathrm{C}}}^2=\text{id}$. An important property is the following: $$\label{di_denspsi}
\forall\,\psi\in\,L^2,\forall\,x\in\mathbb{R}^3,\ |{\ensuremath{\mathrm{C}}}\psi(x)|^2=|\psi(x)|^2.$$ The charge conjugation operator ${\ensuremath{\mathrm{C}}}$ anti-commutes with $D_0$, or equivalently there holds $$-{\ensuremath{\mathrm{C}}}D_0 {\ensuremath{\mathrm{C}}}^{-1}=-{\ensuremath{\mathrm{C}}}D_0{\ensuremath{\mathrm{C}}}=D_0.$$
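A quick numerical check (a sketch of ours, not part of the paper) of the explicit form of ${\rm C}$, its involution property and the pointwise density identity, using the matrices $\beta$ and $\alpha_2$ defined above:

```python
import numpy as np

I2 = np.eye(2)
sigma2 = np.array([[0, -1j], [1j, 0]])
beta = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
alpha2 = np.block([[np.zeros((2, 2)), sigma2], [sigma2, np.zeros((2, 2))]])

def C(psi):
    """Charge conjugation: C psi = i beta alpha_2 conj(psi)."""
    return 1j * beta @ alpha2 @ np.conj(psi)

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

# Explicit form: C(psi1, psi2, psi3, psi4) = (conj(psi4), -conj(psi3), -conj(psi2), conj(psi1))
expected = np.array([np.conj(psi[3]), -np.conj(psi[2]), -np.conj(psi[1]), np.conj(psi[0])])
print(np.allclose(C(psi), expected))      # True
print(np.allclose(C(C(psi)), psi))        # C is an involution: C^2 = id
print(np.isclose(np.linalg.norm(C(psi)), np.linalg.norm(psi)))  # spinor norm preserved, i.e. |C psi(x)|^2 = |psi(x)|^2
```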
– There exists another simple symmetry. We define $$\label{di_Isym}
{\ensuremath{\mathrm{I}_{\mathrm{s}}}}:=\begin{pmatrix}0 & -\mathrm{Id}_{\mathbb{C}^2}\\
\mathrm{Id}_{\mathbb{C}^2}& 0 \end{pmatrix}\in\mathbb{C}^{4\times 4}.$$ This operator is $-i$ times the *time reversal operator* $\text{L}_T$ [@Th 2.5.7] in $\mathfrak{H}$, interpreted as a unitary representation of the Poincaré group.
It acts on the spinor by simple multiplication, furthermore we have ${\ensuremath{\mathrm{I}_{\mathrm{s}}}}^2=-\mathrm{Id}$ and $${\ensuremath{\mathrm{I}_{\mathrm{s}}}}:\begin{array}{rcl}
\mathrm{Ran}\,P^0_-&\overset{\simeq}{\longrightarrow}& \mathrm{Ran}\,(1-P^0_-)\\
\psi(x)&\mapsto& {\ensuremath{\mathrm{I}_{\mathrm{s}}}}\psi(x)
\end{array}$$ Similarly we have $ -{\ensuremath{\mathrm{I}_{\mathrm{s}}}}D_0 {\ensuremath{\mathrm{I}_{\mathrm{s}}}}^{-1}={\ensuremath{\mathrm{I}_{\mathrm{s}}}}D_0 {\ensuremath{\mathrm{I}_{\mathrm{s}}}}= D_0.$
– To end this part we recall that $\mathbf{SU}(2)$ acts on $\mathfrak{H}$ [@Th]. Writing $\boldsymbol{\alpha}:=(\alpha_j)_{j=1}^3$ and $$\label{di_L,S}
\mathbf{p}:=-i\hbar\nabla,\ {\ensuremath{\mathbf{L}}}:=\mathbf{x}\wedge \mathbf{p},\ {\ensuremath{\mathbf{S}}}:=-\frac{i}{4}\boldsymbol{\alpha}\wedge \boldsymbol{\alpha}=\frac{1}{2}\begin{pmatrix}\boldsymbol{\sigma}&0\\ 0&\boldsymbol{\sigma} \end{pmatrix},$$ we define $$\label{di_J_moment}
{\ensuremath{\mathbf{J}}}:={\ensuremath{\mathbf{L}}}+{\ensuremath{\mathbf{S}}}.$$ The operator $\mathbf{L}$ is the angular momentum operator and $\mathbf{J}$ is the total angular momentum. From a geometrical point of view, $-i\mathbf{J}$ gives rise to a unitary representation of $\mathbf{SU}(2)$ in $\mathfrak{H}$ by the following formula: $$\left\{\begin{array}{l}e^{-i\theta\mathbf{J}\cdot{\ensuremath{\omega}}}\psi(x)=e^{-i\mathbf{S}\cdot{\ensuremath{\omega}}}\psi\big( \mathbf{R}^{-1}_{{\ensuremath{\omega}},\theta}\big),\\
\forall\theta\in[0,4\pi),\forall\psi\in
---
abstract: 'In this paper, we utilize coupled mode theory (CMT) to model the coupling of surface plasmon-polaritons (SPPs) in a tri-layered corrugated thin film (CTF) coupler structure in the terahertz region. Employing the stimulated Raman adiabatic passage (STIRAP) quantum control technique, we propose a novel directional coupler based on SPP evolution in a tri-layered CTF with a curved configuration. Our calculated results show that the SPPs can be completely transferred from the input to the output CTF waveguides, and even when SPP propagation loss is considered, the transfer rate is still above $70 \%$. The performance of our coupler is also robust, in that it is not sensitive to the geometry of the device or the wavelength of the SPPs. As a result, our device can tolerate defects induced by fabrication and manipulate THz waves over a broad band.'
author:
- Wei Huang
- Shan Yin
- Wentao Zhang
- Kaili Wang
- Yuting Zhang
- Jiaguang Han
title: 'Adiabatic following of terahertz surface plasmon-polaritons based on tri-layered corrugated thin film coupler'
---
Introduction
============
Terahertz (THz) radiation has drawn enormous attention in recent years. Since many material responses are located at THz frequencies, THz technologies can obtain unique spectral characteristics and abundant information about matter, which is widely used in spectroscopy [@Lee2009] and imaging [@Chan2007]. Naturally, THz applications in information processing and transmission [@Ozbay2006; @Lee2010] are vital. On the other hand, with the rapid development of networks and the popularization of portable terminals, the miniaturization of integrated devices is an irresistible trend. THz technologies are promising for accelerating the next generation of communications [@Naeem2018; @Withayachumnankul2018; @Yu2016] due to their high capacity and micro-size [@Koenig2013; @Ostmann2011]. To realize further integration, how to manipulate electromagnetic (EM) waves at the subwavelength scale is a key issue. Surface plasmon polaritons (SPPs) are the EM waves propagating along metal-dielectric interfaces with exponential decay in the direction perpendicular to the interfaces. The recently emerged SPP-based elements, such as antennas [@Schnell2009; @Maguid2016], waveguides [@Sorger2011; @Ebbesen2008] and logic circuitry [@Ebbesen2008; @Cohen2013], demonstrate their potential application in microscale and nanoscale chips since the wavelength of SPPs can be scaled down below the diffraction limit [@Maier2007; @Gramotnev2010; @Kawata2009].
In the terahertz regime, SPP-based waveguides [@Zhang2017], couplers [@Ma2017] and coders [@Yin2018] have been investigated recently, which will make great contributions to THz applications. Owing to these advantages of SPPs in the terahertz regime, completely transferring the energy and information of THz SPPs is significant for implementing compact devices in the THz regime. Two recent studies on the coupling of THz SPP waveguides [@Liu2014; @Zhang2018] involved coupled mode theory (CMT), which is widely used to describe the coupling between two optical waveguides through the overlap of their evanescent electromagnetic fields [@Yariv1973; @Huang2014]. Based on this concept, if two thin films are close enough, the evanescent SPP fields of the two thin films overlap and SPPs can transfer from one thin film to the other [@Liu2014; @Zhang2018]. In our paper, we employ and derive the CMT to describe the SPP coupling between corrugated thin film structures.
However, the present structures of two parallel THz SPP waveguides (e.g. ref. [@Liu2014; @Zhang2018]) require rigorous fabrication precision and only operate at a specific excitation frequency of the THz waves; otherwise, the fidelity of the device will drop rapidly. Most recently, to overcome this shortcoming, a remarkable paper applied coherent quantum control (stimulated Raman adiabatic passage, STIRAP for short) to transferring SPPs on graphene sheets [@Huang20181]. STIRAP is a well-known three-level coherent quantum control technique, which provides complete population transfer from the first state to the third state, without any population remaining in the intermediate state [@Vitanov20011; @Vitanov20012; @Vitanov2017]. Furthermore, it is shown that STIRAP is exceedingly robust against perturbations of the control parameters. STIRAP has already been widely used in various domains, such as atomic, molecular and optical physics [@Yale2016; @Huang2017], waveguide couplers [@Mrejen2015; @Longhi2007], and graphene electronic and optical effects [@Huang20181; @Huang20182]. In this paper, we first introduce the STIRAP technique into the SPP waveguide coupler in the terahertz regime, to achieve a device that is very robust against varying frequency of the input THz waves and disturbances of the geometry parameters. We propose a tri-layered corrugated thin film coupler structure with a curved configuration, and we substantiate that the performance of our coupler is robust to the geometry of the device and the wavelength of the SPPs. As a result, our device can tolerate defects induced by fabrication and manipulate THz waves over a broad band, which is meaningful in developing THz functional devices.
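To illustrate the transfer mechanism before setting up the CTF model, here is a minimal coupled-mode sketch of STIRAP-like adiabatic following in three evanescently coupled waveguides. The Gaussian coupling profiles, their peak strength and their ordering are illustrative assumptions, not the design parameters of our device:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Gaussian coupling profiles along the propagation direction x,
# in the counter-intuitive STIRAP ordering (output pair coupled before input pair).
L, delta, w, k0 = 100.0, 10.0, 15.0, 0.5
k12 = lambda x: k0 * np.exp(-((x - L/2 - delta) / w) ** 2)   # input <-> middle coupling
k23 = lambda x: k0 * np.exp(-((x - L/2 + delta) / w) ** 2)   # middle <-> output coupling

def K(x):
    return np.array([[0, k12(x), 0],
                     [k12(x), 0, k23(x)],
                     [0, k23(x), 0]])

def cmt(x, y):
    u, v = y[:3], y[3:]              # a = u + i v, with da/dx = -i K(x) a
    return np.concatenate([K(x) @ v, -K(x) @ u])

y0 = np.array([1.0, 0, 0, 0, 0, 0])  # all power launched in waveguide 1
sol = solve_ivp(cmt, (0, L), y0, rtol=1e-9, atol=1e-12)
P = sol.y[:3, -1] ** 2 + sol.y[3:, -1] ** 2
print(np.round(P, 3))   # most of the power ends up in waveguide 3, little in waveguide 2
```

With the couplings in this counter-intuitive order, the field adiabatically follows the dark superposition of waveguides 1 and 3, which is the mechanism exploited in our design.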
Model
=====
We first consider terahertz radiation exciting surface plasmon-polaritons on the surface of the corrugated thin film structure. Assuming a corrugated thin film slab located at $z=0$ in the $x z$ plane, we illuminate the surface of the thin film with terahertz waves to excite SPPs propagating along the $x$ direction. In order to extend the propagation distance of the SPPs, it is effective to utilize a corrugated structure cut into the thin film [@Zhang2017; @Liu2014; @Zhang2018], as shown in Fig. 1, with cutting depth $h$, width $a$, period $d$ and thickness of the thin film $t$. Considering the mode profile of the SPPs, the electric field decays exponentially along the $y$ and $z$ directions outside the SPP waveguide, i.e., it is an evanescent field.
In this paper, we only study the coupling mechanism along the $z$ direction. Therefore, the SPP electric fields along the $x$ direction (SPP propagation) and the $z$ direction (the evanescent field) are considered, and we ignore the impact of the $y$ direction. Assume that we place two corrugated thin films at $z=g/2$ and $z=-g/2$, parallel to the $xy$ plane, where $g$ is the gap distance between the two parallel thin films (see Fig. 1). TM-polarized SPP modes are excited on the corrugated thin films. The electric fields can be described by $E_1 = (E_{1x}, 0, E_{1z}) e^{iqx} e^{-k_m |z-g/2|}$ and $E_2 = (E_{2x}, 0, E_{2z}) e^{iqx} e^{-k_m |z+g/2|}$. Here $k_m$ is the decay rate of the evanescent field in the surrounding dielectric medium, given by $k_m = \sqrt{q^2 - \epsilon_m \omega^2/c^2}$ [@Saleh1991]. $\epsilon_m$ is the permittivity of the medium material (we use silicon as the surrounding medium) and $\omega$ is the frequency of the incident light in air. In addition, $q$ is the propagation constant of the SPPs, and it depends on the geometry of the structure and the frequency $\omega$ of the incident terahertz light [@Zhang2017; @Ma2017; @Maier2006]. It can be solved numerically from the dispersion equation $q=\frac{\omega}{c}\sqrt{1+\frac{a^2}{d^2} \tan^2 \frac{\omega h}{c}}$ [@Maier2006].
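Since $q$ has to be obtained numerically from the dispersion equation above, a minimal evaluation sketch may be useful; the geometry values below are illustrative assumptions, not the parameters used in this work.

```python
import numpy as np

c = 3.0e8                          # speed of light (m/s)
a, d, h = 20e-6, 50e-6, 40e-6      # groove width, period and depth (assumed values)

def q_spoof(omega):
    """Propagation constant from q = (omega/c) * sqrt(1 + (a^2/d^2) tan^2(omega h / c))."""
    return omega / c * np.sqrt(1.0 + (a / d) ** 2 * np.tan(omega * h / c) ** 2)

freqs = np.linspace(0.1e12, 1.5e12, 8)        # a few points in the THz range
q = q_spoof(2.0 * np.pi * freqs)
print(np.column_stack([freqs / 1e12, q]))     # frequency (THz) and q (1/m)
```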
In our parallel coupling model, we denote by $\Psi_1(x,z)$ ($\Psi_2(x,z)$) the electric field of the SPPs on the first (second) thin film, written as
$$\begin{aligned}
\Psi_1(x,z)= a_{1}(x) u_{1}(z) \exp(-i q x), \\
\Psi_2(x,z)= a_{2}(x) u_{2}(z) \exp(-i q x),
\end{aligned}$$
where $a_{1}(x)$ and $a_{2}(x)$ are the amplitudes of the SPP modes on the two thin films. Due to the extremely small thickness of the film ($t = 100$ nm) compared with the other geometry parameters, we can take the mode profiles of the SPPs as $u_{1} = E_{1z} \exp(- k_m |z-g/2|)$ and $u_{2} = E_{2z} \exp(- k_m |z+g/2|)$.
---
abstract: 'The $U(1)$ Calogero-Sutherland Model with anti-periodic boundary condition is studied. This model is obtained by applying a vertical magnetic field perpendicular to the plane of one dimensional ring of particles. The trigonometric form of the Hamiltonian is recast by using a suitable similarity transformation. The transformed Hamiltonian is shown to be integrable by constructing a set of momentum operators which commutes with the Hamiltonian and amongst themselves. The function space of monomials of several variables remains invariant under the action of these operators. The above properties imply the quasi-solvability of the Hamiltonian under consideration.'
author:
- Arindam Chakraborty
- Subhankar Ray
- 'J. Shamanna'
date: 30 December 2006
title: 'Quasi-solvability of Calogero-Sutherland model with Anti-periodic Boundary Condition'
---
Introduction
============
The study of Calogero-Sutherland system has inspired significant research activity since the pioneering work of Calogero and Sutherland [@cal62; @suth71]. The integrability of the model has been studied for different root systems over the past few decades [@ols83]. A few of the classical and spin varieties of the model are found to be exactly solvable and the solutions in terms of their eigenvalues and eigenfunctions have been used extensively to describe physical properties of several condensed matter systems. The study of Calogero systems is also related to various other research areas in physics and mathematics, e.g., Yang-Mills theories [@gor94; @mina94], soliton theory [@poly95], random matrix model [@dyson62], multivariable orthogonal polynomials [@jack69], Selberg integral formula [@forr93], $W^{\infty}$ algebra [@hika93] etc.
This article investigates the Calogero-Sutherland Model (CSM) with anti-periodic boundary condition. The anti-periodic boundary condition is a special case of the general twisted boundary condition which arises when a one dimensional chain of particles is placed in a transverse magnetic field. A one dimensional chain of particles with a periodic boundary condition is topologically equivalent to a one dimensional ring. A particle transported adiabatically around this ring an integral number of times returns to the same point. In absence of a magnetic field this implies that the particle returns to the same quantum state. However, in the presence of a transverse magnetic field, one adiabatic transportation around the ring introduces a phase factor $\exp(i \phi)$. This is called a twisted boundary condition. When the phase factor is $\exp(i \phi) = -1$, it is called an anti-periodic boundary condition. Though the introduction of a magnetic field is physically important in this context, the model becomes mathematically more involved; and the CSM with anti-periodic boundary condition remains less extensively investigated.
The original version of the Calogero system incorporates long-range interaction by considering a two-body inverse square potential. The integrability of such systems was initially studied by Calogero and Perelomov [@cal75; @perel77] by means of Lax pair formulation. The integrability of CSM has since been investigated in a variety of ways [@ols83; @mina93; @berm97; @poly99].
The general form of the CSM Hamiltonian is often represented by the following equation: $$H_N=\sum_{j=1}^{N}{\partial_j}^2-\lambda(\lambda-1)\sum_{j<k} U(x_{jk}^-)
\label{hamilton}$$ The two-body potential, represented by $U(x_{jk}^-)$, is a long-range interaction in a chain of spinless nonrelativistic particles in one dimension. Here, $\lambda$ is a dimensionless interaction parameter, $x_j$ and $x_k$ denote the coordinates of the $j$-th and $k$-th particle respectively and $x_{jk}^-=x_j-x_k$. While studying the solvability of the $A_{N-1}$-type Calogero model, the Hamiltonian is operated on a partially ordered state space of all symmetric polynomials of several variables. This results in an upper triangular representation of the Hamiltonian. The diagonal terms of this matrix are the eigenvalues of the Hamiltonian. The orthonormal eigenfunctions are expressed in terms of Jack symmetric polynomials [@jack69] which are very useful in determining the various physical properties of many particle systems with long-range interactions [@habook].
The search for an exact form of eigenfunction sometimes leads to partial diagonalization of the Hamiltonian [@tana05; @fin01]. Among the one dimensional systems with periodic boundary condition, several such quasi-solvable models exist. The eigenvalues and eigenfunctions for many of them have been obtained[@tur87; @ushbook]. The model with $sl(2)$ structure was first discovered by Turbiner and Ushveridze [@tur88]. It was also observed that the well known $N$ body Calogero-Sutherland models [@cal71; @suth71; @ruhl95] have similar Lie algebraic structure of $sl(N+1)$.
It may be noted that these models are in fact different generalizations of the classically integrable Inozemtsev model [@tana04; @ino83]. The common feature of these models with some underlying Lie algebraic structure is the existence of an invariant finite dimensional module of the associated Lie algebra. Post and Turbiner [@post95] studied a classification of linear differential operators of a single variable which have a finite dimensional invariant subspace spanned by monomials. One of the basic advantages of quasi-solvability is that, one can restrict the study to a finite dimensional submanifold of the full Hamiltonian. The finite dimensional matrix elements can be calculated by allowing the Hamiltonian to act on finite-dimensional subspaces of a Hilbert space on which it is originally defined. When the Hamiltonian operator preserves an infinite number of subsequences of such finite dimensional subspaces [@tana05] it becomes solvable. The exact solvability of a model is ensured when the closure property is imposed on the space on which the Hamiltonian is allowed to act.
In this article we study the integrability and solvability of a spinless non-relativistic Calogero-Sutherland model (CSM) with anti-periodic boundary condition. The two-body long-range interaction incorporating the anti-periodic boundary condition is derived. The Hamiltonian so obtained is reduced to a more apparent integrable form, using a similarity transformation. The integrability is then verified by constructing a set of mutually commuting momentum-like differential operators which further commute with the Hamiltonian. Finally, the concept of quasi-solvability is discussed for a model of many particle system. For CSM with anti-periodic boundary condition the quasi-solvability is studied by operating the Hamiltonian on a multivariable polynomial space [@fin01]. The momentum operators in the anti-periodic model remind us of the well known Dunkl operator [@dunk89; @che91] which resembles the Laplace-Beltrami-type operator acting on a symmetric Riemannian space. These operators are extensively used in the study of integrability and solvability of Calogero-Sutherland models. It is shown that these commuting momentum operators preserve the space spanned by all monomials of degree $n$, i.e., $\{\prod_i z_i^{\ell_i}\}$, where $\ell_i\geq 0$ and $\sum \ell_i = n$, $n$ being a non-negative integer. This property ensures the quasi-solvability of the Hamiltonian under study.
Trigonometric version of CSM Hamiltonian
========================================
Let us first consider the periodic CSM with inverse square long-range interaction in the absence of a magnetic field. The topological representation of a one dimensional chain of particles with periodic boundary condition is simply a circular ring. A particle, when transported adiabatically around the ring an integral number of times, does not pick up any phase factor, and so the eigenfunctions retain their initial form. Then the pairwise interaction, summed around a unitarily equivalent circle of circumference $L$ an infinite number of times, is given as, $$\sum_{n=-\infty}^{+\infty} \frac{1}{(x+nL)^2} = \frac{\pi^2}{L^2\sin^2(\pi x/L)}
\label{pair_int}$$ where, as shown in the figure, $x$ is the interparticle distance along the ring and $d(x)$ is the chord length. It is easy to verify that $d(x) = L/\pi \sin(\pi x/L)$.
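As a side remark, the image sum above is a standard identity and is easy to check numerically; the following sketch is only an illustration and not part of the original derivation.

```python
import numpy as np

def image_sum(x, L, nmax=200000):
    """Truncated sum over periodic images: sum_n 1/(x + n*L)^2."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(1.0 / (x + n * L) ** 2)

L, x = 1.0, 0.3
lhs = image_sum(x, L)
rhs = (np.pi / L) ** 2 / np.sin(np.pi * x / L) ** 2   # = 1/d(x)^2 with d(x) = (L/pi) sin(pi x/L)
print(lhs, rhs)   # the two agree up to the O(1/nmax) truncation error
```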
Therefore, the potential $U(x)=(\pi^2/L^2)\sin^{-2}(\pi x/L)$ is an inverse-square trigonometric function of the inter-particle distance $x$. The Hamiltonian with the above potential is given by, $$H_N=\sum_{j=1}^{N}{\partial_j}^2-\lambda(\lambda-1)\sum_{j<k}\frac{\pi^2}{L^2}\,\frac{1}{\sin^2(\pi x_{jk}^-/L)}
\label{htrig1}$$ where $x_{jk}^-= x_j-x_k$. Using a standard trigonometric identity, making a change of variable $(\pi/2L)x_j \rightarrow x_j$ and rescaling the Hamiltonian $(4L^2/\pi^2) H_N \rightarrow H_N $, Eq. (\[htrig1\]) may be written as, $$H_N=\sum_{j=1}^{N}{\partial_j}^2-\lambda(\lambda-1)\sum_{j<k}\left(\frac{1}{\sin^2 x_{jk}^-}+ \frac{1}{\cos^2 x_{jk}^-}\right).
\label{htrig2}$$ Let us now consider the anti-periodic case. When a magnetic field is introduced transverse to the one dimensional ring, a general twisted boundary condition arises. A particle transported adiabatically around the entire system $n$ number of times picks up a net phase $\exp{(in\phi)}$. The pairwise interaction summed around a unitarily equivalent circle of circumference $L$, an infinite number of times, is now given as, $$\sum_{n=-\infty}^{+\infty} \frac{(-1)^n}{(x+nL)^2} = \frac{\pi^2}{L^2}\,\frac{\cos(\pi x/L)}{\sin^2(\pi x/L)}
\label{antipair_int}$$
---
abstract: 'In this paper, the exact dynamics of open quantum systems in the presence of initial system-reservoir correlations is investigated for a photonic cavity system coupled to a general non-Markovian reservoir. The exact time-convolutionless master equation incorporating initial system-reservoir correlations is obtained. The non-Markovian dynamics of the reservoir and the effects of the initial correlations are embedded into the time-dependent coefficients in the master equation. We show that the effects induced by the initial correlations play an important role in the non-Markovian dynamics of the cavity but they are washed out in the steady-state limit in the Markovian regime. Moreover, the initial two-photon correlation between the cavity and the reservoir can induce nontrivial squeezing dynamics in the cavity field.'
author:
- 'Hua-Tang Tan'
- 'Wei-Min Zhang'
title: 'Dynamics of open quantum systems with initial system-reservoir correlations'
---
= 10000
Introduction
============
The study of the dynamics of open quantum systems continuously receives attention because of its fundamental importance in quantum physics and also because of the rapid development of quantum technologies. Previous studies on the dynamics of open quantum systems mainly rely on the Lindblad-type master equation [@bm1; @bm2; @bm3], where the characteristic time of the environment is sufficiently shorter than that of the system such that the non-Markovian memory effect is negligible, as are the initial system-reservoir correlations. However, the new developments in ultrafast photonics, ultracold atomic physics, nanoscience and technology as well as quantum information science strongly suggest that the non-Markovian dynamics in ultrafast and ultrasmall open systems should play an important role, and the associated effects (including the initial system-reservoir correlations) should be fully taken into account. To this end, a more rigorous approach is demanded for the study of the non-Markovian dynamics of open quantum systems incorporating the initial system-reservoir correlations.
The exact description of open quantum systems has indeed been explored extensively in the literature, mainly focusing on quantum Brownian motion based on the Feynman-Vernon influence functional [@Fey63118; @Cal83587; @Haa852462; @Hu922843; @Hal962012; @Food01105020] and the stochastic diffusion Schrödinger equation [@Str981699; @Str994909; @Yu04062107]. Extending the Feynman-Vernon influence functional to other open quantum systems has also achieved great success recently, including the exact master equation for electron systems and the nonequilibrium quantum transport theory in various nanostructures [@Tu08235311; @Tu09631; @Jin10083013] and the exact master equation for micro- or nanocavities in photonic crystals and the quantum transport theory for photonic crystals [@Xio10012105; @Wu1018407; @Lei104570]. However, in most of these investigations, the system and the reservoir are assumed to be initially uncorrelated with each other [@Leg871]. Realistically, it is possible and often unavoidable in experiments that the system and its environment are closely correlated at the beginning, especially when the system is strongly coupled to the reservoir [@ee]. Various initial-correlation induced effects have been investigated in different open quantum systems [@src0; @src1; @src2; @src3; @src4; @src5; @src7; @src8; @src6; @src9; @src10]. For example, it has been recently shown that the initial correlations between a qubit and its environment can lead to a growth of the distance between two quantum states above its initial value [@src7; @src8]. It has also been demonstrated that the initial correlations lead to nontrivial differences in the quantum tomography process [@src6]. Besides, it has been found that the initial system-reservoir correlations have significant effects on the entanglement in a two-qubit system [@src9; @src10].
In this paper, the dynamics of open quantum systems in the presence of initial system-reservoir correlations is investigated with a photonic cavity system coupled to a non-Markovian reservoir as a specific example. By solving the exact dynamics of the cavity system, the effects of the initial correlations are explicitly built into the equations of motion for the intensity and the two-photon correlation function of the cavity field. We then obtain the exact master equation incorporating the initial correlations, which induce new terms and also modify the time-dependent dissipation and fluctuation coefficients in the master equation. Taking a nanocavity coupled to a coupled resonator optical waveguide (serving as a structured reservoir) as an experimentally realizable system, we find that the effects of the initial correlations are fragile for a Markovian reservoir but play an important role in the non-Markovian regime. In fact, in the strong non-Markovian regime, the initial two-photon correlation between the cavity and the reservoir can induce oscillating squeezing dynamics in the cavity. But in the Markovian regime, the initial correlations are washed out in the steady-state limit.
The rest of the paper is organized as follows. In Sec. II, the dynamics of open quantum systems with initial system-reservoir correlations is formulated for a photonic cavity system coupled to a general non-Markovian reservoir. In Sec. III, we construct the exact time-convolutionless master equation incorporating the initial correlations, where the effects from the initial correlations are explicitly embedded into the time-dependent coefficients in the master equation. In Sec. IV, an experimentally realizable example is considered to analytically and numerically examine the influence of the initial correlations on the dynamics of open quantum systems. At last, a summary is given in Sec. V.
Non-Markovian dynamics with initial system-reservoir correlations
=================================================================
To be specific, we consider here a single-mode photonic cavity system coupled to a general non-Markovian reservoir, where the single-mode cavity system could be a nanocavity in nanostructures or photonic crystals, and the non-Markovian environment may be a structured photonic reservoir [@stru-reservoir]. The Hamiltonian of the system can be expressed as a Fano-type model of a localized state coupled with a continuum [@Fano611866]: $$\begin{aligned}
H=\omega_c a^\dag a+\sum_{k}\omega_k b_k^\dag b_k\ +\sum_kV_k (a
b_k^\dag +b_k a^\dag),\label{H1}\end{aligned}$$ where the first term is the Hamiltonian of the cavity field with frequency $\omega_c$, and $a^\dag$ and $a$ are the creation and annihilation operators of the cavity field; the second term describes a general non-Markovian reservoir which is modeled as a collection of infinite photonic modes, where $b_k^\dag$ and $b_k$ are the corresponding creation and annihilation operators of the $k$-th photonic mode with frequency $\omega_k$. The third term characterizes the system-reservoir coupling with the coupling strength $V_k$ between the cavity field and the $k$-th photonic mode. For convenience, we take $\hbar=1$ throughout the paper.
We shall use the equation of motion approach to solve the dynamics of the cavity system and the reservoir, from which the general initial correlations between the cavity and the reservoir can be fully taken into account. The time evolution of the cavity field operator $a(t)=e^{iHt}ae^{-iHt}$ and the reservoir field operators $b_k(t)=e^{iHt}b_ke^{-iHt}$ in the Heisenberg picture obey the equations of motion
$$\begin{aligned}
&\frac{d}{dt}a(t)=-i[a(t), H]=-i\omega_c a(t)-i\sum_k V_k b_k(t),\\
&\frac{d}{dt}b_k(t)=-i[b_k(t),H]=-i\omega_k b_k(t)-iV_k a(t).
\label{bk}\end{aligned}$$
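Since these Heisenberg equations are linear, the cavity amplitude is propagated by the function $u(t)=[e^{-iMt}]_{00}$ (notation introduced here only for the sketch), where $M$ is the one-particle matrix built from $\omega_c$, the $\omega_k$ and the couplings $V_k$. The sketch below illustrates this with an assumed Lorentzian spectral density discretized into a finite number of reservoir modes; the parameters are illustrative and are not taken from this paper.

```python
import numpy as np

# Assumed Lorentzian spectral density J(w) = Gamma*lam^2 / ((w - w_c)^2 + lam^2),
# discretized into reservoir modes (illustrative parameters only).
w_c, Gamma, lam = 1.0, 0.2, 0.05          # small lam -> long memory, strongly non-Markovian
ws = np.linspace(w_c - 2.0, w_c + 2.0, 2001)
dw = ws[1] - ws[0]
J = Gamma * lam**2 / ((ws - w_c)**2 + lam**2)
V = np.sqrt(J * dw / (2.0 * np.pi))       # discrete couplings V_k, |V_k|^2 = J(w_k) dw / 2pi

# One-particle matrix of the Fano-type Hamiltonian (H1); the equations of motion
# are linear, so the cavity propagating function is u(t) = [exp(-i M t)]_{00}.
M = np.zeros((len(ws) + 1, len(ws) + 1))
M[0, 0] = w_c
M[0, 1:] = M[1:, 0] = V
M[1:, 1:] = np.diag(ws)

E, Q = np.linalg.eigh(M)
ts = np.linspace(0.0, 200.0, 400)
u = (np.abs(Q[0, :])**2 * np.exp(-1j * np.outer(ts, E))).sum(axis=1)
print(np.abs(u[[0, 100, 200, 399]])**2)   # |u(t)|^2 deviates from a simple exponential decay
```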
Solving Eq. (\[bk\]) for $b_k(t)$ $$\begin{aligned}
b_k(t)=b_k(0)e^{-i\omega_k t}-iV_k\int_0^t d\tau
a(\tau)e^{-i\omega_k (t-\tau)},\end{aligned}$$ we obtain $$\begin{aligned}
\frac{d}{dt}a(t)=-i\omega_c a(t) -\int_0^t d\tau g(t-\tau)a(\tau)
\nonumber\\-i\sum_k V_k b_k(0)e^{-i\omega_k t}. \label{lat}\end{aligned}$$ Here, the memory kernel $g(\tau)=\sum_{k}|V_k|^2e^{-i\omega_k\tau}$ characterizes the non-Markovian dynamics of the reservoir. For a continuous reservoir spectrum, we have $g(\tau)=\int_{0}^\infty
\frac{d\omega}{2\pi}J(\omega)e^{-i\omega\tau}$, where $J(\omega)=2\pi
\varrho(\omega)|V(\omega)|^2$ is the spectral density of the reservoir, with $\varrho(\omega)$ being the density of states and $V(\omega)$ the coupling between
---
abstract: 'We have investigated the quantum capacitance ($C_Q$) in functionalized graphene, modified with ad-atoms from different groups in the periodic table. Changes in the electronic band structure of graphene upon functionalization, and subsequently the quantum capacitance ($C_Q$) of the modified graphene, were systematically analyzed using density functional theory (DFT) calculations. We observed that the quantum capacitance can be enhanced significantly by means of controlled doping of N, Cl and P ad-atoms on the pristine graphene surface. These ad-atoms behave as magnetic impurities in the system, generating a localized density of states near the Fermi energy, which in turn increases the charge (electron/hole) carrier density in the system. As a result, a very high quantum capacitance was observed. Finally, the temperature dependent study of $C_Q$ for Cl and N functionalized graphene shows that $C_Q$ remains very high in a wide range of temperatures near room temperature.'
address: 'Department of Physics, National Institute of Technology, Srinivasnagar, Surathkal, Mangalore Karnataka-575025, India'
author:
- 'Sruthi T & Kartick Tarafder'
title: Route to Achieving Enhanced Quantum Capacitance in Functionalized Graphene based Supercapacitor Electrodes
---
Introduction
============
Large scale generation of green energy from renewable energy sources is of utmost necessity in the current scenario. Sunlight is the most viable renewable energy source on our planet. However, energy cannot be produced from the sun uniformly all the time throughout the year in many parts of the globe. Therefore, efficient storage of the generated energy and its cost effective transportation are very essential. Hence the design of efficient energy storage devices is one of the active areas of research in green energy production. Supercapacitors based on two-dimensional materials are a promising technology that may provide a conceivable alternative for energy storage [@1]. As basic requirements, a supercapacitor should have a very large ion density and fast charging and discharging capacity with a long lifetime. Two dimensional (2D) materials could play an important role in designing efficient supercapacitor electrodes. With their very high surface area, conductivity and mechanical robustness, 2D materials, especially functionalized graphene, could be the best choice for supercapacitor electrodes [@2; @3; @4]. The total capacitance ($C_{T}$) of a supercapacitor depends on two components [@5], namely the electrical double layer capacitance ($C_D$) and the quantum capacitance (${C_Q}$), such that
$$\frac{1}{C_{T}}=\frac{1}{C_D} +\frac{1}{C_Q}$$
Insufficiency in either of them will reduce the total capacitance of the device. Thus, an electrode material with sufficiently large quantum capacitance is an obligatory factor to obtain high energy density. The quantum capacitance of an electrode depends on the electronic structure of the electrode material [@6; @7]. In the case of pristine graphene, the quantum capacitance is very small [@8]. However, the capacitance can be enhanced in graphene-based electrodes by introducing vacancy defects as well as by doping with impurities in a controlled manner [@9]. Nitrogenation and chlorination of graphene could be an effective way to improve the quantum capacitance of the system [@10; @11]. Recently, Hirunsit [*et. al*]{}. [@12] studied the influence of Al, B, N and P doping on graphene electronic structures and the change in quantum capacitance by using DFT calculations. Their report indicates that the $C_Q$ of monolayer graphene changes substantially when doped with N and in the presence of vacancy defects. Later, Song [*et al*]{} [@13] studied the quantum capacitance in ad-atom functionalized reduced graphene oxide and found a significant enhancement in $C_Q$. Therefore, it is not difficult to realize from these recent studies that the quantum capacitance in graphene-based electrodes can be improved significantly by means of an adequate functionalization. Several studies of $C_Q$ on functionalized graphene have been reported recently; however, a systematic investigation of the quantum capacitance in functionalized graphene considering various types of ad-atoms with variable concentration, and the basic theoretical understanding of their effect on $C_Q$, is still lacking. In the present study we have used density functional theory calculations to investigate the quantum capacitance of differently functionalized graphene in a systematic way. The functionalization has been done using ad-atoms from different groups in the periodic table. The role of vacancy defects on the electronic structure and its effect on the quantum capacitance of functionalized graphene (FG) has also been carefully investigated.
Computational Method
====================
To accomplish our theoretical investigation of $C_Q$, we first obtain the accurate electronic structure of the doped graphene using plane wave based density functional theory calculations implemented in the Vienna Ab-initio Simulation Package (VASP) [@14; @15; @16]. The projected augmented wave method [@17] was used to optimize the geometric structure of the functionalized graphene. The exchange correlation energy functionals were approximated using the generalized gradient approximation with PBE parametrization [@18; @19]. A very high kinetic energy cut-off (>400 eV) was used in all our calculations for accurate results. In order to explore the effect of different functionalizations on the quantum capacitance, calculations were done using 3$\times$3$\times$1 supercells of the graphene unit cell, having 18 carbon atoms of the graphene sheet (G18) with one functional group. The vacancy defected configurations were realized on a 5$\times$5$\times$1 supercell of the graphene unit cell (50 C atoms of graphene, G50) with a variable concentration of vacancies in the range from 2 to 8 percent. A sufficiently large vacuum has been considered along the out-of-plane direction of the graphene sheet (height > 10Å) to avoid the interaction with periodic images. We used a 6$\times$6$\times$1 $\Gamma$-centered k-point mesh to sample the Brillouin zone for geometry optimization, with a 10$^{-6}$ H tolerance in the total energy for convergence. A denser 24$\times$24$\times$1 k-point grid was used for the precise extraction of the electron density of states D(E) and the atom projected density of states (PDOS).
The quantum capacitance of a material can be seen as the rate of change of the excess charge (ions) with respect to the change in applied potential [@20]. Therefore, it is directly related to the electronic energy configuration of the electrode material and can be defined as the derivative of the net charge on the substrate/electrode with respect to the electrostatic potential, i.e. $$C_Q = \frac{dQ}{d\phi}$$
where Q is the excess charge on the electrode and $\phi$ is the chemical potential. The total charge is proportional to the weighted sum of the electronic density of states up to the Fermi level $E_F$. Due to an applied potential, the chemical potential is shifted, and the excess charge on the electrode (Q) can then be expressed by an integral term involving the electronic density of states D(E) and the Fermi$-$Dirac distribution function f(E) as
$$Q = e\int_{-\infty}^{+\infty} D(E)[f(E) - f(E - \phi)] dE$$
Therefore, when the density of states (DOS) is known, the quantum capacitance $C_Q$ of a channel at a finite temperature T can be calculated as
$$C_Q =\frac{dQ}{d\phi} = \frac{e^2}{4kT}\int_{-\infty}^{+\infty} D(E)\, \mathrm{sech}^{2}\left(\frac{E - {e\phi}}{2kT}\right) dE
\label{qc}$$
Here [*e*]{} is the electron charge, $\phi$ is the chemical potential, [*D(E)*]{} is the DOS and $k$ is the Boltzmann constant. We have therefore estimated $C_Q$ for all the systems directly from the density of states.
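As a concrete illustration of Eq. (\[qc\]), the following sketch evaluates the sech$^2$-weighted integral numerically for a toy density of states (a flat background plus a narrow peak near $E_F$); it is not the DFT-derived D(E) of this work, and the electron charge is set to one.

```python
import numpy as np

kB = 8.617333e-5        # Boltzmann constant (eV/K); electron charge e set to 1

def quantum_capacitance(E, D, phi, T=300.0):
    """Eq. (qc): C_Q = (e^2/4kT) * integral of D(E) * sech^2((E - e*phi)/(2kT)) dE.
    E   : energy grid relative to the Fermi level (eV)
    D   : density of states on that grid (per eV, in the chosen normalization)
    phi : local electrode potential (V)"""
    x = (E - phi) / (2.0 * kB * T)
    return np.trapz(D / np.cosh(x) ** 2, E) / (4.0 * kB * T)

# toy DOS, purely illustrative
E = np.linspace(-2.0, 2.0, 4001)
D = 0.05 + 0.5 * np.exp(-(E - 0.1) ** 2 / (2 * 0.05 ** 2))
print([round(quantum_capacitance(E, D, phi), 3) for phi in (-0.5, 0.0, 0.1, 0.5)])
```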
Results and discussion
======================
It is evident from the expression for $C_Q$ in equation (\[qc\]) that the quantum capacitance is directly proportional to the density of states present near the Fermi energy. Since $(E-e\phi)$ represents the energy with respect to the Fermi level and $\mathrm{sech}^{2}(x)$ rapidly goes to zero for $|x| > 0$, the states which are energetically far from $E_F$ do not contribute much to $C_Q$. The density of states near the Fermi level for a given material can be tuned by means of an efficient chemical modification of the system using external ad-atoms or by creating defects. This is also an effective way to control the type and concentration of charge carriers in the system. The electronic energy levels of the parent material may also be modified/shifted in this process. The change in electronic structure depends on the dopant type, concentration and doping position in the sublattice. In this study we have considered atoms from different groups in the periodic table, in increasing order of electronegativity, K$<$ Na$<$ Al$<$ P$<$ N$<$ Cl, to functionalize the graphene. The stable adsorption position on graphene was estimated by placing ad-atoms in different possible adsorption sites and comparing the optimized total energies. The hollow position was found to be the most favourable position for ad-atoms like K, Na and Al, the bridge position for P and N, and the top position for Cl ad-atoms, respectively. The optimized structures of functionalized graphene with different adsorption positions are shown in Fig. \[FG-Adatom Optimized structure\]. The stability of the functionalized structure was investigated by estimating average
---
abstract: 'We present photometry with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) of stars in the Magellanic starburst galaxy NGC 4449. The galaxy has been imaged in the F435W (B), F555W (V) and F814W (I) broad-band filters, and in the F658N (H$\alpha$) narrow-band filter. Our photometry includes $\approx$ 300,000 objects in the (B, V) color-magnitude diagram (CMD) down to V $\la$ 28, and $\approx$ 400,000 objects in the (V, I) CMD, down to I $\la$ 27 . A subsample of $\approx$ 200,000 stars has been photometrized in all the three bands simultaneously. The features observed in the CMDs imply a variety of stellar ages up to at least 1 Gyr, and possibly as old as a Hubble time. The spatial variation of the CMD morphology and of the red giant branch colors point toward the presence of an age gradient: young and intermediate-age stars tend to be concentrated toward the galactic center, while old stars are present everywhere. The spatial variation in the average luminosity of carbon stars suggests that there is not a strong metallicity gradient ($\lesssim 0.2$ dex). Also, we detect an interesting resolved star cluster on the West side of the galaxy, surrounded by a symmetric tidal or spiral feature consisting of young stars. The positions of the stars in NGC 4449 younger than 10 Myr are strongly correlated with the H$\alpha$ emission. We derive the distance of NGC 4449 from the tip of the red giant branch to be ${\rm D=3.82 \pm 0.27}$ Mpc. This result is in agreement with the distance that we derive from the luminosity of the carbon stars.'
author:
- 'F. Annibali , A. Aloisi , J. Mack, M. Tosi , R.P. van der Marel, L. Angeretti, C. Leitherer, M. Sirianni'
title: 'Starbursts in the Local Universe: new HST/ACS observations of the irregular galaxy NGC 4449[^1]'
---
Introduction
============
Starbursts are short and intense episodes of star formation (SF) that usually occur in the central regions of galaxies and dominate their integrated light. The associated star-formation rates (SFR) are so high that the existing gas supply can sustain the stellar production only on timescales much shorter than a cosmic time ($\lesssim 1$ Gyr).
The importance of the starburst phenomenon in the context of cosmology and galaxy evolution has been dramatically boosted in recent years by deep imaging and spectroscopic surveys which have discovered star-forming galaxies at high redshift: a population of dusty and massive starbursts, with SFRs as high as $\sim$ 100 – 1000 M$_{\odot}$ yr$^{-1}$, has been unveiled in the submillimeter and millimeter wavelengths at z$>$2 [@blain02; @scott02] and star-forming galaxies at $z >$ 3 have been discovered with the Lyman break selection technique [@steidel96; @pet01] and through Lyman-$\alpha$ emission surveys (@rhoads, see also @lefevre05 for a more recent independent approach).
In the local Universe, starbursts are mostly found in dwarf irregular galaxies, and contribute $\sim$ 25% of the whole massive SF [@heck98]. Both observations and theoretical models [@larson78; @genz98; @ni86] show that strong starbursts are usually triggered by processes such as interaction or merging of galaxies, or by accretion of gas, which probably played an important role in the formation and evolution of galaxies at high redshift. Thus, nearby starbursts can serve as local analogs to primeval galaxies to test our ideas about SF, evolution of massive stars, and physics of the interstellar medium (ISM) in “extreme” environments. The high spatial resolution and high sensitivity of the Hubble Space Telescope offer the possibility of studying the evolution of nearby starbursts in detail. This is fundamental in order to address many of the still open questions in cosmological astrophysics: What are the main characteristics of primeval galaxies? What is the nature of star-forming galaxies at high redshift? How important are accretion and merging processes in the formation and evolution of galaxies?
The Magellanic irregular galaxy NGC 4449 ($\alpha_{2000} = 12^h 28^m 11^{s}.9$, $\delta_{2000} = + 44^{\circ} 05^{'} 40^{"}$, $l=136.84$ and $b=72.4$), at a distance of $3.82 \pm 0.27$ Mpc (see Section 5), is one of the best studied and spectacular nearby starbursts. It has been observed across the whole electromagnetic spectrum and displays both interesting and uncommon properties. It is one of the most luminous and active irregular galaxies. Its integrated magnitude $M_B
= -18.2$ makes it $\approx$ 1.4 times as luminous as the Large Magellanic Cloud (LMC) [@hunter97]. @th87 estimated a current SFR of $\sim 1.5$ M$_{\odot}$ yr$^{-1}$. NGC 4449 is also the only local example of a global starburst, in the sense that the current SF is occurring throughout the galaxy [@hunter97]. This makes NGC 4449 more similar to Lyman break Galaxies (LBGs) at high redshift ($z \simeq3$), where the brightest regions of SF are embedded in a more diffuse nebulosity and dominate the integrated light also at optical wavelengths [@gi02].
Abundance estimates in NGC 4449 were derived in the HII regions by [@talent], [@hgr82] and [@mar97], and for NGC 4449 nucleus by @bok01. The published values are in good agreement with each other, and provide 12 + log(O/H) $\approx 8.31$. Adopting the oxygen solar abundance from @sun98, 12 + log(O/H)$_{\odot} =$ 8.83, we obtain \[O/H\] $=$ -0.52, i.e. NGC 4449 oxygen content is almost one third solar, as in the LMC. New solar abundance estimates, based on 3D hydrodynamic models of the solar atmosphere, accounting for departures from LTE, and on improved atomic and molecular data, provide 12 + log(O/H)$_{\odot} =$ 8.66 [@sun07]. However, the new lower abundances seem to be inconsistent with helioseismology data, unless the majority of the inputs needed to make the solar model are changed [@basu07]. Thus, we will adopt the old abundances from @sun98 throughout the paper. Radio observations of NGC 4449 have shown a very extended HI halo ($\sim 90$ kpc in diameter) which is a factor of $\sim 10$ larger than the optical diameter of the galaxy and appears to rotate in the opposite direction to the gas in the center [@baj94]. Hunter et al. (1998, 1999) have resolved this halo into a central disk-like feature and large gas streamers that wrap around the galaxy. Both the morphology and the dynamics of the HI gas suggest that NGC 4449 has undergone some interaction in the past. A gas-rich companion galaxy, DDO 125, at the projected distance of $\sim 40$ kpc, could have been involved [@theis].
NGC 4449 has numerous ($\sim 60$) star clusters with ages up to 1 Gyr [@gel01] and a young ($\sim$ 6-10 Myr) central cluster [@bok01], a prominent stellar bar which covers a large fraction of the optical body [@hun99], and a spherical distribution of older (3-5 Gyr) stars [@both96]. The galaxy has also been demonstrated to contain molecular clouds from CO observations [@ht96] and to have an infrared (10-150 $\micron$) luminosity of $2 \times
10^{43}$ erg s$^{-1}$ [@th87]. The ionized gas shows a very turbulent morphology with filaments, shells and bubbles which extend for several kpc (Hunter & Gallagher 1990, 1997). The kinematics of the HII regions within the galaxy is chaotic, again suggesting the possibility of a collision or merger [@va02]. Some 40% of the X-ray emission in NGC 4449 comes from hot gas with a complex morphology similar to that observed in H$\alpha$, implying an expanding super-bubble with a velocity of $\sim 220$ kms$^{-1}$ [@sum03].
All these observational data suggest that the late-type galaxy NGC 4449 may be changing as a result of an external perturbation, i.e., interaction or merger with another galaxy, or accretion of a gas cloud. A detailed study of the star-formation history (SFH) of this galaxy is fundamental in order to derive a coherent picture for its evolution, and understand the connection between possible merging/accretion processes and the global starburst. With the aim of inferring its SFH, we have observed NGC 4449 with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) in the F435W, F555W, F814W and F658N filters. In this paper we present the new data and the resulting color–magnitude diagrams (CMDs) (Sections 2, 3 and 4). We derive a new estimate of the distance modulus from the magnitude of the tip of the red giant branch (TRGB) and the average magnitude of the carbon stars in Section 5.
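For orientation, the TRGB distance referred to above follows from simple distance-modulus arithmetic; the numbers in the sketch below are assumptions for illustration (they are not the measured values of this paper), using a commonly adopted absolute I-band magnitude for the tip.

```python
m_tip_I = 23.9      # hypothetical apparent I-band magnitude of the RGB tip
M_tip_I = -4.05     # commonly adopted absolute I-band tip magnitude (assumption)
A_I = 0.03          # hypothetical foreground extinction in the I band

mu = (m_tip_I - A_I) - M_tip_I                # distance modulus m - M
D_Mpc = 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6    # distance in Mpc
print(round(mu, 2), round(D_Mpc, 2))          # ~27.9 mag, ~3.8 Mpc
```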
[**Where and How Do They Happen ?**]{}\
\
[**Elemér E Rosinger**]{}\
\
Department of Mathematics\
and Applied Mathematics\
University of Pretoria\
Pretoria\
0002 South Africa\
eerosinger@hotmail.com\
\
[**Abstract**]{}\
This is a two part paper. The first part, written somewhat earlier, presented standard processes which cannot so easily be accommodated within what are presently considered as physical type realms. The second part further elaborates on that fact. In particular, it is argued that quantum superposition and entanglement may better be understood in extensions of what we usually consider as physical type realms, realms which, as it happens, have so far never been defined precisely enough.\
\
[**Part I**]{}\
\
[**Abstract**]{}\
It has for ages been a rather constant feature of thinking in science to take it for granted that the respective thinking happens in realms which are totally outside and independent of all the other phenomena that constitute the objects of such thinking. The imposition of this divide on two levels may conflict with basic assumptions of Newtonian and Einsteinian mechanics, as well as with those in Quantum Mechanics. It also raises the question whether the realms in which thinking happens have any other connection with the realms science deals with, except to host and allow scientific thinking.\
\
[**0. The Yet Undefined Physical Realms ...**]{}\
In the sequel, based on rather obvious and simple, even if so far seldom considered facts within, or related to Physics, we shall argue that what are usually assumed to be the Physical Realms may have to be extended. Such possible additional realms, however, are not along those infinitely many of Everett’s “many-worlds” view of Quantum Mechanics. Instead, they are suggesting a finite number of further physical type realms, thus they can be seen as a development of the classical Cartesian realm of “res extensa”.\
As for what Physical Realms may actually mean, or rather, Physics itself, here is a recent and quite appropriate view on that never yet clarified issue, \[8, pp. 153,154\] :
> “Physics is the study of those phenomena that are successfully treatable with well-specified and testable models.\
> For example, Physics treats atoms and simple molecules. Chemistry, on the other hand, deals with all molecules, most of whose electron distributions cannot be well specified. A physicist might study a well specified biological system, but the functioning of a complex organism lies in the domain of biologists.\
> Anything not successfully treatable with a well-specified and testable model is rather quickly defined out of Physics.”
It is quite clear in this spirit that, even if no one seems to care much about a more precise definition of Physics, and thus, of Physical Realms, phenomena such as human thinking, let alone, human consciousness or awareness, are not expected to concern Physics any time soon. Consequently, what for Descartes constituted “res cogitans”, that is, the realms of thinking, are supposed to remain in the splendour of their undisturbed solitude, as far as Physics is concerned. And then, anything that may be seen as remotely acceptable from a physical point of view, may be but a refinement, or rather, a structural enrichment of the Cartesian “res extensa”, that is, of the realms which, at least intuitively, are supposed to have to do with Physics.\
And yet, as seen in the sequel, the story is not quite that simple, not even from a strictly physical point of view ...\
\
[**1. Conflict with Newtonian Mechanics**]{}\
Instant action at arbitrary distance, such as in the case of gravitation, is one of the basic assumptions of Newtonian mechanics. This certainly does not appear to conflict with the fact that we can think instantly and simultaneously about phenomena which are no matter how far apart from one another in space or in time. However, absolute space is also a basic assumption of Newtonian Mechanics. And it is supposed to contain absolutely everything that may exist in Creation, be it in the past, present or future. Consequently, it is supposed to contain, among others, the physical body of the thinking scientist as well.\
Yet it is not equally clear whether it also contains scientific thinking itself which, traditionally, is assumed to be totally outside and independent of all phenomena under its consideration, therefore in particular, of the Newtonian absolute space, and also, of absolute time.\
And then the question arises :
> Where and how does such a scientific thinking take place or happen ?
[ ]{}\
[**2. A difference with Mathematics**]{}\
Mathematical thinking, especially in its modern and abstract variants, does not appear to need the assumption of any absolute space, or for that matter, absolute time. Such thinking may appear to unfold during appropriate local time intervals. However, when seen all in itself, and unrelated to the physical body of the respective mathematician, it is quite likely that such thinking has no location in any space, be it relative or absolute.\
\
[**3. Conflict with Einsteinian Mechanics**]{}\
In Einsteinian Mechanics a basic assumption is that there cannot be any propagation of action faster than light.\
Yet just like in the case we happen to think in terms of Newtonian Mechanics, our thinking in terms of Einsteinian Mechanics can again instantly and simultaneously be about phenomena no matter how far apart from one another in space or time.\
Consequently, the question arises :
> Given the mentioned relativistic limitation, how and where does such a thinking happen ?
[**4. Conflict with Quantum Mechanics**]{}\
Let us consider the classical EPR, or Einstein-Podolsky-Rosen entanglement phenomenon, and for simplicity, do so in the terms of quantum computation. For that purpose it suffices to consider double qubits, that is, elements of $\mathbb{C}^2 \bigotimes \mathbb{C}^2$, such as for instance the EPR pair\
(4.1) $ \begin{array}{l}
|~ \omega_{00} > ~=~ |~ 0, 0 > ~+~ |~ 1, 1 > ~=~ \\ \\
~~~=~ |~ 0 > \bigotimes |~ 0 > ~+~ |~ 1 > \bigotimes |~ 1 >
\, \in \mathbb{C}^2 \bigotimes \mathbb{C}^2
\end{array} $\
which is well known to be [*entangled*]{}, in other words, $|~ \omega_{00} > $ is [*not*]{} of the form\
$~~~~~~ ( \alpha |~ 0 > ~+~ \beta |~ 1 > ) \bigotimes ( \gamma |~ 0 > ~+~ \delta |~ 1 > )
\, \in \mathbb{C}^2 \bigotimes \mathbb{C}^2 $\
for any $\alpha, \beta, \gamma,\delta \in \mathbb{C}$.\
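As a small numerical aside (not part of the original argument), one can check that the state in (4.1) admits no such product decomposition: after normalization, its Schmidt rank is 2 and the reduced density matrix of either qubit is maximally mixed.

```python
import numpy as np

# |omega_00> = |0,0> + |1,1>, normalized; amplitudes in the basis |00>,|01>,|10>,|11>
omega = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

# Schmidt rank = number of nonzero singular values of the 2x2 coefficient matrix;
# a product state has Schmidt rank 1, so rank 2 means entanglement.
C = omega.reshape(2, 2)
print(np.linalg.svd(C, compute_uv=False))   # [0.707, 0.707] -> Schmidt rank 2

# Reduced density matrix of the first qubit: maximally mixed (identity / 2)
print(C @ C.conj().T)
```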
Here we can turn to the usual and rather picturesque description used in quantum computation, where two fictitious personages, Alice and Bob, are supposed to exchange information, be it of classical or quantum type.\
Alice and Bob can each take their respective qubit from the entangled, or EPR pair of qubits $|~ \omega_{00} >$, and then go away with it no matter how far apart from one another. And the two qubits thus separated in space will remain entangled, unless of course one or both of them get involved in further classical or quantum interactions. For clarity, however, we should note that the single qubits which, respectively, Alice and Bob take away with them from the EPR pair $|~ \omega_{00} >$ are neither one of the terms $|~ 0, 0 >$ or $|~ 1, 1 >$ in (4.1), since both these are themselves already pairs of qubits, thus they cannot be taken away as mere single qubits, either by Alice, or by Bob. Consequently, the single qubits which Alice and Bob take away with them cannot be described in any other form, except that which is implicit in (4.1).\
Now, after that short detour into the language of quantum computation, we can note that, according to Quantum Mechanics, the entanglement in the EPR double qubit $|~ \omega_{00} >$ implies that the states of the two qubits which compose it are correlated, no matter how far from one another Alice and Bob would be with them. Consequently, knowing the state of one of these two qubits can give information about the state of the other qubit. On the other hand, in view of General, or even Special Relativity, such a knowledge, say by Alice, cannot be communicated to Bob faster than the velocity of light.\
And yet, anybody who is familiar enough with Quantum Mechanics, can instantly know and understand all of the above, no matter how far away from one another Alice and Bob may be with their respective single but entangled qubits.\
So that, again, the question arises :
> How and where does such a thinking happen ?
[ ]{}\
[**5. Two, Among Other Possible Alternatives**]{}\
Let us first assume that scientific thinking does indeed happen in realms outside and independent of all the realms in which the variety of phenomena studied by scientific thinking takes place. Then the very existence of scientific thinking proves the existence of realms transcendental to those which at present are customarily the object of that scientific thinking.\
In this case, one may ask whether the realms in which scientific thinking happens have, indeed, no any other connection whatsoever with the realms
---
abstract: 'We construct finite mass, asymptotically flat black hole solutions in $d=4$ Einstein–Yang-Mills theory augmented with higher order curvature terms of the gauge field. They possess non-Abelian hair in addition to Coulomb electric charge, and, below some non-zero critical temperature, they are thermodynamically preferred over the Reissner-Nordström solution. Our results indicate the existence of hairy non-Abelian black holes which are stable under linear, spherically symmetric perturbations.'
author:
- |
[Eugen Radu]{}$^{\dagger}$ and [D. H. Tchrakian]{}$^{\star \diamond }$\
$^{\dagger}$[Institut für Physik, Universität Oldenburg, D-26111 Oldenburg, Germany]{}\
$^{\star}$[Department of Computer Science, National University of Ireland Maynooth, Maynooth, Ireland]{}\
$^{\diamond}$[School of Theoretical Physics – DIAS, 10 Burlington Road, Dublin 4, Ireland ]{}
title: ' Stable black hole solutions with non-Abelian fields'
---
[** Introduction.– **]{} In recent years it has been realized that the electrically charged Reissner-Nordström (RN) black hole, when considered as a solution of a more general theory, may become unstable to forming hair at low temperatures. This has led to the discovery of some holographic models for condensed matter systems, and, in particular, to a gravitational description of superconductivity (see [@Horowitz:2010gk] for a review).
The case of Einstein-Yang-Mills (EYM) model with negative cosmological constant $\Lambda$ in $d=4$ spacetime dimensions provides an interesting illustration of these aspects. As shown in [@Gubser:2008zu], there is a second order phase transition between the RN–anti-de Sitter solutions, which are preferred at high temperatures, and symmetry breaking non-Abelian black holes, which are preferred at low temperatures. In [@Gubser:2008zu], $\Lambda$ plays an essential role; although electrically charged hairy black holes do exist also in a Minkowski spacetime background [@Galtsov:1991au], they have rather different properties as compared to the anti-de Sitter (AdS) solutions in [@Gubser:2008zu]. In particular they do not emerge as perturbation of the RN black holes, and, similar to the well-known $d=4$ asymptotically flat, purely magnetic EYM solutions [@Volkov:1998cc], are also perturbatively unstable.
However, one might take the view that in the strong coupling regime the ($\Lambda=0$) EYM theory is incomplete. Perhaps the simplest possibility to describe this situation is to supplement the action of the EYM model with higher order curvature terms, for both gravitational and gauge field sectors. As discussed $e.g.$ in [@Donets:1995ya], the inclusion of (string theory inspired-) corrections to the gravity action does not lead to qualitatively new features. By contrast, we will here argue that the situation is different [*vis a vis*]{} the inclusion of higher order Yang-Mills (YM) curvature terms. This possibility has been overlooked so far in the literature. The first relevant order in this case is the fourth, in which case the most general such density added to the Lagrangian consists of the four terms, $$\begin{aligned}
\label{Ls}
&&\mathcal{L}_s=
c_1 {\rm Tr}\left\{ F_{\mu\nu}F_{\rho\sigma} F^{\mu\nu}F^{\rho\sigma} \right \}
+
c_2 {\rm Tr}\left\{ F_{\mu\nu}F^{\mu\nu} F_{\rho\sigma}F^{\rho\sigma} \right \}
\\
&&{~~~~~~~~~}+
c_3 {\rm Tr}\left\{ F_{\tau\nu}F^{\mu\tau} F_{\mu\lambda}F^{\lambda\nu} \right \}
+c_4 {\rm Tr}\left\{ F_{\mu\nu}F^{\nu\rho} F_{\rho\lambda}F^{\lambda\mu} \right \},
\nonumber\end{aligned}$$ with some constant coefficients $c_i$. A particularly priviledged such combination, which we adopted here, is that with $c_1=c_2=-4 c_3,$ $c_4=0$. In that case, $\mathcal{L}_s$ features only the second power of any “velocity field” and is a causal density just like the Gauss-Bonnet term in gravity [@Zwiebach:1985uq] or the Skyrme [@Skyrme:1962vh] term of the $O(4)$ sigma model. With this specific choice of the constants $c_i$, the Lagrange density [(\[Ls\])]{} is nothing else than the trace of the square of the $4-$form curvature $F_{\mu\nu \rho \sigma}= \{F_{\mu [\nu},F_{\rho\sigma]} \}$. This is the second member of the YM hierarchy [@Tchrakian:1984gq], providing a natural generalization of the usual YM model. A convenient way to express this system is $ {\rm Tr}\left\{ (F_{\mu \nu} {}\tilde F^{\mu \nu})^2 \right \}$, where a tilde denotes the Hodge dual.
Notwithstanding our specific choice for the constants $c_i$ in (\[Ls\]), we have verified that for certain other choices, some salient features of the solutions discussed in this work, in particular the instability of the RN black hole, persist.
[** The model.– **]{} Ignoring for simplicity other possible corrections, we consider the following action for the model $$\begin{aligned}
\label{action}
S=\int d^4 x
\sqrt{-g}
\bigg [
\frac{1}{4}R-\frac{1}{2 }{\rm Tr}\left \{ F_{\mu\nu}F^{\mu\nu} \right\}
+\frac{3\tau}{2 }
{\rm Tr}\left\{ (F_{\mu \nu} {}\tilde F^{\mu \nu})^2 \right \}\bigg ],\end{aligned}$$ (here we have set $4\pi G/e^2=1$, such that the only parameter of the theory is $\tau$).
In what follows, we shall prove that the presence of the last term in (\[action\]) leads to an instability of the RN black hole, together with the occurrence of stable black holes with non-Abelian hair outside the horizon. We shall restrict attention to the following spherically symmetric Ansatz: $$\begin{aligned}
\label{metric}
ds^2=\frac{dr^2}{N}+r^2(d\theta^2+\sin^2 \theta d\phi^2)-N\sigma^2dt^2,~\end{aligned}$$ where $N,\sigma$ are functions of $ r$ and $t$ in general. The minimal gauge group for which the superposition of a Coulomb field and a non-Abelian hair is not forbidden by the ’baldness’ theorems [@bald] is $SU(3)$. Then, as in the $\tau=0$ case in [@Galtsov:1991au], we shall restrict to an $SU(2)\times U(1)$ truncation of the $SU(3)$ group, the general spherically symmetric ansatz for the gauge potential being $$\begin{aligned}
\label{YMansatz}
A=
\bigg \{
(\nu T_3+U T_8) dr+
(w T_1+\tilde w T_2) d\theta
+\left( (w T_2-\tilde w T_1)\sin \theta +\cos \theta T_3 \right) d\phi
+(v T_3 +V T_8) dt
\bigg \}
,\end{aligned}$$ where $\nu,w,\tilde w,v$ and $U,V$ are functions of $(r,t)$ and $T_i$ are the standard generators of the $SU(3)$ Lie algebra.
For static solutions, one can set the functions $\nu,\tilde w, v$ and $U$ to zero without any loss of generality, resulting in the equations $$\begin{aligned}
\nonumber
&&m'=Nw'^2+\frac{(1-w^2)^2}{2r^2}+\frac{r^2 V'^2}{2\sigma^2}+\tau \frac{(1-w^2)^2V'^2}{r^2 \sigma^2},
~~~
\sigma'=\frac{2\sigma}{r}w'^2,
\\
\label{eqs}
&&w''+(\frac{N'}{N}+\frac{\sigma'}{\sigma})w'
+\frac{w(1-w^2)}{r^2 N}+\frac{2\tau (w^2-1)V'^2}{r^2N\sigma^2}=0,
\end{aligned}$$ together with the first integral for the electric potential, $$\begin{aligned}
\label{int-V}
V'=Q\frac{\sigma}{r^2}\left(1+\frac{2\tau(1-w^2)^2}{r^4}\right)^{-1},\end{aligned}$$ with $Q$ an arbitrary constant.
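A rough numerical sketch of how Eqs. (\[eqs\])-(\[int-V\]) might be integrated is given below. It assumes the usual parametrization $N(r)=1-2m(r)/r$ (not written out explicitly above) and uses arbitrary illustrative values for $\tau$, $Q$ and the starting data, so it only indicates the structure of the shooting problem rather than reproducing the solutions discussed here.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, Q = 0.2, 1.0                      # illustrative values only

def rhs(r, y):
    m, sigma, w, wp = y
    N = 1.0 - 2.0 * m / r              # assumed form of the metric function N(r)
    Vp = Q * sigma / r**2 / (1.0 + 2.0 * tau * (1.0 - w**2)**2 / r**4)
    mp = (N * wp**2 + (1.0 - w**2)**2 / (2.0 * r**2)
          + r**2 * Vp**2 / (2.0 * sigma**2)
          + tau * (1.0 - w**2)**2 * Vp**2 / (r**2 * sigma**2))
    sigmap = 2.0 * sigma * wp**2 / r
    Np = 2.0 * m / r**2 - 2.0 * mp / r
    wpp = (-(Np / N + sigmap / sigma) * wp
           - w * (1.0 - w**2) / (r**2 * N)
           - 2.0 * tau * (w**2 - 1.0) * Vp**2 / (r**2 * N * sigma**2))
    return [mp, sigmap, wp, wpp]

# Integrate outward from just outside a putative horizon with guessed data; in a real
# calculation the horizon data are tuned (shooting) so that w' is regular at r_h,
# w -> constant and N*sigma^2 -> 1 as r -> infinity.
r_h = 1.0
y0 = [0.45, 0.8, 0.9, 0.1]             # [m, sigma, w, w'] just outside r_h (guess)
sol = solve_ivp(rhs, (r_h + 1e-3, 50.0), y0, rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[:, -1])
```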
---
abstract: 'We investigate the ionic Hubbard model on a triangular lattice at three-quarters filling. This model displays a subtle interplay between metallic and insulating phases and between charge and magnetic order. We find crossovers between Mott, charge transfer and covalent insulators and magnetic order with large moments that persist even when the charge transfer is weak. We discuss our findings in the context of recent experiments on the layered cobaltates A$_{0.5}$CoO$_2$ (A=K, Na).'
author:
- Jaime Merino
- 'B. J. Powell'
- 'Ross H. McKenzie'
title: ' Interplay of frustration, magnetism, charge ordering, and covalency in the ionic Hubbard model on the triangular lattice at three-quarters filling'
---
The competition between metallic and insulating states in strongly correlated materials leads to many novel behaviours. The Mott insulator occurs when a single band is half-filled and the on-site Coulomb repulsion, $U$, is much larger than hopping integral, $t$. A menagerie of strongly correlated states is found when a system is driven away from the Mott insulating state, either by doping, as in the cuprates [@Anderson-RVB], or reducing $U/t$, as in the organics [@organics-review]. Geometric frustration causes yet more novel physics in Mott systems [@organics-review]. Therefore the observation of strongly correlated phases in the triangular lattice compounds A$_{0.5}$CoO$_2$, where $A$ is K or Na [@ong-cava-science], has created intense interest.
An important model for investigating insulating states in correlated materials is the ionic Hubbard model. On a half-filled square lattice this model displays a crossover between Mott and band insulating states which has been analyzed with quantum Monte Carlo (QMC) [@bouadim], dynamical mean field theory (DMFT) and its cluster extensions [@kancharla]. However, except for the case of one dimension [@penc2], this model has not been studied away from half-filling [@penc1] and/or on geometrically frustrated lattices.
In this Letter we study the ionic Hubbard model on a triangular lattice at three-quarter filling. This Hamiltonian displays a subtle interplay between metallic and insulating phases and charge and magnetic order. It has regimes analogous to Mott, charge transfer [@Zaanen], and covalent insulators [@Sarma]. The study of this model is motivated in part by our recent proposal [@MPM] that it is an effective low-energy Hamiltonian for , at values of $x$ at which ordering of the sodium ions occurs.
The Hamiltonian for the ionic Hubbard model is $$H=-t\sum_{\langle ij\rangle\sigma} c^\dagger_{i\sigma} c_{j\sigma} +U \sum_i n_{i \uparrow} n_{i \downarrow}
+\sum_{i\sigma} \epsilon_i n_{i \sigma}, \label{ham}
\label{model}$$ where $c^{(\dagger)}_{i\sigma}$ anihilates (creates an electron with spin $\sigma$ at site $i$, $t$ is the hopping integral, $U$ is the effective Coulomb repulsion between electrons on the same site, and $\epsilon_i$ is a the site energy. We specialise to the case with two sublattices, A ($\epsilon_i=\Delta/2$) and B ($\epsilon_i=-\Delta/2$), consisting of alternating rows, with different site energies on the two sublattices (c.f., Fig. 15 of Ref. ). This is the lattice relevant to where the difference in site energies results from the ordering of the A-atoms [@williams-argyriou; @Na_ordering; @Na-expt].
Two limits of model (\[model\]) at $3/4$-filling may be easily understood. For non-interacting electrons, $U=0$, a metallic state occurs for all $\Delta$, as at least one band crosses the Fermi energy. In the atomic limit $t=0$ and $U>\Delta$ one expects a charge transfer insulator with a charge gap of about $\Delta$, whereas for $U<\Delta$ a Mott insulator with a charge gap of $U$ occurs. However, realistic parametrizations of materials imply $U\gg\Delta$ and $\Delta \sim |t|$ [@foot-parms]; we will show below that in this parameter regime the model shows very different behaviour from either of the limits discussed above. This interesting regime needs to be analyzed using non-perturbative and/or numerical techniques. Thus, we have performed Lanczos diagonalization calculations on 18-site clusters with periodic boundary conditions.
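As a rough numerical illustration of the $U=0$ statement, the sketch below is our own and not from the paper; it assumes a row-wise A/B decoration of the triangular lattice with nearest-neighbour hopping only and an illustrative $\Delta=|t|$. It diagonalizes the resulting two-band Bloch Hamiltonian on a Brillouin-zone grid and checks that, for these parameters, the Fermi level at $3/4$-filling still cuts through a band:

```python
# Minimal sketch of the U = 0 bands of the triangular lattice with alternating
# A/B rows (site energies +/- Delta/2).  Geometry and parameters are assumptions
# for illustration, not taken from the paper.
import numpy as np

def bands(kx, ky, t=1.0, Delta=1.0):
    # unit cell: one A-row site (+Delta/2) and one B-row site (-Delta/2)
    intra = -2.0 * t * np.cos(kx)                                    # hopping within a row
    inter = -4.0 * t * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2)  # hopping between rows
    h = np.array([[+Delta / 2 + intra, inter],
                  [inter, -Delta / 2 + intra]])
    return np.linalg.eigvalsh(h)

# sample the Brillouin zone and place the Fermi level at 3/4 band filling
kxs = np.linspace(-np.pi, np.pi, 81, endpoint=False)
kys = np.linspace(-np.pi / np.sqrt(3), np.pi / np.sqrt(3), 81, endpoint=False)
eps = np.array([bands(kx, ky) for kx in kxs for ky in kys])   # shape (Nk, 2)
levels = np.sort(eps.ravel())
E_F = levels[int(0.75 * levels.size)]

# metallic if the Fermi level lies inside at least one band
for n in range(2):
    crosses = eps[:, n].min() < E_F < eps[:, n].max()
    print(f"band {n} crosses the Fermi energy: {crosses}")
```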
In Fig. \[fig:nanb\] we plot the charge transfer, $n_B-n_A$, as a function of $\Delta/|t|$ for several values of $U$. We also plot $n_B-n_A$ in two analytically tractable limits: the non-interacting limit, $U=0$ [@footnon]; and the strong coupling limit, $U\gg\Delta\gg|t|$ [@footstrong]. Several interesting effects can be observed in this calculation. Firstly, the sign of $t$ strongly affects the degree of charge transfer on the triangular lattice. Secondly, the charge transfer depends only weakly on $U$. Thirdly, regardless of the sign of $t$ or the magnitude of $U$, the charge transfer increases rather slowly as $\Delta$ increases.
The charge gap, i.e., the difference in the chemical potentials for electrons and holes, is $\Delta_c \equiv E_0(N+1)+E_0(N-1)-2E_0(N)$, where $E_0(N)$ is the ground state energy for $N$ electrons. We plot the variation of $\Delta_c$ with $\Delta$ for various values of $U$ in Fig. \[fig:gap\]. $\Delta_c$ vanishes for $U=0$; however, finite-size effects mean that we cannot accurately calculate $\Delta_c$ for small $\Delta$. $\Delta_c =\Delta$ for $t=0$ and $U \gg \Delta$; this result is reminiscent of a charge transfer insulator [@Zaanen]. Both perturbative [@footstronggap] and numerical results show that the charge gap depends on the sign of $t$ due to the different magnetic and electronic properties arising from the geometrical frustration of the triangular lattice. In contrast, on a square lattice $\Delta_c$ does not depend on the sign of $t$.
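For a concrete, if tiny, illustration of this definition, the following exact-diagonalization sketch uses a two-site ionic Hubbard dimer (our own toy example, not the 18-site triangular cluster of the paper). It diagonalizes each particle-number sector and evaluates $\Delta_c=E_0(N+1)+E_0(N-1)-2E_0(N)$ at $3/4$-filling, i.e. $N=3$ electrons on two sites:

```python
# Two-site ionic Hubbard dimer, exact diagonalization in occupation-number basis.
# Parameters are illustrative assumptions, not values used in the paper.
import itertools
import numpy as np

t, U, Delta = 1.0, 10.0, 2.0
eps = np.array([+Delta / 2, -Delta / 2])        # site energies: site 0 = A, site 1 = B

# single-particle orbitals (site, spin), spin 0 = up, 1 = down
orbitals = [(i, s) for i in range(2) for s in range(2)]
norb = len(orbitals)

def states_with_n(n):
    """All occupation-number tuples over the 4 orbitals with n electrons."""
    return [occ for occ in itertools.product((0, 1), repeat=norb) if sum(occ) == n]

def sign(occ, p):
    """Fermionic (Jordan-Wigner) sign picked up when acting at orbital p."""
    return (-1) ** sum(occ[:p])

def hamiltonian(n):
    basis = states_with_n(n)
    index = {occ: k for k, occ in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, occ in enumerate(basis):
        # site energies and on-site repulsion (diagonal terms)
        H[k, k] += sum(eps[i] * occ[p] for p, (i, s) in enumerate(orbitals))
        H[k, k] += U * sum(occ[orbitals.index((i, 0))] * occ[orbitals.index((i, 1))]
                           for i in range(2))
        # spin-conserving hopping between the two sites
        for s in (0, 1):
            for i, j in [(0, 1), (1, 0)]:
                p, q = orbitals.index((i, s)), orbitals.index((j, s))
                if occ[q] == 1 and occ[p] == 0:
                    new = list(occ)
                    sgn = sign(new, q); new[q] = 0       # annihilate at q
                    sgn *= sign(new, p); new[p] = 1      # create at p
                    H[index[tuple(new)], k] += -t * sgn
    return H

E0 = {n: np.linalg.eigvalsh(hamiltonian(n)).min() for n in (2, 3, 4)}
gap = E0[4] + E0[2] - 2 * E0[3]                 # charge gap at N = 3 (3/4 filling)
print(E0, gap)
```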
In the limit $\Delta\gg U\gg|t|$, the A and B sublattices are well separated in energy; the B sites are doubly occupied (i.e., the B-sublattice is a band insulator) and the A sublattice is half-filled and hence becomes a Mott insulator. If there were no hybridization between the chains, one would find a metallic state for any finite charge transfer from the B-sites to the A-sites (self doping), even for $U\gg |t|$, as the A-chains are now electron-doped Mott insulators and the B-chains are hole-doped band insulators. However, Fig. \[fig:gap\] shows that the insulating regime of the model extends far beyond the well understood $n_B-n_A=1$ regime. This is because this real-space picture is incorrect: the hybridization between A and B chains is substantial. For $|t|\sim\Delta\ll U$ the system can remain insulating with a small gap \[${\cal O}(t)$\]. This state is analogous to a covalent insulator [@Sarma].
One expects that for $\Delta=0$ the ground state is metallic, since the system is then simply $3/4$-filled. However, a small but finite $\Delta=0^+$ leads to a strongly nested Fermi surface for $t>0$, whereas for $t<0$ the Fermi surface is rather featureless. Thus, rather different behaviors might be expected for the two signs of $t$ even at weak coupling. At large $U$ our exact diagonalization results suggest that a gap may be present even for a small value of $\Delta/t$. However, finite-size effects, inherent to the method, mean that it is not possible to resolve whether a gap opens at $\Delta=0$ or at some finite value of $\Delta$.
To test this covalent insulator interpretation in the $\Delta\sim|t|$, large-$U$ regime we have also calculated the spectral density, $A(\omega)$, cf. Fig. \[fig:dos\]. There are three distinct contributions to $A(\omega)$: at low energies there is a lower Hubbard band; just below the chemical potential ($\omega=\mu$) is a weakly correlated band; and just above $\omega=\mu$ is the upper Hubbard band. Furthermore, the energy separation between the lower and upper Hubbard bands is much larger than the expected $U=15|t|$, because the strong hybridization shifts the upper (lower) Hubbard band upward (downward). In contrast, in the strong coupling limit $A(\omega)$ has a much larger gap, ${\cal O}(\Delta)$, between the contributions from the weakly correlated band and the upper Hubbard band.
The magnetic moment associated with the possible antiferromagnetism, $m_ \nu =(3 \langle S^z_{i} S^z_{j} \rangle)^{1/2}$, where $\nu =A$ or $B$ and $S^z_{i}={1 \
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We discuss the underlying relativistic physics which causes neutron stars to compress and collapse in close binary systems as has recently been observed in numerical (3+1) dimensional general relativistic hydrodynamic simulations. We show that compression is driven by velocity-dependent relativistic hydrodynamic terms which increase the self gravity of the stars. They also produce fluid motion with respect to the corotating frame of the binary. We present numerical and analytic results which confirm that such terms are insignificant for uniform translation or when the hydrodynamics is constrained to rigid corotation. However, when the hydrodynamics is unconstrained, the neutron star fluid relaxes to a compressed nonsynchronized state of almost no net intrinsic spin with respect to a distant observer. We also show that tidal decompression effects are much less than the velocity-dependent compression terms. We discuss why several recent attempts to analyze this effect with constrained hydrodynamics or an analysis of tidal forces do not observe compression. We argue that an independent test of this must include unconstrained relativistic hydrodynamics to sufficiently high order that all relevant velocity-dependent terms and their possible cancellations are included.'
address:
- ' University of Notre Dame, Department of Physics, Notre Dame, IN 46556'
- ' University of California, Lawrence Livermore National Laboratory, Livermore, CA 94550'
- ' University of Notre Dame, Department of Physics, Notre Dame, IN 46556'
author:
- 'G. J. Mathews and P. Marronetti'
- 'J. R. Wilson'
title: 'RELATIVISTIC HYDRODYNAMICS IN CLOSE BINARY SYSTEMS: ANALYSIS OF NEUTRON-STAR COLLAPSE'
---
INTRODUCTION {#sec:level1}
============
The physical processes occurring during the last orbits of a neutron-star binary are currently a subject of intense interest [@wm95]-[@sbs98]. In part, this recent surge in interest stems from relativistic numerical hydrodynamic simulations in which it has been noted [@wm95; @wmm96; @mw97] that as the stars approach each other their interior density increases. Indeed, for an appropriate equation of state, our numerical simulations indicate that binary neutron stars collapse individually toward black holes many seconds prior to merger. This compression effect would have a significant impact on the anticipated gravity-wave signal from merging neutron stars. It could also provide an energy source for cosmological gamma-ray bursts [@mw97].
In view of the unexpected nature of this neutron star compression effect and its possible repercussions, as well as the extreme complexity of strong field general relativistic hydrodynamics, it is of course imperative that there be an independent confirmation of the existence of neutron star compression before one can be convinced of its operation in binary systems. It is therefore of concern that the initial numerical results reported in [@wm95; @wmm96; @mw97] have been called into question. A number of recent papers [@lai; @rs96; @wiseman; @shibata; @lombardi; @lw96; @brady; @flanagan; @thorne97; @baumgarte; @sbs98] have not observed this effect in Newtonian tidal forces [@lai], first post-Newtonian (1PN) dynamics [@rs96; @wiseman; @shibata; @lombardi; @lw96; @sbs98], tidal expansions [@brady; @flanagan; @thorne97], or in binaries in which rigid corotation has been imposed [@baumgarte]. The purpose of this paper is to point out that none of these recent studies could or should have observed the compression effect which we observe in our calculations.
Moreover, this flurry of activity has caused some confusion as to the physics to which we attribute the effects observed in the numerical calculations. The present paper, therefore, summarizes our derivation of the physics which drives the collapse. We illustrate how such terms have been absent in some Newtonian or post-Newtonian approximations to the dynamics of the binary system. We also present numerical results and analytic expressions which demonstrate how the compression forces arise in an orbiting dynamical system from the presence of fluid motion with respect to the corotating frame. As such, they could not appear in an analysis of relativistic external tidal forces no matter how many orders are included in the tidal expansion parameter (e.g. [@flanagan; @thorne97]) unless self gravity from internal hydrodynamic motion is explicitly accounted for. The effect could also not arise in systems with uniform translation or rigid corotation.
The implication of the present study is that any attempt to confirm or deny the compression driving force requires an unconstrained, untruncated relativistic hydrodynamic treatment. At present, ours is still the only existing such calculation. Hence, despite claims to the contrary [@lai; @rs96; @wiseman; @shibata; @lombardi; @lw96; @brady; @flanagan; @thorne97; @baumgarte; @sbs98], the neutron star compression effect has not yet been independently tested.
Another confusing aspect surrounding the numerical results has been our choice of a conformally flat spatial three-metric for the solution of the field equations. Indeed, it has been speculated that this approximate gauge choice (in which the gravitational radiation is not explicitly manifest) may have somehow led to spurious results. A second purpose of this paper, therefore, is to emphasize that the compression driving terms are a completely general result from the relativistic hydrodynamic equations of motion. The advantages of the conformally flat condition are that the algebraic form of the compression driving terms is easier to identify and that the solutions to the field equations obtain a simple form. It does not appear to be the case, however, that the imposition of a conformally flat metric drives the compression. It has been nicely demonstrated in the work of Baumgarte et al. [@baumgarte] that conformal flatness does not necessarily lead to neutron-star compression.
The Spatially Conformally Flat Condition
========================================
There has been some confusion in the literature as to the uncertainties introduced by imposing a conformally flat condition (henceforth [*CFC*]{}) on the spatial three-metric. Therefore we summarize here some attempts which we and others have made to quantify the nature of this approximation.
The only existing strong field numerical relativistic hydrodynamics results in three unrestricted spatial dimensions to date have been derived in the context of the [*CFC*]{} as described in detail in [@wm95; @wmm96; @mw97].
We begin with the usual ADM (3+1) metric [@adm62; @york79] in which there is a slicing of the spacetime into a one-parameter family of three-dimensional hypersurfaces separated by differential displacements in a timelike coordinate, $$ds^2 = -(\alpha^2 - \beta_i\beta^i) dt^2 +
2 \beta_i dx^i dt + \gamma_{ij}dx^i dx^j~~,
\label{metric}$$ where we take Latin indices to run over spatial coordinates and Greek indices to run over four coordinates. We also utilize geometrized units ($G = c = 1$) unless otherwise noted. The scalar $\alpha$ is called the lapse function, $\beta_i$ is the shift vector, and $\gamma_{i j}$ is the spatial three metric.
In what follows, we make use of the general relation between the determinant of the four metric $g_{\alpha \beta}$ and the ADM metric coefficients $$det(g_{\alpha \beta} ) = - \alpha^2 det({\gamma_{i j}}) \equiv -\alpha^2 \gamma^2~~,$$ where $\gamma \equiv \sqrt{det(\gamma_{i j})}$.
The conformally flat metric condition simply expresses the three metric of Eq. (\[metric\]) as a position dependent conformal factor $\phi^4$ times a flat-space Kronecker delta $$\gamma_{i j} = \phi^4 \delta_{ij}~~.$$
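A quick symbolic check of the determinant relation and of the CFC ansatz can be done with sympy. This is our own illustration, assuming real positive $\alpha$, $\phi$ and an arbitrary covariant shift $\beta_i$:

```python
# Build the ADM four-metric with gamma_ij = phi^4 delta_ij and verify
# det(g) = -alpha^2 det(gamma) with det(gamma) = phi^12.  Purely illustrative.
import sympy as sp

alpha, phi = sp.symbols('alpha phi', positive=True)
b1, b2, b3 = sp.symbols('beta1 beta2 beta3')    # covariant shift components beta_i

gamma = phi**4 * sp.eye(3)                      # conformally flat three-metric
beta_lower = sp.Matrix([b1, b2, b3])
beta_upper = gamma.inv() * beta_lower           # beta^i = gamma^{ij} beta_j
beta_sq = (beta_lower.T * beta_upper)[0]        # beta_i beta^i

# ADM line element: g_00 = -(alpha^2 - beta_i beta^i), g_0i = beta_i, g_ij = gamma_ij
g = sp.zeros(4, 4)
g[0, 0] = -(alpha**2 - beta_sq)
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = beta_lower[i]
    for j in range(3):
        g[i + 1, j + 1] = gamma[i, j]

print(sp.simplify(g.det() + alpha**2 * gamma.det()))   # expect 0
print(sp.simplify(gamma.det() - phi**12))              # expect 0 under the CFC
```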
It is common practice (e.g. [@evans85; @cook93; @brugmann97]) to impose this condition when solving the initial value problem in numerical relativity. It is the natural choice for our three-dimensional quasiequilibrium orbit calculations [@wmm96] which in essence seek to identify a sequence of initial data configurations for neutron-star binaries.
The reason conformal flatness is chosen most frequently for the initial value problem is that it simplifies the solution of the hydrodynamics and field equations. The six independent components of the three metric are reduced to a single position dependent conformal factor.
Since conformal flatness implies no transverse traceless part of $\gamma_{i j}$, it can minimize the amount of gravitational radiation apparent in the initial configuration. However, in general the physical data still contain a small amount of preexisting gravitational radiation. This has been clearly demonstrated in numerical calculations of axisymmetric black-hole collisions [@smarr]. In exact numerical simulations, the gravitational radiation appears when the time derivatives of the spatial three metric ($\dot \gamma_{i j}$) and of its conjugate, the extrinsic curvature ($\dot K_{i j}$), are evolved. The immediate evolution of the fields from conformally flat initial data is characterized by the development of a weak gravity wave exiting the system.
An estimate of the radiation content of initial data slices for axisymmetric black hole collisions has been made by Abrahams [@abrahams]. Even for high values of momentum, the initial slice radiation is always less than about 10% of the maximum possible radiation energy (as estimated from the area theorem).
Two questions then are relevant to our application of the
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The traditional difficulty with stochastic singular control is to characterize the regularity of the value function and of the optimal control policy. In this paper, a multi-dimensional singular control problem is considered. We find the optimal value function and the optimal control policy of this problem via a Dynkin game, whose solution is given by the saddle point of the cost function. The existence and uniqueness of the solution to this Dynkin game are proved through an associated variational inequality problem involving a Dirichlet form. As a consequence, the properties of the value function of this Dynkin game imply the smoothness of the value function of the stochastic singular control problem. In this way, we are able to show the existence of a classical solution to this multi-dimensional singular control problem, which was traditionally solved in the sense of viscosity solutions, and this enables the application of the verification theorem to prove optimality. [^1]'
author:
- 'Yipeng Yang [^2]'
title: 'A Multi-dimensional Stochastic Singular Control Problem Via Dynkin Game and Dirichlet Form'
---
Dynkin game, Dirichlet form, Multi-dimensional diffusion, Stochastic singular control
49J40, 60G40, 60H30, 93E20
Introduction and Problem Formulation
====================================
The characterization of the regularity of the value function and the optimal policy in stochastic singular control remains a major challenge in stochastic control theory, especially in the higher dimensional case; see, e.g., [@Soner89]. The traditional approach is to use the viscosity solution technique, see [@Fleming06] [@Bayraktar12] [@Bassan02], which usually yields a less regular solution. Another approach to solve singular control problems and characterize the regularity of value functions is through variational inequalities and optimal stopping or Dynkin games, see, e.g., Karatzas and Zamfirescu [@Karatzas05], Guo and Tomecek [@Guo08]. In [@Karatzas85] Karatzas and Shreve studied the connection between optimal stopping and singular stochastic control of one dimensional Brownian motion, and showed that the region of inaction in the control problem is the optimal continuation region for the stopping problem. In [@Baldursson97], the authors established and exploited the duality between the myopic investor’s problem (optimal stopping) and the social planning problem (stochastic singular control), and also presented an integral form and a change-of-variable formula for this connection. Ma [@ma92] dealt with a one dimensional stochastic singular control problem where the drift term is assumed to be linear and the diffusion term is assumed to be smooth, and he showed that the value function is convex and $C^2$ and that the controlled process is a reflected diffusion over an interval. Guo and Tomecek [@Guo09] solved a one dimensional singular control problem via a switching problem [@Guo08], and showed, using the smooth fit property [@Pham07], that under some conditions the value function is continuously differentiable ($C^1$).
It was found in [@Fuku02] that, through the approach via game theory and optimal stopping, it is possible to show the existence of a smooth solution. The connection is the following: given a symmetric Markov process on a locally compact separable metric space, it is well known that the value function of an optimal stopping problem is a quasi continuous version of the solution to a variational inequality problem involving the Dirichlet form, e.g., see Nagai [@Nagai78]. Zabczyk [@Zab84] extended this result to a zero-sum game (Dynkin game). In the one dimensional case, the integrated form of the value function of the Dynkin game was identified as the solution of an associated stochastic singular control problem, e.g., see Taksar [@Taksar85], and Fukushima and Taksar [@Fuku02], where a more general one dimensional diffusion is assumed. As a result, a classical smooth solution ($C^2$) can be obtained for this singular control problem.
This paper extends the work by Fukushima and Taksar [@Fuku02] to a multi-dimensional stochastic singular control problem. There are many difficulties in this extension. In the one dimensional singular control problem, each point in the space has a positive capacity [@Fuku02], hence there is no proper exceptional set. However, this is no longer the case in the multi-dimensional singular control problem. We overcome this difficulty using the absolute continuity of the transition function of the underlying process [@Fuku06]. Under some conditions, the optimal control policy in the one dimensional case is proved to be the reflection of the diffusion at two boundary points, but the form of the optimal control policy and the conditions on the regularity of the value function in the multi-dimensional case are much more complicated. For instance, in the two dimensional case, the boundary of the continuation region can take various forms, e.g., bounded curves, unbounded curves, singular points, disconnected curves, line segments, etc. The difficulty in characterizing the continuation region is due to the fact that its boundary is a free boundary, and this paper investigates such issues.
In this paper, we are concerned with a multi-dimensional diffusion on ${\mathbb{R}}^n$: $$\label{omodel}
d{\bf X}_t={\bf \mu}({\bf X}_t)dt+{\bf\sigma}({\bf X}_t)d{\bf B}_t,$$ where $${\bf X}_t=\left(\begin{array}{c}X_{1t}\\
\vdots\\
X_{nt}
\end{array}\right), \mu=\left(\begin{array}{c}\mu_1\\
\vdots\\
\mu_n
\end{array}\right), \sigma=\left(\begin{array}{ccc} \sigma_{11} &\ \cdots & \sigma_{1m}\\
\vdots & & \vdots\ \\
\sigma_{n1} &\ \cdots & \sigma_{nm}\end{array}\right), {\bf
B}_t=\left(\begin{array}{c}B_{1t}\\
\vdots\\
B_{mt}\end{array}\right),$$ in which $\mu_i=\mu_i({\bf X}_t)$ and $\sigma_{i,j}=\sigma_{i,j}({\bf X}_t)$ ($1\leqslant i\leqslant n,1\leqslant j\leqslant
m$) are continuous functions of $X_{1t},X_{2t},...,X_{(n-1)t}$, and ${\bf B}_t$ is $m$-dimensional Brownian motion with $m\geqslant n$. Thus we are given a system $(\Omega, \mathcal{F},\mathcal{F}_t, {\bf
X},\theta_t,P_{\bf x})$, where $(\Omega,\mathcal{F})$ is a measurable space, ${\bf X}={\bf X}(\omega)$ is a mapping of $\Omega$ into $C({\mathbb{R}}^n)$, $\mathcal{F}_t=\sigma({\bf X}_s,s\leqslant
t)$, and $\theta_t$ is a shift operator in $\Omega$ such that ${\bf
X}_s(\theta_t\omega)={\bf X}_{s+t}(\omega)$. Here $P_{\bf x}$(${\bf
x}\in{\mathbb{R}}^n$) is a family of measures under which $\{{\bf
X}_t,t\geqslant 0\}$ is an $n$-dimensional diffusion with initial state ${\bf x}$. We assume that $\mu$ and $\sigma$ satisfy the usual Lipschitz growth condition.
A control policy is defined as a pair $\mathcal{S}=(A_t^{(1)},A_t^{(2)})$ of $\mathcal{F}_t$-adapted processes which are right continuous and nondecreasing in $t$; we assume $A_0^{(1)}$ and $A_0^{(2)}$ are nonnegative. Denote by $\mathbb{S}$ the set of all admissible policies, whose detailed definition will be given in Section \[mdssc\].
Given a policy $\mathcal{S}=(A_t^{(1)},A_t^{(2)})\in\mathbb{S}$ we define the following controlled process: $$\begin{array}{l}
dX_{1t}=\mu_1dt+\sigma_{11}dB_{1t}+\cdots+\sigma_{1m}dB_{mt},\\
\vdots \quad\quad \vdots \quad\quad\quad \vdots\\
dX_{nt}=\mu_ndt+\sigma_{n1}dB_{1t}+\cdots+\sigma_{nm}dB_{mt}+dA_t^{(1)}-dA_t^{(2)},\\
{\bf X}_0={\bf x},
\end{array}$$ with the cost function $$\begin{aligned}
\label{scost}
k_{\mathcal{S}}({\bf x})=E_{\bf x}\left(\int_0^\infty e^{-\alpha
t}h({\bf X_t})dt+\int_0^\infty e^{-\alpha t}\left(f_1({\bf
X}_t)dA_t^{(1)}+f_2({\bf X}_t)dA_t^{(2)}\right)\right),&&\\
f_1({\bf x}),f_2({\bf x})>0,\ \forall {\bf
x}\in{\mathbb{R}}^n.&&\nonumber\end{aligned}$$ Here we assume that $A_t^{(1)}-A_t^{(2)}$ is the minimal decomposition of a bounded variation process into a difference of two increasing processes.
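As a purely illustrative sketch of these objects (the drift, diffusion, cost functions, discount rate and reflection band below are our own choices, and the band policy is just one admissible policy, not claimed to be optimal), one can simulate the controlled process by Euler-Maruyama and estimate the discounted cost $k_{\mathcal{S}}({\bf x})$ by Monte Carlo:

```python
# Monte Carlo estimate of the cost k_S(x) under a reflection-type policy that
# keeps the last coordinate in a band [l, u].  All model ingredients below are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5                              # discount rate
l, u = -1.0, 1.0                         # reflection band for the controlled coordinate
h  = lambda x: np.sum(x**2)              # running cost h
f1 = lambda x: 1.0                       # cost density for upward pushes dA^(1)
f2 = lambda x: 1.0                       # cost density for downward pushes dA^(2)
mu    = lambda x: -0.2 * x               # drift (Lipschitz, illustrative)
sigma = lambda x: np.eye(2)              # diffusion matrix (n = m = 2 here)

def simulate_cost(x0, T=20.0, dt=1e-3):
    """One Euler-Maruyama path; T truncates the infinite-horizon integral."""
    x, cost, t = np.array(x0, dtype=float), 0.0, 0.0
    while t < T:
        disc = np.exp(-alpha * t)
        cost += disc * h(x) * dt
        dB = rng.normal(scale=np.sqrt(dt), size=2)
        x = x + mu(x) * dt + sigma(x) @ dB
        # singular control: reflect the last coordinate back into [l, u]
        if x[-1] < l:
            cost += disc * f1(x) * (l - x[-1]); x[-1] = l    # dA^(1) push up
        elif x[-1] > u:
            cost += disc * f2(x) * (x[-1] - u); x[-1] = u    # dA^(2) push down
        t += dt
    return cost

samples = [simulate_cost([0.0, 0.0]) for _ in range(200)]
print(np.mean(samples), np.std(samples) / np.sqrt(len(samples)))
```

Decreasing $dt$ and increasing the horizon $T$ tightens the time discretization and the truncation of the infinite-horizon integral, respectively.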
A natural question is why the control is applied only in one dimension. The difficulty arises in the step where the value function of the zero-sum
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The recently proposed Fully-Renormalized QRPA (FR-QRPA), which fulfills the Ikeda sum rule (ISR) exactly, is applied to the two-neutrino double beta decay of $^{76}$Ge, $^{82}$Se, $^{100}$Mo, $^{116}$Cd, $^{128}$Te and $^{130}$Xe. The results obtained are compared with those of other approaches, the standard QRPA and the self-consistent QRPA (SCQRPA). The similarities and the differences among the methods are discussed. The influence of the restoration of the Ikeda sum rule on the $2\nu\beta\beta$-decay amplitude is analyzed.'
address:
- |
$^1$ Institut für Theoretische Physik der Universität Tübingen,\
Auf der Morgenstelle 14, D-72076 Tübingen, Germany
- '$^2$Department of Nuclear Physics, Comenius University, Bratislava, Slovakia'
author:
- 'L. Pacearescu$^1$, V. Rodin$^1$, F. Šimkovic$^{1,2}$ and Amand Faessler$^1$'
title: 'Two-neutrino double beta decay within Fully-Renormalized QRPA: Effect of the restoration of the Ikeda sum rule'
---
Introduction
============
Observation of the neutrinoless double beta decay ($0\nu\beta\beta$-decay), violating the total lepton number by two units, would give unambiguous evidence for new physics beyond the Standard Model [@fae98; @hax84; @Vog]. For instance, at least one of the neutrinos would have to be a Majorana particle with non-zero mass [@sch82]. The current experimental lower limits on the $0\nu\beta\beta$-decay half-life impose stringent constraints, e.g., on the parameters of Grand Unification and supersymmetric extensions of the Standard Model.
Rates of the $2\nu\beta\beta$-decay, which is a second order process allowed within the Standard Model, can be calculated within the same nuclear structure models. Thus, the results of the nuclear structure calculations can be directly compared with the corresponding experimental data available for a number of nuclei [@exp]. Such a comparison provides a very useful test of the models.
The Quasiparticle Random Phase Approximation (QRPA) [@book] has been successfully exploited in nuclear physics to describe properties of the excited states of open-shell nuclei and to calculate intensities of various nuclear reactions, including the double beta decay (see reviews [@fae98]).
It was shown that the experimental data on the $2\nu\beta\beta$-decay rates can be reproduced in QRPA calculations with a sufficiently large strength of the particle-particle interaction [@vog86]. But the proximity of this value to the point of QRPA collapse calls the reliability of the results into question. It is known that the QRPA collapse occurs due to the use of the quasi-boson approximation (QBA), which violates the Pauli exclusion principle (PEP) and generates too many ground state correlations.
Renormalized QRPA (RQRPA) was formulated in Refs. [@rqrpa] to restore the PEP in an approximate way. The main idea of the method is to iterate the QRPA equation self-consistently, taking into account the quasiparticle occupation numbers in the QRPA ground state. This leads to a modification of the commutation relations for bifermionic operators as compared to the ordinary QBA. At the same time, so-called scattering terms (describing transitions of the quasiparticles) are neglected in the Hamiltonian and in the phonon operators. The RQRPA does not collapse for physical values of the particle-particle interaction strength and has been extensively used to calculate the intensities of the double beta decay [@fae98; @FKSS97; @Toi97]. It has also been shown that the RQRPA provides better agreement with the exact solution of the many-body problem within schematic models, even beyond the critical point of the standard QRPA (see, e.g. [@schm] and references therein).
The self-consistent RQRPA (SCQRPA) is a more elaborate version of the RQRPA, designed to describe strongly correlated Fermi systems. Within this method one goes a step further beyond the RQRPA: the quasiparticle mean field is modified at the same time, by minimizing the energy while fixing the number of particles in the correlated RQRPA ground state instead of the uncorrelated BCS one, as is done in the other versions of the RQRPA. In this way the SCQRPA partially overcomes the inconsistency between the RQRPA and the BCS approach and is closer to a fully variational theory.
Nevertheless, the main drawback of the modern versions of the RQRPA and SCQRPA is the violation of the model-independent Ikeda sum rule (ISR) [@Toi97; @Sto01; @Bob00]. A modification of the phonon operator by including scattering terms is needed in order to restore the ISR within the RQRPA. The Fully-Renormalized QRPA (FR-QRPA) was formulated in Ref. [@Rod02] for even-even nuclei in such a way that it complies with the restrictions imposed by the commutativity of the phonon creation operator with the total particle number operator. It was shown analytically that the Ikeda sum rule is fulfilled within the FR-QRPA [@Rod02]. The FR-QRPA is also free from the spurious low-energy solutions that would be generated if the scattering terms were treated as additional degrees of freedom, as suggested in [@Rad98].
The aim of the paper is twofold. First, we would like to describe the FR-QRPA equations in more detail (as compared with the original paper [@Rod02]) for the simple case of a Hamiltonian with a separable residual interaction in both the particle-hole and particle-particle channels. Second, the first numerical application of the FR-QRPA is given, to calculate $2\nu\beta\beta$-decay intensities and related quantities. So far, full convergence of the FR-QRPA solution has been obtained only for a rather small model space. Nevertheless, a comparison of the results obtained within the FR-QRPA and SCQRPA can be provided.
Basic relationships of the Fully-Renormalized QRPA
==================================================
Within RPA an excited nuclear state, with angular momentum $J$ and projection $M$, is created by applying the phonon operator $Q^{\dagger}_{JM}$ to the vacuum state $|0^+_{RPA}\rangle$ of the initial, even-even, nucleus: $$|JM \rangle = Q^{\dagger}_{JM }|0^+_{RPA}\rangle
\qquad \mbox{with} \qquad
Q_{JM }|0^+_{RPA}\rangle=0.
%\label{eq:19}$$
As was shown in Ref. [@Rod02], the most appropriate way is to write down the phonon structure in terms of the particle creation and annihilation operators. This allows one to fulfill the important principle of the commutativity of $Q^{\dagger}_{JM}$ with the total particle number operator $\hat A =\hat N + \hat Z$. The phonon operator has the following structure: $$Q^{\dagger}_{JM } = \sum\limits_{pn}
\left [x_{(pn, J )} C^\dagger(pn, JM)
-y_{(pn, J )}\tilde{C}(pn, JM)\right ],
\label{Qc}$$ with $C^\dagger(pn, JM)=\left[c^\dagger_{p}{\tilde{c}}_{n}\right]_{JM}$ and $\tilde C(pn, JM)=(-)^{J-M}C(pn, J\,-M)$, where $c^{+}_{\tau m_{\tau}}$ ($c^{}_{\tau m_\tau}$) denotes the particle creation (annihilation) operator for protons and neutrons ($\tau=p,n$). Going into the quasiparticle representation, the quasiparticle creation and annihilation operators $a^{+}_{\tau m_{\tau}}$ and $a^{}_{\tau m_{\tau}}, (\tau=p,n)$ can be defined by the Bogolyubov transformation $$\left( \matrix{ a^{+}_{\tau m_{\tau} } \cr
{\tilde{a}}_{\tau m_{\tau} }
}\right) = \left( \matrix{
u_{\tau} & v_{\tau} \cr
-v_{\tau} & u_{\tau}
}\right)
\left( \matrix{ c^{+}_{\tau m_{\tau}} \cr
{\tilde{c}}_{\tau m_{\tau}}
}\right),
\label{uv}$$ that leads to the following expression for the phonon operator $Q^{\dagger}_{JM}$: $$\begin{aligned}
&Q^{\dagger}_{JM } = \sum\limits_{{p}{n}}
\left [ X_{({p}{n}, J )} \bar A^\dagger({p}{n}, JM)
- Y_{({p}{n}, J )}\tilde{\bar A\,}({p}{n}, JM)\right ],
\label{Qa1} \\
&\nonumber\\
& \bar A^\dagger= A^\dagger+\left(u_{n}v_{n}B^\dagger-
u_{p}v_{p}\tilde B\right)\left/(v_{n}^2-v_{p}^2)\right.
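As a small side check (our own, and only for the real transformation written in Eq. (\ref{uv})), the Bogolyubov transformation preserves the fermionic anticommutation relations exactly when its matrix is orthogonal, i.e. when $u_\tau^2+v_\tau^2=1$; the sympy snippet below verifies this:

```python
# Verify that the 2x2 Bogolyubov matrix of Eq. (uv) is orthogonal under the
# normalization u^2 + v^2 = 1 (real coefficients assumed, as in the text).
import sympy as sp

u = sp.symbols('u', real=True)
v = sp.sqrt(1 - u**2)                    # enforce u^2 + v^2 = 1
W = sp.Matrix([[u, v], [-v, u]])         # Bogolyubov matrix of Eq. (uv)

print(sp.simplify(W.T * W))              # expect the 2x2 identity matrix
```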
|
{
"pile_set_name": "ArXiv"
}
|