diff --git "a/9503.json" "b/9503.json" new file mode 100644--- /dev/null +++ "b/9503.json" @@ -0,0 +1,137 @@ +{ + "9503/gr-qc9503049_arXiv.txt": { + "abstract": " ", + "introduction": "One of the most important problems for cosmology is the issue of initial conditions for the Big Bang \\cite{Book}. The first models of inflation assumed that the universe started in a very hot state that supercooled in a metastable vacuum, which then decayed to the true vacuum through a first-order phase transition or just rolled down through a second-order phase transition \\cite{API}. Chaotic inflation \\cite{API} opened up the possibility of starting inflation from a wide range of initial conditions, including the Planck scale. Quantum fluctuations in the inflaton field could produce the small perturbations observed in the background radiation \\cite{API,COBE}, from which galaxies later evolved. Once we include quantum fluctuations of the scalar fields during inflation, we find that these can be large and dominate the classical evolution as we approach the Planck scale \\cite{Book}. As a consequence, the scalar fields diffuse, very much like a particle in Brownian motion. The universe is then divided into causally independent inflationary domains, in which the fields acquire different values. One of the most fascinating features of inflation is the process of self-reproduction of the universe \\cite{Book}, by which the values of the fields in some inflationary domains diffuse towards larger rates of expansion, producing new domains, and so on forever. This process might still be occurring, at scales much larger than our present horizon. The global behavior of the universe can then be described with the formalism of stochastic inflation, using probability distributions for the values of the fields in physical space. It has recently been shown that there are stationary solutions for the diffusion of the inflaton in general relativity \\cite{LLM}. 
A natural question to ask is whether this picture is still valid as we approach the Planck era, where quantum fluctuations of the metric become important \\cite{GBL}. Although it is generally assumed that the dynamics of the universe can be described by general relativity, the effective theory of gravity might be very different close to the Planck scale. So far the only consistent theory of quantum gravity, though by no means a definitive one since we lack the experimental observations needed to confirm it, is string theory \\cite{GSW}. String theory contains in its massless gravitational sector a dilaton scalar field as well as the graviton. The low-energy effective theory from strings has the form of a scalar-tensor theory of gravity \\cite{TEGP}, with non-trivial couplings of the dilaton to matter \\cite{CGQ,Polyakov}. Therefore, it is expected that the description of gravitational phenomena, and in particular inflation, close to the Planck scale should also contain this scalar field \\cite{Olive,GBQ}. The string dilaton field can be understood as a Brans--Dicke field \\cite{JBD}, which acts like a dynamical gravitational `constant'. Jordan--Brans--Dicke theory is the simplest scalar-tensor theory, with a constant kinetic coupling $\\omega$, which is bounded by primordial nucleosynthesis \\cite{PNS} and post-Newtonian experiments \\cite{TEGP} to be $\\omega > 500$. String theory predicts $\\omega = -1$ in ten dimensions \\cite{GSW}. However, the low-energy effective value of $\\omega$ depends on the unknown details of the compactification mechanism and supersymmetry breaking \\cite{CGQ}. In general, one would expect a functional dependence of $\\omega$ on the dilaton field \\cite{Polyakov}. Such models were proposed in \\cite{HEI,PLB} for solving the graceful exit problem of extended inflation, and were later suggested to describe the generic asymptotic approach of scalar-tensor theories to general relativity during the matter-dominated era \\cite{Damour,JPMDW}. 
In this paper we study the very large scale structure of the universe assuming that the gravitational interaction is described during inflation, close to the Planck scale, by a general scalar-tensor theory of gravity, in the context of the stochastic inflation formalism. In Brans--Dicke stochastic inflation \\cite{GBLL,JGB}, we found runaway solutions to the diffusion of the dilaton and inflaton fields, due to the fact that the Planck boundary for generic chaotic potentials is a line and the probability distribution does not become stationary, as occurs in general relativity \\cite{LLM}, but slides along this boundary. As a consequence, the value of the Planck mass at the end of inflation is not well defined (it would depend on new dynamics at large values of the fields, e.g. quantum loop corrections to the potential \\cite{prep}). We will study a particular case in which the Brans--Dicke parameter has a simple pole with respect to the Brans--Dicke field, which will actually give stationary probability distributions for the diffusion of the inflaton and dilaton fields in physical space. We find a probability distribution peaked about domains that produce general relativity as the effective theory of gravity at late times, and therefore we conclude that it is most probable to live in one of those domains.\\footnote{Note that Coleman's mechanism for the vanishing of the cosmological constant in the context of Brans--Dicke theory also predicts general relativity as a low-energy effective theory of gravity \\cite{Garay}.} This prediction seems to be in good agreement with observations. ", + "conclusions": "Why do the constants of nature take the values we observe? Couplings range over fourteen orders of magnitude; masses are smaller than $10^{-17} M_{\\rm P}$ and range over eleven orders of magnitude; the vacuum energy is smaller than $10^{-120} M_{\\rm P}^4$; and so on. 
A possible answer, very popular among particle physicists, is that there is a unique logically consistent theory of everything, where all fundamental constants are determined from its vacuum state. Unfortunately, this state is probably not unique; {\\em e.g.} in superstrings it strongly depends on the compactification mechanism and supersymmetry breaking \\cite{GSW}. On the other hand, quantum cosmology proposes that the so-called wave function of the universe provides a probability distribution for all fundamental constants. It is usually studied in the canonical or Euclidean approach, which has problems of interpretation related to the choice of measure. In Ref.~\\cite{GBL} it was suggested that the stochastic inflation formalism could provide a reasonable framework within which to answer these questions. Here an exponentially large, causally disconnected inflationary domain replaces a single nucleated universe of Euclidean quantum cosmology. This formalism proposes that the global measure should be given by the probability distribution in physical space \\cite{GBL,Vilenkin}, which takes into account the proper volume of the universe. Stochastic inflation describes the quantum diffusion of fields close to the Planck boundary. It uses branching diffusion equations to derive the probability of finding a given value of the scalar fields that drive inflation in a given physical proper volume \\cite{LLM}. It can be analyzed in the context of general relativity or in other theories of gravity, like scalar-tensor theories, where the gravitational coupling (Newton's constant in general relativity) becomes another dynamical field, the Brans--Dicke field. In the stochastic picture this leads to different values of the effective Planck mass in different exponentially large, causally disconnected, parts of the universe. 
This picture may be incorporated in a theory of evolution of the universe \\cite{Evolution}, where quantum fluctuations of the Planck mass could act as a mechanism for mutation, while a selection mechanism establishes that its value should be as large as possible in order to increase the rate of expansion, and therefore the proper volume of the universe. Unfortunately, in the simplest scalar-tensor theory, Brans--Dicke theory, diffusion close to the Planck boundary leads to runaway solutions where the global volume of the universe becomes dominated by regions with infinitely large Planck mass, in conflict with observations unless new dynamics is introduced into the model \\cite{GBLL,JGB,prep}. In scalar-tensor theories with power-law behavior of the Brans--Dicke parameter, there are still runaway solutions, but by considering a scalar-tensor theory with an upper bound on the BD field (corresponding to a pole in the BD parameter), we have shown not only that we recover a stationary probability distribution for the fields along the Planck boundary (peaked at the maximum allowed value of the Planck mass), but also that the low-energy effective $\\omega$ parameter becomes exponentially large, thus recovering the general relativistic behavior. It is important to emphasize that this result is expected to be generic in all models involving a maximum value of the Planck mass, not just the particular model considered here. This purely quantum diffusion process towards large values of $\\omega$ is then reinforced by the subsequent classical evolution of the inflationary universe, during which the Brans--Dicke parameter exponentially approaches the general relativistic limit. The ability of infinite $\\omega$ to act as an attractor in classical cosmology is well known during inflation \\cite{PLB} and the matter-dominated era \\cite{Damour}. 
What we have presented in this paper is a quantum process which enables us to attribute a relative probability of finding a given value of $\\omega$ in the post-inflationary universe. Along with the classical evolution, this quantum diffusion mechanism predicts an effective theory of gravity which at late times is indistinguishable from general relativity." + }, + "9503/astro-ph9503041_arXiv.txt": { + "abstract": "I describe a nested-grid particle-mesh (NGPM) code designed to study gravitational instability in three dimensions. The code is based upon a standard PM code. Within the parent grid I am able to define smaller sub-grids, allowing me to substantially extend the dynamical range in mass and length. I treat the fields on the parent grid as background fields and utilize a one-way interactive meshing. Waves on the coarse parent grid are allowed to enter and exit the subgrid, but waves from the subgrid are precluded from affecting the dynamics of the parent grid. On the parent grid the potential is computed using a standard multiple Fourier transform technique. On the subgrid I use a Fourier transform technique to compute the subgrid potential at high resolution. I impose quasi-isolated boundary conditions on the subgrid using the standard method for generating isolated boundary conditions, but rather than using the isolated Green function I use the Ewald method to compute a Green function on the subgrid which possesses the full periodicity of the parent grid. I present a detailed discussion of my methodology and a series of code tests. ", + "introduction": "Over the past decade N-body techniques have become the dominant method for studying the clustering of mass on large scales (see Efstathiou {et al.} 1985, or Hockney \\& Eastwood 1981). Direct particle-particle (PP) codes were the first N-body codes which were widely used, and they reached a remarkable state of development, as discussed by Aarseth (1985). 
These codes tend to be very time consuming since each particle interacts with every other particle directly via a $1/r^2$ force. This technique allows for very high spatial resolution but at the expense of CPU time. As the particles clump, the small-scale dynamics dominate and the required CPU time jumps dramatically. Treecodes are dramatically faster since they compute direct PP interactions only locally, and the far-field computations are performed using nodes of particles. These codes do not require a mesh and hence do not have the spatial resolution problems associated with the introduction of a grid. For both of these codes the small-scale spatial resolution is set by the finite softening length, which is introduced to avoid the formation of tight binary pairs that would force a significant decrease in the time step and lead to dramatic CPU requirements. Particle-Mesh codes have circumvented a number of those problems at the expense of the finite spatial resolution introduced by the existence of the grid system. The forces are interpolated from the grid and hence they are truncated on small scales. This helps to eliminate the two-body effects present in PP codes. Thus PM codes are well suited to the study of Vlasov-like systems where it is essential to suppress two-body effects, such as the dark matter density field in cosmological simulations. In an attempt to increase the spatial resolution of PM codes, Particle-Particle-Particle-Mesh ($\\rm{P^3M}$) codes were developed. $\\rm{P^3M}$ codes are a hybrid class which use PM methods to compute large-scale forces and PP methods for small-scale interactions. This modification to PM codes partially helps to increase resolution. On the other hand, they suffer the same dramatic slowdown which occurs with PP codes as clumping becomes significant. 
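The plain PM force cycle described above (deposit the particles onto a grid, solve the Poisson equation in Fourier space, interpolate the force back) can be sketched in a few lines. The following is a minimal one-dimensional periodic toy, not the NGPM code of this paper; the function name `pm_forces` and all parameters are illustrative, and the overall force normalization is arbitrary (unit masses, $G=1$).

```python
import numpy as np

def pm_forces(x, L=1.0, N=64):
    """Toy 1D periodic particle-mesh force step (unit masses, G = 1,
    arbitrary overall normalization)."""
    dx = L / N
    s = x / dx
    i = np.floor(s).astype(int) % N
    f = s - np.floor(s)
    # Cloud-in-cell (CIC) deposit of the particles onto the grid
    rho = np.zeros(N)
    np.add.at(rho, i, 1.0 - f)
    np.add.at(rho, (i + 1) % N, f)
    # Poisson solve in Fourier space: -k^2 phi_k = 4 pi rho_k
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nz = k != 0.0  # dropping the k = 0 mode subtracts the mean density
    phi_k[nz] = -4.0 * np.pi * rho_k[nz] / k[nz] ** 2
    # Spectral force: F = -dphi/dx, i.e. F_k = -i k phi_k
    F_grid = np.real(np.fft.ifft(-1j * k * phi_k))
    # Interpolate back to the particles with the same CIC kernel,
    # which makes the self-force vanish and conserves momentum
    return (1.0 - f) * F_grid[i] + f * F_grid[(i + 1) % N]

# Two particles placed symmetrically: they should attract each other
x = np.array([0.3, 0.7])
F = pm_forces(x)
```

The truncation of the force on scales below the grid spacing, mentioned in the text as the feature that suppresses two-body effects, is visible here: the force is built entirely from grid quantities, so no sub-cell structure survives.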
Furthermore, as emphasized by Sellwood (1987), $\\rm{P^3M}$ codes may introduce two-body effects on small scales, thus making them unsuitable for use in modeling Vlasov-like systems. Pen (1994) has constructed a code which he refers to as a ``Linear Moving Adaptive PM Code''. This code is adaptive in the traditional sense, where the mesh spacing varies according to some local quantity, in his case the density. As a consequence of Adaptive Mesh Refinement (AMR), Pen finds that the particles in his code feel a self-force, which could be a problem for some types of initial conditions. Katz \\& White (1993) introduced a ``multi-mass'' technique which is based upon the hybrid N-body/hydrodynamics code TREESPH developed by Hernquist \\& Katz (1989). Katz \\& White (1993) used their ``multi-mass'' code to examine the properties of simulated galaxy clusters. In this method they use a series of nested lattices, with the particle mass growing smaller as one moves to more finely spaced lattices. Thus, they are able to obtain simultaneously high force accuracy and high mass resolution. This method is similar to the nested-grid methods that I will discuss next. In a similar spirit to $\\rm{P^3M}$ codes, where one increases the force resolution without concern for increasing mass resolution, Suisalu \\& Saar (1995) and independently Jessop {et al.} (1994) have developed multi-grid based adaptive particle codes. Both codes are capable of adaptively modifying the underlying grid to accommodate the varying distribution of particles. Suisalu \\& Saar (1995) use a full multi-grid method for the potential solver, which is able to refine regions of high density, increasing their force resolution. Jessop {et al.} (1994) use a relaxation Poisson solver and Neumann boundary conditions interpolated from the parent grid, coupled with a methodology for adaptively creating sub-grids in regions where increased force resolution would be of benefit. 
Nested mesh codes have been available in the atmospheric sciences for a number of years, and Koch \\& McQueen (1987) provide a nice introduction. In these fields it was realized that nesting a fine mesh within a coarse mesh can be a very economical way to achieve higher resolution without dramatically increasing the memory and CPU resources that an equivalent larger grid would need. Nested mesh techniques are one way to extend resolution in simulations of large-scale structure. Peebles (1980) first argued that long-wavelength Fourier modes can couple to much shorter-wavelength modes, increasing the power on small scales. More recently Jain \\& Bertschinger (1993) have come to the same conclusion. As a consequence, attempts to increase resolution by reducing the box size appear to be doomed from the start. Furthermore, moving from simple PM codes to $\\rm{{P^3M}}$ codes on a similar-sized grid appears to be capable of increasing the resolution by only a small factor, roughly a factor $O(10^3)$ shy of the resolution required for full simulations capable of addressing the details of galaxy formation while still being able to follow the long-wavelength modes. Over the last several years a number of nested grid codes have been developed for use in cosmology and/or astrophysics. Chan {et al.} (1986) developed a nested mesh code for use in studying galaxy collisions. In the code of Chan {et al.}, the sub-grid potential is computed using a standard finite difference approach and the boundary conditions for the sub-grid are interpolated from the known coarse grid potential. Villumsen (1987) constructed a Hierarchical Particle Mesh (HPM) code which is similar to the code I will describe here. 
Recently, Anninos {et al.} (1993) have reported a nested mesh code for use in cosmology which not only evolves the collisionless dark matter on a fine grid but performs a hydrodynamic calculation on the fine grid as well, allowing them to follow both the dark matter and the baryonic matter component. The code developed by Anninos {et al.} is similar to that developed by Chan {et al.} in the sense that boundary conditions for the sub-grid region are obtained directly from the parent grid by interpolating potential values on parent mesh cells to the sub-grid. In a related approach to the nested grid schemes discussed above, Couchman (1991) has modified the standard ($ {\\rm P^3 M} $) algorithm by introducing refined meshes in regions of high density. The traditional complaint against such codes has been that as clustering evolves they tend to slow down, because an increasing number of particles lie within one cell of one another. This causes the PP step to become the dominant portion of the code and significantly slows down the code as a whole. To circumvent this, in regions of high density Couchman introduces refined meshes to guarantee that there is never a very large number of particles within the same grid cell. This prevents the PP portion of the code from dominating the overall runtime. This implementation of a nested-grid algorithm has a number of advantages over the methods previously mentioned. Unlike ${\\rm P^3M}$ and tree codes, I am able to get very high force resolution but without the slowdown associated with those codes for highly clustered distributions. Furthermore, I am able to increase not only the force resolution but also the mass resolution in the sub-grid region. This is something which, at present, only other nested-grid codes are capable of doing. In comparison to the nested-grid codes, I feel that my mass advection scheme should make my method easier to generalize to a more adaptive scheme. 
I also enforce a Courant-Friedrichs-Lewy (CFL) condition on the sub-grid time step, guaranteeing that the time integrator does not become unstable. This is contrary to the Villumsen (1988) version of a nested-grid code, where no CFL condition is enforced on the sub-grid particles. This paper is organized in the following way: section \\S II discusses in detail the algorithm that I have implemented, section \\S III presents the results of a number of tests of the code, and finally in section \\S IV I present my conclusions. ", + "conclusions": "" + }, + "9503/gr-qc9503044_arXiv.txt": { + "abstract": "We present the first numerical solutions of the coupled Einstein-Maxwell equations describing rapidly rotating neutron stars endowed with a magnetic field. These solutions are fully relativistic and self-consistent, all the effects of the electromagnetic field on the star's equilibrium (Lorentz force, spacetime curvature generated by the electromagnetic stress-energy) being taken into account. The magnetic field is axisymmetric and poloidal. Five dense matter equations of state are employed. The partial differential equation system is integrated by means of a pseudo-spectral method. Various tests passed by the numerical code are presented. The effects of the magnetic field on neutron star structure are then investigated, especially by comparing magnetized and non-magnetized configurations with the same baryon number. The deformation of the star induced by the magnetic field is important only for huge values of $\\vec{B}$ ($B>10^{10} {\\ \\rm T}$). The maximum mass as well as the maximum rotational velocity are found to increase with the magnetic field. The maximum allowable poloidal magnetic field is of the order of $10^{14} {\\ \\rm T}$ and is reached when the magnetic pressure is comparable to the fluid pressure at the centre of the star. 
For such values, the maximum mass of neutron stars is found to increase by $13$ to $29\\%$ (depending upon the EOS) with respect to the maximum mass of non-magnetized stars. ", + "introduction": "\\label{s:intro} Neutron stars are known to possess strong magnetic fields. Their polar field strength is deduced from the observed spin slowdown of pulsars via the magnetic dipole braking model; for the 558 pulsars of the catalog by Taylor et al. (1993), it ranges from $B=1.7\\ 10^{-5}$ GT \\footnote{In this article, we systematically use S.I. units, so that the magnetic field amplitude is measured in teslas (T) or, more conveniently, in gigateslas ($1{\\ \\rm GT} = 10^9{\\ \\rm T}$). We recall that $1{\\ \\rm GT} = 10^{13}$ gauss.} (PSR B1957+20) up to $B=2.1$ GT (PSR B0154+61), with a median value $B=0.13$ GT, most young pulsars having a surface field in the range $B\\sim 0.1 - 2 {\\ \\rm GT}$. {}From the theoretical point of view, a considerable number of studies has been devoted to the structure of the magnetic field {\\em outside} the neutron star, in the so-called {\\em magnetosphere}, in relation to the pulsar emission mechanism (for a review, see e.g. Michel 1991). Studies of the magnetic field {\\em inside} neutron stars are far less abundant. Only recently have some works been devoted to the origin and the evolution of the internal magnetic field, all of them in the non-relativistic approximation (Thompson \\& Duncan 1993, Urpin \\& Ray 1994, Wiebicke \\& Geppert 1995, Urpin \\& Shalybkov 1995). Besides these studies of neutron star magnetic fields, there exists a growing number of numerical computations of rapidly rotating neutron stars in the full framework of general relativity, taking into account the most sophisticated equations of state of dense matter to date (cf. Salgado et al.~1994a,b and references therein, as well as Cook et al.~1994b, Eriguchi et al.~1994, Friedman \\& Ipser 1992). But in all these models the magnetic field is ignored. 
The present work is the first attempt to compute numerical models of rotating neutron stars with magnetic field in a self-consistent way, by solving the Einstein-Maxwell equations describing stationary axisymmetric rotating objects with internal electric currents. In this way, the models presented below \\begin{enumerate} \\item are fully relativistic, i.e. all the effects of general relativity are taken into account, on the gravitational field as well as on the electromagnetic field. \\item are self-consistent, i.e. the electromagnetic field is generated by some electric current distribution and the equilibrium of the matter is given by the balance between the gravitational force, the pressure gradient and the Lorentz force corresponding to the electric current. Moreover, the electromagnetic energy density is taken into account in the source of the gravitational field. \\item give the solution in all space, from the star's centre to infinity, without any approximation on the boundary conditions. \\item use various equations of state proposed in the literature for describing neutron star matter. \\end{enumerate} The restrictions of our models are the following ones: \\begin{enumerate} \\item We consider strictly stationary configurations. This excludes magnetic dipole moments that are not aligned with the rotation axis. Indeed, in the non-aligned case, the star radiates away electromagnetic waves as well as gravitational waves (due to the deviation from axisymmetry induced by a magnetic axis different from the rotation axis); hence it loses energy and angular momentum, so that this situation does not correspond to any stationary solution of the Einstein equations. Thus the stationary hypothesis implies that we restrict ourselves to axisymmetric configurations, with the magnetic axis aligned with the rotation axis. \\item Moreover, we consider only {\\em poloidal} magnetic fields (i.e. $\\vec{B}$ lying in the meridional planes). 
Indeed, if the magnetic field had, in addition to the poloidal part, a {\\em toroidal} component (i.e. a component perpendicular to the meridional planes), the {\\em circularity} property of spacetime would be broken, which means that the two Killing vectors associated with the stationarity and the axisymmetry would no longer be orthogonal to a family of 2-surfaces (cf. Carter 1973, p.~159). In the circular case, a coordinate system $(t,r,\\theta,\\phi)$ can be found such that the components of the metric tensor $\\vec{g}$ are zero except for the diagonal terms and only one off-diagonal term ($g_{t\\phi}$). In the non-circular case, only one component of $\\vec{g}$ can be set to zero ($g_{r\\theta}$), resulting in much more complicated gravitational field equations (Gourgoulhon \\& Bonazzola 1993). By contrast, perfect fluid stars with purely rotational motion (no convection) generate circular spacetimes, and poloidal magnetic fields preserve this property (Carter 1973). \\item Since we are not interested in modelling pulsar magnetospheres, we suppose that the neutron star is surrounded by vacuum\\footnote{by {\\em vacuum} we mean that there is no matter outside the star; nevertheless there is some electromagnetic field, so that the total stress-energy tensor (right-hand side of the Einstein equation) is not zero outside the star ({\\em electrovac} spacetime).}. \\end{enumerate} The numerical code we use is an electromagnetic extension of the code presented in Bonazzola et al.~1993 (hereafter BGSM), which was devoted to perfect fluid rotating stars and used to compute neutron star models with various equations of state of dense matter (Salgado et al.~1994a,b). We will not give here the complete list of the equations to be solved but only the electromagnetic ones (Maxwell equations) in Sect.~\\ref{s:electromagn}, referring to BGSM for the gravitational part. 
Likewise, we will not present the numerical technique, based on a pseudo-spectral method, since it has been detailed in BGSM. We will discuss in Sect.~\\ref{s:tests} only the numerical procedure and tests of the electromagnetic part of the code. We analyze the effects of the magnetic field on static configurations in Sect.~\\ref{s:static} and on rotating configurations in Sect.~\\ref{s:rotat}, with Sect.~\\ref{s:const,bar} describing in detail the case of constant baryon number sequences. Finally Sect.~\\ref{s:concl} summarizes the main conclusions of this study. ", + "conclusions": "\\label{s:concl} We have extended an existing numerical code for computing perfect fluid rotating neutron stars in general relativity (BGSM, Salgado et al. 1994a,b) to include the electromagnetic field. The latter is calculated by solving the relativistic Maxwell equations with an electric current distribution which is compatible with the star's equilibrium (i.e. the Lorentz force acting on the conducting fluid must be the gradient of some scalar in order to balance gravity and the inertial centrifugal force). In order to preserve the stationarity, axisymmetry and circularity properties of spacetime, we consider only axisymmetric poloidal magnetic fields. The equations are numerically solved by means of a pseudo-spectral technique which results in high accuracy, as tests on simple electromagnetic configurations (for which an analytical solution is available) have shown: the relative error on the electromagnetic field is of the order of $10^{-5}$ inside the star and $10^{-9}$ outside it. The part of the code relative to the deformation of the star by Lorentz forces has been tested by comparison with Ferraro's analytical solution in the Newtonian case. The fact that the numerical output is a solution of the Einstein equations has been tested by two virial identities: GRV2 and GRV3. We have then used the code to investigate the effect of the magnetic field on rotating neutron stars. 
For this purpose we considered magnetic field amplitudes ranging from zero up to huge values, of the order of $10^5 {\\ \\rm GT}$, which is ten thousand times larger than the highest values measured at pulsar surfaces and is the value for which the magnetic pressure equals (with an opposite sign along the symmetry axis) the fluid pressure near the centre of the star. Let us note that such enormous magnetic fields are expected to decay on very short time scales via the mechanism of ambipolar diffusion, as investigated by Haensel et al. (1991), Goldreich \\& Reisenegger (1992) and Urpin \\& Shalybkov (1995), which is very efficient when the electric current is perpendicular to $\\vec{B}$, as in the present case. The decay time scale for non-superconducting matter computed by the above authors is $10^2$ yr for $B\\sim 10^4 {\\ \\rm GT}$ and $10^6$ yr for $B\\sim 1 {\\ \\rm GT}$. According to our study, the influence of the magnetic field on the star's structure is mostly due to Lorentz forces and not to the gravitational field generated by the electromagnetic stress-energy. This may be understood once one realizes that a magnetic field of $10^5{\\ \\rm GT}$ has an energy density of $0.25\\ \\rho_{\\rm nuc} c^2$, whereas the matter density at the centre of neutron stars is between $1$ and $10\\ \\rho_{\\rm nuc}$. Although the electromagnetic energy is much lower than the fluid mass-energy, the deformation of the star can be as dramatic as that of Fig.~\\ref{f:static,isoener,Mmax} because of the {\\em anisotropic} character of the magnetic pressure, just as the anisotropic centrifugal forces can strongly deform the star in the rotating case even though the kinetic energy is much lower than the fluid rest-mass energy. In static and slowly rotating cases, Lorentz forces stretch the star away from the symmetry axis. The deformation is appreciable only for $B>10^2{\\ \\rm GT}$. 
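The quoted comparison, that a $10^5{\ \rm GT}$ field carries an energy density of about $0.25\,\rho_{\rm nuc} c^2$, is easy to verify with $u_B = B^2/2\mu_0$. The sketch below uses an assumed nuclear density $\rho_{\rm nuc} \approx 1.66\times 10^{17}\,{\rm kg\,m^{-3}}$, which is not taken from the paper; with that choice the ratio comes out near $0.27$, consistent with the text.

```python
import math

# Assumed constants (SI units); rho_nuc is an assumption of this sketch
mu0 = 4.0e-7 * math.pi      # vacuum permeability [T m / A]
c = 2.998e8                 # speed of light [m / s]
B = 1.0e14                  # 10^5 GT expressed in teslas
rho_nuc = 1.66e17           # assumed nuclear matter density [kg / m^3]

u_B = B ** 2 / (2.0 * mu0)  # magnetic energy density [J / m^3]
u_nuc = rho_nuc * c ** 2    # nuclear rest-mass energy density [J / m^3]

ratio = u_B / u_nuc         # of order the quoted 0.25
```

This also makes concrete why the electromagnetic contribution to the gravitational source is subdominant: even at the extreme field value, `u_B` stays below the central matter energy density of $1$ to $10\,\rho_{\rm nuc} c^2$.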
In highly relativistic situations (supramassive sequences), the effect of the magnetic field is instead to reduce the star's equatorial radius (at fixed baryon number). The maximum poloidal magnetic field supported by neutron stars has a polar value between $4\\times 10^4$ and $1.5\\times 10^5{\\ \\rm GT}$ depending upon the EOS and the rotation state of the star. Let us recall that the magnetic field at the star's centre is two to four times higher than at the poles. The impact of the magnetic field on the maximum mass of neutron stars is very limited for magnetic fields of the order of 1 GT, whereas it is important for magnetic fields near the maximum value ($\\sim 10^5{\\ \\rm GT}$): in the static case, $M_{\\rm max}$ is increased by $13\\%$ to $29\\%$ --- depending upon the EOS --- with respect to non-magnetized configurations. In fact, the magnetic field turns out to be more efficient in increasing $M_{\\rm max}$ than rotation, except for the EOS HKP, where the maximum mass in rotation (without any magnetic field) is $21.2 \\%$ higher than $M_{\\rm max}$ for static configurations (Salgado et al. 1994a), whereas the static magnetized $M_{\\rm max}$ is only $13.3\\%$ higher. For the PandN EOS, the $M_{\\rm max}$ increases from both mechanisms are similar ($\\sim 16 \\%$), whereas for the Pol2, BJI and Diaz II EOS, the static magnetized $M_{\\rm max}$ lies above the rotating non-magnetized $M_{\\rm max}$. In the future, we plan to study the stability of the magnetized configurations presented in this article. 
Two types of instabilities may be expected to occur for high values of $\\vec{B}$: (i) a purely electromagnetic instability towards another electric current and magnetic field distribution (of lower energy) and (ii) a non-axisymmetric instability resulting in a triaxial stellar equilibrium shape, which would be the magnetic analogue of the transition from the Maclaurin spheroids to the triaxial Jacobi ellipsoids at high rotational velocities of Newtonian incompressible bodies." + }, + "9503/astro-ph9503091_arXiv.txt": { + "abstract": "In 1989, air-borne experiments (the VEGA experiment) aiming at the detection of $\\gamma$-rays of a few tens of GeV were carried out. In these experiments, nine point-source candidates along the Galactic plane were reported. Among these candidates, the five of highest significance positionally coincide with the EGRET galactic plane sources. ", + "introduction": "Recently, EGRET (on CGRO) opened a new era of GeV $\\gamma$-ray astrophysics. More than ten times as many point sources were discovered by this satellite \\cite{bib1}. On the other hand, in 1989, we carried out an air-borne experiment for detecting $\\gamma$-rays of a few tens of GeV \\cite{bib2,bib3,bib4}. In the first experiment, there was an indication of the existence of galactic plane sources. We report a possible correlation between the EGRET sources and the air-borne experiment, which was first noticed by J. R. Mattox (GSFC). ", + "conclusions": "In spite of the positional coincidence, there are two problems, as follows. \\begin{enumerate} \\item The intensities of these sources measured by the VEGA experiment are more than 10 times higher than those obtained by EGRET. \\item In the succeeding VEGA experiments, the existence of these sources could not be confirmed \\cite{bib4}. 
\\end{enumerate} However, a careful check of the distributions of the electron arrival directions in Reference \\cite{bib4} shows that there are some fluctuations in these signal regions. Therefore, if the positional coincidence is a real effect, these sources must be strongly time dependent. It would be very interesting to observe them in different energy regions, such as X-rays and VHE $\\gamma$-rays." + }, + "9503/hep-ph9503331_arXiv.txt": { + "abstract": "``Natural'' inflationary theories are a class of models in which inflation is driven by a pseudo-Nambu-Goldstone boson. In this paper we consider two models, one old and one new, in which the potential for inflation is generated by loop effects from a fermion sector which explicitly breaks a global $U(1)$ symmetry. In both models, we recover the ``standard'' natural inflation potential, $V\\left(\\theta\\right) = \\Lambda^4\\left[1 + \\cos\\left(\\theta / \\mu\\right)\\right]$, as a limiting case of the exact one-loop potential, but we carry out a general analysis of the models including the limiting case. Constraints from the COBE DMR observations and from theoretical consistency are used to limit the parameters of the models, and successful inflation occurs without the necessity of fine-tuning the parameters. ", + "introduction": " ", + "conclusions": "" + }, + "9503/hep-ph9503462_arXiv.txt": { + "abstract": "We study the equation of state (EOS) of quark matter at zero temperature, using the Color Dielectric Model (CDM) to describe confinement. Sensible results are obtained in the version of the CDM in which confinement is imposed {\\it smoothly}. The two-phase version of the model turns out to give unrealistic results for the EOS. Chiral symmetry plays a marginal r\\^ole and the quarks remain massive up to high densities. The deconfinement phase transition is smooth and unlikely to be first order. Instabilities of the quark matter and the gap equation are discussed. 
", + "introduction": "The study of the Equation Of State (EOS) of Quark Matter (QM) has become a fashionable topic in view of the forthcoming heavy-ion experiments programmed at RHIC (BNL) and at LHC (CERN) \\cite{qm93}. Furthermore, the inner structure of neutron stars is now under investigation: the connection between the composition of the star and the cooling time, which can be measured, allows one to discriminate among the various models, indicating the possible existence of a quark matter phase (see, e.g. \\cite{stars}). The study of the equation of state of matter at high densities can also give useful information to traditional nuclear physics, since one can search heavy nuclei for precursor phenomena, both of the deconfinement and/or of the chiral restoration phase transition (for a review see \\cite{brown}). In many model calculations of the deconfinement phase transition the framework of the MIT bag model has been used \\cite{rafe,giapu}. In this way a first order deconfinement phase transition is obtained (apart from specific, {\\it ad hoc} choices of the model parameters) and the deconfinement phase transition coincides with the chiral restoration one. At densities and temperatures slightly above the critical ones the right degrees of freedom are already quarks, with current masses, and perturbative gluons. There are, however, indications from lattice calculations that at temperatures above the critical one non-perturbative effects are still present in the quark--gluon plasma \\cite{blaizot}. In this paper we study the EOS of QM using the Color Dielectric Model (CDM) to describe confinement \\cite{pirner}. We briefly review the CDM in sect. 2. This model has been widely used to study both the static and the dynamical properties of the nucleon. 
Moreover, it can be used to describe many--nucleon systems: for a two nucleon system it allows one to compute a nucleon--nucleon potential qualitatively similar to the ones used in nuclear physics \\cite{seiquark}; in the case of a homogeneous, infinite system of nucleons the CDM can be used to construct a nonlinear version of the Walecka model \\cite{birsewal}. The aim of our work is to extend previous calculations of the deconfinement phase transition, where the same model has been used \\cite{mitia}. An important point of our calculation will be to fix the model parameters in order to reproduce the basic static properties of the single nucleon, as was already done in the study of the nucleon structure functions \\cite{structure}. We will later use the same parameters to study the EOS of QM, which we define as a system of totally deconfined quarks. In this way the study of the QM's EOS will turn out to be a severe test for the different versions of the CDM, and we will be able to make some predictions of the properties of matter at high densities. Within the CDM (with a double minimum potential for the scalar field), we will investigate the possibility of getting a {\\it scenario} similar to the one described by the MIT bag model, with two phases undergoing a sharp first order phase transition (sect. 3). Our results show that such a description is incompatible with the CDM. We then study another version of the CDM (with a single minimum potential for the scalar field), where confinement is imposed more smoothly, and we get a sensible EOS for QM, without a sharp deconfinement transition. An important feature of this deconfined quark matter is that the quark masses remain large up to high densities. Chiral restoration and deconfinement do not occur at the same density. The reasons why chiral symmetry is restored so slowly are discussed in sect. 4. In the last sections we analyze the properties of QM, as described in the CDM. In sect. 
5 the stability of quark matter is studied, using the machinery of the response function. In sect. 6 we consider a gap equation, trying to understand the formation of quark clusters. Finally, sect. 7 is devoted to the concluding remarks. ", + "conclusions": "In this paper we have studied quark matter, using the non-perturbative tool of the CDM. Let us summarize our main results: -- of all the considered versions of the model, only one gives sensible results, i.e. the one in which confinement is imposed in the smoothest way. The other versions of the model give an exceedingly low energy per baryon number for the quark matter. -- the SM (p=1) version gives an EOS for the quark matter which is almost identical to the EOS of nuclear matter as computed using the Walecka model, for the range of densities $\\rho_{eq}\\le\\rho\\le 2\\rho_{eq}$. In this range, the difference in the energy per baryon number between the nuclear matter and the quark matter is very small. Taking into account the theoretical uncertainty in fixing the parameters (Sec.2B), this energy difference is of the order of 20 MeV. At densities smaller than $\\rho_{eq}$ the energy difference rapidly increases, and for densities higher than $2\\rho_{eq}$ the quark matter is the energetically most favourable state. An important point is that the minimum of the EOS of quark matter is at a density of the order of $\\rho_{eq}$, and this result does not depend on a fine tuning of the parameters. -- the mass of the quarks remains large (of the order of 100 MeV) up to high densities, much higher than the density at which quark matter becomes the ground state. The deconfinement phase transition and the chiral symmetry restoration occur at totally different densities. -- the quark matter (in SM and p=1) becomes unstable at low densities, of the order of $\\rho_{eq}/2$. 
The instability can be obtained both from the study of the compressibility (the system is unstable where the compressibility becomes negative) and from the study of the collective states at zero energy transfer. The two methods give the same critical density. -- the process of clusterization can be studied by considering the correlations between the particles, beyond the mean field approximation. The gap we obtained seems rather small. We have to bear in mind that we have oversimplified the problem, by linearizing the residual interaction (and thus getting an approximate propagator for $\\tilde\\chi$, good only for small fluctuations), and by considering only two-body correlations, whereas the three-body ones are probably the most relevant. To conclude, we would like to consider three possible applications of the model. -- Cooling of neutron stars (see C.J.Pethick in \\cite{brown}). A mechanism called URCA has been invoked to explain the rapid cooling of neutron stars. This mechanism proceeds via the exchange of electrons between neutrons and protons, which cool down by emitting neutrinos and antineutrinos: \\be n\\rightarrow p+e^{-}+\\bar\\nu_e \\ee \\be p+e^{-}\\rightarrow n+\\nu_e \\ee A minimal fraction of protons is required in order to fulfil momentum and energy conservation. This critical fraction is of the order of 1/9. Using traditional nuclear physics models to compute the proton fraction, one gets numbers slightly smaller than the critical one, and the URCA mechanism cannot start. Another possibility is to invoke the presence of quark matter in the core of the star, and to consider reactions similar to the ones previously described, but with the electron now exchanged between up and down quarks. In this case the problem is that, if quark matter is described by the MIT bag model, the quarks are massless and the phase space is thus zero. 
Therefore one would need a massive quark matter phase, and the possibility of reaching this phase at the density of the core of neutron stars, typically of the order of $5\\rho_{eq}$. This situation is actually the one described by the SM (p=1) version of the CDM. The URCA mechanism should therefore be possible, and with a high luminosity, too. -- Energy released in supernova explosions. Using a traditional nuclear physics approach, the energy released in supernova explosions is generally too small. A softer EOS could solve the problem, but if one uses e.g. the MIT bag model to study matter at high density, the deconfinement phase transition is reached at densities larger than the ones presumably reached in the collapse of the star. The EOS for matter at high density as computed in the CDM is softer than the EOS of nuclear matter, and presumably a similar result will be obtained when computing neutron matter. Furthermore, the softening starts at densities of the order of $2\\rho_{eq}$. -- EMC effect and swelling of the nucleon. To conclude, let us consider the problem of the possible swelling of nucleons embedded in a nucleus. If one considers, e.g., electron scattering on heavy nuclei, one realizes that the swelling is a sensible mechanism, but it must be of the order of some $5\\%$ in order to be realistic. The real problem is thus not to obtain a swelling, but to avoid too large an effect. In other words, the nucleons must not dissolve when embedded in a nucleus. Since the minimum of the EOS of the quark matter is at a density near $\\rho_{eq}$, a swelling mechanism can appear only in the centre of heavy nuclei. The exact amount of swelling depends on the precise difference in energy between the quark matter and the nuclear matter at densities $\\sim\\rho_{eq}$, and is beyond the scope of the present calculation. We are now carrying out research in all the directions previously outlined. 
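The critical proton fraction of about 1/9 quoted in the URCA discussion follows from Fermi-momentum balance in degenerate $n$, $p$, $e$ matter: momentum conservation requires $p_{F_n} \\le p_{F_p} + p_{F_e}$, and charge neutrality gives $p_{F_e} = p_{F_p}$. A small numerical sketch of this standard textbook argument (not part of the paper's own calculation):

```python
# Direct URCA threshold: p_Fn <= p_Fp + p_Fe with p_Fe = p_Fp (charge
# neutrality), hence p_Fn <= 2 p_Fp.  Since n ~ p_F^3, this means n_n <= 8 n_p,
# i.e. a proton fraction x = n_p / (n_n + n_p) of at least 1/9.
def urca_allowed(x):
    """x = proton fraction n_p / (n_n + n_p); True if direct URCA is open."""
    n_p, n_n = x, 1.0 - x
    p_fp = n_p ** (1.0 / 3.0)   # Fermi momenta in arbitrary common units
    p_fn = n_n ** (1.0 / 3.0)
    return p_fn <= 2.0 * p_fp   # p_Fe = p_Fp already folded in

x_crit = 1.0 / 9.0              # analytic threshold: n_n = 8 n_p
print(urca_allowed(0.10), urca_allowed(0.12))  # -> False True
```

Just below the threshold the process is blocked, just above it is open, which is why the small shortfall in the computed proton fraction matters.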
\\appendix" + }, + "9503/hep-ph9503295_arXiv.txt": { + "abstract": "A recently reported anomaly in the time structure of signals in the KARMEN neutrino detector suggests the decay of a new particle $x$, produced in $\\pi^+ \\to \\mu^+ x$ with mass $m_x=33.9$ MeV. We discuss the constraints and difficulties in interpreting $x$ as a neutrino. We show that a mainly-sterile neutrino scenario is compatible with all laboratory constraints, within narrow limits on the mixing parameters, although there are problems with astrophysical and cosmological constraints. This scenario predicts that appreciable numbers of other $x$-decay events with different origins and time structures should also be observable in the KARMEN detector. Such $x$-decay events should also be found in the LSND experiment and may be relevant to the search for $\\bar\\nu_\\mu\\to\\bar\\nu_e$ oscillations. ", + "introduction": " ", + "conclusions": "" + }, + "9503/astro-ph9503089_arXiv.txt": { + "abstract": "We have obtained high signal-to-noise optical spectroscopy at 5\\AA\\ resolution of 27 quasars from the APM z$>$4 quasar survey. The spectra have been analyzed to create new samples of high redshift Lyman-limit and damped Lyman-$\\alpha$ absorbers. These data have been combined with published data sets in a study of the redshift evolution and the column density distribution function for absorbers with $\\log$N(HI)$\\ge17.5$, over the redshift range 0.01 $<$ z $<$ 5. The main results are: \\begin{itemize} \\item Lyman limit systems: The data are well fit by a power law $N(z) = N_0(1 + z)^{\\gamma}$ for the number density per unit redshift. For the first time, intrinsic evolution is detected in the product of the absorption cross-section and comoving spatial number density for an $\\Omega = 1$ Universe. We find $\\gamma = 1.55$ ($\\gamma = 0.5$ for no evolution) and $N_0 = 0.27$, with $>$99.7\\% confidence limits for $\\gamma$ of 0.82 \\& 2.37. 
\\item Damped \\lya systems: The APM QSOs provide a substantial increase in the redshift path available for damped surveys at $z>3$. Eleven candidate and three confirmed damped Ly$\\alpha$ absorption systems have been identified in the APM QSO spectra covering the redshift range $2.8\\le z \\le 4.4$ (11 with $z>3.5$). Combining the confirmed and candidate damped \\lya absorbers from the APM survey with those from previous surveys, we find evidence for a turnover at z$\\sim$3 or a flattening at z$\\sim$2 in the cosmological mass density of neutral gas, $\\Omega_g$. \\end{itemize} The Lyman limit survey results are published in Storrie-Lombardi, et~al., 1994, ApJ, 427, L13. Here we describe the results for the DLA population of absorbers. ", + "introduction": "How and when galaxies formed are questions at the forefront of work in observational cosmology. Absorption systems detected in quasar spectra provide the means to study these phenomena up to z$\\sim$5, back to when the Universe was less than 10\\% of its present age. While the baryonic content of the spiral galaxies observed at the present epoch is concentrated in stars, in the past this must have been in the form of gas. Damped \\lya absorption (DLA) systems have neutral hydrogen column densities of N(HI)$> 2 \\times 10^{20}$cm$^{-2}$. They dominate the baryonic mass contributed by HI. The principal gaseous component in spirals is HI, which has led to surveys for absorption systems detected by the DLA they produce (Wolfe, Turnshek, Smith \\& Cohen 1986 [WTSC]; Lanzetta et~al. 1991 [LWTLMH]; Lanzetta, Wolfe \\& Turnshek 1995 [LWT95]). We extend the earlier work on Lyman limit systems and DLAs to higher redshifts using observations of QSOs from the APM z$>$4 QSO survey (Irwin, McMahon \\& Hazard 1991). These data more than triple the redshift path surveyed at z$>$3 and allow the first systematic study up to z=4.5. ", + "conclusions": "The QSOs from the APM survey more than triple the $z>3$ redshift path for DLA surveys. 
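The absorber counts quoted in this survey all take the power-law form $N(z) = N_0(1+z)^{\\gamma}$; a minimal sketch evaluating the Lyman-limit best-fit values from the abstract ($N_0 = 0.27$, $\\gamma = 1.55$), with the quoted no-evolution slope $\\gamma = 0.5$ shown at the same normalization purely for illustration:

```python
# Number density of Lyman-limit systems per unit redshift, N(z) = N0 (1+z)^gamma.
def n_per_dz(z, n0=0.27, gamma=1.55):
    return n0 * (1.0 + z) ** gamma

for z in (2.0, 3.0, 4.0):
    best = n_per_dz(z)             # best-fit evolving population
    noev = n_per_dz(z, gamma=0.5)  # no-evolution slope, same normalization
    print(f"z={z:.0f}: N(z)={best:.2f}  (gamma=0.5 would give {noev:.2f})")
```

At $z = 4$ the best fit predicts roughly 3.3 absorbers per unit redshift, several times the no-evolution extrapolation, which is the sense in which intrinsic evolution is detected.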
Fourteen candidate DLA systems have been identified in the APM spectra covering $2.8\\le z \\le 4.4$ (11 with $z>3.5$), with 3 confirmed. Combining these data with the previous surveys and fitting a single power law for z$>$1.5 gives N(z)$=0.03(1+z)^{1.5\\pm0.6}$, marginally consistent with no-evolution models. Evolution is evident in the highest column density absorbers, with the incidence of systems with log N(HI)$\\ge$21 apparently decreasing for z\\simgt 3.5. We find evidence for a turnover or flattening in the cosmological mass density of neutral gas, $\\Omega_g$, at high redshift. The more gradual evolution of $\\Omega_g$ than previously found helps alleviate the `cosmic G-dwarf problem' (LWT95), i.e. if a large amount of star formation had taken place between z=3.5 and z=2, a much larger percentage of low metallicity stars should exist than is detected. It is also consistent with the suggestion by Pettini et~al. (1994) that the wide range in DLA metallicities measured at the same epoch indicates that at z$\\sim$2 they are observed prior to the bulk of star formation in the disk. \\vfill\\eject" + }, + "9503/hep-ph9503355_arXiv.txt": { + "abstract": "We propose a criterion to classify hybrid defects occurring in field theoretic models such as the standard electroweak model. This criterion leads us to consider the minimal extension of the electroweak model in which electroweak magnetic monopoles and $Z$-strings are topological. We briefly discuss the cosmology of such defects. ", + "introduction": " ", + "conclusions": "" + }, + "9503/hep-ph9503293_arXiv.txt": { + "abstract": " ", + "introduction": "The Peccei-Quinn symmetry \\cite{Peccei/Quinn} is still the most attractive solution to the strong CP problem of QCD. As a consequence of the spontaneous breaking of that symmetry, the axion is born \\cite{Weinberg...}. 
The axion properties and their phenomenological consequences have been studied in depth (for a review see \\cite{Peccei/Jarlskog}), and some experiments trying to discover the axion are under way (for a review see \\cite{Sikivie}). Axions might be constituents of the dark matter of the Universe, and this makes the search experiments even more fascinating. Almost all experiments so far designed to search for light axions make use of the coupling of the axion to two photons \\begin{equation} \\label{lag.axions} {\\cal L} = \\frac{1}{8} \\ g_{a \\gamma \\gamma} \\ \\varepsilon_{\\mu \\nu \\alpha \\beta} F^{\\mu \\nu} F^{\\alpha \\beta} \\ a \\end{equation} The coupling $g_{a\\gamma\\gamma}$ is proportional to the axion mass $m_a$ \\begin{equation} \\label{axioncoupling} g_{a \\gamma \\gamma} \\approx \\frac{\\alpha}{2\\pi} \\ \\frac{m_a}{1 \\ \\mbox{eV}} \\ 10^{-7} \\ \\mbox{GeV}^{-1} \\end{equation} An interesting question is whether these dedicated experiments are 1) only sensitive to the axion or 2) able to discover another class of particles. The answer is 2). Indeed, any light pseudoscalar particle $\\phi$ coupled to two photons \\begin{equation} \\label{lag.pseu} {\\cal L} = \\frac{1}{8} \\ g \\ \\varepsilon_{\\mu \\nu \\alpha \\beta} F^{\\mu \\nu} F^{\\alpha \\beta} \\ \\phi \\end{equation} with a strong enough coupling $g$ would induce a positive signal in some of the axion searches. Of course, a scalar particle coupled to two photons would also be detected in such experiments. To simplify the presentation of the paper, we will first thoroughly discuss the pseudoscalar case. In Sec. 6 we will compare the scalar to the pseudoscalar case. With all this in mind, we have studied the phenomenology and consequences of a light particle $\\phi$ that couples {\\bf only} to two photons with strength $g$. We consider exclusively this type of interaction, Eq. 
(\\ref{lag.pseu}), since the existence of this interaction is the only requirement for having a signal in the axion experiments. By making this assumption, however, we are not generalizing the axion. Our particle $\\phi$ cannot be identified with the axion, since the axion couples to leptons, quarks and nucleons, and $\\phi$ does not. In this spirit, we will also assume that the coupling $g$ and the mass $m$ of the $\\phi$ particle are not related, as they are for the axion, Eq. (\\ref{axioncoupling}). In principle, we should consider arbitrary $\\phi$ masses, but since we know that axion experiments are sensitive to very light axions, we will restrict the range of masses; we will only consider $m \\leq 1$ GeV. In this paper, we will investigate the laboratory, astrophysical and cosmological constraints on $\\phi$. Some of the axion constraints can be directly translated into constraints on $g$ and $m$, but some cannot. We will also answer the question of whether the relic $\\phi$ particles can be, for some range of parameters, the dark matter of the Universe. Another issue we will study is the consequence of adding other couplings to $\\gamma \\gamma \\phi$ in such a way that the full $SU(2) \\times U(1)$ gauge invariance holds at high energies. We finish this section with some general remarks. As we said, the motivation that has led us to assume a light particle coupled only to photons is the fact that experiments are sensitive to such a possibility. As far as we know, there is no current theoretical model where such peculiarities arise. In fact, one may even wonder whether it can ever occur. The point is that we know that the coupling of (quasi) Goldstone bosons to photons proceeds through anomalous triangle graphs, where the boson couples to charged particles. This is the situation for the neutral pion and in the axion model. 
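The mass--coupling relation of Eq. (\\ref{axioncoupling}) is straightforward to evaluate numerically; a minimal sketch (the value $\\alpha \\approx 1/137.036$ for the fine-structure constant is an assumed standard input, not given in the text):

```python
# g_agg ~ (alpha / 2 pi) * (m_a / 1 eV) * 1e-7 GeV^-1, as quoted in the text.
import math

ALPHA = 1.0 / 137.036  # fine-structure constant (assumed standard value)

def g_agg(m_a_ev):
    """Axion-photon coupling in GeV^-1 for an axion mass given in eV."""
    return (ALPHA / (2.0 * math.pi)) * m_a_ev * 1e-7

print(f"m_a = 1 eV  -> g ~ {g_agg(1.0):.2e} GeV^-1")
print(f"m_a = 1 meV -> g ~ {g_agg(1e-3):.2e} GeV^-1")
```

For a 1 eV axion this gives $g \\sim 10^{-10}\\ {\\rm GeV}^{-1}$, which sets the scale of coupling that dedicated searches must reach; the point of the paper is that a generic $\\phi$ need not obey this proportionality.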
One may argue that, in order to couple $\\phi$ to photons, $\\phi$ has to couple to charged particles, and one may conclude that our assumption of the absence of couplings to matter is inconsistent. We would like to point out that one may think of scenarios where the only coupling that one may constrain at low energies is $g$ in Eq. (\\ref{lag.pseu}). We need to introduce particles that are very heavy and carry a new quantum number. We also have to impose that $\\phi$ carries this quantum number, and that the known leptons and quarks do not. The anomalous graphs with a triangle loop of new particles would then induce the effective coupling of Eq. (\\ref{lag.pseu}). For heavy enough new particles, the important coupling of $\\phi$ at low energies would be to photons, and all the constraints discussed in this paper do not need to be modified or reconsidered. A related point is the fact that the effective Lagrangian (\\ref{lag.pseu}) can only be used for energies $E \\ll g^{-1}$. We keep this restriction in mind in all the calculations. ", + "conclusions": "Most experiments searching for axions are based on the axion coupling to two photons. These experiments are also sensitive to a pseudoscalar (or scalar) particle $\\phi$ that couples only to two photons, and not to leptons, quarks and nucleons. Motivated by this fact, we have examined the constraints on such a particle, and investigated to what extent $\\phi$ can be the dark matter of the Universe. Some of the constraints can be deduced quite easily from studies on the axion, and other constraints have been deduced in this paper. The laboratory, astrophysical and cosmological limits are shown in Fig. 2. High-energy searches for $e^+ e^- \\rightarrow \\gamma + invisible$ give the best constraints among what we have classified as non-dedicated experiments. Among the dedicated experiments, the solar flux detection gives strong limits once one assumes $\\phi$ production in the solar core. 
Laser experiments give poorer limits, but are free of any astrophysical assumption. The telescope search gives very strong constraints, but in a very limited range of $\\phi$ masses. Consideration of He-burning stars allows one to place very stringent limits for $m \\leq O(10\\ \\mbox{keV})$. For higher $\\phi$ masses, one has to rely on the limits from SN1987A observations and from considerations of big bang nucleosynthesis. We have studied the cosmological evolution of the $\\phi$ species, and calculated the relic $\\phi$ density. The interesting range of masses and couplings that leads to a $\\phi$ density such that $\\phi$ can be at least the galactic dark matter is shown in Fig. 6. Unfortunately, the mass range interesting for dark matter is much higher than the masses to which most of the existing experiments are sensitive. Only the telescope search experiment is sensitive to masses that are close to the dark matter range. Another conclusion we have reached is that, if $\\phi$ is a relic species, it must be a hot relic. The case where $\\phi$ is a scalar particle is very similar to the pseudoscalar case, regarding the constraints on the coupling and our conclusions on dark matter. A final aspect we have studied is the $SU(2) \\times U(1)$ gauge invariant generalization of the $\\phi$ interactions. In addition to the vertex $\\gamma \\gamma \\phi$, one then has a vertex of the type $\\gamma Z \\phi$, as well as other exotic couplings. Experimental data from $e^+ e^- \\rightarrow \\gamma + missing$ at the $Z$ peak lead to limits on the coupling $g$ that are stronger than the limits obtained from this process without the gauge invariant generalization." + }, + "9503/nucl-th9503017_arXiv.txt": { + "abstract": "We argue that, prior to the recent GALLEX $^{51}$Cr source experiment, the excited state contributions to the $^{71}$Ga capture cross section for $^{51}$Cr and $^7$Be neutrinos were poorly constrained, despite forward-angle (p,n) measurements. 
We describe the origin of the uncertainties and estimate their extent. We explore the implications of the source experiment for solar neutrino capture in light of these uncertainties. A reanalysis of the $^7$Be and $^8$B flux constraints and MSW solutions of the solar neutrino puzzle is presented. ", + "introduction": " ", + "conclusions": "" + }, + "9503/astro-ph9503121_arXiv.txt": { + "abstract": "In an $\\Omega=1$ universe dominated by nonrelativistic matter, the velocity field and the gravitational force field are proportional to each other in the linear regime. Neither of these quantities evolves in time, and they can be scaled suitably so that the constant of proportionality is unity and the velocity and force fields are equal. The Zeldovich approximation extends this feature beyond the linear regime, until the formation of pancakes. Nonlinear clustering, which takes place {\\it after} the breakdown of the Zeldovich approximation, breaks this relation, and the mismatch between these two vectors increases as the evolution proceeds. We suggest that the difference of these two vectors could form the basis for a powerful new statistical indicator of nonlinear clustering. We define an indicator called the velocity contrast, study its behaviour using N-Body simulations and show that it can be used effectively to delineate the regions where nonlinear clustering has taken place. We discuss several features of this statistical indicator and provide simple analytic models to understand its behaviour. Particles with velocity contrast higher than a threshold have a correlation function which is biased with respect to the original sample. This bias factor is scale dependent and tends to unity at large scales. ", + "introduction": "Large scale structures like galaxies are believed to have formed out of small density perturbations via gravitational instability. This process, in most popular models, is driven by dark matter, which is the dominant constituent of the universe. 
We can compute the rate of growth of clustering using linear theory when the perturbations are small. Linear theory, however, has a very limited domain of validity, and we have to resort to numerical simulations for studying the evolution of inhomogeneities at late epochs. In the linear regime the density field is related to the velocity field in a unique manner [in the growing mode] and the density field alone specifies the system completely. The evolution of perturbations is described by a second order differential equation, and specification of the initial density field and velocity field completely determines the state of the system at any later time. However, for a given nonlinear density field there is no practical method for computing the velocity field. Our understanding of the nonlinear regime will improve if we have a simple physical indicator of the velocity field. We introduce the velocity contrast, a new statistical indicator that may be used to quantify some features of the velocity field. The velocity contrast can be used for comparison of simulation results with observations. It provides a simple and stable algorithm, in contrast with some other methods that are used for this purpose. Such methods are required because studies of nonlinear gravitational clustering have focussed mainly on aspects relating to dark matter. Comparison of these studies with observations is made difficult by the fact that we observe only sources of light. Many techniques have been devised for isolating regions that can host galaxies in numerical simulations. Some of these are quite elaborate, like DENMAX {\\cite{denmax}}, and hence computationally very intensive. Other schemes, like the density threshold or the friends-of-friends algorithm, are very simple to implement but have problems with interlopers, as these algorithms do not use dynamical information. We show that the velocity contrast can be used to isolate regions of interest in a relatively simple and robust manner. 
In the next section, we briefly review the dynamical evolution of trajectories in a system undergoing gravitational collapse. This is used to motivate the form of the new indicator, which is introduced in \\S 3. In \\S 4 we use N-Body simulations to study the velocity contrast for CDM and an HDM-like spectrum. \\S 5 contains a discussion of the new indicator using the spherical model and nonlinear approximations. In \\S 6 we compare the density and velocity contrasts and study the average relation between them as well as the dispersion around it. We also discuss the clustering properties of the nonlinear mass with respect to the total mass. ", + "conclusions": "The purpose of this note was to introduce this new statistical parameter, study its behaviour and establish a prima facie case that it is worth considering further. It will be interesting to see whether one can develop an analytic model for the evolution of this statistical indicator -- or a closely related one. This and related questions are under investigation. We are also studying the bias parameter introduced here in detail, including the effect of mass resolution, etc. This work is in progress and will be reported elsewhere {\\cite{bias_pr}}. JSB is supported by a Senior Research Fellowship of CSIR India." 
+ }, + "9503/hep-ph9503342_arXiv.txt": { + "abstract": "\\rightline{UMN-TH-1319/94} \\rightline{November 1994} \\centerline{\\tenbf SUPERSYMMETRIC ``SOLUTIONS\" TO COSMOLOGICAL PROBLEMS: } \\baselineskip=16pt \\centerline{\\tenbf BARYOGENESIS AND DARK
MATTER\\footnote{To be published in the proceedings of the Joint US-Polish Workshop on Physics from Planck Scale to Electroweak Scale, Warsaw, Poland, September 21-24, 1994, eds. S. Pokorski, P. Nath, and T. Taylor (World Scientific, Singapore).}} \\vspace{0.8cm} \\centerline{\\tenrm KEITH A. OLIVE} \\baselineskip=13pt \\centerline{\\tenit School of Physics and Astronomy, University of Minnesota} \\baselineskip=12pt \\centerline{\\tenit Minneapolis, MN 55455, USA} \\vspace{0.9cm} \\abstracts{The possible role of supersymmetry in our understanding of big bang baryogenesis and cosmological dark matter is explored. The discussion will be limited to the out-of equilibrium decay scenario in SUSY GUTs, the decay of scalar condensates, and lepto-baryogenesis as a means for generating the observed baryon asymmetry. Attention will also be focused on neutralino dark matter. } \\vfil \\rm\\baselineskip=14pt ", + "introduction": "There are several outstanding problems in cosmology which rely on particle physics solutions. If supersymmetry (broken as it may be) is realized in nature, then it is not unreasonable to expect that supersymmetry plays a non-trivial role in the solutions to these problems. The two specific problems that I will concentrate upon here are: the origin of the baryon asymmetry and the nature of dark matter. The former problem has historically been associated with Grand Unified Theories (GUTs) and among the original ideas to generate the asymmetry was the out-of-equilibrium decay scenario\\cite{ww}. I will begin, therefore, with a look back at the supersymmetric versions of this scenario. There are also purely supersymmetric solutions to baryogenesis, most notably is the decay of scalar condensates known as the the Affleck- Dine (AD) scenario\\cite{ad} which will also be briefly discussed. I will comment on the role of cosmological inflation on both the out-of-equilibrium decay and the AD scenarios. 
Finally, it is no longer sufficient to generate a baryon asymmetry; one must also preserve it in the face of the baryon number violating interactions associated with the standard electroweak model\\cite{krs}. These interactions, however, open up new possibilities for generating an asymmetry, such as the out-of-equilibrium decay of superheavy leptons\\cite{fy1}. These possibilities (in the context of supersymmetry) will also be discussed. There are many possible solutions to the dark matter problem, many of which do not involve supersymmetry (nor any new particle physics candidate). However, the minimal supersymmetric standard model (MSSM) with unbroken R-parity does offer (in much of the parameter space) a cosmologically interesting dark matter candidate: the lightest supersymmetric particle, or LSP\\cite{ehnos,osi34}. The most likely choice is the supersymmetric partner of the U(1)-hypercharge gauge boson, the bino. Though the ``entire\" supersymmetric parameter space will be surveyed, I will focus on the bino as the LSP. The curious possibility that the LSP is a light photino nearly degenerate with the lighter stop quark\\cite{fmyy,or2} will also be discussed. ", + "conclusions": "" + }, + "9503/astro-ph9503039_arXiv.txt": { + "abstract": "We analyze the limitations imposed by photon counting statistics on extracting useful information about MACHOs from Earth-based parallax observations of microlensing events. We find that if one or more large (say $2.5\\,\\rm m$) telescopes are dedicated to observing a MACHO event for several nights near maximum amplification, then it is possible, in principle, to measure the velocity of the MACHO well enough to distinguish between disk and halo populations for events with $\\omega {A_m}\\!^2 \\gta 1\\,\\rm day^{-1}$, where $\\omega^{-1}$ denotes the timescale of the event and $A_m$ denotes its maximum amplification. 
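The feasibility criterion quoted above is simple enough to check numerically. The sketch below is illustrative only: the function name and the sample event parameters are invented for the example; the criterion $\omega {A_m}^2 \gtrsim 1\,\rm day^{-1}$, with $\omega$ the inverse event timescale, is taken from the abstract.

```python
def parallax_feasible(timescale_days, a_max):
    """Check the criterion omega * A_m**2 >~ 1/day, with omega the
    inverse event timescale (hypothetical helper for illustration)."""
    omega = 1.0 / timescale_days  # inverse timescale in day^-1
    return omega * a_max ** 2 >= 1.0

# A 40-day event peaking at A_m = 10 gives 100/40 = 2.5 day^-1: feasible.
# The same event peaking at A_m = 2 gives 4/40 = 0.1 day^-1: not feasible.
```

High-amplification events thus satisfy the criterion even for fairly long timescales, which is the regime the abstract singles out.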
Thus, if it turns out to be possible to reduce all other sources of error to below that of photon statistics, it may be feasible to do useful Earth-based parallax observations for high amplification events. ", + "introduction": " ", + "conclusions": "" + }, + "9503/gr-qc9503031_arXiv.txt": { + "abstract": "\\normalsize\\noindent The importance of general relativity to the induced electric field exterior to pulsars has been investigated by assuming aligned vacuum and non-vacuum magnetosphere models. For this purpose the stationary and axisymmetric vector potential in Schwarzschild geometry has been considered and the corresponding expressions for the induced electric field due to the rotation of the magnetic dipole have been derived for both vacuum and non-vacuum conditions. Due to the change in the magnetic dipole field in curved spacetime the induced electric field also changes its magnitude and direction and increases significantly near the surface of the star. As a consequence the surface charge density, the acceleration of charged particles in vacuum magnetospheres and the space charge density in non-vacuum magnetosphere greatly increase near the surface of the star. The results provide the most general feature of the important role played by gravitation and could have several potentially important implications for the production of high-energy radiation from pulsars. \\\\ {\\bf Subject headings:} electromagnetism : theory -- pulsars : general -- relativity -- stars : neutron ", + "introduction": " ", + "conclusions": "" + }, + "9503/hep-ph9503223_arXiv.txt": { + "abstract": "We study the formation of vortices in a U(1) gauge theory following a first-order transition proceeding by bubble nucleation, in particular the effect of a low velocity of expansion of the bubble walls. 
To do this, we use a two-dimensional model in which bubbles are nucleated at random points in a plane and at random times and then expand at some velocity $v_{\\rm b}$. While the speed of the junction between two colliding bubbles exceeds $v_{\\rm f}$, the fluxons are fixed to them; but when the junction speed falls below $v_{\\rm f}$, the fluxons are freed and continue to move independently with speed $v_{\\rm f}$, bouncing off any other bubbles they encounter. If we wait long enough, the bubbles will of course percolate and fill the whole of space, so all fluxons will eventually become trapped. The gaps between the bubbles will finally close, trapping vortices with quantized flux, so the fluxons will either annihilate or be forced together in threes to form vortices. But the total flux trapped where three bubbles finally coalesce {\\it cannot\\/} now be found just by looking at their initial phases. Some of their fluxons may have escaped, while others from farther afield may have wandered in. When a fluxon passes between two bubbles, it changes the relative phase between them, so following all the changes in phase as the system evolves would be a very complicated task. Fortunately, it is not necessary to do so. Our strategy is not to choose the phases until it becomes necessary, at bubble collisions. When two bubbles collide, there are two distinct cases. If they belong to {\\it disjoint\\/} bubble clusters, the relative phase between these clusters has not yet been fixed, so we make a random choice, here restricted to the three discrete values. In principle it would be possible to trace the evolution of the phase difference back, following the movements of all the intervening fluxons, to discover what the initial phase difference was when the bubbles nucleated, but this is not something we ever need to know. Whenever it is chosen, the phase difference is random. The other possibility is that the two bubbles belong to the {\\it same\\/} cluster. 
Then the collision completes a circuit within the bubble cluster, usually enclosing a region of the symmetric phase, or splitting an already enclosed region into two. (Other cases are described below.) In that case, the relative phase between the two colliding bubbles is already in principle fixed by earlier choices, so we do not have a random choice to make. In fact, by consistency, the relative phase must be such as to ensure that the total flux within the newly enclosed region is an integer number of flux quanta, i.e., that the net fluxon number is a multiple of three. In line with the geodesic rule, we assume the phase difference is as small as possible consistent with this condition. In other words, we create at most a single fluxon--anti-fluxon pair. The algorithm we adopt is described in the following Section, its implementation in Section III and the results in Section IV. We are particularly interested in examining the dependence of the defect density on the velocity ratio $v_{\\rm b}/v_{\\rm f}$. If the bubble-wall velocity is low, one expects the number of defects per bubble to be reduced. This is because three-bubble collisions will less often trap strings, since the phases of the first two bubbles may have equilibrated before they encounter the third. In our model this effect is represented by the escape of the relevant fluxons. In the three-dimensional case, another effect could be to change the ratio of long strings to small loops. In two dimensions, the analogue of a small loop is a close vortex--anti-vortex pair, so we also study the ratio between the mean nearest-neighbor vortex--anti-vortex distance and the corresponding vortex--vortex one. For $v_{\\rm b}=v_{\\rm f}$, there is strong vortex--anti-vortex correlation: the ratio is substantially less than one. 
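As a minimal illustration of the geodesic rule invoked above, the sketch below computes the net winding (and hence the number of trapped flux quanta) around a closed loop of bubble phases, always choosing the smallest phase difference between neighbours. The function names are our own; the three discrete phase values used in the model can be taken as $0$, $2\pi/3$ and $4\pi/3$.

```python
import math

def geodesic_diff(a, b):
    """Phase difference b - a mapped to (-pi, pi]: the geodesic choice."""
    d = (b - a) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def winding(phases):
    """Net winding number around a closed loop of bubble phases."""
    steps = (geodesic_diff(phases[i], phases[(i + 1) % len(phases)])
             for i in range(len(phases)))
    return round(sum(steps) / (2 * math.pi))

# Three bubbles with phases 0, 2*pi/3, 4*pi/3 trap one flux quantum;
# equal phases trap none.
```

This is exactly the sense in which a three-bubble collision can trap a vortex: the geodesic steps around the loop can sum to a full turn even though each individual step is as small as possible.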
But for $v_{\\rm b} < v_{\\rm f}$ we shall see that, in contrast to models with tilted potentials \\cite{Vach91}, the reduction in the number of defects is accompanied by a {\\it reduced} vortex--anti-vortex correlation. Our conclusions are discussed in Section V. ", + "conclusions": "Our goal in this paper was to study the statistical properties of the system of vortices formed in a first-order phase transition. In particular, we were interested in the dependence of these properties on the speed of bubble expansion, which is characterized in our model by the parameter $v_{\\rm b} /v_{\\rm f}$. Our main results are presented in Fig.\\ 7 which shows the number of vortices formed per bubble and the ratio of the average nearest-neighbor vortex--anti-vortex and vortex--vortex distances, $R = \\langle D_{\\rm d \\bar{d}} \\rangle /\\langle D_{\\rm dd} \\rangle$, as functions of $v_{\\rm b} / v_{\\rm f}$. The ratio $R$ gives a quantitative measure of the vortex--anti-vortex correlation. We see, first of all, that the number of vortices decreases as $v_{\\rm b}$ gets smaller (at fixed $v_{\\rm f}$). This is not difficult to understand. At low values of $v_{\\rm b}$, fluxon escape prevents the formation of vortices in places where they would otherwise be formed. The escaped fluxons are eventually captured, but they mix with the escaped anti-fluxons, and there is a tendency for the net flux to cancel. Annihilation of large groups of fluxons and anti-fluxons can be seen in Fig.\\ 5. Apart from a decrease in the number of defects, a visual inspection of vortex distributions in Figs.\\ 5e and 6a suggests that the flux escape decreases correlation between vortices and anti-vortices. For $v_{\\rm b} = v_{\\rm f}$ there is no flux escape, and the distribution in Fig.\\ 6a contains many close vortex--anti-vortex pairs, while there are very few such pairs for $v_{\\rm b} = 0.5 v_{\\rm f}$. 
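The ratio $R$ used above is straightforward to estimate from simulated defect positions. The brute-force sketch below (our own illustrative code, quadratic in the number of defects) computes the mean nearest-neighbor vortex--anti-vortex distance and the corresponding vortex--vortex one.

```python
import numpy as np

def nn_ratio(vortices, antivortices):
    """R = <D_{d dbar}> / <D_{dd}> from two arrays of 2-D positions."""
    v = np.asarray(vortices, dtype=float)
    a = np.asarray(antivortices, dtype=float)
    # nearest anti-vortex distance for each vortex
    d_va = np.linalg.norm(v[:, None, :] - a[None, :, :], axis=-1).min(axis=1)
    # nearest distance to another vortex (exclude self)
    dvv = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    np.fill_diagonal(dvv, np.inf)
    return d_va.mean() / dvv.min(axis=1).mean()

# Tightly bound vortex--anti-vortex pairs give R well below one
# (strong correlation); a random gas gives R near one.
```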
This trend is confirmed by the graph in Fig.\\ 7 which shows a decrease in vortex--anti-vortex correlation with a decreasing speed of bubble walls $v_{\\rm b}$. It is interesting to compare the vortex distribution for $v_{\\rm b} = v_{\\rm f}$ with that obtained using a random-phase lattice simulation (Fig.\\ 6b). The visual appearance of the latter distribution is quite different, but it also shows a strong correlation between vortices and anti-vortices. The nearest neighbors of almost all vortices are anti-vortices and vice versa. A calculation of the ratio $R$ for the lattice simulation gives $R = 0.58$, which is fairly close to the value $R = 0.5$ for $v_{\\rm b} = v_{\\rm f}$. The difference probably arises from the fact that the lattice imposes a minimum defect--anti-defect separation distance. It is noteworthy that $R$ decreases with decreasing defect density. This is in contrast to the `biased' case when the order parameter potential is tilted \\cite{Vach91}. There it is found that $R$ decreases with increasing defect density. The strong vortex--anti-vortex correlation in lattice simulations has been known for a long time \\cite{Ein80}. The total magnetic flux $\\Phi$ through a region of size $L$ is proportional to the phase variation around the region's perimeter. If the phase varies at random on the scale of the lattice spacing $\\xi$, we have $\\Phi \\propto (L/\\xi )^{1/2}$. On the other hand, the number of defects inside the region is $N \\sim (L/\\xi)^2$, and an uncorrelated distribution would give a much larger flux, $\\Phi \\propto N^{1/2} \\sim L/\\xi$. Essentially the same argument applies to our bubble simulation, but now the spread in bubble sizes results in a spread in the nearest-neighbor separations. This spread is responsible for the different visual appearance of the two distributions. The decrease in the vortex--anti-vortex correlation at low bubble speeds can be easily understood. 
Correlations are destroyed when fluxons escape from the bubble intersections where they originated. The escaped fluxons form a random gas, and we expect no correlations on small scales, where fluxons and anti-fluxons have had enough time to randomize. If $L_{\\rm r}$ is the characteristic scale on which randomization has occurred, then we expect magnetic flux fluctuations to scale as $\\Phi \\propto N^{1/2}$ for $L < L_{\\rm r}$ and as $\\Phi \\propto N^{1/4}$ for $L > L_{\\rm r}$. Finally, we briefly discuss the implications of our results for defect formation in three-dimensional phase transitions. As already mentioned in the Introduction, a close vortex--anti-vortex pair is a two-dimensional analogue of a small closed loop of string. Our results suggest that magnetic flux spreading will decrease the amount of string in small loops relative to that in infinite strings. If magnetic monopoles are formed in a slow first-order phase transition, we expect a decrease in the monopole density and in the correlation between monopoles ($M$) and anti-monopoles (${\\bar M}$). For a suitably defined scale $L_{\\rm r}$, the magnetic charge fluctuations will scale as $N^{1/2}$ for $L < L_{\\rm r}$ and as $N^{1/3}$ for $L > L_{\\rm r}$. This randomization of the monopole distribution can be important in models where monopoles get connected by strings, particularly in Langacker-Pi-type models \\cite{Lang80} where strings disappear at a subsequent phase transition. If $M$'s and ${\\bar M}$'s are strongly correlated, as in second-order or fast first-order transitions, then most of the $M{\\bar M}$ pairs get connected by the shortest possible strings, of length $l$ comparable to the average inter-monopole distance $d$. Longer strings with $l \\gg d$ are exponentially suppressed \\cite{Sik83}, \\cite{Cop86}. For monopoles formed in a slow first-order transition, the length distribution of strings can be much broader. 
Since the lifetime of $M{\\bar M}$ pairs is determined mainly by the time it takes to dissipate the energy of the string, the number of monopoles surviving after the strings disappear can be significantly affected." + }, + "9503/astro-ph9503051_arXiv.txt": { + "abstract": "ROTATION CURVES OF 967 SPIRAL GALAXIES: IMPLICATIONS FOR DARK MATTER. MASSIMO PERSIC, PAOLO SALUCCI \\& FULVIO STEL, SISSA, Via Beirut 4, 34013 Trieste, Italy. We present the
rotation curves of 967 spiral galaxies, obtained by deprojecting and folding the raw H$\\alpha$ data published by Mathewson \\etal (1992). Of these, 80 meet objective excellence criteria and are suitable for individual detailed mass modelling, while 820 are suitable for statistical studies. A preliminary analysis of their properties confirms that rotation curves are a universal function of luminosity and that the dark matter fraction in spirals increases with decreasing luminosity.} ", + "introduction": "Rotation curves (hereafter RCs) are the prime mass tracers within spiral galaxies. Therefore, knowledge of their morphology and structural implications is essential for theories and experiments concerning galaxy formation (\\eg\\ Cen \\& Ostriker 1993; Navarro \\& White 1994; Evrard \\etal 1994). The value of such knowledge is of course greatly increased when large samples of good-quality curves become available. The H$\\alpha$ velocities, referred to the plane of the sky, of nearly one thousand spirals published by Mathewson \\etal (1992; hereafter MFB) represent by far the largest available sample of (raw) measurements of galaxy rotation. However, it is well known that recessional velocities, while adequate for estimating the maximum circular velocity, require careful treatment before they can yield the actual rotation curves. In a related paper (Persic \\& Salucci 1995; hereafter PS95), after folding, deprojecting and smoothing the raw MFB data, we work out the actual RCs. Because of its size, homogeneity, quality, and spanned range of luminosities and asymptotic velocities, the sample of RCs thus obtained will serve as a main database for studies of galaxy structure. The plan of this contribution is as follows. In section 2 we outline the procedure used to obtain the RCs from the raw data. In section 3 we classify the 967 RCs into three quality subsets. Section 4 briefly illustrates the results of a preliminary analysis of these RCs. 
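In its simplest form, the deprojection step mentioned above amounts to $v_{\rm rot} = (v_{\rm los} - v_{\rm sys})/\sin i$ for pure circular motion in a thin inclined disk. The helper below is a hypothetical sketch of that single step, not the actual PS95 pipeline, which also folds and smooths the curves.

```python
import math

def deproject(v_los, v_sys, inclination_deg):
    """Circular velocity from a line-of-sight velocity, assuming pure
    rotation in a thin disk inclined by inclination_deg to the sky plane."""
    return (v_los - v_sys) / math.sin(math.radians(inclination_deg))

# An edge-on disk (i = 90 deg) needs no correction:
# deproject(1100.0, 1000.0, 90.0) -> 100.0 (km/s)
```

Note that the correction diverges as $i \to 0$, which is one reason nearly face-on systems are poor rotation tracers.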
The 967 rotation curves will be published in Persic and Salucci, 1995 (PS95). ", + "conclusions": "" + }, + "9503/astro-ph9503081_arXiv.txt": { + "abstract": "Lithium abundances in a sample of halo dwarfs have been redetermined using the new T$_{eff}$ derived by Fuhrmann et al (1994) from modelling of the Balmer lines. These T$_{eff}$ are reddening independent, homogeneous and of higher quality than those based on broad-band photometry. Abundances have been derived by generating new atmospheric models with the ATLAS-9 code of Kurucz (1993), with enhanced $\\alpha$-elements and without the overshooting option. The revised abundances show a remarkably flat {\\it plateau} in the Li-T$_{eff}$ plane for T$_{eff}$$>$ 5700 K, with no evidence of a trend with T$_{eff}$ or of a falloff at the hottest edge. Li abundances are not correlated with metallicity for [Fe/H]$<$ -1.4, in contrast with Thorburn (1994). All the determinations are consistent with the same pristine lithium abundance, and the errors estimated for individual stars fully account for the observed dispersion. The weighted average Li value for the 24 plateau stars with T$_{eff}$$>$ 5700 K and [Fe/H]$\\le$ -1.4 is [Li] = 2.210 $\\pm$ 0.013, or 2.224 when the non-LTE corrections of Carlsson et al (1994) are considered. ", + "introduction": "The lithium observed in the atmospheres of unevolved halo stars is generally believed to be an essentially unprocessed element which reflects the primordial yields. In the framework of standard BBN it provides a sensitive measure of $\\eta$=$n_{b}/n_{\\gamma}$ at the epoch of primordial nucleosynthesis and thus of the present baryon density $\\Omega_{b}$. The primordial nature of the lithium in halo dwarfs is inferred from the presence of a constant lithium abundance in all halo dwarfs where convection is not effective (T$_{eff} \\ge$ 5600 K). 
Such uniformity is taken as evidence for the absence of any stellar depletion during the formation and the long life of the halo stars, and also as evidence for the absence of any production mechanism acting either before or during the formation of the halo population. The existence of a real {\\it plateau} has recently been questioned by Thorburn (1994), Norris et al (1994) and Deliyannis et al (1993). Thorburn (1994) found trends of the Li abundance with both T$_{eff}$ and [Fe/H], while Norris et al (1994) found that the most extreme metal-poor stars give abundances lower by $\\approx$ 0.15 dex, thus questioning whether the plateau value is genuinely primordial. An intrinsic dispersion of Li abundances in the plateau was claimed by Deliyannis et al (1993) from an analysis of the {\\it observable} EW and (b-y)$_{0}$. These results open the possibility of substantial depletion by rotational mixing, for which some dispersion is expected from the different initial angular momenta of the stars, and/or of a significant Galactic lithium enrichment within the first few Gyr. It thus appears rather problematic to pick out the precise primordial value from observations of Pop II stars. Thorburn (1994) has suggested estimating it from the surface lithium abundances of the hottest and most metal-poor stars. In this work we tackle these problems by recomputing the lithium abundances for a significant subset of those stars already studied in the literature for which new and better effective temperatures are now available. A possible origin of the systematic differences among the most recent lithium abundance determinations will also be discussed. Further details can be found in Molaro et al (1995). 
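The plateau average quoted in the abstract is an error-weighted mean over the individual stars. For reference, an inverse-variance weighted mean and its formal error can be computed as below (illustrative code with made-up input values, not the actual stellar sample).

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal 1-sigma error."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# With equal errors this reduces to the plain mean, and the formal
# error shrinks as 1/sqrt(N) with the number of stars.
```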
", + "conclusions": "" + }, + "9503/hep-ph9503339_arXiv.txt": { + "abstract": "UMN-TH-1328/95, February 1995. BIG BANG NUCLEOSYNTHESIS. To be published in the proceedings of Beyond the Standard Model IV, Lake Tahoe, CA, December 13-18, 1994, eds. J. Gunion, T. Han, and J. Ohnemus (World Scientific, Singapore). KEITH A. OLIVE, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA. E-mail: Olive@mnhep.hep.umn.edu. The current status of big bang nucleosynthesis and its implications for physics beyond the standard model is reviewed. In particular, limits on the effective number of neutrino flavors and extra Z gauge boson masses are updated.", + "introduction": "The overall status of big bang nucleosynthesis is determined by comparing the rather slowly changing theoretical predictions of the light element abundances with the sometimes quickly changing observationally determined abundances. The observed elements, D, \\he3, \\he4, and \\li7, have abundances relative to hydrogen which span nearly nine orders of magnitude. 
By and large, these observations are consistent with the theoretical predictions and play a key role in establishing the consistency of what we refer to as the standard big bang model and its extrapolation to time scales on the order of one second. Here, I will review the status of this consistency. I will begin by briefly outlining the key sequence of events in the early Universe which leads to the formation of the light elements. I will then discuss the current status of the observations of each of the light elements in relation to theory. Finally, I will discuss the current limits on physics beyond the standard model. ", + "conclusions": "" + }, + "9503/astro-ph9503086_arXiv.txt": { + "abstract": "We study the resonant divergences that occur in quantum scattering cross-sections in the presence of a strong external magnetic field. We demonstrate that all such divergences may be eliminated by introducing radiative corrections to the leading-order scattering amplitudes. These corrections impose a choice of basis states that must be used in scattering calculations: electron states must diagonalize the mass operator, while photon states must diagonalize the polarization operator. The radiative corrections introduce natural line-widths into the energy denominators of all propagators, as well as into the time-development exponentials of all scattering states corresponding to external lines. Since initial and final scattering states may now decay, it is logically necessary to compute scattering amplitudes for a finite time-lapse between the preparation of the initial state and the measurement of the final state. Strict energy conservation, which appeared in previous formulations of the theory, must thus be abandoned. 
We exhibit the generic formulae for the scattering cross-sections in two useful limits, corresponding to the cases where either the initial states or the final states are stable, and discuss the application of the general formula when neither of these limits applies. ", + "introduction": "Astrophysicists have had a long-standing interest in the physics of elementary processes in super-strong magnetic fields, with field strengths $B\\gtrsim 10^{12}$~G. The cyclotron lines observed in the spectra of Her X-1 \\cite{tr78} and of 4U 0115+63 \\cite{gr80} as well as in many other X-ray pulsars have energy centers which correspond to field intensities in this range. There is also evidence for such field strengths in the spin-down rates of radio pulsars. If the spin-down is attributed to energy loss to electromagnetic radiation from a spinning magnetic dipole, many observations are consistent with field strengths of the order of $10^{12}$-$10^{13}$~G, with some pulsar field strengths well in excess of even $10^{13}$~G \\cite{ha91,ts86}. In addition, there is tantalizing evidence for cyclotron lines in the spectra of gamma-ray bursts seen by the gamma-ray burst detector aboard the GINGA satellite \\cite{mu88}, with line center energies consistent with field intensities of order $10^{12}$~G. The association of Soft Gamma Repeaters with supernova remnants provides indirect evidence for even stronger fields. If the 8~s periodicity of the March 5 1979 event is identified with the rotation period of the neutron star, the known age of the N49 remnant may be used to estimate that the field strength is approximately $6\\times 10^{14}$~G \\cite{dt92}. Moreover, such a strong field could help resolve the puzzle of how the March 5 1979 event could ostensibly have been so extravagantly in excess of the Eddington limit ($L\\approx 10^4L_{\\text{Edd}}$), by suppressing the Thomson cross-section for photons propagating nearly parallel to the field lines \\cite{pa92}. 
In fields such as these, comparable in strength to the critical field strength $B_c\\equiv m^2c^3/e\\hbar = 4.414\\times 10^{13}$~G, all calculations of elementary processes must be carried out using Quantum Electrodynamics. There have been many such calculations over the past two decades, covering topics such as cyclotron absorption \\cite{dv78}, cyclotron decay \\cite{mz81,hrw82}, single photon pair production \\cite{k54,dh83}, pair annihilation to a single photon \\cite{w79,db80,h86,wphr86}, Compton scattering \\cite{h79,dh86,bam86,hd91}, two photon pair production \\cite{km86}, two photon pair annihilation \\cite{w79,db80}, $\\text{e}^-\\text{e}^-$ scattering \\cite{l81}, and several more. All these processes have very different behavior from their $B=0$ counterparts (if those counterparts are even possible), on account of the peculiar kinematics, as well as the discrete electronic states (Landau levels) associated with a uniform external magnetic field. These calculations have always been carried out in the Furry picture, in close analogy to the $B=0$ Feynman rules. The free-space electron propagator is replaced by a propagator which is a Green's function for the Dirac equation in the external field, and the external fermion lines are represented by solutions of that equation. The results have often been interesting and useful, but they have not been uniformly satisfactory. The leading-order calculation of resonant Compton scattering yields results which are divergent at the cyclotron resonances \\cite{h79,dh86}, evidently because to this order the theory makes no provision for natural line width. The line width may be included ``by hand'' in the results, making the cyclotron resonances finite \\cite{bam86,hd91}. However, there are other such ``resonant divergences'' in Compton scattering, which have nothing to do with the cyclotron resonances and which are not nearly as tractable \\cite{h79,dh86}. 
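The critical field strength quoted above is straightforward to verify numerically. The following short sketch (not part of the original paper; standard rounded CGS values for the constants are assumed) reproduces $B_c = m^2c^3/e\hbar$:

```python
# Sketch: verify the critical field B_c = m^2 c^3 / (e * hbar)
# in Gaussian (CGS) units; the constant values below are standard but rounded.
m_e  = 9.109e-28   # electron mass [g]
c    = 2.998e10    # speed of light [cm/s]
e    = 4.803e-10   # elementary charge [esu]
hbar = 1.055e-27   # reduced Planck constant [erg s]

B_c = m_e**2 * c**3 / (e * hbar)   # critical field strength [G]
print(f"B_c = {B_c:.3e} G")        # ~4.41e13 G, as quoted in the text
```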
In fact, one such resonance, which occurs exactly at the threshold where the initial photon may pair-produce, is responsible for making the total Compton cross-section divergent {\\em everywhere} above this threshold. Indeed, the theory of elementary processes in external magnetic fields is plagued with such divergences. With a little practice, it is not hard to discover divergences resembling these {\\em in every single process of second or higher order}. Clearly, this is a troublesome development that casts a shadow upon the entire undertaking. These resonant divergences occur because the kinematics of these processes allow intermediate ``virtual states'' to be real --- that is, on-shell. Often, the on-shell intermediate state is an excited state, such as an electron in an excited Landau level, or a photon above the one photon pair-production threshold. In this case there is an associated decay width that may be pressed into service to control the divergence. The Weisskopf-Wigner broadening prescription, \\begin{equation} E\\rightarrow E-\\frac{1}{2}i\\Gamma,\\label{ecmplx} \\end{equation} when applied to the energy denominators in the propagator, pushes the poles in the propagator off the real axis, so that while the intermediate states may still be on-shell, the propagator no longer diverges there. This is in fact the approach that has been adopted for Compton scattering \\cite{bam86,hd91,g93}, and which ascribes the natural line width to the cyclotron resonances. However, there are circumstances in which a {\\em stable} on-shell intermediate state may be produced. Such states are not attended by a decay width, so the associated divergence may not be reined in as before. It should be pointed out that these divergences are entirely unrelated to the notorious ultraviolet divergences of QED. 
They are not the consequence of improper manipulation of field-theoretic distributions; rather, they occur whenever the circumstances of the elementary process permit a kinematically accessible on-shell intermediate state (KAOSIS). It is easy to recognize when a KAOSIS is permitted. For example, a second-order process will allow one if it may be viewed as a succession of two real first-order processes. Thus the KAOSIS corresponding to cyclotron resonance occurs because the process may be viewed as a real cyclotron absorption followed by a real cyclotron emission. Similarly, the second ``disastrous'' resonance in Compton scattering is due to a KAOSIS that corresponds to the initial photon undergoing a real decay to a pair, followed by the resulting positron annihilating with the initial electron to produce the final photon. The reason this KAOSIS is catastrophic is that the intermediate positron may be in the Landau ground state, so that no decay width is available to restrain the divergence. At the same time, there is a second defect of the theory which so far has not received recognition as a problem. The calculation of S-matrix elements as outlined above always results in a $\\delta$-function which enforces strict energy conservation between initial and final states. This remains true even if some of these scattering states are unstable. But this is not physically sensible; the energy of an unstable state is only known to within its decay width, so that it is ludicrous to demand strict energy conservation for such transitions. Nevertheless, the current theory does so irrespective of whether or not the states are stable. As an example, the calculation of the decay of an electron in an excited Landau state produces the result that the emitted cyclotron photon is monochromatic, rather than having the Lorentzian line shape characteristic of resonant decay \\cite{mz81,hrw82}. The difficulties of resonant divergence and of spurious energy conservation are related. 
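The effect of the Weisskopf-Wigner prescription of Eq.~(\ref{ecmplx}) on a resonant energy denominator can be made concrete with a short numerical sketch (arbitrary units; the function names are ours, not from the paper): shifting $E_0 \rightarrow E_0 - \frac{i}{2}\Gamma$ turns the divergent factor $1/(E-E_0)$ into a Lorentzian profile of peak height $4/\Gamma^2$ and FWHM $\Gamma$.

```python
# Sketch: Weisskopf-Wigner broadening applied to a propagator denominator.
def bare_denominator(E, E0):
    # diverges as E -> E0: the resonant divergence
    return 1.0 / (E - E0)

def broadened_denominator(E, E0, Gamma):
    # E0 -> E0 - (i/2)*Gamma pushes the pole off the real axis
    return 1.0 / complex(E - E0, 0.5 * Gamma)

E0, Gamma = 1.0, 0.1                 # arbitrary units
peak = abs(broadened_denominator(E0, E0, Gamma)) ** 2              # = 4 / Gamma^2
half = abs(broadened_denominator(E0 + 0.5 * Gamma, E0, Gamma)) ** 2
# half-maximum is reached at E - E0 = +/- Gamma/2, so the FWHM equals Gamma
print(peak, half)
```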
Briefly, a stable KAOSIS may only occur if some of the particles in the initial and final states are themselves unstable, and it is their decay widths that restrain the divergence. However, the introduction of these decay widths smears out their energy, so that energy conservation (which was a consequence of their assumed eternal duration) no longer obtains. Thus it appears that we must modify the current theory somehow if we are to circumvent these unphysical features. That modification is the purpose of this work. We demonstrate below that radiative corrections modify the propagators and the scattering states by introducing their respective decay widths into the S-matrix elements. In this sense, we are extending the results of Graziani \\cite{g93}, who carried out this program for the electron propagator only. The corrected ``dressed'' states and propagators result in scattering cross-sections that are always finite. In Sec.\\ \\ref{sec_rdp} we discuss the role of the bare propagators in producing resonant divergences. In Sec.\\ \\ref{sec_dep} we review the work of reference \\cite{g93} on the electron propagator, which we extend to the photon propagator in Sec.\\ \\ref{sec_dpp}. We exhibit the corrections to the scattering states in Sec.\\ \\ref{sec_css}. In Sec.\\ \\ref{sec_sf} we derive the modification of the S-matrix elements, and exhibit two useful limits: the absorption limit and the emission limit, wherein the initial and final states are stable, respectively. Finally, in Sec.\\ \\ref{sec_disc} we discuss the applicability of these results and the extent of the modification from the ``standard'' theory. ", + "conclusions": "\\label{sec_disc} The regime of validity of the ``emission'' and ``absorption'' scattering limits is obvious from their context. On the other hand, the general formula in Eq.\\ (\\ref{etfacgen}) requires some discussion. As discussed previously, the formula never ``misbehaves'', in the sense that it never yields a divergent result. 
In fact, in general that result tends to zero as $\\tau\\rightarrow\\infty$. While this is physically sensible, it obviously makes the large time-lapse limit a less than useful one. It is clear that what fails in this limit is the validity of the perturbation-theoretic order of the calculation. Since the scattering states are themselves decaying to other states, those other states should be included in the calculation, leading to higher-order processes. The second-order calculations outlined in the previous section are only useful for values of $\\tau$ such that the scattering states have little chance to decay, that is for $\\Gamma\\tau\\ll 1$, where $\\Gamma$ is the largest of the decay rates in the process. For example, we might choose $\\tau$ to be on the order of a collision time, if we are studying a gas with density and temperature such that the collision rates far exceed the decay rates. The general case above obviously represents a fairly radical departure from the usual scattering formulae. The ``emission'' and ``absorption'' scattering limits amount to somewhat less radical modifications. One obvious such modification is the replacement of strict energy conservation with ``Lorentzian'' energy conservation. Another is that the ``non-resonant'' energy denominators [Eqs.\\ (\\ref{absetfdmn}), (\\ref{absetfcpl}), (\\ref{absetfcmn}), (\\ref{emetfdmn}), (\\ref{emetfcpl}), (\\ref{emetfcmn})] now contain the decay rates of the scattering states as well as those of the intermediate states. The importance of these changes depends upon whether the decay widths entering the energy denominators are electron decay widths or photon decay widths. If the decay widths are purely fermionic, they are gently varying functions of energy, and their magnitudes are smaller by $e^2$ than their own characteristic scale of variation, the scale of variation of the interaction matrix elements, and the characteristic separation of the resonances. 
Consequently, in this case the relative change that results from introducing Lorentzian, rather than exact, energy conservation, and from introducing the decay widths of the scattering states into the energy denominators, is of order $e^2$. On the other hand, if some of the decay widths correspond to photon lines, they can vary rather rapidly as a function of energy \\cite{k54,dh83}. Thus, the behavior of the energy denominators is not really ``Lorentzian'', despite notational appearances to the contrary. The departure from the cross-sections computed assuming strict energy conservation and not including external line decay widths might turn out to be appreciable in this case, although its precise magnitude remains to be assessed. In the limit of stable scattering states the usual results are completely reproduced, since we recover strict energy conservation and there are no scattering state decay widths to include in resonant energy denominators. In connection with the dressed photon propagator of Eq.\\ (\\ref{dppropa}), we wish to comment on a point which is a potential source of confusion. The decay width $\\Gamma({\\bf k},j)$ is to be evaluated on the light cone, as implied by the first line of Eq.\\ (\\ref{gammaphot}). Now, when the photon propagator is used in an S-matrix element, the values of $k^0$, $k^2$, and $k^3$ are fixed by the $x^0$, $x^2$, and $x^3$ momenta of the scattering states. Thus, the sum over intermediate photon wave states involves an integral over the component $k^1$ of the wave vector ${\\bf k}$. The value of $\\Gamma({\\bf k},j)$ must be evaluated for {\\em each} value of $k^1$ in the integral. This is analogous to the case of the electron propagator, Eq.\\ (\\ref{depropa}), in which the intermediate electron decay width is evaluated, on the energy shell, for each Landau level in the sum over intermediate states. 
The only difference between the two cases is that the relevant degrees of freedom are discrete for the electron propagator, while they are continuous for the photon propagator. The radiatively corrected photon propagator permits for the first time the evaluation of processes such as $\\text{e}^+\\text{e}^-\\rightarrow\\text{e}^+\\text{e}^-$, $\\text{e}^-\\rightarrow\\text{e}^-\\text{e}^+\\text{e}^-$, and $\\gamma\\text{e}^-\\rightarrow\\text{e}^-\\text{e}^+\\text{e}^-$, all of which are important for neutron star emission. Of equal astrophysical importance is the evaluation above the one photon pair-production threshold of Compton scattering, two photon pair annihilation, and two photon pair production, which is now possible by virtue of the radiatively corrected scattering states. Finally, we now have access to the processes $\\text{e}^-\\text{e}^-\\rightarrow\\text{e}^-\\text{e}^-$ and $\\text{e}^+\\text{e}^-\\rightarrow\\text{e}^+\\text{e}^-$ even when the initial and final states are excited." + }, + "9503/astro-ph9503092_arXiv.txt": { + "abstract": "Laboratory searches for the detection of gravitational waves have focused on the detection of burst signals emitted during a supernova explosion, but have not resulted in any confirmed detections. An alternative approach has been to search for continuous wave (CW) gravitational radiation from the Crab pulsar. In this paper, we examine the possibility of detecting CW gravitational radiation from pulsars and show that nearby millisecond pulsars are generally much better candidates. We show that the minimum strain $h_c \\sim 10^{-26}$ that can be detected by tuning an antenna to the frequency of the millisecond pulsar PSR 1957+20, with presently available detector technology, is orders of magnitude better than what has been accomplished so far by observing the Crab pulsar, and within an order of magnitude of the maximum strain that may be produced by it. 
In addition, we point out that there is likely to be a population of rapidly rotating neutron stars (not necessarily radio pulsars) in the solar neighborhood whose spindown evolution is driven by gravitational radiation. We argue that the projected sensitivity of modern resonant detectors is sufficient to detect the subset of this population that lies within 0.1 kpc of the sun. ", + "introduction": "There are two types of signals of gravitational radiation that are expected to be most readily detectable from astrophysical sources: burst signals of short duration, from sources such as a stellar merger or a nonspherical core collapse associated with a supernova event; and continuous wave (CW) signals from sources such as short period, compact binary star systems or rapidly rotating, nonaxisymmetric (or precessing) compact stars. Most experimental searches for gravitational radiation have focused on the detection of burst signals. To date, there have been no confirmed detections of this type of gravitational radiation, although the terminal phase of the coalescence of neutron-star binaries appears to be a promising source for future ground-based laser interferometers such as LIGO (Abramovici et al. 1992; Cutler et al. 1992; Finn \\& Chernoff 1993). The detection of CW radiation from binary star systems by a space-based interferometer has been proposed (Faller \\& Bender 1984; Evans, Iben, \\& Smarr 1987; Faller et al. 1989). Currently, though, the only experimental effort under way to detect CW gravitational radiation is that pioneered by the Tokyo group, which has searched for CW emission from the Crab pulsar (Tsubono 1991). We argue here that nearby millisecond pulsars are likely to be stronger sources of CW radiation than the Crab pulsar and therefore warrant the attention of experimental searches for gravitational radiation. 
An axisymmetric object which is rotating about its minor axis will not emit gravitational radiation because it has no time varying quadrupole moment. Therefore a pulsar (or any rotating neutron star, for that matter) must be nonaxisymmetric and/or precessing in order for it to radiate (Ferrari \\& Ruffini 1969; Zimmerman 1978, 1980; Shapiro \\& Teukolsky 1983; Barone et al. 1988). Several mechanisms for the production of nonaxisymmetric deformations in pulsars have been suggested (Ipser 1971), including asymmetric crystallization of the crusts (Ruderman 1969; Ferrari \\& Ruffini 1969), pressure and magnetic stress anisotropies (Ostriker \\& Gunn 1969, Ruderman 1970), and rotationally induced instabilities (Imamura, Friedman, \\& Durisen 1985). Misalignments between the symmetry and spin axes of pulsars might occur as the result of electromagnetic torques due to magnetic dipole radiation, corequakes (Pines and Shaham 1974), or encounters with neighboring stars. Precessing, nearby millisecond pulsars have recently been put forth by de Ara\\'ujo et al. (1994) as good candidates for the detection of gravitational radiation by upcoming interferometric detectors. They suggest, however, that the signals seen by these detectors due to radiation from wobbling pulsars may be burst signals, not CW signals. This is because the damping time-scale for the wobble angle due to the emission of gravitational radiation is expected to be on the order of seconds whereas the observation time needed to observe CW sources is likely to be $\\sim 10^{7}$ s. This situation does seem likely to present itself with the first generation of detectors because the source signals are expected to be near the limit of detectability. Thus, at least for some time, nonaxisymmetric deformations may be the only channel through which pulsars produce detectable CW radiation. 
The rate at which gravitational energy is radiated from a nonaxisymmetric object that is rotating about its minor axis with angular velocity $\\omega$ is (Ferrari \\& Ruffini 1969; Shapiro \\& Teukolsky 1983), $$ \\dot E_{GR} = - {32 G \\over 5 c^5} {I_3}^2 \\epsilon^2 \\omega^6. \\eqno (1)$$ This expression has been derived in the quadrupole approximation for nearly-Newtonian sources assuming that the object has principal moments of inertia $I_1$, $I_2$ and $I_3$, respectively, about its three principal axes $a \\gtrsim b > c$ fixed in the body frame (Landau \\& Lifshitz 1962; Misner, Thorne, \\& Wheeler 1973); and $\\epsilon \\equiv (a-b)/(ab)^{1/2}$ is the ellipticity in the equatorial plane. Here, and throughout this paper, dots denote differentiation with respect to time. Several authors have proposed that the Crab pulsar is the best candidate for detecting CW radiation (e.g., Zimmerman \\& Szedenits 1978; Tsubono 1991) because it has the largest spin down energy flux density $\\dot E_{rot} / (4 \\pi r^2)$ of all pulsars; here $r$ is the distance to the pulsar and $\\dot E_{rot} = I_3 \\omega {\\dot \\omega}$ ($\\sim 10^{38}$ erg s$^{-1}$ for the Crab pulsar) is the rotational energy loss rate. We note, however, that for a neutron star of given ellipticity $\\epsilon$, the loss rate due to gravitational radiation strongly favors stars that are spinning more rapidly (equation [1]). Also, if the ellipticity is caused by the rapid rotation of the star, one might expect the value of $\\epsilon$ to be higher in stars that are rotating more rapidly. With this in mind, we have suggested (Barker et al. 1994; see also Schutz 1995) that nearby millisecond pulsars (Backer et al. 1982; Fruchter, Stinebring, \\& Taylor 1988; Johnston et al. 1993) may be stronger sources of CW radiation than the Crab pulsar (see \\S 2). One favorable aspect of CW radiation is that a resonant detector can be tuned to the frequency of the emission. 
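Equation (1) is simple to evaluate numerically. The sketch below (our illustration, in CGS units; the moment of inertia and ellipticity are representative assumptions, not values from this paper) makes the steep $\omega^6$ dependence explicit by comparing a 33 ms Crab-like period with a $\sim$1.6 ms millisecond-pulsar period at fixed ellipticity:

```python
import math

G = 6.674e-8    # gravitational constant [cgs]
c = 2.998e10    # speed of light [cm/s]

def edot_gr(I3, eps, omega):
    """Gravitational-wave luminosity of eq. (1), returned positive, in erg/s."""
    return (32.0 * G / (5.0 * c**5)) * I3**2 * eps**2 * omega**6

I3  = 1.0e45    # assumed neutron-star moment of inertia [g cm^2] (illustrative)
eps = 1.0e-6    # assumed equatorial ellipticity (illustrative)

omega_crab = 2.0 * math.pi / 33.0e-3   # 33 ms rotation period
omega_msp  = 2.0 * math.pi / 1.6e-3    # ~1.6 ms rotation period

# At fixed ellipticity, the luminosity ratio is just (P_crab / P_msp)^6
ratio = edot_gr(I3, eps, omega_msp) / edot_gr(I3, eps, omega_crab)
print(f"luminosity ratio (msp / Crab-like): {ratio:.3e}")   # ~8e7
```

The $\omega^6$ scaling is why, for a given $\epsilon$, rapidly spinning stars are so strongly favored: doubling the spin frequency raises the luminosity by a factor of 64.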
In order to tune a cylindrical bar detector so that its primary quadrupole mode of oscillation resonates with the frequency of radiation from the pulsar $\\omega_0 = 2\\omega$ (see \\S 2), the bar must have a length $L \\approx \\pi c_s/\\omega_0$, where $c_s \\approx 5.2 \\times 10^5$ cm s$^{-1}$ is the speed of sound in a prototypical aluminum alloy bar (see \\S 3). Thus the length of the bar must be $L \\sim 1.3 P_{ms}$ m, where $P_{ms} = 10^3 (2\\pi/\\omega)$ is the period of the pulsar in milliseconds. This is prohibitively long ($L \\sim 43$ m) for the Crab pulsar, which has a period $P_{ms} = 33$. Consequently, the Tokyo group (Tsubono 1991) has used two short crossed bars instead of a single bar. Because better sensitivities are achievable using a single bar, millisecond pulsars appear to be better candidates from a detector design standpoint as well (see \\S 3). ", + "conclusions": "Nearby millisecond pulsars are good candidates for the detection of CW gravitational radiation. Because of their close proximity and rapid rotation, they are capable of emitting radiation with larger gravitational strain amplitudes than pulsars that are farther away and/or have longer periods. The minimum strain $h_c \\sim 10^{-26}$ ($1.5 \\times 10^{-26}$ for a bar detector, $8.0 \\times 10^{-27}$ for a spherical detector) that can be detected by designing an antenna tuned to the rotation frequency of the millisecond pulsar PSR 1957+20 and employing presently available resonant detector technology is orders of magnitude better than what has been accomplished so far by observing the Crab pulsar, and within an order of magnitude of the maximum strain that can be produced by PSR 1957+20 as a result of rotationally induced nonaxisymmetric deformations. The design and operation of a resonant antenna that is tuned to the rotation frequency of PSR 1957+20 would, at the very least, place physically meaningful constraints on the nonaxisymmetric ellipticity of millisecond pulsars. 
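The bar-tuning estimate above, $L \approx \pi c_s/\omega_0 = c_s P/4$, is easy to cross-check with a few lines (our sketch; the sound speed is the value quoted in the text, and the 1.607 ms period of PSR 1957+20 is its catalogued spin period):

```python
import math

C_S = 5.2e3   # speed of sound in a prototypical aluminium-alloy bar [m/s]

def bar_length_m(P_ms):
    """Bar length whose fundamental quadrupole mode resonates at omega_0 = 2*omega."""
    omega  = 2.0 * math.pi / (P_ms * 1.0e-3)   # pulsar spin frequency [rad/s]
    omega0 = 2.0 * omega                        # gravitational-wave frequency
    return math.pi * C_S / omega0               # equivalently c_s * P / 4

print(bar_length_m(33.0))    # Crab: ~43 m, prohibitively long
print(bar_length_m(1.607))   # PSR 1957+20: ~2 m, a practical bar
```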
We have argued (\\S 2.3) that there almost certainly is a population of rapidly rotating neutron stars (not necessarily radio pulsars) within the solar neighborhood whose spindown evolution is driven by gravitational radiation. Throughout their entire lifetime, these stars will radiate at an ``absolute'' strain $H_c \\gtrsim 7 \\times 10^{-28}$. It is significant that the projected sensitivity of modern resonant detectors is sufficient to detect the subset of this population of stars that resides within 0.1 kpc of the sun." + }, + "9503/astro-ph9503071_arXiv.txt": { + "abstract": "Johnson {\\it BV} CCD observations have been made of the young Large Magellanic Cloud cluster NGC~2214 and a nearby field using the Anglo-Australian Telescope. It has been suggested in the literature that this elliptical cluster is actually two clusters in the process of merging. No evidence is found from profile fitting or the colour--magnitude diagrams to support this contention. Completeness factors are estimated for the CCD frames. These values are used in conjunction with luminosity functions to estimate the initial mass function (IMF) for NGC~2214. A power law is assumed for the IMF, with a good fit being found for the exponent (1 + $x$) = 2.01~$\\pm$ 0.09. There is some indication that the low-mass end ($\\loa \\rm 3\\: M_{\\sun}$) has a lower gradient than the high-mass end of the derived IMF. This value is in reasonable agreement with literature values for other Magellanic IMFs, and not substantially different from those of the poorly determined Galactic IMFs, suggesting the possibility of a `universal' IMF over the Magellanic Clouds and our Galaxy in the mass range $\\sim 1$ to $\\sim 10$ $\\rm M_{\\sun}$. 
", + "introduction": "NGC~2214 ($ \\rm \\: \\alpha_{2000} \\: = \\: \\rm 6^{h} \\: 12^{m} \\: 57^{s}, \\delta_{2000} \\: = \\: 68^{o} \\: 15' \\: 33'' $ South) is a young ($ \\rm 32 \\: \\times \\: 10^6 $ yr; Elson 1991) populous star cluster situated in a relatively uncrowded field to the far north-east of the bar in the Large Magellanic Cloud (LMC). Meylan~\\& Djorgovski (1987) analysed an intensity profile of the cluster, and found that the core was abnormal. They conjectured that perhaps it had collapsed, although Elson, Fall~\\& Freeman (1987) have shown that the two-body relaxation time of the cluster is $\\rm \\sim2-6 \\: \\times \\: 10^8 $ yr, and so greater than its age. Bhatia~\\& MacGillivray (1988) found the cluster to have a very elliptical ($ e \\, = \\, 0.5$) core with an almost spherical halo, and suggested that this unusual shape could be due to NGC~2214 being a binary star cluster in an advanced stage of merging. Comparison with N-body simulations lent support to this idea. Sagar, Richtler~\\& de Boer (1991a) used the 1.54-m ESO Danish telescope in $ \\sim1$-arcsec seeing, and presented a {\\it BV} colour--magnitude diagram (CMD) with two well-defined supergiant branches, separated by $ \\sim2$ mag in {\\it V}. The older population was more centrally condensed than the younger one, and Sagar~{et al.} (1991a) suggested that the first published CMD (Robertson 1974) had failed to detect the older branch due to the problems of photometry in such a crowded region. A major objective of the present study was to derive an estimate of the initial mass function (IMF) of the cluster. The IMF is defined as the frequency distribution of stellar masses on the main sequence at the formation time of a group of stars (Scalo 1986). 
Mass is one of the primary factors influencing stellar evolution, and a detailed knowledge of the IMF would be important in studies ranging from galactic evolution to the spectral properties of binary stars (see Tinsley 1980). A fundamental question about the IMF is whether it is universal in time and location, or whether the distribution of stars formed is a function of parameters such as metallicity. Derivation of the IMF is not straightforward. An initial approach might be to use the nearby solar neighbourhood to do this, but this technique is complicated by the fact that these stars have a range of distances, ages, and metallicities. For instance, the random velocities of the stars, combined with their lifetimes, mean that, while massive stars will still be near the site of their formation, low-mass stars will have travelled significant distances. Variations in composition may result just from such spatial considerations, if not from galactic evolution as well. Scalo (1986) comments that the many assumptions, such as any variation in the star formation rate with time, complicate estimates of the field IMF to the point of impracticality. In addition, a universal nature is assumed for the IMF in such studies. A better approach is to use clusters, where the component stars will be effectively coeval and of the same composition. Such work is complicated by effects such as dynamical evolution leading to mass segregation in the cluster, tidal stripping (which in the presence of mass segregation will lead to the proportional decrease of low-mass stars; see Spitzer 1987), and stellar evolution as stars evolve off the main sequence, which means that no mass-function information can easily be derived for stars whose main-sequence lifetime is less than the age of the cluster. The mass function of the cluster may alter substantially with time, and it is best to select young clusters where these effects have not had time to become significant. 
Many studies have centred on young Galactic open clusters with their large observable mass range (e.g. Phelps~\\& Janes 1993; Reid 1992; Stauffer~{et al.} 1991). However, such work is complicated by field star contamination, counting incompleteness, and low number statistics (see Scalo 1986 for more details), as well as the problem that most open clusters suffer substantial and variable reddenings due to their positions in the Galactic disc (Mateo 1988). There is no strong evidence for variations in the shapes of their mass functions (Sagar~\\& Richtler 1991). Globular clusters offer better statistics due to the increased number of stars they contain, but the observable mass range is limited by their distances and ages. Evolutionary effects, such as those mentioned above, are additional complications. The resulting mass functions appear to vary considerably between clusters, and may be correlated with metallicity (Sagar~\\& Richtler 1991), although this is clouded by the above problems. The LMC clusters effectively combine the best features of these two types of star clusters. They are populous, with correspondingly good statistics, and span a wide range of ages and metallicities (Da Costa 1991). The clusters are distant enough to subtend only a small angle on the sky, yet not so distant as to suffer from resolution problems. Questions such as the universality of the IMF might therefore be addressed using these clusters, although the very populous nature of both the clusters and their fields leads to counting incompleteness problems. A major portion of this study involved the derivation of completeness estimates, in order to correct observed luminosity functions to the `real' distribution. IMFs have been derived for some LMC clusters by Mateo (1988), Sagar~\\& Richtler (1991), Cayrel, Tarrab~\\& Richtler (1988), and Elson, Fall~\\& Freeman (1989). The results have not been in good agreement. 
The first three studies were based on CCD frames, and attempted to estimate the counting incompleteness using artificial star trials (see below). A power law $\\frac{{\\rm d}N}{{\\rm d}M} = M^{-(1+x)}$ was assumed for the IMF, where d$N$ is the number of stars in a given mass interval d$M$ at mass $M$. Mateo (1988) found that the IMFs of six Magellanic clusters (the Small Magellanic Cloud cluster NGC 330 was included) could all be fitted with a single power law with $x = 2.52 \\pm 0.16$ over the mass interval 0.9 to 10.5 $\\rm M_{\\sun}$. Sagar~\\& Richtler (1991) used a different method of estimating the incompleteness (see below), and arrived at an $x$ value of $\\sim$1.1, not too different from the Salpeter (1955) value of 1.35 and in reasonable agreement with the value of 1.2 for NGC 330 and NGC 1818 derived by Cayrel~{et al.} (1988). They commented that if they used the same incompleteness technique as Mateo (1988) on NGC 1711, which was the only cluster studied by both, then the mass function estimate of Mateo (1988) was confirmed. All these values contrast sharply with the photographic star count analysis of Elson~{et al.} (1989), which gave $x$ values between $-0.2$ and 0.8 (over 1.5--6.0 $\\rm M_{\\sun}$). In light of these differences and the comment of Sagar~\\& Richtler (1991) about NGC 1711, a review of the incompleteness techniques is clearly of major importance, given the effect the chosen method has on the derived mass function slope, and on any subsequent conclusions about the universality of the IMF. ", + "conclusions": "We have found no evidence supporting the contention that NGC~2214 is a merging cluster. Models have been fitted to radial profiles of the cluster; these fits support the contention of Elson~et al.\\ (1987) that an unbound halo of stars is responsible for the poor fits obtained with King models. 
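For concreteness, the slope $x$ defined above can be recovered from a list of stellar masses by maximum likelihood. The following is a minimal sketch on synthetic data; the function names and the Monte Carlo draw are illustrative, and no incompleteness correction is applied, unlike in the studies discussed here:

```python
import numpy as np

# Maximum-likelihood estimate of the IMF slope x in dN/dM ∝ M^-(1+x).
# Synthetic-data sketch, not the completeness-corrected counting
# analysis of the text.

def imf_slope_mle(masses, m_min):
    """For dN/dM ∝ M^-(1+x) with M >= m_min, the maximum-likelihood
    estimator is x_hat = n / sum_i ln(M_i / m_min)."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    return m.size / np.log(m / m_min).sum()

def draw_power_law(n, x, m_min, rng):
    """Inverse-CDF sampling from dN/dM ∝ M^-(1+x), M >= m_min, x > 0."""
    u = rng.random(n)
    return m_min * (1.0 - u) ** (-1.0 / x)

rng = np.random.default_rng(42)
masses = draw_power_law(100_000, 1.35, 0.9, rng)   # Salpeter-like input
x_hat = imf_slope_mle(masses, 0.9)                 # recovers x close to 1.35
```

With a Salpeter input slope and $10^5$ draws the estimator recovers $x$ to better than a percent; real determinations are instead dominated by incompleteness corrections and binning choices, which is exactly the point at issue in the text.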
Completeness-estimation techniques have been evaluated, and the method of Mateo~(1988) has been shown to underestimate completeness factors. The best techniques were employed on the AAT data, leading to an estimate of the cluster's mass function, with an $x$ value of $\\sim1$, in good agreement with other studies of young Magellanic Cloud clusters. IMF estimates for Galactic regions are unreliable, which makes any conclusions about the universality of the IMF uncertain. However, there are no {\\em substantial}\\/ differences between IMFs derived for the Magellanic Clouds and for our Galaxy, and it is likely that star formation in these three galaxies can be described by a `universal' IMF, at least over the mass interval $\\sim1$ to $\\sim10$~$\\rm M_{\\sun}$." + }, + "9503/astro-ph9503123_arXiv.txt": { + "abstract": "I report the detection of two X-ray sources of luminosities $\\sim 10^{36} {\\rm~ergs~s^{-1}}$ in the central region of 30 Doradus. These two sources appear point-like in images taken with the {\\sl ROSAT} HRI. One of the sources is most likely associated with the close spectroscopic binary R140a2 (WN6), which has an orbital period of 2.76 days. The mass of the unknown binary component is likely in the range of $2.4 - 15{\\rm~M}_\\odot$. This suggests that the X-ray source could represent a long-sought class of binaries containing a Wolf-Rayet star and a black hole. The other source, which coincides spatially with Mk34 (WN4.5), may have a similar nature. Available X-ray spectral data support this Wolf-Rayet + black-hole binary explanation of the two sources. I have used a {\\sl ROSAT} PSPC observation to show that the sources have ultrasoft spectra with possible intrinsic absorption. Modeled with multicolor blackbody disks, the spectra provide estimates of the disks' characteristic inner radii, which are in agreement with those obtained for the known black-hole candidates. 
An {\\sl ASCA} observation has further revealed a hard X-ray spectral component from the central 30 Doradus region. This component, represented by a power law with a photon index of $\\sim 2.4$, may belong to the two sources. The characteristics of both the power-law and ultrasoft components strongly indicate that the two sources are black-hole candidates in high-mass X-ray binaries. ", + "introduction": " ", + "conclusions": "" + }, + "9503/hep-ph9503217_arXiv.txt": { + "abstract": "Oscillons are localized, non-singular, time-dependent, spherically-symmetric solutions of nonlinear scalar field theories which, although unstable, are {\\it extremely} long-lived. We show that they naturally appear during the collapse of subcritical bubbles in models with symmetric and asymmetric double-well potentials. By a combination of analytical and numerical work we explain several of their properties, including the conditions for their existence, their longevity, and their final demise. We discuss several contexts in which we expect oscillons to be relevant. In particular, their nucleation during cosmological phase transitions may have wide-ranging consequences. \\noindent PACS: 98.80.Cq, 64.60.Cn, 64.60.-i, 11.10.Lm ", + "introduction": "The search for static, localized, non-singular solutions of nonlinear field theories has by now a long history \\cite{SOLITONS}. In (1+1) dimensions, it is possible to find exact static solutions to the nonlinear Klein-Gordon field equations for certain interacting potentials, such as the kink solutions of sine-Gordon or $\\phi^4$ models. For a larger number of spatial dimensions, Derrick's theorem forbids the existence of static solutions for models involving only real scalar fields \\cite{DERRICK}. There are several ways to circumvent Derrick's theorem, for instance by invoking more complicated models with two or more interacting fields. 
Well-known examples include topological defects such as the 't Hooft-Polyakov monopole or the Nielsen-Olesen vortices \\cite{RAJARAMAN}. Topological conservation laws guarantee the stability of these configurations. It is also possible to find localized time-dependent but non-dissipative solutions of a nontopological nature, the so-called nontopological solitons \\cite{NTSs}. The simplest model of a nontopological soliton in the context of renormalizable theories has a complex scalar field quadratically coupled to a real scalar field with a quartic potential. The stability of the configuration comes from the conserved global charge $Q$ carried by the complex field, which is confined within a spherically-symmetric domain formed by the real scalar field. One can show that for $Q$ larger than a critical value, the energy of the configuration is smaller than the energy of $Q$ free particles. There has been a recent upsurge of interest in nontopological solitons due to their potential relevance to cosmology and astrophysics \\cite{NTSCOS}. If one waives the requirement of renormalizability, it is possible to find nontopological solitons in models with a single complex scalar field, by invoking, e.g., a $\\phi^6$ term in the potential. These are the so-called $Q$-ball solutions discovered by Coleman and collaborators \\cite{QBALL}. In the present work we will go back to the simple models involving only a self-interacting real scalar field and study the properties of {\\it time-dependent} spherically-symmetric solutions. Due to the constraint imposed by Derrick's theorem, these configurations have been somewhat overlooked in the literature (but not completely, as we will discuss below). Why should anyone bother with solutions which are known to be unstable? One possible answer is that instability is a relative concept, which only makes sense in context, that is, when the lifetime of a given configuration is compared with the typical time-scales of the system under study. 
Thus, unstable but long-lived configurations may be relevant for systems with short dynamical time-scales. Another answer is that a detailed study of these configurations can greatly clarify dynamical aspects of nonlinearities in field theories and the r\\^ole they play in several phenomena, ranging from nonlinear optics to phase transitions both in the laboratory and in cosmology \\cite{TEXTURES}. One of the motivations for studying the evolution of unstable spherically-symmetric configurations comes from the work of Gleiser, Kolb, and Watkins on the r\\^ole subcritical bubbles may play in the dynamics of weak first-order phase transitions \\cite{GKW}. Considering models with double-well potentials in which the system starts localized in one minimum, these authors proposed that for sufficiently weak transitions correlation-volume bubbles of the other phase could be thermally nucleated, promoting an effective phase mixing between the two available phases even before the critical temperature is reached from above. This could have important consequences for models of electroweak baryogenesis which rely on the usual homogeneous nucleation mechanism \\cite{GK}. However, Gleiser, Kolb, and Watkins did not include the shrinking of the bubbles in their estimate of the fraction of the volume occupied by each of the two phases, leading some authors to question their results \\cite{CRITICS}. Since then, Gleiser and Gelmini have included the shrinking of the bubbles in the original estimates, concluding that for sufficiently weak transitions subcritical bubbles are indeed nucleated at a fast enough rate to cause substantial phase mixing \\cite{GG}. Although an improvement, the modeling used to describe the bubble shrinking was still too simplistic, as it assumed that the bubbles simply shrank at constant velocity. 
The evolution of spherically-symmetric unstable solutions of the nonlinear Klein-Gordon equation was originally studied numerically in the mid-seventies by Bogolubsky and Makhankov \\cite{PULSON1}. Using a quasiplanar initial configuration for the bubbles (that is, a tanh$(r-R_0)$ profile, with $R_0$ the initial radius), these authors discovered that for a certain range of initial radii the bubble evolution could be described in three stages: after radiating most of its initial energy, the bubble settled into a quite long-lived regime, with a lifetime that depended on the initial radius; the bubble then disappeared by quickly radiating away its remaining energy. These configurations were called ``Pulsons'' by these authors, due to the pulsating mechanism by which they claimed the initial energy was being radiated away. Their results were recently rediscovered and refined by one of us \\cite{PULSONMG}. After a more detailed analysis of these configurations, it became clear that their most striking feature was not the pulsating mechanism by which bubbles radiate their initial energy, but the rapid oscillations of the field's amplitude at the core of the configuration during the pseudo-stable regime, in a manner somewhat analogous to resonant breathers in kink-antikink scattering \\cite{BREATHERS}. In fact, it was realized that during the pseudo-stable regime almost no energy is radiated away, and the radial pulsation is actually quite small in amplitude. Hence the name ``Oscillon'' was proposed instead. It was also shown that these configurations appear in both symmetric and asymmetric potentials, are stable against small radial perturbations, and have lifetimes far exceeding naive expectations. However, little else has been done to explore the properties of these configurations. 
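The bubble evolution just described is easy to reproduce qualitatively. The sketch below integrates the spherically-symmetric nonlinear Klein-Gordon equation with the tanh$(r-R_0)$ profile; the grid parameters and the quartic normalization $V(\\phi)=\\frac{\\lambda}{4}(\\phi^2-v^2)^2$ are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Leapfrog integration of the spherically-symmetric nonlinear
# Klein-Gordon equation  phi_tt = phi_rr + (2/r) phi_r - V'(phi)
# for a symmetric double well V = (lam/4)(phi^2 - v^2)^2, starting from
# the quasiplanar profile phi(r,0) = v tanh(r - R0).

def evolve_bubble(R0=4.0, lam=1.0, v=1.0, L=60.0, N=1200, dt=0.02, steps=300):
    r = np.linspace(0.0, L, N)
    dr = r[1] - r[0]
    phi = v * np.tanh(r - R0)   # core displaced toward -v, vacuum +v outside
    phi_old = phi.copy()        # zero initial velocity (O(dt^2) start-up error)

    core = []
    for _ in range(steps):
        lap = np.empty_like(phi)
        lap[1:-1] = ((phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dr**2
                     + (2.0 / r[1:-1]) * (phi[2:] - phi[:-2]) / (2.0 * dr))
        lap[0] = 6.0 * (phi[1] - phi[0]) / dr**2   # regularity at r = 0
        lap[-1] = 0.0                              # overwritten below
        dV = lam * phi * (phi**2 - v**2)           # V'(phi)
        phi_new = 2.0 * phi - phi_old + dt**2 * (lap - dV)
        phi_new[-1] = v                            # far field held in the vacuum
        phi_old, phi = phi, phi_new
        core.append(phi[0])                        # field amplitude at the origin
    return r, phi, np.array(core)
```

Tracking the core amplitude over much longer runs is what reveals the pseudo-stable stage; this sketch only follows the initial collapse of the bubble wall.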
Other work on this topic was concerned with establishing the existence of these solutions for other potentials, such as the sine-Gordon and logarithmic potentials, and for different symmetries, together with somewhat limited stability studies \\cite{PULSON2}. By a combination of analytical and numerical methods, we will shed some light on the properties of these configurations (henceforth oscillons). We will establish the conditions for their existence, explain the reason for their longevity, and clarify their final collapse. Armed with a better understanding of their properties, we will also be able to suggest several situations where we believe oscillons can be of importance. The rest of this paper is organized as follows. In the next Section we will set up the general formalism and obtain the exact solution of the spherically-symmetric linear Klein-Gordon equation. As expected, in the linear case no oscillons appear, with bubbles quickly decaying away. We obtain the time-scale on which this decay occurs in order to later compare it to the case when nonlinearities are present. In Section 3 we present the numerical results that establish several of the key properties of oscillons for symmetric double-well potentials. Guided by these results, in Section 4 we present analytical arguments to explain why there is a minimum initial radius for bubbles to settle into the oscillon stage, why some oscillons live longer than others, and how oscillons finally disappear. In Section 5 we extend the numerical analysis of Section 3 to asymmetric double-well potentials, showing how the lifetime of oscillons is sensitive to the amount of asymmetry between the two minima. Here one must be careful to set the initial radius to be smaller than the critical radius, as bubbles with radii larger than critical will grow. Since in the symmetric-potential case there are no critical bubbles, we can say that we are studying the evolution of subcritical bubbles in both symmetric and asymmetric potentials. 
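The reduction behind the linear analysis previewed above is worth recording. With the substitution $\\psi \\equiv r\\phi$, the friction-like term drops out and the problem becomes effectively one-dimensional (standard material; $m$ denotes the scalar mass):

```latex
% Spherically-symmetric linear Klein-Gordon equation:
\ddot\phi - \phi'' - \frac{2}{r}\,\phi' + m^2\phi = 0 .
% Writing psi = r*phi turns this into the free (1+1)-dimensional equation
\ddot\psi - \psi'' + m^2\psi = 0 ,
```

so a localized initial bubble simply disperses like a free wave packet, which is why no long-lived configuration can survive without the nonlinear terms.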
Oscillons are thus a possible stage in the evolution of subcritical bubbles toward their demise. In Section 6 we discuss several possible situations in which these configurations will play an important r\\^ole. Although we focus mainly on cosmological phase transitions, some of our arguments apply equally well to phase transitions in the laboratory. We conclude in Section 7 with a summary of our results and an outlook on future work. ", + "conclusions": "In this paper we have presented the results of a detailed investigation of the properties of oscillon configurations and explained, where possible, the physics behind their interesting dynamics. The fact that they exist in both first- and second-order phase transitions makes them of particular interest. They are localized, non-singular, time-dependent, spherically-symmetric solutions of nonlinear scalar field theories, which are unstable but extremely long-lived, with lifetimes of order $10^3-10^4~m^{-1}$, where $m$ is the mass of the scalar field. They naturally appear during the collapse of spherically symmetric field configurations. We have obtained the conditions required for their existence, namely that the initial energy needs to be above a plateau energy and the initial amplitude of the field needs to be beyond the inflection point of the potential in order to probe the nonlinearities of the theory (although it need not reach the true minimum of the potential). Among the many intriguing aspects of these configurations, one that stands out is that they exist only for a given range of initial radii and core amplitudes. The lower bound on the radii can be explained by perturbation theory: it corresponds to the minimum radius beyond which the field probes the nonlinearity of the potential. Explaining the upper bound for the initial radius of the field profile is not so straightforward, and we are currently investigating this. 
It could well be that since larger bubbles have larger initial energies, during their collapse higher nonspherical modes are excited, triggering the rapid growth of instabilities responsible for the bubble's collapse before it can settle into the oscillon stage. Another remarkable feature is that the plateau energy of the oscillon is practically independent of the initial radius. We have interpreted this fact by showing that the oscillon can be thought of as the attractor field configuration which minimizes the departure from virialization. There is much that remains to be investigated. One concern is that we only investigated stability to radial perturbations. We really need to investigate how nonspherical perturbations affect the spherically symmetric solutions. One possibility is that they will tend to make the oscillons collapse into a pancake configuration, and hence decay more quickly than in the spherical case, although we believe this will only be the case for bubbles with large initial radii. We may also think of higher nonspherical modes as excited states of the ``ground-state'' $\\ell=0$ resonance studied here. It is thus possible that oscillons may appear in higher energy configurations, which may decay either to the ground-state oscillon or just into scalar radiation. Finally, a more detailed study of the coupling of these objects to other matter fields and hot plasmas is required in order to investigate how they affect the dynamics of phase transitions and how their own decay is affected by these couplings. It is clear though that they are of interest cosmologically. We are currently analyzing the consequences of oscillons if they were to be formed at the electroweak scale \\cite{CGM}." 
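The inflection-point criterion for oscillon formation stated above can be made explicit. Assuming the canonical quartic double well (the text does not fix a normalization, so this particular form is an assumption):

```latex
V(\phi) = \frac{\lambda}{4}\left(\phi^2 - v^2\right)^2 ,
\qquad
V''(\phi) = \lambda\left(3\phi^2 - v^2\right) = 0
\;\;\Longrightarrow\;\;
\phi_{\rm inf} = \pm\,\frac{v}{\sqrt{3}} ,
```

so the field at the bubble core must be displaced past $\\pm v/\\sqrt{3}$, where $V''$ changes sign, although it need not reach the opposite minimum at $\\mp v$.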
+ }, + "9503/astro-ph9503102_arXiv.txt": { + "abstract": "We determine the luminosity function (LF) of galaxies in the core of the Coma cluster for $M_R\\le-11.4$ (assuming $H_0=75$~km~s$^{-1}$~Mpc$^{-1}$), a magnitude regime previously explored only in the Local Group. Objects are counted in a deep CCD image of Coma having RMS noise of 27.7~$R$~mag~arcsec$^{-2}$. A correction for objects in the foreground or background of the Coma cluster---and the uncertainty in this correction---are determined from images of five other high-latitude fields, carefully matched to the Coma image in both resolution and noise level. Accurate counts of Coma cluster members are obtained as faint as $R=25.5$, or $M_R=-9.4$. The LF for galaxies is well fit by a power law $dN/dL\\propto L^\\alpha$, with $\\alpha=-1.42\\pm0.05$, over the range $-19.4\\le M_R\\le-11.4$. We discuss in the next section the observations of the Coma and control fields, and the production of flattened images and object catalogs for these fields. The image processing steps are discussed in unusual detail, because the success of our method depends critically on the elimination of any systematic differences in detection efficiencies between the Coma field and the control fields; the trusting reader can skip these details. In \\S3 we present the methodology for deriving counts of Coma cluster members from the catalogs of Coma and control field images. We also enumerate several possible pitfalls in the application of this methodology to our data, concluding that none of them are of serious concern. In \\S4 the methodology is applied to derive the luminosity and size distributions of the members of the Coma cluster core, and the variation of their surface density with distance from the cluster center. In \\S5 we look in closer detail at the LF of galaxies in the Coma cluster, comparing our results to those of other clusters by other authors. We also estimate the masses required of these objects for tidal integrity. 
In \\S6 we discuss several possible evolutionary scenarios for the dwarf galaxies in Coma, and their relation to dwarfs in the field, and we give a brief summary and suggestions for future observations in \\S7. An extensive review of the properties of dwarf elliptical (dE) galaxies and their evolution is given by Ferguson \\& Binggeli (1994, FB94). Because the breadth of knowledge and speculation on dwarf galaxies is so large, we refer the interested reader to this review and minimize our rehashing of the literature. Though we have little direct knowledge of the morphology of the $M_R\\sim-12$ galaxies in the Coma cluster, observations of nearer clusters detect few dwarf irregular galaxies, and these gas-rich galaxies are unlikely to be present in the core of Coma. We will therefore, for simplicity, often refer to the dwarfs in Coma as dE galaxies. We will assume a Hubble parameter of $H_0=75$~km~s$^{-1}$~Mpc$^{-1}$, giving a distance modulus of 34.9 for the Coma cluster. Under this assumption, 1\\arcsec\\ subtends 460~pc at the Coma cluster, and our field spans a 200~kpc square. ", + "conclusions": "We have measured the LF in the Coma cluster core to fainter absolute magnitudes ($M_R=-11.4$, $L=2h_{75}^{-2}\\times 10^6 L_\\odot$) than any other LF study of which we are aware, and with a broader sensitivity in surface brightness (the recent De~Propris \\etal\\ [1995] study is to similar depth as ours). We have found that an $\\alpha=-1.4$ power law describes the LF down to luminosities typical of the Local Group dwarf spheroidal galaxies. These most extreme Coma cluster galaxies have sizes comparable to Local Group galaxies at a given absolute magnitude. This should be a useful constraint for theories of galaxy formation. We have, however, performed this measurement in a most extreme environment, one of the densest regions of the nearby Universe---there are $10^5$--$10^6$ galaxies per Mpc$^3$ in the Coma core, compared to 10-100 per Mpc$^3$ in the Local Group. 
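The angular scale quoted above follows directly from the adopted distance modulus; a quick consistency check, using only the standard small-angle formulae:

```python
import math

# Sanity check of the adopted scale (H0 = 75 km/s/Mpc in the text).
mu = 34.9                                  # distance modulus of Coma
d_pc = 10.0 ** ((mu + 5.0) / 5.0)          # distance in parsecs
d_mpc = d_pc / 1.0e6                       # ~95 Mpc

arcsec_in_rad = math.pi / (180.0 * 3600.0)
pc_per_arcsec = d_pc * arcsec_in_rad       # physical size subtended by 1 arcsec

field_arcsec = 200.0e3 / pc_per_arcsec     # angular size of the 200 kpc field
```

This gives $\\approx463$~pc per arcsecond, consistent with the rounded 460~pc quoted in the text, and shows that the 200~kpc field corresponds to roughly $7'$ on the sky.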
This extreme density complicates comparison to the general field LF because of the many additional processes which could have influenced the development of the Coma dwarfs, such as the increased pressure and tides in the cluster environment. The significance of these results to theories of galaxy or cluster formation depends upon the evolutionary history of the Coma dwarfs: are their formation and evolution identical to those of field dwarfs, merely being concentrated for our convenience in the core of the cluster? Or, at the other extreme, are they formed by processes that exist solely in the cluster core, giving them little relation to faint Local Group or field galaxies? We first discuss the possibility that Coma and field dwarfs are very similar, and then move on to scenarios for the Coma dwarfs that are increasingly disparate from those of field galaxies. We refer the reader also to the FB94 review for discussions of evolutionary scenarios for dE galaxies; the formation of dwarf galaxies (indeed all galaxies) is poorly understood---it is not even clear what the dominant physical processes are---so we will not go into depth on any particular hypothesis. \\subsection{Coma Dwarf Population Similar to the Field Population?} A pleasingly simple interpretation would be that the dwarf galaxies in the Coma cluster core are an entombed, concentrated sample of field dwarf galaxies. The giant galaxies in the Coma cluster are extremely atypical compared to the field, being nearly completely devoid of late types. Dwarf galaxies, however, are much less massive and thus less subject to dynamical friction than giants. Furthermore they are smaller, and could (depending on the scaling of $m/r_t^3$ with $L$) be less subject to tidal disruption than the giants, with our simplest calculations indicating that Local Group dwarfs would not be severely harmed by the tidal field around NGC~4874. 
Thus a dwarf galaxy probably has a better chance than a giant of inhabiting the cluster environment without substantial merging or destruction. It is likely, however, that dwarf galaxies in the Coma cluster would be stripped of gas (FB94), and thus the episodic star formation history that seems characteristic of the Local Group dwarfs (see FB94 for review) would have been truncated for the Coma dwarfs soon after the development of the hot intra-cluster medium. This of course has been advanced as an explanation of the apparent deficit of dwarf irregular galaxies in clusters relative to the field, as discussed by FB94. They also note, however, the dominance of gas-stripped dE's over gas-rich irregular dwarfs in regions of Virgo where stripping should {\\it not} be important, indicting some internal mechanism as the agent of gas stripping, rather than ram pressure. Thus it is possible that the Coma cluster medium has had little evolutionary effect on the dwarfs we observe to be resident there. In this most simplistic case, the Coma dwarfs are a typical population of low-luminosity galaxies, save that they have had no recent star formation episodes to significantly perturb their luminosities. The agreement between our faint-end LF slope and those from the Virgo studies (SBT85 and the LSB extension of Impey, Bothun, \\& Malin 1988) supports the idea of a universal mass function (or more precisely, universal stellar mass function) for dwarf galaxies. Of course should the much steeper cluster LF slopes measured by De~Propris \\etal\\ (1995) be confirmed, we would have to abandon this point of view. The main problem with the idea of a universal faint-end LF is that field surveys yield shallower slopes: $\\alpha=-0.97\\pm0.25$ from Loveday \\etal\\ (1992); $\\alpha=-1.0\\pm0.2$ from Marzke, Huchra, \\& Geller (1994); $\\alpha=-1.1$ from Ellis \\etal\\ (1995). 
In the Local Group the LF may be determined for even fainter galaxies than in our study: van~den~Bergh (1992) fits $\\alpha=-1.1$ to the Local Group galaxy LF for $M_V<-7.6$. With new Local Group dwarfs being discovered on an annual basis, however, it is possible that this slope will steepen with time. Babul \\& Rees (1992) proposed that dwarf galaxies are born with common mass functions in both cluster and field environments. Field dwarfs might effectively blow themselves up in star formation incidents, but cluster dwarfs would remain confined by the pressure of the intra-cluster medium. In this view the field LF vs.\\ cluster LF dichotomy is real and due to the demise of field dwarfs. We have found a faint-end LF in Coma similar to that in Virgo, and Thompson \\& Gregory (1993) also report a dwarf-to-giant ratio similar in Coma and Virgo, despite the former being a denser environment. Thus if the Babul \\& Rees scenario is correct, then the confinement effect may ``saturate'' at Virgo densities---the additional pressure in Coma preserves no more dwarfs, and $\\alpha\\approx-1.4$ represents the ``intrinsic'' dE LF. Countering this view somewhat is the observation that Coma cluster dwarf galaxies have similar sizes---and perhaps similar masses---to the Local Group dwarfs at comparable magnitudes. If dE evolution is strongly controlled by local pressure, would we expect Local Group and Coma cluster objects to be this similar? Another school of thought on the field/cluster LF dichotomy is that the differences are primarily due to selection effects. In particular, it is suggested that cluster LF studies are generally done with deeper images than are nearby field surveys, resulting in omission of LSB galaxies from the field surveys. 
The field survey of Ellis \\etal\\ (1995), for example, selects target galaxies from a survey with limiting surface brightness 26.5~$b_J$~mag~arcsec$^{-2}$, substantially shallower than the Virgo, Fornax, and Coma cluster surveys discussed here. Deep field redshift surveys seem to show an increase in the LF slope at higher redshifts (Ellis \\etal\\ 1995; Eales 1993), which could likewise be due to the fact that deep redshift survey targets are selected from deeper photographic or CCD exposures with more sensitivity to LSB objects. Some steepening in the LF even at fixed SB sensitivity does, however, seem to be indicated by the Ellis \\etal\\ (1995) survey, since their $z\\approx0$ and their $z\\approx0.3$ LFs are constructed from the same parent survey, with a single SB threshold, yet they find a steeper LF at higher $z$. Ferguson \\& McGaugh (1995) also suggest that local field LF slopes are depressed by failure to count LSB galaxies. This may be evidenced by the excess (over the $\\alpha=-1.0$ prediction) of nearby $-16-17$, where local information is sparse. Such behavior would be consistent also with our Coma cluster LF. Their derived LF is consistent with the Loveday \\etal\\ (1992) LF, but may disagree with the deeper Ellis \\etal\\ (1995) $M_{B_J}<-15$ LF. Should the field LFs somehow be reconciled with the Coma or Virgo LFs, or the Gronwall \\& Koo LF, we would be led to suspect some internal process ({\\it e.g.\\/} supernova heating) as the driver of dwarf galaxy evolution rather than environmental variables such as pressure or ionizing radiation field. \\subsection{Inhibition or Destruction of Cluster Dwarfs} While the Babul \\& Rees (1992) scenario invokes the intra-cluster medium to {\\it increase} the number of dwarfs in clusters over the field counts, there are of course processes that could {\\it reduce} the number of cluster dwarfs, especially within 100~kpc of the Coma cluster core. 
While our simple calculations suggested that tidal forces are not important, a more interesting test will be to compare the size distributions of the Coma dwarfs to those in the Virgo cluster. In a further publication, after re-analyzing the most diffuse objects in our image, we will compare the maximum sizes of galaxies in the Coma core to those found in Virgo. Should the latter be larger at a given magnitude, it would suggest tidal forces are indeed at work in Coma. For now we simply note that the $M_R\\approx-12$ galaxies in the Coma cluster do not appear to be depleted near the core, as might be expected were tidal forces destroying most dwarfs. \\subsection{Satellites of NGC 4874} The dwarf galaxies in our image show a strong concentration toward the giant elliptical NGC~4874, which may not be the dynamical center of the Coma cluster. It may be incorrect to think of these dwarfs as belonging to the cluster; a more appropriate description may be as satellites of NGC~4874. The spatial distribution of dwarfs around NGC~4874 is consistent with that observed for satellites of field ellipticals (Vader \\& Sandage 1991) and of field spirals (Zaritsky \\etal\\ 1993), albeit far richer. The neighborhoods of giant ellipticals may be particularly fertile for production of dwarf galaxies. It would be interesting to see whether NGC~4889, or NGC~4881, which are giant Coma cluster ellipticals with apparently lower specific frequencies of globular clusters than NGC~4874 (Harris 1987; Baum \\etal\\ 1995), also have fewer dwarf galaxies in their vicinities. Note in Figure~\\ref{radfig}, however, that the brightest galaxies in Coma may have an equally strong concentration toward NGC~4874, though the number of objects in the nearest bin is small (5). Note further that we did not detect any concentration of dwarf galaxies around the other elliptical galaxies in our Coma field---but these are each at least ten times less luminous than NGC~4874. 
If the dwarf galaxies which we have detected in the Coma core are members of an enhanced satellite population, then the comparison to the field LF is further complicated, and environmental mechanisms are of course implicated in the formation and evolution of these galaxies. In this context it is worth noting that the diffuse light, globular cluster density, and dwarf galaxy density seem to have similar radial structure on the outskirts of NGC~4874. McLaughlin \\etal\\ (1995) note that all the current data on globular cluster populations around giant ellipticals are consistent with the idea that globular clusters and cD envelopes (as opposed to the ``bodies'' of cD galaxies, which follow an $r^{-1/4}$ profile) always have common structure. NGC~4874 is classified as a cD galaxy by Schombert (1988), and our data are taken at large enough distances from its center that the envelope light is dominant, by his definitions. If the globular clusters are part of the same system as the diffuse envelope, then our data may be suggesting that the formation of the dwarf galaxies is likewise integrally connected with the mysterious origin of the cD envelope. \\subsection{Dwarfs as Shards} In the above subsections we assumed that the objects we detect in a cloud around NGC~4874 are long-lived and predate the cluster. Neither need be true---the dwarfs could be pieces of larger galaxies destroyed by the cluster tidal fields or through interaction with NGC~4874. They could also be in the process of dissolution. Note that the dwarf galaxies show roughly the same spatial distribution as the diffuse flux around NGC~4874, but with only a few percent as much total luminosity. If dwarf ``galaxies'' are constantly being formed and dissolved near NGC~4874, then each must typically live at least a few percent of the age of the system, else there will be too much diffuse light left over. 
We would perhaps expect, however, to see large numbers of very extended dwarfs were there a continual process of dissolution, and we do not. If the stellar contents of the dwarfs were to fade before becoming part of the diffuse light, then the dwarfs could be even shorter-lived, and we might not detect them in their most diffuse state. Color or spectral information would fairly quickly tell us whether in fact these objects are transient ($\\lesssim10^7$~yr) starburst phenomena. The similarity between the diffuse light and the dwarf galaxy gradient suggests a common, unknown origin. It is often suggested that cD galaxy halos are the remains of galactic cannibalism. If the dwarf galaxies are shards of NGC~4874's victims, then their colors should resemble those of the old populations of giant galaxies. The colors of field and cluster dE's, however, more closely resemble those of metal-poor globular clusters. A further point worth noting about the diffuse light is that it is, in a nutshell, diffuse. We calculate that the visible ($R$) component of the hot intracluster bremsstrahlung radiation [detected in X-rays, {\\it cf.\\/} White, Briel, \\& Henry (1993)] is several orders of magnitude weaker than the detected $R$-band flux, so that most of the $R$-band flux should be starlight. Our sensitivity is such that any unresolved clumps brighter than $M_R=-9.4$, or $3\\times10^5\\,L_\\odot$, would be detected. Yet the total flux in such clumps is still only a few percent of the diffuse flux. Even if we were to extrapolate the $\\alpha=-1.4$ LF to zero-luminosity galaxies, we would not have enough flux to make up the diffuse light. If the diffuse light is lumpy, the lumps must be quite small. Scheick \\& Kuhn (1994) similarly conclude, from a search for surface-brightness fluctuations in the diffuse light of the cluster Abell~2670, that the typical unit of luminosity in the diffuse light must be $\\le3\\times10^3\\,L_\\odot$.
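The quoted correspondence between the detection limit $M_R=-9.4$ and $3\times10^5\,L_\odot$ can be checked with the standard magnitude--luminosity relation. A minimal sketch, assuming a solar absolute $R$ magnitude of $M_{R,\odot}\approx4.42$ (our assumed value, not taken from the text):

```python
# Illustrative check (not from the paper): convert an absolute R-band
# magnitude to a luminosity in solar units.
M_R_SUN = 4.42  # assumed solar absolute R magnitude

def luminosity_solar(m_abs):
    """L/L_sun for an object of absolute R magnitude m_abs."""
    return 10.0 ** (-0.4 * (m_abs - M_R_SUN))

# Detection limit quoted in the text:
print(f"{luminosity_solar(-9.4):.2e}")  # ~3e5 L_sun, matching the quoted value
```

The result, $\sim3.4\times10^5\,L_\odot$, agrees with the quoted $3\times10^5\,L_\odot$ to the precision of the assumed solar magnitude.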
Thus if the diffuse light is formed from disrupted galaxies, the remnants are very effectively dispersed. It would seem odd for $10^6\,L_\odot$ objects to be left behind with scale sizes of $\sim200$~pc, just like Local Group dSph's. The origin of the cloud of diffuse light, globular clusters, and dwarf galaxies surrounding NGC~4874 demands further investigation, especially since the flux of starlight in the diffuse component is comparable to the total amount in galaxies of any size within the central 100~kpc. \\subsection{Implications for the Coma Cluster} Finally, the observations presented in this paper have implications for the Coma cluster as a whole. As stated earlier, the implied total amount of mass in previously undetected galaxies and globular clusters is insignificant compared to that already postulated from the brighter galaxies in the cluster (assuming mass-to-light ratios consistent with those measured in the Local Group dwarfs). The Coma dwarfs would need to have mass-to-light ratios several orders of magnitude greater than any previously measured dwarf galaxy to have a significant impact on the total mass of the cluster. We have surveyed and catalogued all objects in the center of Coma having luminosities of globular clusters or brighter, and we have also measured the diffuse halo light. It is thus highly unlikely that there remains undetected, visibly luminous matter sufficient to close the cluster. White \\etal\\ (1993) have carried out a full inventory of the X-ray gas mass in Coma and also find this to be grossly insufficient to close the cluster. Unless there is a large population of extremely faint massive objects in Coma, we are left once again with the hypothesis that Coma and clusters in general are dominated by non-luminous mass. Recent observations of microlensing by massive compact objects in our own galaxy suggest that only $\\sim20$\\% of the total galaxy mass can be accounted for in extremely faint baryonic objects (Gates \\etal\\ 1995).
Thus, even if every one of the Coma galaxies had a population of MACHOs, it would not close the cluster. These circumstances continue to suggest the presence of non-baryonic dark matter in the cluster. The discussion above also has bearing on recent claims of a baryon crisis in Coma. White \\etal\\ (1993) have suggested that the amount of baryons seen in Coma is in conflict with the bounds set by nucleosynthesis. Our observations have shown that the amount of baryons in faint, previously unknown objects is inconsequential to this argument. The extra amount of baryonic mass added to their inventory of the cluster is well within their quoted error estimates." }, "9503/astro-ph9503099_arXiv.txt": { "abstract": "Seismological observations with the Whole Earth Telescope (WET) allow the determination of the subsurface compositional structure of white dwarf stars. The hot DO PG~1159--035 has a helium surface layer with a mass of $\\sim 10^{-3} M_{\\odot}$, while the cooler DB white dwarf GD~358 has a much thinner surface helium layer of $10^{-6} M_{\\odot}$. Taken literally, these results imply either that there is no evolutionary relation between these two stars, or that there is an unknown mass-loss mechanism. In order to investigate a possible evolutionary link between these objects, we computed evolutionary sequences of white dwarf models that included time-dependent diffusive processes. We used an initial model based on the PG~1159 pulsational data, which has a surface layer $3\\times 10^{-3}M_{\\odot}$ thick, and a composition of 30\\% helium, 35\\% carbon, and 35\\% oxygen. Below this surface layer is a thin transition zone where the helium fraction falls to zero. As expected, diffusion caused a separation of the elements; a thickening surface layer of nearly pure helium overlies a deepening transition zone where the composition changes to the surface composition of the original model.
When the model reached the temperature range inhabited by GD~358 and the pulsating DB white dwarfs, this pure helium surface layer was $\\sim 10^{-5.5}M_*$ deep. The resulting evolved model is very similar to the model used by Bradley and Winget (1994) to match the pulsation observations of GD~358. The pulsation periods of this model also show a good fit to the WET observations. These results demonstrate the plausibility of a direct evolutionary path from PG~1159 stars to the much cooler DB white dwarfs by inclusion of time-dependent diffusion. A problem still remains in that our models have no hydrogen, and thus must retain their DB nature while evolving through the $T_{\\rm eff}$ range from 45,000~K to 30,000~K. Since there are no known DB stars in this range, we plan to address this problem in future calculations. ", "introduction": "Seismological exploration of white dwarf stars with the Whole Earth Telescope, described by Nather et al. (1990), has yielded unprecedented details about their subsurface compositional stratification. Winget et al. (1991) observed the pulsations of the hot ($T_{\\rm eff}$=140,000K) white dwarf PG 1159--035 (hereafter simply ``PG~1159''), uncovering over 120 independent pulsation modes. These modes appear at frequencies that are naturally explained by pulsation theory as nonradial $g$-modes. Kawaler \\& Bradley (1994: ``KB'') examined the observed pulsation periods in detail, and determined several parameters of this star, such as its mass and luminosity. In addition, they showed that PG 1159 has a subsurface composition transition zone at about $3\\times 10^{-3} M_{\\odot}$ below the surface. The surface composition of that star is roughly 33\\% He, 50\\% $^{12}$C, and 17\\% $^{16}$O by mass (Werner et al. 1991); the composition transition is where the helium mass fraction falls to zero. Thus the surface layer contains roughly $10^{-3} M_{\\odot}$ of helium.
A likely route of evolution to the PG~1159 phase begins with departure from the AGB during a thermal pulse (Iben \\& Tutukov 1984). Such a model has at most $10^{-2}M_{\\odot}$ of helium in the surface layers. With modest mass loss between the AGB and the PG~1159 stage, the helium layer mass determined by KB fits this picture. Winget et al. (1994) report observations of the pulsating white dwarf GD~358. This star is much cooler than PG~1159, and is the prototype of the DB pulsators, with an effective temperature of about 25,000~K (Thejll, Vennes, and Shipman 1991). Winget et al. (1994) found over 180 separate pulsation frequencies in this star. Model analysis of this star by Bradley \\& Winget (1994: ``BW'') successfully reproduced the pulsation frequencies in detail with a model of about the same mass as PG~1159, but with a surface layer of pure helium of approximately $1.2\\times 10^{-6} M_{\\odot}$. This is three orders of magnitude smaller than in PG~1159. This difference in helium layer mass suggests that establishing an evolutionary connection between these two objects requires finely-tuned mass loss beyond the PG~1159 stage. Alternatively, these results challenge the notion of there being a direct evolutionary relationship between these objects. The models used by KB were evolutionary models which had a single common ancestor on the AGB. The time that they took to reach the PG 1159 stage was less than $10^6$ yr. The models used by BW were also evolutionary models, with ages of several $\\times 10^7$ years after departure from the AGB. In both calculations, the compositional stratification was fixed at the start, and did not change in the course of the calculation. However, diffusion by gravitational settling, acting on reasonably short time scales, is believed to be responsible for the extremely pure surface compositions of most white dwarf stars (Schatzman 1958).
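The quoted disparity between the two helium layer masses is easy to verify from the numbers above; a trivial sketch using only values stated in the text:

```python
import math

# Helium surface layer masses from the text (in solar masses).
m_he_pg1159 = 1.0e-3   # PG 1159 (from the KB analysis)
m_he_gd358 = 1.2e-6    # GD 358 (from the BW analysis)

ratio = m_he_pg1159 / m_he_gd358
print(f"ratio ~ {ratio:.0f}, i.e. {math.log10(ratio):.1f} dex")
```

The ratio is $\sim830$, i.e. close to three orders of magnitude, as stated.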
While there has been much discussion of diffusion in the literature, there have been few attempts to include time--dependent diffusion as part of an evolutionary calculation (see for example Iben and MacDonald 1985). Other white dwarf evolution calculations that address diffusion usually assume that diffusion reaches equilibrium quickly, and so evolve models with equilibrium diffusion profiles in their interiors (e.g. Tassoul, Fontaine, \\& Winget 1990). The time scale to reach equilibrium increases dramatically with depth and with shallower composition gradients. In realistic models, diffusion takes a long time to approach equilibrium in deeper layers, and equilibrium concentrations are reached from the outside inward. Therefore, the use of equilibrium (or near-equilibrium) diffusive profiles is open to question in objects as young as PG~1159 and GD~358. BW's evolutionary models employ an equilibrium diffusion profile above the midpoint of the composition transition region, and enforce a steeper gradient below, which crudely models the approach to diffusive equilibrium in deeper layers. This paper reports on the results of our calculations of time--dependent diffusion in evolving white dwarf models. Since diffusive equilibrium is only truly achieved after an infinite time, we constructed a series of evolutionary models including time--dependent diffusive processes. We find that GD~358 is a ``snapshot'' of diffusion in progress; it represents an intermediate step in the approach to diffusive equilibrium. Its outer layer structure is a natural consequence of the evolution of the compositional structure of a PG~1159-like star. In Section II below we briefly describe the implementation of diffusion within our white dwarf evolution code. In Section III, we show the results of evolution of models with starting models taken from KB, and compare the results to the models used by BW.
Since pulsation periods and their differences are the prime observational test of the model, in Section IV we compare the pulsation properties of our models with GD 358. Section V concludes this paper with a discussion of the evolutionary link between the PG 1159--035 stars and the DB stars. ", "conclusions": "Our models demonstrate how chemical diffusion can cause PG~1159 stars to evolve into DBV stars such as GD~358 without appeal to mass loss, despite the incongruity in surface helium layer masses. This gives one definite evolutionary path of white dwarf stars from the PG~1159 stage, and is a step in clarifying white dwarf evolutionary relationships. This also demonstrates the necessity of invoking time-dependent diffusion when considering white dwarf evolution. In this investigation, we assumed a zero hydrogen fraction in all of our models. This implies that they must retain their DB nature while cooling through the DB gap from 45,000 K to 30,000 K. This is an obvious contradiction with the observed lack of DB stars in this temperature range (Wesemael et al. 1985; Liebert 1986). In future calculations with trace hydrogen present, we will examine the behavior of the hydrogen and helium layers in this range." }, "9503/astro-ph9503094_arXiv.txt": { "abstract": " ", "introduction": "In this paper we consider various constraints inferred from the possible \\ph of \\4he in the early universe. Following Protheroe, Stanev, and Berezinsky~\\cite{PSB} we note that the \\ph of this isotope can be employed to place stringent limits on early cosmic energy injections associated with, for example, decaying particles~\\cite{Lindley1,Ellis}, evaporating black holes~\\cite{Miyama}, or annihilating topological defects~\\cite{Hill1,Witten,OTW,Hill2,Brandenb,Bh5}. Our focus here will be particularly on constraining the latter scenario.
It has also been suggested that \\4he-\\ph in the early universe could be a production mechanism for the observed light-element abundances of deuterium and \\3he~\\cite{Gnedin}. In this work we will study the feasibility of such a scenario and show that the (\\hh) ratio poses a problem for it. We will show that \\ph yields $(^3{\\rm He}/^2{\\rm H})\\gg1$; since \\2h is destroyed and \\3he increases with chemical evolution, measurements of (\\hh) place severe constraints on photodisintegration. Nonthermal energy releases at high redshifts may leave various observable signatures. The cosmic microwave background radiation (hereafter, CMBR) has been measured to have a blackbody spectrum to very high accuracy~\\cite{Mather}. Any injection of energy between redshifts of $z\\simeq 10^3$ and $z\\simeq3\\times 10^6$ may produce observable distortions of the blackbody spectrum~\\cite{Wright}. Here the lower redshift represents the approximate epoch of decoupling (assuming no re-ionization), whereas the higher redshift represents the epoch at which double-Compton scattering is still efficient enough to completely thermalize significant energy releases~\\cite{Peebles}. The diffuse \\g-ray background observed at the present epoch can also be used to constrain early cosmic energy injections~\\cite{Trombka}. For redshifts $z\\la 300-1000$, pair production by \\g-rays on protons and \\4he is rare, so that the universe becomes transparent to \\g-rays with energies below $\\Emax$. Here the energy $\\Emax$ is \\begin{equation} \\Emax\\simeq {m_e^2\\over 15T}\\simeq 17\\,{\\rm GeV}\\biggl({T\\over 1{\\rm eV}}\\biggr)^{-1}\\,,\\label{Eth} \\end{equation} where $T$ is the CMBR temperature and $m_e$ is the electron mass. $\\Emax$ is related to the threshold energy for $e^+e^-$ pair creation by high-energy \\g-rays scattering off CMBR photons.
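The numerical coefficient in the threshold estimate above can be checked directly; a minimal sketch in natural units ($c=\hbar=1$), taking the electron mass as $m_e=511\,$keV:

```python
# Numerical check of E_max ~ m_e^2 / (15 T): the energy below which the
# universe is transparent to gamma-rays at CMBR temperature T.
M_E_EV = 0.511e6  # electron mass in eV

def e_max_gev(temperature_ev):
    """Transparency threshold E_max (in GeV) for CMBR temperature T (in eV)."""
    return M_E_EV**2 / (15.0 * temperature_ev) / 1.0e9

print(f"{e_max_gev(1.0):.1f} GeV")  # ~17 GeV at T = 1 eV, as quoted
```

The result, $\approx17.4\,$GeV at $T=1\,$eV, reproduces the quoted prefactor, and the function scales as $T^{-1}$ as in the equation.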
Any radiation with energies above this threshold is effectively instantaneously \\lq\\lq recycled\\rq\\rq\\ by pair production ($\\gamma\\gamma_{\\rm CMBR}\\to e^+e^-$) and inverse Compton scattering of the created electrons and positrons ($e\\gamma_{\\rm CMBR}\\to e\\gamma$). These processes yield a degraded \\g-ray spectrum with generic energy dependence $\\propto E_{\\gamma}^{-1.5}$ well below $\\Emax$, before steepening and finally cutting off at $\\Emax$~\\cite{Ellis}. Significant energy releases in the form of high-energy \\g-rays and charged particles at epochs with redshifts below $z\\simeq 300-1000$ may therefore produce a present-day \\g-ray background and are subject to constraint. For redshifts smaller than $z\\simeq 10^6$, stringent constraints on various forms of injected energy can also be derived from the possible photodisintegration of \\4he and the concomitant production of deuterium and \\3he. The injection of high-energy particles and \\g-rays above the energy threshold $\\Emax$ will initiate an epoch of cascade nucleosynthesis subsequent to the epoch of standard primordial nucleosynthesis at $T\\sim100\\keV$. The abundance yields of \\2h and \\3he produced by \\4he-\\ph during cascade nucleosynthesis are quite independent of the primary \\g-ray and charged particle energy spectra. Deuterium and \\3he abundance yields depend only on the amount of injected energy and the injection epoch. For the detailed calculations leading to these conclusions the reader is referred to the work by Protheroe, Stanev, and Berezinsky~\\cite{PSB}. The nucleosynthesis limits on the release of energy into the primordial gas can be up to a factor of $\\sim 100$ more stringent than equivalent limits on energy releases derived from distortions of the CMBR blackbody spectrum. For redshifts $z\\ga 10^6$, corresponding to CMBR temperatures of $T\\ga 200\\,$eV, the \\ph of \\4he is inefficient.
This is because the energy threshold for pair production falls below the energy threshold for \\4he-\\ph, $\\Emax\\la E_{th}^{^4{\\rm He}}$. The best nucleosynthesis limits on decaying particles and annihilating topological defects in the cosmic temperature range $1\\keV\\la T\\la10\\keV$ come from the possible \\ph of deuterium~\\cite{Ellis,Dimopoulos1}. These limits are stronger than analogous limits from distortions of the CMBR blackbody spectrum. In this narrow temperature range, limits on decaying particles and topological defects may, in fact, be even more stringent due to the effects of injecting antinucleons. Antinucleons may be produced during \\g\\g$_{\\rm CMBR}$ pair production for \\g-ray energies $E_{\\gamma}\\ga 10^5\\,$GeV, or when there is a significant hadronic decay channel for a massive decaying particle or topological defect. These antinucleons can then annihilate on \\4he, thereby producing approximately equal amounts of \\2h and \\3he~\\cite{Balestra}. We will, however, not pursue this idea further here. For temperatures above $T\\simeq1\\keV$ there are virtually no constraints on decaying particles and topological defects from distortions of the CMBR blackbody spectrum. However, stringent limits on decaying particles and topological defects may be obtained from the injection of hadrons (for a review see~\\cite{Ellis}). An injection of mesons and baryons generally increases the neutron-to-proton ratio and results in increased \\4he mass fractions ($1\\MeV\\ga T\\ga 100\\keV$) and/or increased \\2h and \\3he abundances ($100\\keV\\ga T\\ga 10\\keV$;~\\cite{Reno}). It has been suggested that a combination of \\4he hadrodestruction and \\2h,\\3he photodestruction induced by a late-decaying particle ($T\\sim 3\\keV$) may bring big-bang-produced light-element abundances close to observationally inferred abundance constraints for a wide range of fractional contributions of baryons to the closure density, $\\Omega_b$~\\cite{Dimopoulos2}.
The observational signatures of such scenarios are primordial isotope ratios of (\\hh)$\\simeq 2-3$ and $^6{\\rm Li}/^7{\\rm Li}\\sim 1$, in contrast to the predictions of a standard, or inhomogeneous, big-bang freeze-out from nuclear statistical equilibrium. For a wide range of parameters, such as decaying-particle lifetimes and hadronic branching ratios, these models would overproduce \\2h and \\3he, and therefore the calculations by Dimopoulos {\\it et al.}~\\cite{Dimopoulos2} also serve as constraints on particle parameters and abundances. We note here that the high (\\hh) ratio may in fact be a severe problem for such scenarios. In this paper we restrict ourselves to constraints derived from the effects of nonthermal energy injections at epochs with redshifts $z\\la10^6$. The outline of the paper is as follows. In Section 2 we briefly review the observationally inferred light-element abundances of \\2h and \\3he. We then consider \\4he-\\ph scenarios and their compatibility with the observations. In Section 3 we study the effects of possible energy injection by superconducting strings, ordinary strings, and magnetic monopoles on the primordial \\2h and \\3he abundances, the distortions of the CMBR blackbody spectrum, and the diffuse \\g-ray background. In these scenarios we assume that such topological defects would radiate at a level such that they could produce the observed highest-energy cosmic rays at the present epoch. Conclusions are drawn in Section 4. Throughout this paper we mostly use $c=\\hbar=1$. ", "conclusions": "We have discussed limits on cosmic high-energy particle injection derived from $^4$He photodisintegration, CMBR distortions, and the diffuse \\g-ray background. We have found that the nucleosynthesis limits give the most stringent constraints for epochs with redshift $z\\ga5\\times10^3$, whereas at lower redshifts particle injection is predominantly limited by its contribution to the diffuse \\g-ray background (see Fig.~1).
These constraints were applied to topological defects potentially radiating supermassive GUT-scale (``X'') particles which subsequently decay into high-energy leptons and hadrons. The history of high-energy particle injection is largely determined within these defect models. The model-dependent parameters to be fixed are the number density of X-particles radiated per unit time and the effective fragmentation function for the decay products of these X-particles. We have assumed that the flux of these decay products contributes significantly to the presently observed HECR flux. This allowed us to formulate our constraints as lower limits on the fractional energy release at HECR energies ($\\simeq10^{20}\\eV$), which is mainly determined by the \\g-ray fragmentation function. We have found that for reasonable \\g-ray fragmentation functions superconducting strings cannot explain the HECR flux without violating at least the bound coming from \\4he photodisintegration. In contrast, magnetic monopole and ordinary cosmic string models producing observable HECR fluxes are most severely constrained, but not yet ruled out, by their contribution to the diffuse \\g-ray background. In the second part of the paper we have studied the possibility that the presently observed deuterium was produced by an epoch of \\4he-\\ph subsequent to a standard nucleosynthesis scenario. Such an epoch may have been initiated by the decay of particles, the annihilation of topological defects, or, in general, the production of energetic \\g-rays by any source. We have found that only a small fraction ($\\la10\\%$) of the observed deuterium may have its origin in the process of \\4he-\\ph since, otherwise, anomalously large primordial (\\hh) ratios would result.
A larger fraction of the primordial deuterium contributed by this process would require either that standard assumptions of chemical evolution break down or that there existed \\g-ray sources in the early universe radiating with extremely \\lq\\lq soft\\rq\\rq\\ \\g-ray energy spectra. We have shown that a scenario which employs massive black holes to reprocess the light-element abundances from a standard big bang nucleosynthesis process~\\cite{Gnedin} is in conflict with \\2h and \\3he observations. We have also used the anomaly in the (\\hh) ratios produced during \\4he-\\ph to slightly tighten constraints on the abundances and parameters of decaying particles and topological defects." } }