we present a numerical study of dark matter halo concentrations in λcdm and self-similar cosmologies. we show that the relation between concentration, c, and peak height, ν, exhibits the smallest deviations from universality if halo masses are defined with respect to the critical density of the universe. these deviations can be explained by the residual dependence of concentration on the local slope of the matter power spectrum, n, which affects both the normalization and shape of the c-ν relation. in particular, there is no well-defined floor in the concentration values. instead, the minimum concentration depends on redshift: at fixed ν, halos at higher z experience steeper slopes n, and thus have lower minimum concentrations. we show that the concentrations in our simulations can be accurately described by a universal seven-parameter function of only ν and n. this model matches our λcdm results to <~ 5% accuracy up to z = 6, and matches scale-free ωm = 1 models to <~ 15%. the model also reproduces the low concentration values of earth-mass halos at z ≈ 30, and thus correctly extrapolates over 16 orders of magnitude in halo mass. the predictions of our model differ significantly from all models previously proposed in the literature at high masses and redshifts. our model is in excellent agreement with recent lensing measurements of cluster concentrations.
a universal model for halo concentrations
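The ν-and-n dependence described above can be sketched as a toy double power law in peak height whose normalization and minimum shift with the local power-spectrum slope n. The functional form and every parameter value below are illustrative placeholders, not the paper's calibrated seven-parameter fit:

```python
def concentration(nu, n, c0=6.6, c1=1.4, nu0=6.8, nu1=1.4, alpha=1.1, beta=1.7):
    """Toy c(nu, n) model: a double power law in peak height nu whose
    normalization and turnaround point shift with the local power-spectrum
    slope n (n is negative, so steeper slopes lower the concentration floor).
    All parameter values are placeholders, not the paper's fit."""
    c_min = c0 + c1 * n       # minimum concentration, set by the slope n
    nu_min = nu0 + nu1 * n    # peak height at which the minimum is reached
    return 0.5 * c_min * ((nu / nu_min) ** -alpha + (nu / nu_min) ** beta)

# steeper (more negative) slope n -> lower concentration at fixed nu,
# illustrating why there is no universal concentration floor
print(concentration(1.0, -2.0))  # LCDM-like slope
print(concentration(1.0, -2.5))  # steeper slope, lower concentration
```

The key qualitative behavior, a redshift-dependent minimum, emerges because halos of fixed ν see a steeper effective slope n at higher z.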
if dark matter is thermally decoupled from the visible sector, the observed relic density can potentially be obtained via freeze-in production of dark matter. typically in such models it is assumed that the dark matter is connected to the thermal bath through feeble renormalisable interactions. here, instead, we consider the case in which the hidden and visible sectors are coupled only via non-renormalisable operators. this is arguably a more generic realisation of the dark matter freeze-in scenario, as it does not require the introduction of diminutive renormalisable couplings. we examine general aspects of freeze-in via non-renormalisable operators in a number of toy models and present several motivated implementations in the context of beyond the standard model (bsm) physics. specifically, we study models related to the peccei-quinn mechanism and z′ portals.
ultraviolet freeze-in
we present the first catalog and data release of the swift-bat agn spectroscopic survey. we analyze optical spectra of the majority of the detected agns (77%, 642/836) based on their 14-195 kev emission in the 70-month swift-bat all-sky catalog. this includes redshift determination, absorption and emission-line measurements, and black hole mass and accretion rate estimates for the majority of obscured and unobscured agns (74%, 473/642), with 340 measured for the first time. with ∼90% of sources at z < 0.2, the survey represents a significant advance in the census of hard x-ray-selected agns in the local universe. in this first catalog paper, we describe the spectroscopic observations and data sets, and our initial spectral analysis. the fwhms of the emission lines show broad agreement with the x-ray obscuration (∼94%), such that sy 1-1.8 have n_h < 10^21.9 cm^-2, and seyfert 2 have n_h > 10^21.9 cm^-2. seyfert 1.9, however, show a range of column densities. compared to narrow-line agns in the sdss, the x-ray-selected agns have a larger fraction of dusty host galaxies (hα/hβ > 5), suggesting that these types of agn are missed in optical surveys. using the [o iii] λ5007/hβ and [n ii] λ6583/hα emission-line diagnostic, about half of the sources are classified as seyferts; ∼15% reside in dusty galaxies that lack an hβ detection, but for which the upper limits on line emission imply either a seyfert or a liner; ∼15% are in galaxies with weak or no emission lines despite high-quality spectra; and a few percent each are liners, composite galaxies, h ii regions, or known beamed agns.
bat agn spectroscopic survey. i. spectral measurements, derived quantities, and agn demographics
in cosmological first-order phase transitions (pt) with relativistic bubble walls, high-energy shells of particles generically form on the inner and outer sides of the walls. shells from different bubbles can then collide with energies much larger than the pt or inflation scales, and with sizeable rates, realising a `bubbletron'. as an application, we calculate the maximal dark matter mass $m_{dm}$ that can be produced from shell collisions in a u(1) gauge pt, for scales of the pt $v_\varphi$ from mev to $10^{16}$ gev. we find for example $m_{dm} \sim 10^6/10^{11}/10^{15}$ gev for $v_\varphi \sim 10^{-2}/10^3/10^8$ gev. the gravity wave signal sourced at the pt then links pulsar timing arrays with the pev scale, lisa with the zev one, and the einstein telescope with grand unification.
bubbletrons
the dark energy spectroscopic instrument (desi) will precisely constrain cosmic expansion and the growth of structure by collecting ~40 million extragalactic redshifts across ~80% of cosmic history and one-third of the sky. the emission line galaxy (elg) sample, which will comprise about one-third of all desi tracers, will be used to probe the universe over the 0.6 < z < 1.6 range, including the 1.1 < z < 1.6 range, which is expected to provide the tightest constraints. we present the target selection for the desi survey validation (sv) and main survey elg samples, which relies on the imaging of the legacy surveys. the main elg selection consists of a g-band magnitude cut and a (g - r) versus (r - z) color box, while the sv selection explores extensions of the main selection boundaries. the main elg sample is composed of two disjoint subsamples, which have target densities of about 1940 deg^-2 and 460 deg^-2, respectively. we first characterize their photometric properties and density variations across the footprint. we then analyze the desi spectroscopic data that have been obtained from 2020 december to 2021 december in the sv and main survey. we establish a preliminary criterion for selecting reliable redshifts, based on the [o ii] flux measurement, and assess its performance. using this criterion, we are able to present the spectroscopic efficiency of the main elg selection, along with its redshift distribution. we thus demonstrate that the main selection 1940 deg^-2 subsample alone should provide 400 deg^-2 and 460 deg^-2 reliable redshifts in the 0.6 < z < 1.1 and the 1.1 < z < 1.6 ranges, respectively.
target selection and validation of desi emission line galaxies
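A magnitude-cut-plus-color-box selection like the one described above can be sketched as a simple predicate. All boundary values here (g_lim, the r-z range, the sloped g-r edge) are illustrative placeholders, not the published DESI cuts:

```python
def select_elg(g, r, z_mag, g_lim=23.5, rz_min=0.3, rz_max=1.6,
               gr_slope=1.15, gr_offset=-0.15):
    """Toy ELG cut: a g-band magnitude limit plus a box in the
    (g-r) vs (r-z) color plane. Boundary values are illustrative
    placeholders, not the published DESI selection."""
    gr, rz = g - r, r - z_mag
    bright_enough = g < g_lim                      # g-band magnitude cut
    in_color_box = (rz_min < rz < rz_max and       # redshift-sensitive color
                    gr < gr_slope * rz + gr_offset)  # sloped blue/star cut
    return bright_enough and in_color_box

print(select_elg(g=23.0, r=22.8, z_mag=22.2))  # blue, star-forming-like colors
print(select_elg(g=24.0, r=22.8, z_mag=22.2))  # too faint in g
```

In practice the two disjoint subsamples would correspond to two such boxes with different boundaries.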
we study the phenomenology of a recent string construction with a quantum mechanically stable dark energy. a mild supersymmetry protects the vacuum energy but also allows tev scale superpartner masses. the construction is holographic in the sense that the 4d spacetime is generated from "spacetime pixels" originating from five-branes wrapped over metastable five-cycles of the compactification. the cosmological constant scales with the pixel number. an instability in the construction leads to cosmic expansion. this also causes more five-branes to wind up in the geometry, leading to a slowly decreasing cosmological constant, which we interpret as an epoch of inflation followed by (pre-)heating when a rare event occurs in which the number of pixels increases by an order one fraction. the sudden appearance of radiation triggers an exponential increase in the number of pixels. dark energy has a time-varying equation of state that is compatible with current bounds, and could be constrained further by future data releases. the pixelated nature of the universe also implies a large-l cutoff on the angular power spectrum of cosmological observables. we also use this pixel description to study the thermodynamics of de sitter space, finding rough agreement with effective field theory considerations.
pixelated dark energy
the unitarity of time evolution, or colloquially the conservation of probability, sits at the heart of our descriptions of fundamental interactions via quantum field theory. the implications of unitarity for scattering amplitudes are well understood, for example through the optical theorem and cutting rules. in contrast, the implications for in-in correlators in curved spacetime and the associated wavefunction of the universe, which are measured by cosmological surveys, are much less transparent. for fields of any mass in de sitter spacetime with a bunch-davies vacuum and general local interactions, which need not be invariant under de sitter isometries, we show that unitarity implies an infinite set of relations among the coefficients ψn of the wavefunction of the universe with n fields, which we name cosmological optical theorem. for contact diagrams, our result dictates the analytic structure of ψn and strongly constrains its form. for example, any correlator with an odd number of conformally-coupled scalar fields and any number of massless scalar fields must vanish. for four-point exchange diagrams, the cosmological optical theorem yields a simple and powerful relation between ψ3 and ψ4, or equivalently between the bispectrum and trispectrum. as explicit checks of this relation, we discuss the trispectrum in single-field inflation from graviton exchange and self-interactions. moreover, we provide a detailed derivation of the relation between the total-energy pole of cosmological correlators and flat-space amplitudes. we provide analogous formulae for sub-diagram singularities. our results constitute a new, powerful tool to bootstrap cosmological correlators.
the cosmological optical theorem
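The total-energy-pole relation mentioned in the abstract can be stated schematically (a sketch of the known folklore result, not the paper's precise derivation):

```latex
\lim_{k_T \to 0} \psi_n(k_1,\dots,k_n)
  \;\sim\; \frac{\mathcal{A}_n^{\text{flat}}}{k_T^{\,p}},
\qquad k_T \equiv \sum_{a=1}^{n} k_a ,
```

where $\mathcal{A}_n^{\text{flat}}$ is the corresponding flat-space scattering amplitude and the power $p$ is fixed by the dimensionality of the interactions. Since $k_T$ can only vanish for complex momenta, the pole is reached by analytic continuation, which is what makes it a useful bootstrap input.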
this review of scaling theories of magnetohydrodynamic (mhd) turbulence aims to put the developments of the last few years in the context of the canonical time line (from kolmogorov to iroshnikov-kraichnan to goldreich-sridhar to boldyrev). it is argued that beresnyak's (valid) objection that boldyrev's alignment theory, at least in its original form, violates the reduced-mhd rescaling symmetry can be reconciled with alignment if the latter is understood as an intermittency effect. boldyrev's scalings, a version of which is recovered in this interpretation, and the concept of dynamic alignment (equivalently, local 3d anisotropy) are thus an example of a physical theory of intermittency in a turbulent system. the emergence of aligned structures naturally brings into play reconnection physics and thus the theory of mhd turbulence becomes intertwined with the physics of tearing, current-sheet disruption and plasmoid formation. recent work on these subjects by loureiro, mallet et al. is reviewed and it is argued that we may, as a result, finally have a reasonably complete picture of the mhd turbulent cascade (forced, balanced, and in the presence of a strong mean field) all the way to the dissipation scale. this picture appears to reconcile beresnyak's advocacy of the kolmogorov scaling of the dissipation cutoff (as $\mathrm{Re}^{3/4}$) with boldyrev's aligned cascade. it turns out also that these ideas open the door to some progress in understanding mhd turbulence without a mean field - mhd dynamo - whose saturated state is argued to be controlled by reconnection and to contain, at small scales, a tearing-mediated cascade similar to its strong-mean-field counterpart (this is a new result).
on the margins of this core narrative, standard weak-mhd-turbulence theory is argued to require some adjustment - and a new scheme for such an adjustment is proposed - to take account of the determining part that a spontaneously emergent 2d condensate plays in mediating the alfvén-wave cascade from a weakly interacting state to a strongly turbulent (critically balanced) one. this completes the picture of the mhd cascade at large scales. a number of outstanding issues are surveyed: imbalanced turbulence (for which a new, tentative theory is proposed), residual energy, mhd turbulence at subviscous scales, and decaying mhd turbulence (where there has been dramatic progress recently, and reconnection again turned out to feature prominently). finally, it is argued that the natural direction of research is now away from the fluid mhd theory and into kinetic territory - and then, possibly, back again. the review lays no claim to objectivity or completeness, focusing on topics and views that the author finds most appealing at the present moment.
mhd turbulence: a biased review
fast radio bursts are millisecond-duration, bright radio signals (fluence 0.1-100 jy ms) emitted from extragalactic sources of unknown physical origin. the recent chime/frb and stare2 detection of an extremely bright (fluence ~MJy ms) radio burst from the galactic magnetar sgr 1935+2154 supports the hypothesis that (at least some) fast radio bursts are emitted by magnetars at cosmological distances. in follow-up observations totalling 522.7 h on source, we detect two bright radio bursts with fluences of 112 ± 22 jy ms and 24 ± 5 jy ms, respectively. both bursts appear to be affected by interstellar scattering and we measure significant linear and circular polarization for the fainter burst. the bursts are separated in time by ~1.4 s, suggesting a non-poissonian, clustered emission process—similar to those seen in some repeating fast radio bursts. together with the burst reported by chime/frb and stare2, as well as a much fainter burst seen by fast (fluence 60 mJy ms), our observations demonstrate that sgr 1935+2154 can produce bursts with apparent energies spanning roughly seven orders of magnitude, and that the burst rate is comparable across this range. this raises the question of whether these four bursts arise from similar physical processes, and whether the fast radio burst population distribution extends to very low energies (~10^30 erg, isotropic equivalent).
detection of two bright radio bursts from magnetar sgr 1935 + 2154
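The fluence-to-energy conversion behind the "seven orders of magnitude" statement is simple geometry: E = 4πd² × fluence × bandwidth. The ~9 kpc distance to SGR 1935+2154 and the ~1 GHz effective bandwidth used below are assumptions for illustration only:

```python
import math

JY_MS = 1e-23 * 1e-3   # erg cm^-2 Hz^-1 per (Jy ms), since 1 Jy = 1e-23 erg/s/cm^2/Hz
KPC = 3.086e21         # cm per kiloparsec

def iso_energy(fluence_jy_ms, distance_kpc, bandwidth_hz=1e9):
    """Isotropic-equivalent burst energy E = 4*pi*d^2 * fluence * bandwidth.
    Distance (~9 kpc) and effective bandwidth (~1 GHz) are illustrative
    assumptions, not the paper's adopted values."""
    d = distance_kpc * KPC
    return 4 * math.pi * d**2 * fluence_jy_ms * JY_MS * bandwidth_hz

e_bright = iso_energy(112, 9)     # brighter of the two new bursts
e_faint = iso_energy(60e-3, 9)    # much fainter FAST burst (60 mJy ms)
print(f"{e_bright:.1e} erg vs {e_faint:.1e} erg")
```

The energy ratio between bursts at the same distance is just the fluence ratio, which is how a single magnetar can populate several decades of the burst energy distribution.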
we present a full result for the equation of state (eos) in 2+1+1 flavour (up/down, strange and charm quarks are present) lattice qcd. we extend this analysis and give the equation of state in 2+1+1+1 flavour qcd. in order to describe the evolution of the universe from temperatures of several hundred gev down to several tens of mev, we also include the known effects of the electroweak theory and give the effective degrees of freedom. as another application of lattice qcd we calculate the topological susceptibility (chi) up to the few-gev temperature region. these two results, eos and chi, can be used to predict the dark matter axion's mass in the post-inflation scenario and/or give the relationship between the axion's mass and the universal axionic angle, which acts as an initial condition of our universe.
lattice qcd for cosmology
it has been known for decades that the observed number of baryons in the local universe falls about 30-40 per cent short [1,2] of the total number of baryons predicted [3] by big bang nucleosynthesis, as inferred [4,5] from density fluctuations of the cosmic microwave background and seen during the first 2-3 billion years of the universe in the so-called `lyman α forest' [6,7] (a dense series of intervening h i lyman α absorption lines in the optical spectra of background quasars). a theoretical solution to this paradox locates the missing baryons in the hot and tenuous filamentary gas between galaxies, known as the warm-hot intergalactic medium. however, it is difficult to detect them there because the largest by far constituent of this gas—hydrogen—is mostly ionized and therefore almost invisible in far-ultraviolet spectra with typical signal-to-noise ratios [8,9]. indeed, despite large observational efforts, only a few marginal claims of detection have been made so far [2,10]. here we report observations of two absorbers of highly ionized oxygen (o vii) in the high-signal-to-noise-ratio x-ray spectrum of a quasar at a redshift higher than 0.4. these absorbers show no variability over a two-year timescale and have no associated cold absorption, making the assumption that they originate from the quasar's intrinsic outflow or the host galaxy's interstellar medium implausible. the o vii systems lie in regions characterized by large (four times larger than average [11]) galaxy overdensities and their number (down to the sensitivity threshold of our data) agrees well with numerical simulation predictions for the long-sought warm-hot intergalactic medium. we conclude that the missing baryons have been found.
observations of the missing baryons in the warm-hot intergalactic medium
compared to primordial perturbations on large scales, roughly larger than 1 mpc, those on smaller scales are not severely constrained. we revisit the issue of probing small-scale primordial perturbations using gravitational waves (gws), based on the fact that, when large-amplitude primordial perturbations on small scales exist, gws with relatively large amplitudes are induced at second order in scalar perturbations, and these induced gws can be probed by both existing and planned gravitational-wave projects. we use accurate methods to calculate these induced gws and take into account sensitivities of different experiments to induced gws carefully, to report existing and expected limits on the small-scale primordial spectrum.
gravitational waves induced by scalar perturbations as probes of the small-scale primordial spectrum
the discovery of the accelerating universe in the late 1990s was a watershed moment in modern cosmology, as it indicated the presence of a fundamentally new, dominant contribution to the energy budget of the universe. evidence for dark energy, the new component that causes the acceleration, has since become extremely strong, owing to an impressive variety of increasingly precise measurements of the expansion history and the growth of structure in the universe. still, one of the central challenges of modern cosmology is to shed light on the physical mechanism behind the accelerating universe. in this review, we briefly summarize the developments that led to the discovery of dark energy. next, we discuss the parametric descriptions of dark energy and the cosmological tests that allow us to better understand its nature. we then review the cosmological probes of dark energy. for each probe, we briefly discuss the physics behind it and its prospects for measuring dark energy properties. we end with a summary of the current status of dark energy research.
dark energy two decades after: observables, probes, consistency tests
we present the spectroscopic confirmation of a protocluster at z = 7.88 behind the galaxy cluster abell 2744 (hereafter a2744-z7p9od). using jwst nirspec, we find seven galaxies within a projected radius of 60 kpc. although the galaxies reside in an overdensity around ≳20× greater than a random volume, they do not show strong lyα emission. we place 2σ upper limits on the rest-frame equivalent width of < 16-28 å. based on the tight upper limits to the lyα emission, we constrain the volume-averaged neutral fraction of hydrogen in the intergalactic medium to be x_hi > 0.45 (68% c.i.). using an empirical m_uv-m_halo relation for individual galaxies, we estimate that the total halo mass of the system is ≳4 × 10^11 m⊙. likewise, the line-of-sight velocity dispersion is estimated to be 1100 ± 200 km s^-1. using an empirical relation, we estimate the present-day halo mass of a2744-z7p9od to be ~2 × 10^15 m⊙, comparable to the coma cluster. a2744-z7p9od is the highest-redshift spectroscopically confirmed protocluster to date, demonstrating the power of jwst to investigate the connection between dark-matter halo assembly and galaxy formation at very early times with medium-deep observations at < 20 hr total exposure time. follow-up spectroscopy of the remaining photometric candidates of the overdensity will further refine the features of this system and help characterize the role of such overdensities in cosmic reionization.
early results from glass-jwst. xiv. a spectroscopically confirmed protocluster 650 million years after the big bang
we study the gravitational-wave (gw) signatures of clouds of ultralight bosons around black holes (bhs) in binary inspirals. these clouds, which are formed via superradiance instabilities for rapidly rotating bhs, produce distinct effects in the population of bh masses and spins, and a continuous monochromatic gw signal. we show that the presence of a binary companion greatly enriches the dynamical evolution of the system, most remarkably through the existence of resonant transitions between the growing and decaying modes of the cloud (analogous to rabi oscillations in atomic physics). these resonances have rich phenomenological implications for current and future gw detectors. notably, the amplitude of the gw signal from the clouds may be reduced, and in many cases terminated, much before the binary merger. the presence of a boson cloud can also be revealed in the gw signal from the binary through the imprint of finite-size effects, such as spin-induced multipole moments and tidal love numbers. the time dependence of the cloud's energy density during the resonance leads to a sharp feature, or at least attenuation, in the contribution from the finite-size terms to the waveforms. the observation of these effects would constrain the properties of putative ultralight bosons through precision gw data, offering new probes of physics beyond the standard model.
probing ultralight bosons with binary black holes
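The cloud's "continuous monochromatic GW signal" has a frequency set directly by the boson mass: the radiated gravitons carry twice the boson's energy, so f ≈ μ/(πħ) up to small binding-energy corrections. A rough order-of-magnitude sketch:

```python
import math

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV * s

def cloud_gw_frequency(boson_mass_ev):
    """Order-of-magnitude GW frequency of a boson cloud's monochromatic
    signal: radiated gravitons carry twice the boson energy, so
    f ~ 2*(mu/hbar)/(2*pi) = mu/(pi*hbar). This ignores the O(alpha^2)
    binding-energy correction to the level frequencies."""
    return boson_mass_ev / (math.pi * HBAR_EV_S)

# a ~1e-12 eV boson radiates near ~500 Hz, inside the LIGO/Virgo band
print(f"{cloud_gw_frequency(1e-12):.0f} Hz")
```

This is why ground-based detectors probe boson masses around 10^-13 to 10^-11 eV, while lighter bosons fall into the LISA band.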
one class of single-field inflationary models compatible with the recently-conjectured swampland criteria would be those in which the hubble slow-roll parameter ε_h is not the same as ε_v ∼ (v'/v)^2. however, a roadblock for these models (with a convex potential) lies in the unacceptably high tensor-to-scalar ratio, r, generically predicted by them. in this work, illustrating through an explicit example, we point out that having a non-bunch-davies component to the initial state of cosmological perturbations makes the value of r compatible with observations. in this way, we lay down a new path even for standard models of slow-roll inflation to be consistent with the swampland criteria by invoking deviations from the bunch-davies initial state.
avoiding the string swampland in single-field inflation: excited initial states
both absorption and emission-line studies show that cold gas around galaxies is commonly outflowing at speeds of several hundred km s^-1. this observational fact poses a severe challenge to our theoretical models of galaxy evolution, since most feedback mechanisms (e.g. supernova feedback) accelerate hot gas, and the time-scale it takes to accelerate a blob of cold gas via a hot wind is much larger than the time it takes to destroy the blob. we revisit this long-standing problem using three-dimensional hydrodynamical simulations with radiative cooling. our results confirm previous findings that cooling is often not efficient enough to prevent the destruction of cold gas. however, we also identify regions of parameter space where the cooling efficiency of the mixed, `warm' gas is sufficiently large to contribute new comoving cold gas, which can significantly exceed the original cold gas mass. this happens whenever t_cool,mix/t_cc < 1, where t_cool,mix is the cooling time of the mixed warm gas and t_cc is the cloud-crushing time. this criterion is always satisfied for a large enough cloud. cooling `focuses' stripped material on to the tail where mixing takes place and new cold gas forms. a sufficiently large simulation domain is crucial to capturing this behaviour.
the growth and entrainment of cold gas in a hot wind
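The growth criterion above can be evaluated directly once the cloud-crushing time t_cc = sqrt(χ) r_cloud / v_wind is written out (χ is the cloud/wind density contrast). The example numbers below are illustrative, not simulation parameters from the paper:

```python
import math

def cloud_crushing_time(chi, r_cloud_pc, v_wind_kms):
    """t_cc = sqrt(chi) * r_cloud / v_wind, with chi the cloud/wind
    density contrast. Returns the time in Myr."""
    PC_KM = 3.086e13           # km per parsec
    SEC_PER_MYR = 3.156e13     # seconds per Myr
    t_cc_s = math.sqrt(chi) * r_cloud_pc * PC_KM / v_wind_kms
    return t_cc_s / SEC_PER_MYR

def cold_gas_grows(t_cool_mix_myr, chi, r_cloud_pc, v_wind_kms):
    """Growth criterion from the abstract: t_cool,mix / t_cc < 1.
    Larger clouds have longer t_cc, so a big enough cloud always
    satisfies it. Example numbers are illustrative only."""
    return t_cool_mix_myr / cloud_crushing_time(chi, r_cloud_pc, v_wind_kms) < 1

# chi=100 contrast, 1000 km/s wind, mixed-gas cooling time 0.5 Myr:
# a 100 pc cloud grows, while a 1 pc cloud is destroyed before it cools
print(cold_gas_grows(t_cool_mix_myr=0.5, chi=100, r_cloud_pc=100, v_wind_kms=1000))
print(cold_gas_grows(t_cool_mix_myr=0.5, chi=100, r_cloud_pc=1, v_wind_kms=1000))
```

Because t_cc grows linearly with cloud size at fixed χ and wind speed, the criterion picks out a minimum survivable cloud radius for given wind conditions.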
gravitational waves can provide an accurate measurement of the luminosity distance to the source but cannot provide the source redshift unless the degeneracy between mass and redshift can be broken. this makes it essential to infer the redshift of the source independently to measure the expansion history of the universe. we show that by exploiting the clustering scale of the gravitational wave sources with galaxies of a known redshift, we can infer the expansion history from redshift unknown gravitational wave sources. by using gravitational wave sources of unknown redshift that are detectable from the network of gravitational wave detectors with advanced ligo design sensitivity, we will be able to obtain accurate and precise measurements of the local hubble constant, the expansion history of the universe, and the gravitational wave bias parameter, which captures the distribution of gravitational wave sources with respect to the redshift tracer distribution. while we showcase its application to low redshift gravitational waves, this technique will be applicable also to the high redshift gravitational wave sources detectable from laser interferometer space antenna (lisa), cosmic explorer (ce), and einstein telescope (et). moreover, this method will also be applicable to samples of supernovae and fast radio bursts with unknown or photometric redshifts.
accurate precision cosmology with redshift unknown gravitational wave sources
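The core of the method is that the GW source supplies a luminosity distance while the cross-correlated galaxies supply a redshift; at low redshift these combine through the Hubble law. A minimal sketch of that last step (first-order in z only, ignoring the statistical cross-correlation machinery):

```python
C_KMS = 299792.458  # speed of light in km/s

def hubble_constant_low_z(redshift, lum_distance_mpc):
    """At low redshift the Hubble law gives H0 ~ c*z / d_L: the GW
    standard siren provides d_L, the clustering with galaxies provides z.
    First-order approximation, valid only for z << 1."""
    return C_KMS * redshift / lum_distance_mpc

# a source at z = 0.023 with d_L = 100 Mpc implies H0 ~ 69 km/s/Mpc
print(f"{hubble_constant_low_z(0.023, 100):.1f} km/s/Mpc")
```

At higher redshift (LISA, CE, ET sources) the same idea requires the full distance-redshift relation rather than this linear approximation.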
the detection of binary black hole coalescences by ligo and virgo has aroused interest in primordial black holes (pbhs), because they could be both the progenitors of these black holes and a compelling candidate for dark matter (dm). pbhs are formed soon after the enhanced scalar perturbations re-enter the horizon during the radiation-dominated era, which would inevitably induce gravitational waves as well. searching for such scalar induced gravitational waves (sigws) provides an elegant way to probe pbhs. we perform the first direct search for the signals of sigws accompanying the formation of pbhs in the north american nanohertz observatory for gravitational waves (nanograv) 11-year dataset. no statistically significant detection has been made, and hence we place a stringent upper limit on the abundance of pbhs at 95% confidence level. in particular, less than one part in a million of the total dm mass could come from pbhs in the mass range of [2 × 10^-3, 7 × 10^-1] m⊙.
pulsar timing array constraints on primordial black holes with nanograv 11-year dataset
in this paper we focus on inflationary dynamics in the context of einstein-gauss-bonnet gravitational theories. we investigate the implications of the slow-roll condition on the slow-roll indices, and we examine how the inflationary dynamical evolution is affected by the presence of the gauss-bonnet coupling to the scalar field. to exemplify our analysis, we investigate how the dynamics of inflationary cubic-order, quartic-order, and exponential scalar potentials are affected by the nontrivial gauss-bonnet coupling to the scalar field. as we demonstrate, it is possible to obtain a viable phenomenology compatible with the observational data, although the canonical scalar field theory with cubic- and quartic-order potentials does not yield phenomenologically acceptable results. in addition, with regard to the exponential potential example, the einstein-gauss-bonnet extension of the single canonical scalar field model has an inherent mechanism that can trigger the graceful exit from inflation. furthermore, we introduce a bottom-up reconstruction technique where, by fixing the tensor-to-scalar ratio and the hubble rate as a function of the e-folding number, one is capable of reproducing the einstein-gauss-bonnet theory which generates the aforementioned quantities. we illustrate how the method works by using some relatively simple examples.
viable inflation in scalar-gauss-bonnet gravity and reconstruction from observational indices
we present constraints on extensions of the minimal cosmological models dominated by dark matter and dark energy, λcdm and wcdm, by using a combined analysis of galaxy clustering and weak gravitational lensing from the first-year data of the dark energy survey (des y1) in combination with external data. we consider four extensions of the minimal dark energy-dominated scenarios: (1) nonzero curvature ωk, (2) number of relativistic species neff different from the standard value of 3.046, (3) time-varying equation-of-state of dark energy described by the parameters w0 and wa (alternatively quoted by the values at the pivot redshift, wp, and wa), and (4) modified gravity described by the parameters μ0 and σ0 that modify the metric potentials. we also consider external information from planck cosmic microwave background measurements; baryon acoustic oscillation measurements from sdss, 6df, and boss; redshift-space distortion measurements from boss; and type ia supernova information from the pantheon compilation of datasets. constraints on curvature and the number of relativistic species are dominated by the external data; when these are combined with des y1, we find ωk = 0.0020 (+0.0037, -0.0032) at the 68% confidence level, and the upper limit neff < 3.28 (3.55) at 68% (95%) confidence, assuming a hard prior neff > 3.0. for the time-varying equation-of-state, we find the pivot value (wp, wa) = (-0.91 (+0.19, -0.23), -0.57 (+0.93, -1.11)) at pivot redshift zp = 0.27 from des alone, and (wp, wa) = (-1.01 (+0.04, -0.04), -0.28 (+0.37, -0.48)) at zp = 0.20 from des y1 combined with external data; in either case we find no evidence for the temporal variation of the equation of state. for modified gravity, we find the present-day value of the relevant parameters to be σ0 = 0.43 (+0.28, -0.29) from des y1 alone, and (σ0, μ0) = (0.06 (+0.08, -0.07), -0.11 (+0.42, -0.46)) from des y1 combined with external data. these modified-gravity constraints are consistent with predictions from general relativity.
dark energy survey year 1 results: constraints on extended cosmological models from galaxy clustering and weak lensing
this paper describes the processing applied to the cleaned, time-ordered information obtained from the planck high frequency instrument (hfi) with the aim of producing photometrically calibrated maps in temperature and (for the first time) in polarization. the data from the entire 2.5-year hfi mission include almost five full-sky surveys. hfi observes the sky over a broad range of frequencies, from 100 to 857 ghz. to obtain the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. the 545 and 857 ghz data are calibrated using models of planetary atmospheric emission. the lower frequencies (from 100 to 353 ghz) are calibrated using the time-variable cosmological microwave background dipole, which we call the orbital dipole. this source of calibration only depends on the satellite velocity with respect to the solar system. using a cmb temperature of tcmb = 2.7255 ± 0.0006 k, it permits an independent measurement of the amplitude of the cmb solar dipole (3364.3 ± 1.5 μk), which is approximately 1σ higher than the wmap measurement with a direction that is consistent between the two experiments. we describe the pipeline used to produce the maps of intensity and linear polarization from the hfi timelines, and the scheme used to set the zero level of the maps a posteriori. we also summarize the noise characteristics of the hfi maps in the 2015 planck data release and present some null tests to assess their quality. finally, we discuss the major systematic effects and in particular the leakage induced by flux mismatch between the detectors that leads to spurious polarization signal.
planck 2015 results. viii. high frequency instrument data processing: calibration and maps
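The quoted solar dipole amplitude translates directly into the solar system's speed through the CMB: to first order in β = v/c, motion produces a dipole ΔT = T_cmb (v/c). A one-line check of that arithmetic:

```python
C_KMS = 299792.458  # speed of light in km/s

def dipole_velocity(dipole_uk, t_cmb_k=2.7255):
    """To first order in beta = v/c, motion through the CMB produces a
    dipole dT = T_cmb * (v/c), so the measured amplitude fixes the
    solar-system speed: v = c * dT / T_cmb."""
    return C_KMS * (dipole_uk * 1e-6) / t_cmb_k

# the 3364.3 uK solar dipole corresponds to ~370 km/s
print(f"{dipole_velocity(3364.3):.1f} km/s")
```

The same relation, applied to the annually modulated orbital velocity of the satellite (~30 km/s, known from orbit determination), is what makes the orbital dipole an absolute calibrator.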
we present radio observations of 23 optically-discovered tidal disruption events (tdes) on timescales of about 500-3200 days post-discovery. we detect 9 new tdes that did not have detectable radio emission at earlier times, indicating a late-time brightening after several hundred (and up to 2300) days; an additional 6 tdes exhibit radio emission whose origin is ambiguous or may be attributed to the host galaxy or an agn. we also report new rising components in two tdes previously detected in the radio (iptf16fnl and at2019dsg) at ~1000 days. while the radio emission in some of the detected tdes peaked on a timescale of ~2-4 years, more than half of the sample still shows rising emission. the range of luminosities for the sample is 10^37-10^39 erg/s, about two orders of magnitude below the radio luminosity of the relativistic tde sw1644+57. our data set indicates that about 40% of all optical tdes are detected in the radio hundreds to thousands of days after discovery, and that this is probably more common than early radio emission peaking at ~100 days. using an equipartition analysis, we find evidence for a delayed launch of the radio-emitting outflows, with delay timescales of ~500-2000 days, inferred velocities of ~0.02-0.15c, and kinetic energies of ~10^47-10^49 erg. we rule out off-axis relativistic jets as a viable explanation for this population, and conclude that delayed outflows are a more likely explanation, such as from delayed disk formation. finally, we find comparable densities in the circumnuclear environments of these tdes as for those with early radio emission, and find the tdes still rising in luminosity are consistent with free expansion. we conclude that late radio emission marks a fairly ubiquitous but heretofore overlooked phase of tde evolution.
ubiquitous late radio emission from tidal disruption events
mapping nearby galaxies at apache point observatory (manga) is an integral-field spectroscopic survey that is one of three core programs in the fourth-generation sloan digital sky survey (sdss-iv). manga's 17 pluggable optical fiber-bundle integral field units (ifus) will observe a sample of 10,000 nearby galaxies distributed throughout the sdss imaging footprint (focusing particularly on the north galactic cap). in each pointing these ifus are deployed across a 3° field; they yield spectral coverage of 3600-10300 å at a typical resolution r ∼ 2000, and sample the sky with 2″ diameter fiber apertures with a total bundle fill factor of 56%. observing over such a large field and wavelength range makes it particularly challenging to obtain uniform and complete spatial coverage and resolution at all wavelengths and across each entire fiber array. data quality is affected by the ifu construction technique, chromatic and field differential refraction, the adopted dithering strategy, and many other effects. we use numerical simulations to constrain the hardware design and observing strategy for the survey with the aim of ensuring consistent data quality that meets the survey science requirements while permitting maximum observational flexibility. we find that manga science goals are best achieved with ifus composed of a regular hexagonal grid of optical fibers with rms displacement of 5 μm or less from their nominal packing position; this goal is met by the manga hardware, which achieves 3 μm rms fiber placement. we further show that manga observations are best obtained in sets of three 15-minute exposures dithered along the vertices of a 1.44 arcsec equilateral triangle; these sets form the minimum observational unit, and are repeated as needed to achieve a combined signal-to-noise ratio of 5 å^-1 per fiber in the r-band continuum at a surface brightness of 23 ab arcsec^-2.
in order to ensure uniform coverage and delivered image quality, we require that the exposures in a given set be obtained within a 60 minute interval of each other in hour angle, and that all exposures be obtained at airmass ≲ 1.2 (i.e., within 1-3 hr of transit depending on the declination of a given field).
observing strategy for the sdss-iv/manga ifu galaxy survey
neutrinos remain mysterious. as an example, enhanced self-interactions (ν si ), which would have broad implications, are allowed. at the high neutrino densities within core-collapse supernovae, ν si should be important, but robust observables have been lacking. we show that ν si make neutrinos form a tightly coupled fluid that expands under relativistic hydrodynamics. the outflow becomes either a burst or a steady-state wind; which occurs here is uncertain. though the diffusive environment where neutrinos are produced may make a wind more likely, further work is needed to determine when each case is realized. in the burst-outflow case, ν si increase the duration of the neutrino signal, and even a simple analysis of sn 1987a data has powerful sensitivity. for the wind-outflow case, we outline several promising ideas that may lead to new observables. combined, these results are important steps toward solving the 35-year-old puzzle of how ν si affect supernovae.
toward powerful probes of neutrino self-interactions in supernovae
gleam, the galactic and extragalactic all-sky mwa survey, is a survey of the entire radio sky south of declination +25° at frequencies between 72 and 231 mhz, made with the mwa using a drift scan method that makes efficient use of the mwa's very large field-of-view. we present the observation details, imaging strategies, and theoretical sensitivity for gleam. the survey ran for two years, the first year using 40-khz frequency resolution and 0.5-s time resolution, and the second year using 10-khz frequency resolution and 2-s time resolution. the resulting image resolution and sensitivity depend on observing frequency, sky pointing, and image weighting scheme. at 154 mhz, the image resolution is approximately 2.5 × 2.2/cos (δ + 26.7°) arcmin with sensitivity to structures up to ~10° in angular size. we provide tables to calculate the expected thermal noise for gleam mosaics depending on pointing and frequency and discuss limitations to achieving theoretical noise in stokes i images. we discuss challenges, and their solutions, that arise for gleam, including ionospheric effects on source positions and linearly polarised emission, and the instrumental polarisation effects inherent to the mwa's primary beam.
gleam: the galactic and extragalactic all-sky mwa survey
the two-point correlation function (2pcf) is the most widely used tool for quantifying the spatial distribution of galaxies. since the distribution of galaxies is determined by galaxy formation physics as well as the underlying cosmology, fitting an observed correlation function yields valuable insights into both. the calculation for a 2pcf involves computing pair-wise separations and, consequently, the computing time scales quadratically with the number of galaxies. the next-generation galaxy surveys are slated to observe many millions of galaxies, and computing the 2pcf for such surveys would be prohibitively time-consuming. additionally, modern modelling techniques require the 2pcf to be calculated thousands of times on simulated galaxy catalogues of at least equal size to the data, which would be completely unfeasible for the next-generation surveys. thus, calculating the 2pcf forms a substantial bottleneck in improving our understanding of the fundamental physics of the universe, and we need high-performance software to compute the correlation function. in this paper, we present corrfunc - a suite of highly optimized, openmp parallel clustering codes. the improved performance of corrfunc arises from both efficient algorithms as well as software design that suits the underlying hardware of modern cpus. corrfunc can compute a wide range of 2d and 3d correlation functions in either simulation (cartesian) space or on-sky coordinates. corrfunc runs efficiently in both single- and multithreaded modes and can compute a typical two-point projected correlation function [wp(rp)] for ∼1 million galaxies within a few seconds on a single thread. corrfunc is designed to be both user-friendly and fast and is publicly available at https://github.com/manodeep/corrfunc.
corrfunc - a suite of blazing fast correlation functions on the cpu
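the quadratic pair-count scaling discussed above is easy to see in code. below is a minimal numpy sketch of a brute-force 2pcf using the natural estimator dd/rr - 1, with rr taken analytically for a uniform field; this illustrates the o(n^2) bottleneck and is not corrfunc's optimized implementation (it also ignores edge and periodicity corrections):

```python
import numpy as np

def pair_counts(pos, edges):
    """brute-force pair counts in separation bins: o(n^2) in the
    number of points, which is the scaling corrfunc optimizes."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)        # unique pairs only
    counts, _ = np.histogram(d[iu], bins=edges)
    return counts

def xi_natural(pos, edges, boxsize):
    """natural estimator xi = dd/rr - 1, with rr computed analytically
    for a uniform random field (no edge or periodicity corrections)."""
    n = len(pos)
    dd = pair_counts(pos, edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / boxsize ** 3
    rr = 0.5 * n * density * shell_vol         # expected uniform pair counts
    return dd / rr - 1.0

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(500, 3))   # unclustered mock points
xi = xi_natural(pos, np.linspace(5.0, 25.0, 6), boxsize=100.0)
```

for a few hundred points this runs instantly, but the n × n distance matrix makes the approach hopeless at the millions-of-galaxies scale corrfunc targets.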
the hubble constant ($h_0$) tension is one of the major open problems in modern cosmology. this tension is the discrepancy, ranging from 4 to 6 $\sigma$, between the $h_0$ value estimated locally with the combination of supernovae ia (sne ia) + cepheids and the cosmological $h_0$ obtained through the study of the cosmic microwave background (cmb) radiation. the approaches adopted in dainotti et al. 2021 (apj) and dainotti et al. 2022 (galaxies) are introduced. through a binning division of the pantheon sample of sne ia (scolnic et al. 2018), the value of $h_0$ has been estimated in each of the redshift-ordered bins and fitted with a function that decreases with redshift. the results show a decreasing trend of $h_0$ with redshift. if this is not due to astrophysical biases or residual redshift evolution of the sne ia parameters, it can be explained in light of modified gravity theories, e.g., the $f(r)$ scenarios. we also briefly describe the possible impact of high-$z$ probes on the hubble constant tension, such as gamma-ray bursts (grbs) and quasars (qsos), reported in dainotti et al. 2022 (galaxies) and lenart et al. 2022 (apj), respectively.
the hubble constant tension: current status and future perspectives through new cosmological probes
the rotation curves of spiral galaxies exhibit a diversity that has been difficult to understand in the cold dark matter (cdm) paradigm. we show that the self-interacting dark matter (sidm) model provides excellent fits to the rotation curves of a sample of galaxies with asymptotic velocities in the 25 - 300 km /s range that exemplify the full range of diversity. we assume only the halo concentration-mass relation predicted by the cdm model and a fixed value of the self-interaction cross section. in dark-matter-dominated galaxies, thermalization due to self-interactions creates large cores and reduces dark matter densities. in contrast, thermalization leads to denser and smaller cores in more luminous galaxies and naturally explains the flatness of rotation curves of the highly luminous galaxies at small radii. our results demonstrate that the impact of the baryons on the sidm halo profile and the scatter from the assembly history of halos as encoded in the concentration-mass relation can explain the diverse rotation curves of spiral galaxies.
self-interacting dark matter can explain diverse galactic rotation curves
single phonon excitations are sensitive probes of light dark matter in the kev-gev mass window. for anisotropic target materials, the signal depends on the direction of the incoming dark matter wind and exhibits a daily modulation. we discuss in detail the various sources of anisotropy and carry out a comparative study of 26 crystal targets, focused on sub-mev dark matter benchmarks. we compute the modulation reach for the most promising targets, corresponding to the cross section where the daily modulation can be observed for a given exposure, which allows us to combine the strength of dark matter-phonon couplings and the amplitude of daily modulation. we highlight al2o3 (sapphire), cawo4, and h-bn (hexagonal boron nitride) as the best polar materials for recovering a daily modulation signal, which feature o(1-100)% variations of detection rates throughout the day, depending on the dark matter mass and interaction. the directional nature of single phonon excitations offers a useful handle to mitigate backgrounds, which is crucial for fully realizing the discovery potential of near future experiments.
directional detectability of dark matter with single phonon excitations: target comparison
we present the discovery of nine quasars at z ∼ 6 identified in the sloan digital sky survey (sdss) imaging data. this completes our survey of z ∼ 6 quasars in the sdss footprint. our final sample consists of 52 quasars at 5.7 < z ≤ 6.4, including 29 quasars with z_ab ≤ 20 mag selected from 11,240 deg^2 of the sdss single-epoch imaging survey (the main survey), 10 quasars with 20 ≤ z_ab ≤ 20.5 selected from 4223 deg^2 of the sdss overlap regions (regions with two or more imaging scans), and 13 quasars down to z_ab ≈ 22 mag from the 277 deg^2 in stripe 82. they span a wide luminosity range of -29.0 ≤ m1450 ≤ -24.5. this well-defined sample is used to derive the quasar luminosity function (qlf) at z ∼ 6. after combining our sdss sample with two faint (m1450 ≥ -23 mag) quasars from the literature, we obtain the parameters for a double power-law fit to the qlf. the bright-end slope β of the qlf is well constrained to be β = -2.8 ± 0.2. due to the small number of low-luminosity quasars, the faint-end slope α and the characteristic magnitude m1450* are less well constrained, with α = -1.90 (+0.58, -0.44) and m1450* = -25.2 (+1.2, -3.8) mag. the spatial density of luminous quasars, parametrized as ρ(m1450 < -26, z) = ρ(z=6) × 10^{k(z-6)}, drops rapidly from z ∼ 5 to 6, with k = -0.72 ± 0.11. based on our fitted qlf and assuming an intergalactic medium (igm) clumping factor of c = 3, we find that the observed quasar population cannot provide enough photons to ionize the z ∼ 6 igm at ∼90% confidence. quasars may still provide a significant fraction of the required photons, although much larger samples of faint quasars are needed for more stringent constraints on the quasar contribution to reionization.
the final sdss high-redshift quasar sample of 52 quasars at z>5.7
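for concreteness, the double power-law qlf and the density evolution quoted above can be evaluated with a few lines of python. the slopes, m1450*, and k are the abstract's best-fitting values; the normalization phi_star is an arbitrary placeholder, not a value quoted from the paper:

```python
import numpy as np

# best-fitting parameters quoted in the abstract
ALPHA, BETA = -1.90, -2.8      # faint- and bright-end slopes
M_STAR = -25.2                 # characteristic magnitude m1450*
K = -0.72                      # density-evolution index

def qlf(m1450, phi_star=1.0):
    """standard double power-law luminosity function at z ~ 6.
    phi_star is an arbitrary normalization here (placeholder)."""
    faint = 10.0 ** (0.4 * (ALPHA + 1.0) * (m1450 - M_STAR))
    bright = 10.0 ** (0.4 * (BETA + 1.0) * (m1450 - M_STAR))
    return phi_star / (faint + bright)

def density_evolution(z):
    """rho(m1450 < -26, z) / rho(z = 6) = 10^{k (z - 6)}."""
    return 10.0 ** (K * (z - 6.0))

# spatial density of luminous quasars drops rapidly from z = 5 to z = 6
ratio = density_evolution(5.0) / density_evolution(6.0)
```

with k = -0.72, the density ratio between z = 5 and z = 6 is 10^0.72, i.e. roughly a factor of five per unit redshift, matching the "drops rapidly" statement.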
aims: this paper describes the polarimetric and helioseismic imager on the solar orbiter mission (so/phi), the first magnetograph and helioseismology instrument to observe the sun from outside the sun-earth line. it is the key instrument meant to address the top-level science question: how does the solar dynamo work and drive connections between the sun and the heliosphere? so/phi will also play an important role in answering the other top-level science questions of solar orbiter, while holding the potential for a rich return of further science. methods: so/phi measures the zeeman effect and the doppler shift in the fe i 617.3 nm spectral line. to this end, the instrument carries out narrow-band imaging spectro-polarimetry using a tunable linbo3 fabry-perot etalon, while the polarisation modulation is done with liquid crystal variable retarders. the line and the nearby continuum are sampled at six wavelength points and the data are recorded by a 2k × 2k cmos detector. to save valuable telemetry, the raw data are reduced on board, including being inverted under the assumption of a milne-eddington atmosphere, although simpler reduction methods are also available on board. so/phi is composed of two telescopes; one, the full disc telescope, covers the full solar disc at all phases of the orbit, while the other, the high resolution telescope, can resolve structures as small as 200 km on the sun at closest perihelion. the high heat load generated through proximity to the sun is greatly reduced by the multilayer-coated entrance windows to the two telescopes, which allow less than 4% of the total sunlight to enter the instrument, most of it in a narrow wavelength band around the chosen spectral line. results: so/phi was designed and built by a consortium with partners in germany, spain, and france. the flight model was delivered to airbus defence and space, stevenage, and successfully integrated into the solar orbiter spacecraft.
a number of innovations were introduced compared with earlier space-based spectropolarimeters, thus allowing so/phi to fit into the tight mass, volume, power and telemetry budgets provided by the solar orbiter spacecraft and to meet the (e.g. thermal) challenges posed by the mission's highly elliptical orbit.
the polarimetric and helioseismic imager on solar orbiter
dark matter in the milky way may annihilate directly into γ rays, producing a monoenergetic spectral line; since few conventional astrophysical processes produce such a feature, detecting one would be strong evidence for dark matter annihilation or decay. we search for spectral lines in the fermi large area telescope observations of the milky way halo in the energy range 200 mev-500 gev using analysis methods from our most recent line searches. the main improvements relative to previous works are our use of 5.8 years of data reprocessed with the pass 8 event-level analysis and the additional data resulting from the modified observing strategy designed to increase exposure of the galactic center region. we search in five sky regions selected to optimize sensitivity to different theoretically motivated dark matter scenarios and find no significant detections. in addition to presenting the results from our search for lines, we also investigate the previously reported tentative detection of a line at 133 gev using the new pass 8 data.
updated search for spectral lines from galactic dark matter interactions with pass 8 data from the fermi large area telescope
we present cosmopower, a suite of neural cosmological power spectrum emulators providing orders-of-magnitude acceleration for parameter estimation from two-point statistics analyses of large-scale structure (lss) and cosmic microwave background (cmb) surveys. the emulators replace the computation of matter and cmb power spectra from boltzmann codes; thus, they do not need to be re-trained for different choices of astrophysical nuisance parameters or redshift distributions. the matter power spectrum emulation error is less than 0.4 per cent in the wavenumber range k ∈ [10^-5, 10] mpc^-1 for redshift z ∈ [0, 5]. cosmopower emulates cmb temperature, polarization, and lensing potential power spectra in the 5σ region of parameter space around the planck best-fitting values with an error ≲ 10 per cent of the expected shot noise for the forthcoming simons observatory. cosmopower is showcased on a joint cosmic shear and galaxy clustering analysis from the kilo-degree survey, as well as on a stage iv euclid-like simulated cosmic shear analysis. for the cmb case, cosmopower is tested on a planck 2018 cmb temperature and polarization analysis. the emulators always recover the fiducial cosmological constraints with differences in the posteriors smaller than sampling noise, while providing a speed-up factor of up to o(10^4) to the complete inference pipeline. this acceleration allows posterior distributions to be recovered in just a few seconds, as we demonstrate in the planck likelihood case. cosmopower is written entirely in python, can be interfaced with all commonly used cosmological samplers, and is publicly available at: https://github.com/alessiospuriomancini/cosmopower.
cosmopower: emulating cosmological power spectra for accelerated bayesian inference from next-generation surveys
aims: we estimate the mass of the inner (< 20 kpc) milky way and the axis ratio of its inner dark matter halo using globular clusters as tracers. at the same time, we constrain the distribution in phase-space of the globular cluster system around the galaxy. methods: we use the gaia data release 2 catalogue of proper motions for 75 globular clusters and recent measurements, obtained with the hubble space telescope, of the proper motions of another 20 distant clusters. we describe the globular cluster system with a distribution function (df) with two components: a flat, rotating disc-like one and a rounder, more extended halo-like one. while fixing the milky way's disc and bulge, we let the mass and shape of the dark matter halo vary, and we fit these two parameters, together with six others describing the df, with a bayesian method. results: we find the mass of the galaxy within 20 kpc to be m(<20 kpc) = 1.91 (+0.18, -0.17) × 10^11 m⊙, of which mdm(<20 kpc) = 1.37 (+0.18, -0.17) × 10^11 m⊙ is in dark matter, and the density axis ratio of the dark matter halo to be q = 1.30 ± 0.25. assuming a concentration-mass relation, this implies a virial mass mvir = (1.3 ± 0.3) × 10^12 m⊙. our analysis rules out oblate (q < 0.8) and strongly prolate halos (q > 1.9) with 99% probability. our preferred model reproduces well the observed phase-space distribution of globular clusters and has a disc component that closely resembles that of the galactic thick disc. the halo component follows a power-law density profile ρ ∝ r^-3.3, has a mean rotational velocity of vrot ≃ -14 km s^-1 at 20 kpc, and has a mildly radially biased velocity distribution (β ≃ 0.2 ± 0.07, which varies significantly with radius only within the inner 15 kpc). we also find that our distinction between disc and halo clusters resembles, although not fully, the observed distinction between metal-rich ([fe/h] > -0.8) and metal-poor ([fe/h] ≤ -0.8) cluster populations.
mass and shape of the milky way's dark matter halo with globular clusters from gaia and hubble
above a critical dark matter-nucleus scattering cross section any terrestrial direct detection experiment loses sensitivity to dark matter, since the earth crust, atmosphere, and potential shielding layers start to block off the dark matter particles. this critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. however, this treatment overestimates the stopping power of the earth crust. therefore the obtained bounds should be considered as conservative. we perform monte carlo simulations to determine the precise value of the critical cross section for various direct detection experiments and compare them to other dark matter constraints in the low mass regime. in this region we find parameter space where typical underground and surface detectors are completely blind to dark matter. this "hole" in the parameter space can hardly be closed with an increase in the detector exposure. dedicated surface or high-altitude experiments may be the only way to directly probe this part of the parameter space.
how blind are underground and surface detectors to strongly interacting dark matter?
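the monte carlo treatment of dark matter stopping in the overburden can be illustrated with a deliberately simplified toy model: a straight-line path, a poisson-distributed number of elastic scatters, and isotropic center-of-mass scattering, for which the fractional kinetic-energy loss per scatter follows from two-body kinematics. this is a sketch of the general idea with made-up parameter values, not the simulation used in the paper:

```python
import numpy as np

def simulate_energy(e0, m_chi, m_n, n_target, sigma, depth, rng):
    """toy 1-d monte carlo of one dm particle crossing an overburden:
    straight-line path, poisson number of scatters with mean free path
    1 / (n_target * sigma), isotropic cm scattering. the fractional
    kinetic-energy loss per scatter is
    2 m_chi m_n / (m_chi + m_n)^2 * (1 - cos theta_cm)."""
    mean_free_path = 1.0 / (n_target * sigma)
    n_scatters = rng.poisson(depth / mean_free_path)
    r = 2.0 * m_chi * m_n / (m_chi + m_n) ** 2
    cos_theta = rng.uniform(-1.0, 1.0, size=n_scatters)
    return e0 * np.prod(1.0 - r * (1.0 - cos_theta))

rng = np.random.default_rng(1)
# illustrative numbers (natural units of the toy, not physical values):
# equal masses maximize energy transfer; ~20 scatters expected en route
energies = np.array([simulate_energy(1.0, 1.0, 1.0, 1.0, 1.0, 20.0, rng)
                     for _ in range(2000)])
mean_surviving_fraction = energies.mean()
```

even this crude model shows the key effect: the surviving energy falls roughly exponentially with the expected number of scatters, which is what the analytic average-energy-loss treatment approximates and, per the paper, what it overestimates in stopping power.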
the silcc (simulating the life-cycle of molecular clouds) project aims to self-consistently understand the small-scale structure of the interstellar medium (ism) and its link to galaxy evolution. we simulate the evolution of the multiphase ism in a (500 pc)^2 × ±5 kpc region of a galactic disc, with a gas surface density of σ_gas = 10 m⊙ pc^-2. the flash 4 simulations include an external potential, self-gravity, magnetic fields, heating and radiative cooling, time-dependent chemistry of h2 and co considering (self-)shielding, and supernova (sn) feedback, but omit shear due to galactic rotation. we explore sn explosions at different rates in high-density regions (peak), in random locations with a gaussian distribution in the vertical direction (random), in a combination of both (mixed), or clustered in space and time (clus/clus2). only models with self-gravity and a significant fraction of sne that explode in low-density gas are in agreement with observations. without self-gravity, and in models with peak driving, the formation of h2 is strongly suppressed. for decreasing sn rates, the h2 mass fraction increases significantly, from <10 per cent for high sn rates, i.e. 0.5 dex above the kennicutt-schmidt (ks) relation, to 70-85 per cent for low sn rates, i.e. 0.5 dex below ks. for an intermediate sn rate, clustered driving results in slightly more h2 than random driving due to the more coherent compression of the gas in larger bubbles. magnetic fields have little impact on the final disc structure but affect the dense gas (n ≳ 10 cm^-3) and delay h2 formation. most of the volume is filled with hot gas (∼80 per cent within ±150 pc). for all but peak driving, a vertically expanding warm component of atomic hydrogen indicates a fountain flow. we highlight that individual chemical species populate different ism phases and cannot be accurately modelled with temperature- or density-based phase cut-offs.
the silcc (simulating the lifecycle of molecular clouds) project - i. chemical evolution of the supernova-driven ism
with the physical value of the higgs mass, the standard model symmetry-restoration phase transition is a smooth cross-over. we study the thermodynamics of the cross-over using numerical lattice monte carlo simulations of an effective su(2) × u(1) gauge+higgs theory, significantly improving on previously published results. we measure the higgs field expectation value; thermodynamic quantities like pressure, energy density, speed of sound, and heat capacity; and screening masses associated with the higgs and z fields. while the cross-over is smooth, it is very well defined, with a width of only ∼5 gev. we measure the cross-over temperature from the maximum of the susceptibility of the higgs condensate, with the result tc = 159.5 ± 1.5 gev. outside of the narrow cross-over region the perturbative results agree well with nonperturbative ones.
standard model cross-over on the lattice
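the tc measurement described above, locating the cross-over at the maximum of the higgs-condensate susceptibility, can be sketched generically. the mock data below (a smooth condensate with fluctuations that are largest near 159.5 gev) merely stands in for real lattice configurations and is not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
temps = np.linspace(150.0, 170.0, 41)          # 0.5 gev temperature grid

# mock condensate: a smooth cross-over centred on tc_true, with the
# fluctuations (and hence the susceptibility) largest at the centre
tc_true, width = 159.5, 2.5
chi = []
for t in temps:
    mean_phi = 0.5 * (1.0 - np.tanh((t - tc_true) / width))
    sigma = 0.05 + 0.2 * np.exp(-(((t - tc_true) / width) ** 2))
    samples = rng.normal(mean_phi, sigma, size=20000)
    chi.append(samples.var())                  # chi ∝ <phi^2> - <phi>^2

tc_est = temps[int(np.argmax(chi))]            # cross-over temperature
```

the estimator simply reads off the grid temperature where the sample susceptibility peaks; in a real analysis the peak would be resolved by interpolation and the error budget dominated by the lattice systematics.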
the intersection of the cosmic and neutrino frontiers is a rich field where much discovery space still remains. neutrinos play a pivotal role in the hot big bang cosmology, influencing the dynamics of the universe over numerous decades in cosmological history. recent studies have made tremendous progress in understanding some properties of cosmological neutrinos, primarily their energy density. upcoming cosmological probes will measure the energy density of relativistic particles with higher precision, but could also start probing other properties of the neutrino spectra. when combined with results from terrestrial experiments, cosmology becomes an even more acute probe of new physics related to neutrinos or even beyond the standard model (bsm). any discordance between laboratory and cosmological data sets may reveal new bsm physics and/or suggest alternative models of cosmology. we give examples of the intersection between terrestrial and cosmological probes in the neutrino sector, and briefly discuss the possibilities of what different laboratory experiments may see in conjunction with cosmological observatories.
synergy between cosmological and laboratory searches in neutrino physics
we study the abundance of substructure in the matter density near galaxies using alma science verification observations of the strong lensing system sdp.81. we present a method to measure the abundance of subhalos around galaxies using interferometric observations of gravitational lenses. using simulated alma observations we explore the effects of various systematics, including antenna phase errors and source priors, and show how such errors may be measured or marginalized. we apply our formalism to alma observations of sdp.81. we find evidence for the presence of a m = 10^{8.96 ± 0.12} m⊙ subhalo near one of the images, with a significance of 6.9σ in a joint fit to data from bands 6 and 7; the effect of the subhalo is also detected in both bands individually. we also derive constraints on the abundance of dark matter (dm) subhalos down to m ∼ 2 × 10^7 m⊙, pushing down to the mass regime of the smallest detected satellites in the local group, where there are significant discrepancies between the observed population of luminous galaxies and predicted dm subhalos. we find hints of additional substructure, warranting further study using the full sdp.81 data set (including, for example, the spectroscopic imaging of the lensed carbon monoxide emission). we compare the results of this search to the predictions of λcdm halos, and find that given current uncertainties in the host halo properties of sdp.81, our measurements of substructure are consistent with theoretical expectations. observations of larger samples of gravitational lenses with alma should be able to improve the constraints on the abundance of galactic substructure.
detection of lensing substructure using alma observations of the dusty galaxy sdp.81
the existence of optical-ultraviolet tidal disruption events (tdes) could be considered surprising because their electromagnetic output was originally predicted to be dominated by x-ray emission from an accretion disk. yet over the last decade, the growth of optical transient surveys has led to the identification of a new class of optical transients occurring exclusively in galaxy centers, many of which are considered to be tdes. here we review the observed properties of these events, identified based on a shared set of both photometric and spectroscopic properties. we present a homogeneous analysis of 33 sources that we classify as robust tdes, and which we divide into classes. the criteria used here to classify tdes will possibly get updated as new samples are collected and potential additional diversity of tdes is revealed. we also summarize current measurements of the optical-ultraviolet tde rate, as well as the mass function and luminosity function. many open questions exist regarding the current sample of events. we anticipate that the search for answers will unlock new insights in a variety of fields, from accretion physics to galaxy evolution.
optical-ultraviolet tidal disruption events
we show that the evolution of interacting massive particles in the de sitter bulk can be understood at leading order as a series of resonant decay and production events. from this perspective, we classify the cosmological collider signals into local and nonlocal categories with drastically different physical origins. this further allows us to derive a cutting rule for efficiently extracting these cosmological collider signals in an analytical fashion. our cutting rule is a practical way for extracting cosmological collider signals in model building, and can be readily implemented as symbolic computational packages in the future.
cutting rule for cosmological collider signals: a bulk evolution perspective
the non-linear process of cosmic structure formation produces gravitationally bound overdensities of dark matter known as halos. the abundances, density profiles, ellipticities, and spins of these halos can be tied to the underlying fundamental particle physics that governs dark matter at microscopic scales. thus, macroscopic measurements of dark matter halos offer a unique opportunity to determine the underlying properties of dark matter across the vast landscape of dark matter theories. this white paper summarizes the ongoing rapid development of theoretical and experimental methods, as well as new opportunities, to use dark matter halo measurements as a pillar of dark matter physics.
snowmass2021 cosmic frontier white paper: dark matter physics from halo measurements
in this paper, we argue and show numerically that the threshold to form primordial black holes from an initial spherically symmetric perturbation is, to an excellent approximation, universal whenever it is given in terms of the compaction function averaged over a sphere of radius rm, where rm is the scale at which the compaction function is maximal. this can be understood as the requirement that, for a black hole to form, each shell of the averaged compaction function should have an amplitude exceeding the so-called harada-yoo-kohri limit. for a radiation-dominated universe we argue, supported by numerical simulations, that this limit is δc = 0.40, which is slightly below the one quoted in the literature. additionally, we show that the profile dependence of the threshold for the compaction function is only sensitive to its curvature at the maximum. we use these results to provide an analytic formula for the threshold amplitude of the compaction function at its maximum in terms of the normalized compaction function curvature at rm.
universal threshold for primordial black hole formation
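the averaging prescription above, comparing the volume average (3/rm^3) ∫_0^rm c(r) r^2 dr against δc = 0.40, is straightforward to evaluate numerically. in the sketch below the profile is a hypothetical one, chosen only so that it peaks at r = rm with a tunable amplitude; it is not a profile from the paper:

```python
import numpy as np

DELTA_C = 0.40  # averaged-compaction threshold for a radiation era

def averaged_compaction(profile, rm, n=20001):
    """volume average of the compaction function over a sphere of
    radius rm: (3 / rm^3) * integral_0^rm c(r) r^2 dr (trapezoid rule)."""
    r = np.linspace(0.0, rm, n)
    f = profile(r) * r ** 2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
    return 3.0 / rm ** 3 * integral

def gaussian_like(c_max, rm=1.0):
    """hypothetical profile peaking at r = rm with amplitude c_max."""
    return lambda r: c_max * (r / rm) ** 2 * np.exp(1.0 - (r / rm) ** 2)

# the collapse criterion only involves the averaged amplitude
collapses = averaged_compaction(gaussian_like(0.60), 1.0) >= DELTA_C
survives = averaged_compaction(gaussian_like(0.45), 1.0) < DELTA_C
```

for this particular shape the averaged amplitude is ≈ 0.82 of the peak amplitude, so the universal threshold on the average translates into a shape-dependent threshold on the peak value, which is the point of the averaging prescription.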
several statistics have been proposed for measuring the ksz effect by combining the small-scale cmb with galaxy surveys. we review five such statistics, and show that they are all mathematically equivalent to the optimal bispectrum estimator of type $\langle ggt \rangle$. reinterpreting these ksz statistics as special cases of bispectrum estimation makes many aspects transparent, for example optimally weighting the estimator, or incorporating photometric redshift errors. we analyze the information content of the bispectrum and show that there are two observables: the small-scale galaxy-electron power spectrum $p_{ge}(k_s)$, and the large-scale galaxy-velocity power spectrum $p_{gv}(k)$. the cosmological constraining power of the ksz arises from its sensitivity to fluctuations on large length scales, where its effective noise level can be much better than galaxy surveys.
ksz tomography and the bispectrum
the inflationary origin of primordial black holes (pbhs) relies on a large enhancement of the power spectrum δζ of the curvature fluctuation ζ at wavelengths much shorter than those of the cosmic microwave background anisotropies. this is typically achieved in models where ζ evolves without interacting significantly with additional (isocurvature) scalar degrees of freedom. however, quantum gravity inspired models are characterized by moduli spaces with highly curved geometries and a large number of scalar fields that could vigorously interact with ζ (as in the cosmological collider picture). here we show that isocurvature fluctuations can mix with ζ inducing large enhancements of its amplitude. this occurs whenever the inflationary trajectory experiences rapid turns in the field space of the model leading to amplifications that are exponentially sensitive to the total angle swept by the turn, which induce characteristic observable signatures on δζ. we derive accurate analytical predictions and show that the large enhancements required for pbhs demand noncanonical kinetic terms in the action of the multifield system.
seeding primordial black holes in multifield inflation
we study the stochastic gravitational wave (gw) background induced by the primordial scalar perturbation with the spectrum having a lognormal peak of width δ at k=k*. we derive an analytical formula for the gw spectrum ωgw for both narrow (δ ≪ 1) and broad (δ ≳ 1) peaks. in the narrow-peak case, the spectrum has a double-peak feature with the sharper peak at k = 2k*/√3. on the infrared (ir) side of the spectrum, we find power-law behavior with a break at k=kb, where the power-law index changes from k^3 on the far ir side to k^2 on the near ir side. we find that the ratio of the break frequency to the peak frequency is determined by δ as fb/fp ≈ √3 δ, where fb and fp are the break and peak frequencies, respectively. in the broad-peak case, we find the gw spectrum also has a lognormal peak at k=k*, but with a smaller width of δ/√2. using these derived analytic formulae, we also present expressions for the maximum values of ωgw for both narrow and broad cases. our results will provide a useful tool in searching for induced gw signals in the coming decades.
gravitational waves induced by scalar perturbations with a lognormal peak
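the closed-form relations quoted in the abstract above (the sharper narrow-peak position k = 2k*/√3, the break-to-peak frequency ratio fb/fp ≈ √3 δ, and the broad-peak width δ/√2) can be sanity-checked numerically. a minimal sketch, with helper names of our own choosing (not from the paper):

```python
import math

def narrow_peak_features(k_star, delta):
    """Relations quoted for a narrow (delta << 1) lognormal peak:
    position of the sharper GW peak and break-to-peak frequency ratio."""
    k_sharp = 2.0 * k_star / math.sqrt(3.0)   # sharper peak at 2 k* / sqrt(3)
    fb_over_fp = math.sqrt(3.0) * delta        # fb / fp ~ sqrt(3) * delta
    return k_sharp, fb_over_fp

def broad_peak_width(delta):
    """For a broad peak (delta >~ 1) the GW spectrum is again lognormal,
    but with the reduced width delta / sqrt(2)."""
    return delta / math.sqrt(2.0)

k_sharp, ratio = narrow_peak_features(k_star=1.0, delta=0.1)
print(round(k_sharp, 4), round(ratio, 4))   # → 1.1547 0.1732
print(round(broad_peak_width(2.0), 4))      # → 1.4142
```

this only restates the abstract's scaling relations; the full spectral shapes require the analytical formulae derived in the paper itself.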
we examine the impact of baryon acoustic oscillation (bao) scale measurements on the discrepancy between the value of the hubble constant (h0) inferred from the local distance ladder and that from planck cosmic microwave background (cmb) data. while the bao data alone cannot constrain h0, we show that combining the latest bao results with wmap, atacama cosmology telescope (act), or south pole telescope (spt) cmb data produces values of h0 that are 2.4-3.1σ lower than the distance ladder, independent of planck, and that this downward pull was less apparent in some earlier analyses that used only angle-averaged bao scale constraints rather than full anisotropic information. at the same time, the combination of bao and cmb data also disfavors the lower values of h0 preferred by the planck high-multipole temperature power spectrum. combining galaxy and lyα forest bao with a precise estimate of the primordial deuterium abundance produces h0 = 66.98 ± 1.18 km s^-1 mpc^-1 for the flat λcdm model. this value is completely independent of cmb anisotropy constraints and is 3.0σ lower than the latest distance ladder constraint, although 2.4σ tension also exists between the galaxy bao and lyα bao. these results show that it is not possible to explain the h0 disagreement solely with a systematic error specific to the planck data. the fact that tensions remain even after the removal of any single data set makes this intriguing puzzle all the more challenging to resolve.
elucidating λcdm: impact of baryon acoustic oscillation measurements on the hubble constant discrepancy
we explore gravitational wave signals arising from first-order phase transitions occurring in a secluded hidden sector, allowing for the possibility that the hidden sector may have a different temperature than the standard model sector. we present the sensitivity to such scenarios for both current and future gravitational wave detectors in a model-independent fashion. since secluded hidden sectors are of particular interest for dark matter models at the mev scale or below, we pay special attention to the reach of pulsar timing arrays. cosmological constraints on light degrees of freedom restrict the number of sub-mev particles in a hidden sector, as well as the hidden sector temperature. nevertheless, we find that observable first-order phase transitions can occur. to illustrate our results, we consider two minimal benchmark models: a model with two gauge singlet scalars and a model with a spontaneously broken u(1) gauge symmetry in the hidden sector.
dark, cold, and noisy: constraining secluded hidden sectors with gravitational waves
the cosmic web is one of the most striking features of the distribution of galaxies and dark matter on the largest scales in the universe. it is composed of dense regions packed full of galaxies, long filamentary bridges, flattened sheets and vast low-density voids. the study of the cosmic web has focused primarily on the identification of such features, and on understanding the environmental effects on galaxy formation and halo assembly. as such, a variety of different methods have been devised to classify the cosmic web - depending on the data at hand, be it numerical simulations, large sky surveys or other. in this paper, we bring 12 of these methods together and apply them to the same data set in order to understand how they compare. in general, these cosmic-web classifiers have been designed with different cosmological goals in mind, and to study different questions. therefore, one would not a priori expect agreement between different techniques; however, many of these methods do converge on the identification of specific features. in this paper, we study the agreements and disparities of the different methods. for example, each method finds that knots inhabit the highest density regions and filaments the next highest, while voids have the lowest densities. for a given web environment, we find a substantial overlap in the density range assigned by each web classification scheme. we also compare classifications on a halo-by-halo basis; for example, we find that 9 of 12 methods classify around a third of group-mass haloes (i.e. mhalo ∼ 10^13.5 h^-1 m⊙) as being in filaments. lastly, so that any future cosmic-web classification scheme can be compared to the 12 methods used here, we have made all the data used in this paper public.
tracing the cosmic web
recently an extraordinarily bright gamma-ray burst, grb 221009a, was observed by several facilities covering the whole electromagnetic spectrum. gamma rays with energies up to 18 tev were detected, as well as a possible photon with 251 tev. such energetic events are not expected because they would be attenuated by pair-production interactions with the extragalactic background light. this tension is, however, only apparent, and does not call for any unconventional explanation. here i show that these observations can be interpreted as the result of ultra-high-energy cosmic rays (uhecrs) interacting with cosmological radiation fields during their journey to earth, provided that intergalactic magnetic fields are reasonably weak. if this hypothesis is correct, it would establish bursts like grb 221009a as uhecr sources.
grb 221009a: a potential source of ultra-high-energy cosmic rays
it is often said that asymmetric dark matter is light compared to typical weakly interacting massive particles. here we point out a simple scheme with a neutrino portal and o(60 gev) asymmetric dark matter which may be "added" to any standard electroweak baryogenesis scenario. the dark sector contains a copy of the standard model gauge group, as well as one matter family (at least), higgs, and right-handed neutrino. after baryogenesis, some lepton asymmetry is transferred to the dark sector through the neutrino portal where dark sphalerons convert it into a dark baryon asymmetry. dark hadrons form asymmetric dark matter and may be directly detected due to the vector portal. surprisingly, even dark anti-neutrons may be directly detected if they have a sizeable electric dipole moment. the dark photons visibly decay in current and future experiments which probe complementary parameter space to dark matter direct detection searches. exotic higgs decays are excellent signals at future e+e- higgs factories.
asymmetric dark matter may not be light
the eft coefficients in any gapped, scalar, lorentz invariant field theory must satisfy positivity requirements if there is to exist a local, analytic wilsonian uv completion. we apply these bounds to the tree level scattering amplitudes for a massive galileon. the addition of a mass term, which does not spoil the non-renormalization theorem of the galileon and preserves the galileon symmetry at loop level, is necessary to satisfy the lowest order positivity bound. we further show that a careful choice of successively higher derivative corrections is necessary to satisfy the higher order positivity bounds. there is then no obstruction to a local uv completion from considerations of tree level 2-to-2 scattering alone. to demonstrate this we give an explicit example of such a uv completion.
massive galileon positivity bounds
the precise localization (<1 arcsec) of multiple fast radio bursts (frbs) to z > 0.1 galaxies has confirmed that the dispersion measures (dms) of these enigmatic sources afford a new opportunity to probe the diffuse ionized gas around and in between galaxies. in this manuscript, we examine the signatures of gas in dark matter haloes (aka halo gas) on dm observations in current and forthcoming frb surveys. combining constraints from observations of the high-velocity clouds, o vii absorption, and the dm to the large magellanic cloud with hydrostatic models of halo gas, we estimate that our galactic halo will contribute dm_mw,halo ≈ 50-80 pc cm^-3 from the sun to 200 kpc independent of any contribution from the galactic ism. extending analysis to the local group, we demonstrate that m31's halo will be easily detected by high-sample frb surveys (e.g. chime) although signatures from a putative local group medium may compete. we then review current empirical constraints on halo gas in distant galaxies and discuss the implications for their dm contributions. we further examine the dm probability distribution function of a population of frbs at z ≫ 0 using an updated halo mass function and new models for the halo density profile. lastly, we illustrate the potential of frb experiments for resolving the baryonic fraction of haloes by analysing simulated sightlines through the casbah survey. all of the codes and data products of our analysis are available at https://github.com/frbs.
probing galactic haloes with fast radio bursts
the darkside-50 direct-detection dark matter experiment is a dual-phase argon time projection chamber operating at laboratori nazionali del gran sasso. this paper reports on the blind analysis of a (16660 ± 270) kg d exposure using a target of low-radioactivity argon extracted from underground sources. we find no events in the dark matter selection box and set a 90% c.l. upper limit on the dark matter-nucleon spin-independent cross section of 1.14 × 10^-44 cm^2 (3.78 × 10^-44 cm^2, 3.43 × 10^-43 cm^2) for a wimp mass of 100 gev/c^2 (1 tev/c^2, 10 tev/c^2).
darkside-50 532-day dark matter search with low-radioactivity argon
within the framework of scalar-tensor theories, we study the conditions that allow single field inflation dynamics on small cosmological scales to significantly differ from that of the large scales probed by the observations of cosmic microwave background. the resulting single field double inflation scenario is characterised by two consecutive inflation eras, usually separated by a period where the slow-roll approximation fails. at large field values the dynamics of the inflaton is dominated by the interplay between its non-minimal coupling to gravity and the radiative corrections to the inflaton self-coupling. for small field values the potential is, instead, dominated by a polynomial that results in a hilltop inflation. without relying on the slow-roll approximation, which is invalidated by the appearance of the intermediate stage, we propose a concrete model that matches the current measurements of inflationary observables and employs the freedom granted by the framework on small cosmological scales to give rise to a sizeable population of primordial black holes generated by large curvature fluctuations. we find that these features generally require a potential with a local minimum. we show that the associated primordial black hole mass function is only approximately lognormal.
single field double inflation and primordial black holes
as the statistical power of galaxy weak lensing reaches per cent level precision, large, realistic, and robust simulations are required to calibrate observational systematics, especially given the increased importance of object blending as survey depths increase. to capture the coupled effects of blending in both shear and photometric redshift calibration, we define the effective redshift distribution for lensing, nγ(z), and describe how to estimate it using image simulations. we use an extensive suite of tailored image simulations to characterize the performance of the shear estimation pipeline applied to the dark energy survey (des) year 3 data set. we describe the multiband, multi-epoch simulations, and demonstrate their high level of realism through comparisons to the real des data. we isolate the effects that generate shear calibration biases by running variations on our fiducial simulation, and find that blending-related effects are the dominant contribution to the mean multiplicative bias of approximately -2 per cent. by generating simulations with input shear signals that vary with redshift, we calibrate biases in our estimation of the effective redshift distribution, and demonstrate the importance of this approach when blending is present. we provide corrected effective redshift distributions that incorporate statistical and systematic uncertainties, ready for use in des year 3 weak lensing analyses.
dark energy survey y3 results: blending shear and redshift biases in image simulations
we describe the open-source global fitting package gambit: the global and modular beyond-the-standard-model inference tool. gambit combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in gambit. accompanying papers deal with individual modules and present first gambit results. gambit can be downloaded from gambit.hepforge.org.
gambit: the global and modular beyond-the-standard-model inference tool
active galactic nuclei (agns) are known to show flux variability over all observable timescales and across the entire electromagnetic spectrum. over the past decade, a growing number of sources have been observed to show dramatic flux and spectral changes, in both the x-ray and the optical/ultraviolet regimes. such events, commonly described as `changing-look agns', can be divided into two well-defined classes. changing-obscuration objects show strong variability of the line-of-sight column density, mostly associated with clouds or outflows eclipsing the central engine of the agn. changing-state agns are instead objects in which the continuum emission and broad emission lines appear or disappear, and are typically triggered by strong changes in the accretion rate of the supermassive black hole. here we review our current understanding of these two classes of changing-look agns, and discuss open questions and future prospects.
changing-look active galactic nuclei
the event horizon telescope (eht), a global submillimeter wavelength very long baseline interferometry array, unveiled event-horizon-scale images of the supermassive black hole m87* as an asymmetric bright emission ring with a diameter of 42 ± 3 μas, and it is consistent with the shadow of a kerr black hole of general relativity. a kerr black hole is also a solution of some alternative theories of gravity, while several modified theories of gravity admit non-kerr black holes. while earlier estimates for the m87* black hole mass, depending on the method used, fall in the range ≈ 3 × 10^9 m⊙ - 7 × 10^9 m⊙, the eht data indicated a mass for the m87* black hole of (6.5 ± 0.7) × 10^9 m⊙. this offers another promising tool to estimate black hole parameters and to probe theories of gravity in their most extreme region near the event horizon. the important question arises: is it possible by a simple technique to estimate black hole parameters from its shadow, for arbitrary models? in this paper, we present observables, expressed in terms of ordinary integrals, characterizing a haphazard shadow shape to estimate the parameters associated with black holes, and then illustrate its relevance to four different models: kerr, kerr-newman, and two rotating regular models. our method is robust, accurate, and consistent with the results obtained from existing formalism, and it is applicable to more general shadow shapes that may not be circular due to noisy data.
black hole parameter estimation from its shadow
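the abstract above describes observables expressed as ordinary integrals over a possibly non-circular shadow boundary. a minimal sketch of that idea, using a shoelace-integral area and a simple width-to-height oblateness (these particular estimators are our own illustrative choices, not necessarily the paper's), validated on a unit circle:

```python
import math

def shadow_observables(xs, ys):
    """Given a closed shadow boundary sampled as points (xs[i], ys[i]),
    return its enclosed area (shoelace / Green's theorem) and a simple
    oblateness measure (horizontal extent over vertical extent).
    Hypothetical helper for illustration only."""
    n = len(xs)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += xs[i] * ys[j] - xs[j] * ys[i]
    area = abs(area) / 2.0
    oblateness = (max(xs) - min(xs)) / (max(ys) - min(ys))
    return area, oblateness

# sanity check on a unit circle: area ~ pi, oblateness ~ 1
phi = [2.0 * math.pi * i / 2000 for i in range(2000)]
xs = [math.cos(p) for p in phi]
ys = [math.sin(p) for p in phi]
area, obl = shadow_observables(xs, ys)
print(round(area, 3), round(obl, 3))   # → 3.142 1.0
```

because the estimators take only sampled boundary points, the same machinery applies to a deformed (non-kerr) or noisy shadow outline, which is the point of an integral-based, model-agnostic characterization.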
the effective field theory of large-scale structure (eftoflss) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (lss) in the mildly non-linear regime. here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. we compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. for the samples we compare to, which have a number density of $\bar n=3.8 \cdot 10^{-2}(h \ {\rm mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}(h \ {\rm mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43 \ h \ {\rm mpc}^{-1}$, a significant improvement with respect to former techniques. by performing the so-called ir-resummation, we find that the baryon acoustic oscillation peak is accurately reproduced. based on the results presented here, long-wavelength statistics that are routinely observed in lss surveys can be finally computed in the eftoflss. this formalism is thus ready to be compared directly to observational data.
biased tracers in redshift space in the eft of large-scale structure
the inner region of the milky way halo harbors a large amount of dark matter (dm). given its proximity, it is one of the most promising targets to look for dm. we report on a search for the annihilations of dm particles using γ-ray observations towards the inner 300 pc of the milky way, with the h.e.s.s. array of ground-based cherenkov telescopes. the analysis is based on a 2d maximum likelihood method using galactic center (gc) data accumulated by h.e.s.s. over the last 10 years (2004-2014), and does not show any significant γ-ray signal above background. assuming einasto and navarro-frenk-white dm density profiles at the gc, we derive upper limits on the annihilation cross section ⟨σ v⟩. these constraints are the strongest obtained so far in the tev dm mass range and improve upon previous limits by a factor of 5. for the einasto profile, the constraints reach ⟨σ v⟩ values of 6 × 10^-26 cm^3 s^-1 in the w+w- channel for a dm particle mass of 1.5 tev, and 2 × 10^-26 cm^3 s^-1 in the τ+τ- channel for a 1 tev mass. for the first time, ground-based γ-ray observations have reached sufficient sensitivity to probe ⟨σ v⟩ values expected from the thermal relic density for tev dm particles.
search for dark matter annihilations towards the inner galactic halo from 10 years of observations with h.e.s.s.
we investigate the impact on cosmological observables of f(q) gravity, a specific class of modified gravity models in which gravity is described by the nonmetricity scalar q. in particular we focus on a specific model which is indistinguishable from the λ-cold-dark-matter (λcdm) model at the background level, while showing peculiar and measurable signatures at linear perturbation level. these are attributed to a time-dependent planck mass and are regulated by a single dimensionless parameter, α. in comparison to the λcdm model, we find for positive values of α a suppressed matter power spectrum and lensing effect on the cosmic microwave background radiation (cmb) angular power spectrum and an enhanced integrated-sachs-wolfe tail of cmb temperature anisotropies. the opposite behaviors are present when the α parameter is negative. we also investigate the modified gravitational waves (gws) propagation and show the prediction of the gws luminosity distance compared to the standard electromagnetic one. finally, we infer the accuracy on the free parameter of the model with standard sirens at future gws detectors.
signatures of f (q ) gravity in cosmology
over 3 billion astronomical sources have been detected in the more than 22 million orthogonal transfer ccd images obtained as part of the pan-starrs1 3π survey. over 85 billion instances of those sources have been automatically detected and characterized by the pan-starrs image processing pipeline photometry software, psphot. this fast, automatic, and reliable software was developed for the pan-starrs project but is easily adaptable to images from other telescopes. we describe the analysis of the astronomical sources by psphot in general as well as for the specific case of the third processing version used for the first two public releases of the pan-starrs 3π survey data.
pan-starrs pixel analysis: source detection and characterization
the hitran database is a compilation of molecular spectroscopic parameters. it was established in the early 1970s and is used by various computer codes to predict and simulate the transmission and emission of light in gaseous media (with an emphasis on terrestrial and planetary atmospheres). the hitran compilation is composed of five major components: the line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, experimental infrared absorption cross-sections (for molecules where it is not yet feasible for representation in a line-by-line form), collision-induced absorption data, aerosol indices of refraction, and general tables (including partition sums) that apply globally to the data. this paper describes the contents of the 2020 quadrennial edition of hitran. the hitran2020 edition takes advantage of recent experimental and theoretical data that were meticulously validated, in particular, against laboratory and atmospheric spectra. the new edition replaces the previous hitran edition of 2016 (including its updates during the intervening years). all five components of hitran have undergone major updates. in particular, the extent of the updates in the hitran2020 edition ranges from updating a few lines of specific molecules to complete replacements of the lists, and also includes the introduction of additional isotopologues and new (to hitran) molecules: so, ch3f, geh4, cs2, ch3i and nf3. many new vibrational bands were added, extending the spectral coverage and completeness of the line lists. also, the accuracy of the parameters for major atmospheric absorbers has been increased substantially, often featuring sub-percent uncertainties. broadening parameters associated with the ambient pressure of water vapor were introduced to hitran for the first time and are now available for several molecules.
the hitran2020 edition continues to take advantage of the relational structure and efficient interface available at www.hitran.org and the hitran application programming interface (hapi). the functionality of both tools has been extended for the new edition.
the hitran2020 molecular spectroscopic database
subduction zones are home to the most seismically active faults on the planet. the shallow megathrust interfaces of subduction zones host earth’s largest earthquakes and are likely the only faults capable of magnitude 9+ ruptures. despite these facts, our knowledge of subduction zone geometry—which likely plays a key role in determining the spatial extent and ultimately the size of subduction zone earthquakes—is incomplete. we calculated the three-dimensional geometries of all seismically active global subduction zones. the resulting model, called slab2, provides a uniform geometrical analysis of all currently subducting slabs.
slab2, a comprehensive subduction zone geometry model
google earth engine (gee) is a cloud-based geospatial processing platform for large-scale environmental monitoring and analysis. the free-to-use gee platform provides access to (1) petabytes of publicly available remote sensing imagery and other ready-to-use products with an explorer web app; (2) high-speed parallel processing and machine learning algorithms using google's computational infrastructure; and (3) a library of application programming interfaces (apis) with development environments that support popular coding languages, such as javascript and python. together these core features enable users to discover, analyze and visualize geospatial big data in powerful ways without needing access to supercomputers or specialized coding expertise. the development of gee has created much enthusiasm and engagement in the remote sensing and geospatial data science fields. yet after a decade since gee was launched, its impact on remote sensing and geospatial science has not been carefully explored. thus, a systematic review of gee that can provide readers with the "big picture" of the current status and general trends in gee is needed. to this end, the decision was taken to perform a meta-analysis investigation of recent peer-reviewed gee articles focusing on several features, including data, sensor type, study area, spatial resolution, application, strategy, and analytical methods. a total of 349 peer-reviewed articles published in 146 different journals between 2010 and october 2019 were reviewed. publications and geographical distribution trends showed a broad spectrum of applications in environmental analyses at both regional and global scales. remote sensing datasets were used in 90% of studies while 10% of the articles utilized ready-to-use products for analyses. optical satellite imagery with medium spatial resolution, particularly landsat data with an archive exceeding 40 years, has been used extensively. 
linear regression and random forest were the most frequently used algorithms for satellite imagery processing. among ready-to-use products, the normalized difference vegetation index (ndvi) was used in 27% of studies for vegetation, crop, land cover mapping and drought monitoring. the results of this study confirm that gee has made, and continues to make, substantive progress on global challenges involving the processing of geo-big data.
google earth engine for geo-big data applications: a meta-analysis and systematic review
the food system is a major driver of climate change, changes in land use, depletion of freshwater resources, and pollution of aquatic and terrestrial ecosystems through excessive nitrogen and phosphorus inputs. here we show that between 2010 and 2050, as a result of expected changes in population and income levels, the environmental effects of the food system could increase by 50-90% in the absence of technological changes and dedicated mitigation measures, reaching levels that are beyond the planetary boundaries that define a safe operating space for humanity. we analyse several options for reducing the environmental effects of the food system, including dietary changes towards healthier, more plant-based diets, improvements in technologies and management, and reductions in food loss and waste. we find that no single measure is enough to keep these effects within all planetary boundaries simultaneously, and that a synergistic combination of measures will be needed to sufficiently mitigate the projected increase in environmental pressures.
options for keeping the food system within environmental limits
human pressures on the environment are changing spatially and temporally, with profound implications for the planet's biodiversity and human economies. here we use recently available data on infrastructure, land cover and human access into natural areas to construct a globally standardized measure of the cumulative human footprint on the terrestrial environment at 1 km2 resolution from 1993 to 2009. we note that while the human population has increased by 23% and the world economy has grown 153%, the human footprint has increased by just 9%. still, 75% of the planet's land surface is experiencing measurable human pressures. moreover, pressures are perversely intense, widespread and rapidly intensifying in places with high biodiversity. encouragingly, we discover decreases in environmental pressures in the wealthiest countries and those with strong control of corruption. clearly the human footprint on earth is changing, yet there are still opportunities for conservation gains.
sixteen years of change in the global terrestrial human footprint and implications for biodiversity conservation
the size of a planet is an observable property directly connected to the physics of its formation and evolution. we used precise radius measurements from the california-kepler survey to study the size distribution of 2025 kepler planets in fine detail. we detect a factor of ≥2 deficit in the occurrence rate distribution at 1.5-2.0 r⊕. this gap splits the population of close-in (p < 100 days) small planets into two size regimes: r_p < 1.5 r⊕ and r_p = 2.0-3.0 r⊕, with few planets in between. planets in these two regimes have nearly the same intrinsic frequency based on occurrence measurements that account for planet detection efficiencies. the paucity of planets between 1.5 and 2.0 r⊕ supports the emerging picture that close-in planets smaller than neptune are composed of rocky cores measuring 1.5 r⊕ or smaller with varying amounts of low-density gas that determine their total sizes. based on observations obtained at the w. m. keck observatory, which is operated jointly by the university of california and the california institute of technology. keck time was granted for this project by the university of california, the california institute of technology, the university of hawaii, and nasa.
the california-kepler survey. iii. a gap in the radius distribution of small planets
plastic pollution is a planetary threat, affecting nearly every marine and freshwater ecosystem globally. in response, multilevel mitigation strategies are being adopted but with a lack of quantitative assessment of how such strategies reduce plastic emissions. we assessed the impact of three broad management strategies, plastic waste reduction, waste management, and environmental recovery, at different levels of effort to estimate plastic emissions to 2030 for 173 countries. we estimate that 19 to 23 million metric tons, or 11%, of plastic waste generated globally in 2016 entered aquatic ecosystems. considering the ambitious commitments currently set by governments, annual emissions may reach up to 53 million metric tons per year by 2030. to reduce emissions to a level well below this prediction, extraordinary efforts to transform the global plastics economy are needed.
predicted growth in plastic waste exceeds efforts to mitigate plastic pollution
the majority of the earth's terrestrial carbon is stored in the soil. if anthropogenic warming stimulates the loss of this carbon to the atmosphere, it could drive further planetary warming. despite evidence that warming enhances carbon fluxes to and from the soil, the net global balance between these responses remains uncertain. here we present a comprehensive analysis of warming-induced changes in soil carbon stocks by assembling data from 49 field experiments located across north america, europe and asia. we find that the effects of warming are contingent on the size of the initial soil carbon stock, with considerable losses occurring in high-latitude areas. by extrapolating this empirical relationship to the global scale, we provide estimates of soil carbon sensitivity to warming that may help to constrain earth system model projections. our empirical relationship suggests that global soil carbon stocks in the upper soil horizons will fall by between 30 ± 30 and 203 ± 161 petagrams of carbon under one degree of warming, depending on the rate at which the effects of warming are realized. under the conservative assumption that the response of soil carbon to warming occurs within a year, a business-as-usual climate scenario would drive the loss of 55 ± 50 petagrams of carbon from the upper soil horizons by 2050. this value is around 12-17 per cent of the expected anthropogenic emissions over this period. despite the considerable uncertainty in our estimates, the direction of the global soil carbon response is consistent across all scenarios. this provides strong empirical support for the idea that rising temperatures will stimulate the net loss of soil carbon to the atmosphere, driving a positive land carbon-climate feedback that could accelerate climate change.
quantifying global soil carbon losses in response to warming
sudden stratospheric warmings (ssws) are impressive fluid dynamical events in which large and rapid temperature increases in the winter polar stratosphere (∼10-50 km) are associated with a complete reversal of the climatological wintertime westerly winds. ssws are caused by the breaking of planetary scale waves that propagate upwards from the troposphere. during an ssw, the polar vortex breaks down, accompanied by rapid descent and warming of air in polar latitudes, mirrored by ascent and cooling above the warming. the rapid warming and descent of the polar air column affect tropospheric weather, shifting jet streams, storm tracks, and the northern annular mode, making cold air outbreaks over north america and eurasia more likely. ssws affect the atmosphere above the stratosphere, producing widespread effects on atmospheric chemistry, temperatures, winds, neutral (nonionized) particles and electron densities, and electric fields. these effects span both hemispheres. given their crucial role in the whole atmosphere, ssws are also seen as a key process to analyze in climate change studies and subseasonal to seasonal prediction. this work reviews the current knowledge on the most important aspects of ssws, from the historical background to dynamical processes, modeling, chemistry, and impact on other atmospheric layers.
sudden stratospheric warmings
flux footprint models are often used for interpretation of flux tower measurements, to estimate position and size of surface source areas, and the relative contribution of passive scalar sources to measured fluxes. accurate knowledge of footprints is of crucial importance for any upscaling exercises from single site flux measurements to local or regional scale. hence, footprint models are ultimately also of considerable importance for improved greenhouse gas budgeting. with increasing numbers of flux towers within large monitoring networks such as fluxnet, icos (integrated carbon observation system), neon (national ecological observatory network), or ameriflux, and with increasing temporal range of observations from such towers (of the order of decades) and availability of airborne flux measurements, there has been an increasing demand for reliable footprint estimation. even though several sophisticated footprint models have been developed in recent years, most are still not suitable for application to long time series, due to their high computational demands. existing fast footprint models, on the other hand, are based on surface layer theory and hence are of restricted validity for real-case applications. to remedy such shortcomings, we present the two-dimensional parameterisation for flux footprint prediction (ffp), based on a novel scaling approach for the crosswind distribution of the flux footprint and on an improved version of the footprint parameterisation of kljun et al. (2004b). compared to the latter, ffp now provides not only the extent but also the width and shape of footprint estimates, and explicit consideration of the effects of the surface roughness length. the footprint parameterisation has been developed and evaluated using simulations of the backward lagrangian stochastic particle dispersion model lpdm-b (kljun et al., 2002).
like lpdm-b, the parameterisation is valid for a broad range of boundary layer conditions and measurement heights over the entire planetary boundary layer. thus, it can provide footprint estimates for a wide range of real-case applications. the new footprint parameterisation requires input that can be easily determined from, for example, flux tower measurements or airborne flux data. ffp can be applied to data of long-term monitoring programmes, as well as for quick footprint estimates in the field or for designing new sites.
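the core idea of a two-dimensional footprint built from a crosswind-integrated footprint times a gaussian crosswind distribution can be sketched numerically. the shapes below (a gamma-like crosswind-integrated footprint and a linearly growing crosswind spread) are illustrative stand-ins, not the published ffp parameterisation of kljun et al.

```python
import numpy as np

# two-dimensional footprint f(x, y) = f_y(x) * d_y(x, y), where d_y is a
# gaussian crosswind distribution whose spread sigma_y grows downwind.
x = np.arange(10.0, 500.0, 1.0)          # downwind distance (m)
y = np.arange(-600.0, 600.0, 2.0)        # crosswind distance (m)
xm = 100.0                               # assumed peak-location scale (m)
fy = (x / xm**2) * np.exp(-x / xm)       # crosswind-integrated footprint, integrates to ~1
sigma_y = 0.3 * x                        # assumed crosswind spread (m), not ffp's

sy = sigma_y[:, None]
yy = y[None, :]
f2d = fy[:, None] * np.exp(-0.5 * (yy / sy) ** 2) / (np.sqrt(2 * np.pi) * sy)

total = f2d.sum() * 1.0 * 2.0            # dx * dy
print(f"footprint integrates to ~ {total:.2f} over the finite domain")
```

summing the grid recovers close to unity, as a normalized footprint should; the small deficit is the tail beyond 500 m downwind.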
a simple two-dimensional parameterisation for flux footprint prediction (ffp)
we explore the risk that self-reinforcing feedbacks could push the earth system toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “hothouse earth” pathway even as human emissions are reduced. crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the holocene. we examine the evidence that such a threshold might exist and where it might be. if the threshold is crossed, the resulting trajectory would likely cause serious disruptions to ecosystems, society, and economies. collective human action is required to steer the earth system away from a potential threshold and stabilize it in a habitable interglacial-like state. such action entails stewardship of the entire earth system—biosphere, climate, and societies—and could include decarbonization of the global economy, enhancement of biosphere carbon sinks, behavioral changes, technological innovations, new governance arrangements, and transformed social values.
trajectories of the earth system in the anthropocene
aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. among these studies, the methods based on artificial neural networks (anns) are commonly used, which employ signal processing techniques to extract features and then input the features to anns for classifying faults. though these methods work in intelligent fault diagnosis of rotating machinery, they have two deficiencies. (1) the features are extracted manually, which depends heavily on prior knowledge of signal processing techniques and on diagnostic expertise. in addition, these manual features are extracted for a specific diagnosis issue and are probably unsuitable for other issues. (2) the anns adopted in these methods have shallow architectures, which limits their capacity to learn the complex non-linear relationships in fault diagnosis issues. as a breakthrough in artificial intelligence, deep learning holds the potential to overcome these deficiencies. through deep learning, deep neural networks (dnns) with deep architectures, instead of shallow ones, can be established to mine useful information from raw data and approximate complex non-linear functions. based on dnns, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. the effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. these datasets contain massive measured signals involving different health conditions under various operating conditions. the diagnosis results show that the proposed method is able not only to adaptively mine available fault characteristics from the measured signals, but also to obtain superior diagnosis accuracy compared with existing methods.
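the theme of learning directly from raw signals, with no hand-crafted features, can be illustrated with a tiny network trained on synthetic "vibration" waveforms. everything below (signal frequencies, network size, training settings) is an illustrative assumption; the paper uses deep networks and real bearing/gearbox data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64, endpoint=False)

def make_signals(freq, n):
    # noisy sinusoids standing in for two machine health conditions
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal((n, 64))

X = np.vstack([make_signals(5.0, 40), make_signals(20.0, 40)])
Y = np.zeros((80, 2)); Y[:40, 0] = 1.0; Y[40:, 1] = 1.0

# one hidden layer, trained with plain full-batch gradient descent on
# softmax cross-entropy -- a shallow stand-in for the paper's deep nets
W1 = 0.1 * rng.standard_normal((64, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2));  b2 = np.zeros(2)
lr = 0.5
for _ in range(300):
    z1 = X @ W1 + b1; a1 = np.maximum(z1, 0.0)
    z2 = a1 @ W2 + b2
    z2 -= z2.max(axis=1, keepdims=True)
    p = np.exp(z2); p /= p.sum(axis=1, keepdims=True)
    dz2 = (p - Y) / len(X)
    dW2 = a1.T @ dz2; db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (z1 > 0)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

acc = (p.argmax(1) == Y.argmax(1)).mean()
print(f"training accuracy on raw signals: {acc:.2f}")
```

the point is that the network separates the two conditions from the raw waveform alone, with no spectral features supplied by hand.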
deep neural networks: a promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data
the tundra is warming more rapidly than any other biome on earth, and the potential ramifications are far-reaching because of global feedback effects between vegetation and climate. a better understanding of how environmental factors shape plant structure and function is crucial for predicting the consequences of environmental change for ecosystem functioning. here we explore the biome-wide relationships between temperature, moisture and seven key plant functional traits both across space and over three decades of warming at 117 tundra locations. spatial temperature-trait relationships were generally strong but soil moisture had a marked influence on the strength and direction of these relationships, highlighting the potentially important influence of changes in water availability on future trait shifts in tundra plant communities. community height increased with warming across all sites over the past three decades, but other traits lagged far behind predicted rates of change. our findings highlight the challenge of using space-for-time substitution to predict the functional consequences of future warming and suggest that functions that are tied closely to plant height will experience the most rapid change. they also reveal the strength with which environmental factors shape biotic communities at the coldest extremes of the planet and will help to improve projections of functional changes in tundra ecosystems with climate warming.
plant functional trait change across a warming tundra biome
drylands are home to more than 38% of the world's population and are one of the most sensitive areas to climate change and human activities. this review describes recent progress in dryland climate change research. recent findings indicate that the long-term trend of the aridity index (ai) is mainly attributable to increased greenhouse gas emissions, while anthropogenic aerosols exert small effects but alter its attributions. atmosphere-land interactions determine the intensity of regional response. the largest warming during the last 100 years was observed over drylands and accounted for more than half of the continental warming. the global pattern and interdecadal variability of aridity changes are modulated by oceanic oscillations. the different phases of those oceanic oscillations induce significant changes in land-sea and north-south thermal contrasts, which affect the intensity of the westerlies and planetary waves and the blocking frequency, thereby altering global changes in temperature and precipitation. during 1948-2008, the drylands in the americas became wetter due to enhanced westerlies, whereas the drylands in the eastern hemisphere became drier because of the weakened east asian summer monsoon. drylands as defined by the ai have expanded over the last 60 years and are projected to expand in the 21st century. the largest expansion of drylands has occurred in semiarid regions since the early 1960s. dryland expansion will lead to reduced carbon sequestration and enhanced regional warming. the increasing aridity, enhanced warming, and rapidly growing population will exacerbate the risk of land degradation and desertification in the near future in developing countries.
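the aridity index (ai) referred to above is conventionally defined as annual precipitation divided by potential evapotranspiration (p/pet), with drylands at ai < 0.65. the sub-type thresholds below follow the common unep convention, which may differ in detail from the review.

```python
# classify a site by aridity index ai = p / pet (unep-style thresholds)
def classify_aridity(p_mm, pet_mm):
    ai = p_mm / pet_mm
    if ai < 0.05:
        return ai, "hyper-arid"
    if ai < 0.20:
        return ai, "arid"
    if ai < 0.50:
        return ai, "semi-arid"
    if ai < 0.65:
        return ai, "dry sub-humid"
    return ai, "humid (non-dryland)"

ai, label = classify_aridity(p_mm=300.0, pet_mm=1500.0)
print(f"ai = {ai:.2f} -> {label}")
```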
dryland climate change: recent progress and challenges
about half of present-day cloud condensation nuclei originate from atmospheric nucleation, frequently appearing as a burst of new particles near midday. atmospheric observations show that the growth rate of new particles often accelerates when the diameter of the particles is between one and ten nanometres. in this critical size range, new particles are most likely to be lost by coagulation with pre-existing particles, thereby failing to form new cloud condensation nuclei that are typically 50 to 100 nanometres across. sulfuric acid vapour is often involved in nucleation but is too scarce to explain most subsequent growth, leaving organic vapours as the most plausible alternative, at least in the planetary boundary layer. although recent studies predict that low-volatility organic vapours contribute during initial growth, direct evidence has been lacking. the accelerating growth may result from increased photolytic production of condensable organic species in the afternoon, and the presence of a possible kelvin (curvature) effect, which inhibits organic vapour condensation on the smallest particles (the nano-köhler theory), has so far remained ambiguous. here we present experiments performed in a large chamber under atmospheric conditions that investigate the role of organic vapours in the initial growth of nucleated organic particles in the absence of inorganic acids and bases such as sulfuric acid or ammonia and amines, respectively. using data from the same set of experiments, it has been shown that organic vapours alone can drive nucleation. we focus on the growth of nucleated particles and find that the organic vapours that drive initial growth have extremely low volatilities (saturation concentration less than 10^-4.5 micrograms per cubic metre).
as the particles increase in size and the kelvin barrier falls, subsequent growth is primarily due to more abundant organic vapours of slightly higher volatility (saturation concentrations of 10^-4.5 to 10^-0.5 micrograms per cubic metre). we present a particle growth model that quantitatively reproduces our measurements. furthermore, we implement a parameterization of the first steps of growth in a global aerosol model and find that concentrations of atmospheric cloud condensation nuclei can change substantially in response, that is, by up to 50 per cent in comparison with previously assumed growth rate parameterizations.
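the kelvin (curvature) effect invoked above has a closed form: the equilibrium vapour pressure over a droplet of diameter d is raised by a factor s = exp(4*sigma*m / (rho*r*t*d)). the organic property values below are assumptions for illustration, not measurements from the paper.

```python
import math

def kelvin_ratio(d_m, sigma=0.03, molar_mass=0.3, density=1200.0, temp=300.0):
    """kelvin saturation-ratio barrier for a droplet of diameter d_m (metres).
    sigma in n/m, molar_mass in kg/mol, density in kg/m^3, temp in k --
    all assumed values typical of an organic condensate."""
    R = 8.314  # j mol^-1 k^-1
    return math.exp(4 * sigma * molar_mass / (density * R * temp * d_m))

for d_nm in (2, 10, 50):
    print(f"d = {d_nm:2d} nm: saturation-ratio barrier ~ {kelvin_ratio(d_nm * 1e-9):.1f}")
```

the barrier falls steeply with size, which is why only extremely low-volatility vapours can condense on the smallest particles while more volatile vapours take over as particles grow.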
the role of low-volatility organic compounds in initial particle growth in the atmosphere
overfishing is the primary cause of marine defaunation, yet declines in and increasing extinction risks of individual species are difficult to measure, particularly for the largest predators found in the high seas [1-3]. here we calculate two well-established indicators to track progress towards aichi biodiversity targets and sustainable development goals [4,5]: the living planet index (a measure of changes in abundance aggregated from 57 abundance time-series datasets for 18 oceanic shark and ray species) and the red list index (a measure of change in extinction risk calculated for all 31 oceanic species of sharks and rays). we find that, since 1970, the global abundance of oceanic sharks and rays has declined by 71% owing to an 18-fold increase in relative fishing pressure. this depletion has increased the global extinction risk to the point at which three-quarters of the species comprising this functionally important assemblage are threatened with extinction. strict prohibitions and precautionary science-based catch limits are urgently needed to avert population collapse [6,7], avoid the disruption of ecological functions and promote species recovery [8,9].
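the living planet index used above is, at its core, a chained geometric mean of year-over-year abundance ratios. the sketch below is a simplified version (the real index adds smoothing and hierarchical averaging), and the two toy time series are made up.

```python
import numpy as np

def living_planet_index(populations):
    """simplified lpi: geometric mean of annual abundance ratios across
    populations, chained into an index with the first year set to 1."""
    populations = np.asarray(populations, dtype=float)  # shape (n_pops, n_years)
    log_ratios = np.log10(populations[:, 1:] / populations[:, :-1])
    annual = 10 ** log_ratios.mean(axis=0)              # geometric-mean rate per year
    return np.concatenate(([1.0], np.cumprod(annual)))

idx = living_planet_index([[100, 80, 60],   # declining population
                           [50, 50, 50]])   # stable population
print(idx)
```

averaging in log space means a halving in one population and a doubling in another cancel exactly, which is the property that makes the index robust to species with very different absolute abundances.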
half a century of global decline in oceanic sharks and rays
observations of circumstellar environments that look for the direct signal of exoplanets and the scattered light from disks have significant instrumental implications. in the past 15 years, major developments in adaptive optics, coronagraphy, optical manufacturing, wavefront sensing, and data processing, together with a consistent global system analysis have brought about a new generation of high-contrast imagers and spectrographs on large ground-based telescopes with much better performance. one of the most productive imagers is the spectro-polarimetic high contrast imager for exoplanets research (sphere), which was designed and built for the eso very large telescope (vlt) in chile. sphere includes an extreme adaptive optics system, a highly stable common path interface, several types of coronagraphs, and three science instruments. two of them, the integral field spectrograph (ifs) and the infra-red dual-band imager and spectrograph (irdis), were designed to efficiently cover the near-infrared range in a single observation for an efficient search of young planets. the third instrument, zimpol, was designed for visible polarimetric observation to look for the reflected light of exoplanets and the light scattered by debris disks. these three scientific instruments enable the study of circumstellar environments at unprecedented angular resolution, both in the visible and the near-infrared. in this work, we thoroughly present sphere and its on-sky performance after four years of operations at the vlt.
sphere: the exoplanet imager for the very large telescope
we propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. the approach leverages input perturbations commonly used in computer vision tasks to regularize the value function. existing model-free approaches, such as soft actor-critic (sac), are not able to train deep networks effectively from image pixels. however, the addition of our augmentation method dramatically improves sac's performance, enabling it to reach state-of-the-art performance on the deepmind control suite, surpassing model-based (dreamer, planet, and slac) methods and recently proposed contrastive learning (curl). our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications. an implementation can be found at https://sites.google.com/view/data-regularized-q.
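the input perturbation at the heart of this approach is a random shift: pad the image with replicated edge pixels, then crop back to the original size at a random offset. the standalone numpy version below is a sketch; in the paper the augmentation is applied inside the actor-critic update, and the pad width is a tunable choice.

```python
import numpy as np

def random_shift(imgs, pad=4, rng=None):
    """drq-style augmentation: replicate-pad each image by `pad` pixels,
    then crop back to the original size at a random offset.
    expects a batch shaped (n, height, width, channels)."""
    rng = np.random.default_rng() if rng is None else rng
    n, h, w, _ = imgs.shape
    padded = np.pad(imgs, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(imgs)
    for i in range(n):
        top = rng.integers(0, 2 * pad + 1)
        left = rng.integers(0, 2 * pad + 1)
        out[i] = padded[i, top:top + h, left:left + w]
    return out

# augment a batch of 84x84 rgb observations, as used in pixel-based control
obs = np.random.default_rng(0).random((8, 84, 84, 3)).astype(np.float32)
aug = random_shift(obs)
print(aug.shape)
```

each image in the batch gets its own offset, so repeated augmentations of the same observation yield slightly shifted views that regularize the value function.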
image augmentation is all you need: regularizing deep reinforcement learning from pixels
seawater is one of the most abundant natural resources on our planet. electrolysis of seawater is not only a promising approach to produce clean hydrogen energy, but also of great significance to seawater desalination. the implementation of seawater electrolysis requires robust and efficient electrocatalysts that can sustain seawater splitting without chloride corrosion, especially for the anode. here we report a three-dimensional core-shell metal-nitride catalyst consisting of nifen nanoparticles uniformly decorated on nimon nanorods supported on ni foam, which serves as an eminently active and durable oxygen evolution reaction catalyst for alkaline seawater electrolysis. combined with an efficient hydrogen evolution reaction catalyst of nimon nanorods, we have achieved the industrially required current densities of 500 and 1000 ma cm-2 at record low voltages of 1.608 and 1.709 v, respectively, for overall alkaline seawater splitting at 60 °c. this discovery significantly advances the development of seawater electrolysis for large-scale hydrogen production.
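the reported cell voltages imply a specific energy cost: each mole of h2 requires two faradays of charge, so the energy per kilogram is e = 2*f*v / m_h2. this back-of-envelope figure ignores balance-of-plant losses and is not a number reported in the paper.

```python
F = 96485.0          # faraday constant, c mol^-1
M_H2 = 2.016e-3      # molar mass of h2, kg mol^-1

def kwh_per_kg_h2(volts):
    """electrical energy per kg of hydrogen at a given cell voltage."""
    joules_per_kg = 2 * F * volts / M_H2
    return joules_per_kg / 3.6e6  # j -> kwh

for v in (1.608, 1.709):
    print(f"{v:.3f} v -> {kwh_per_kg_h2(v):.1f} kwh per kg h2")
```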
non-noble metal-nitride based electrocatalysts for high-performance alkaline seawater electrolysis
the high-resolution rapid refresh (hrrr) is a convection-allowing implementation of the advanced research version of the weather research and forecasting (wrf-arw) model with hourly data assimilation that covers the conterminous united states and alaska and runs in real time at the noaa/national centers for environmental prediction (ncep). implemented operationally at noaa/ncep in 2014, the hrrr features 3-km horizontal grid spacing and frequent forecasts (hourly for conus and 3-hourly for alaska). hrrr initialization is designed for optimal short-range forecast skill with a particular focus on the evolution of precipitating systems. key components of the initialization are radar-reflectivity data assimilation, hybrid ensemble-variational assimilation of conventional weather observations, and a cloud analysis to initialize stratiform cloud layers. from this initial state, hrrr forecasts are produced out to 18 h every hour, and out to 48 h every 6 h, with boundary conditions provided by the rapid refresh system. between 2014 and 2020, hrrr development was focused on reducing model bias errors and improving forecast realism and accuracy. improved representation of the planetary boundary layer, subgrid-scale clouds, and land surface contributed extensively to overall hrrr improvements. the final version of the hrrr (hrrrv4), implemented in late 2020, also features hybrid data assimilation using flow-dependent covariances from a 3-km, 36-member ensemble ("hrrrdas") with explicit convective storms. hrrrv4 also includes prediction of wildfire smoke plumes. the hrrr provides a baseline capability for evaluating noaa's next-generation rapid refresh forecast system, now under development. significance statement: noaa's operational hourly updating, convection-allowing model, the high-resolution rapid refresh (hrrr), is a key tool for short-range weather forecasting and situational awareness.
improvements in assimilation of weather observations, as well as in physics parameterizations, have led to improvements in simulated radar reflectivity and quantitative precipitation forecasts since the initial implementation of hrrr in september 2014. other targeted development has focused on improved representation of the diurnal cycle of the planetary boundary layer, resulting in improved near-surface temperature and humidity forecasts. additional physics and data assimilation changes have led to improved treatment of the development and erosion of low-level clouds, including subgrid-scale clouds. the final version of hrrr features storm-scale ensemble data assimilation and explicit prediction of wildfire smoke plumes.
the high-resolution rapid refresh (hrrr): an hourly updating convection-allowing forecast model. part i: motivation and system description
thousands of transiting exoplanets have been discovered, but spectral analysis of their atmospheres has so far been dominated by a small number of exoplanets and data spanning relatively narrow wavelength ranges (such as 1.1-1.7 micrometres). recent studies show that some hot-jupiter exoplanets have much weaker water absorption features in their near-infrared spectra than predicted. the low amplitude of water signatures could be explained by very low water abundances, which may be a sign that water was depleted in the protoplanetary disk at the planet’s formation location, but it is unclear whether this level of depletion can actually occur. alternatively, these weak signals could be the result of obscuration by clouds or hazes, as found in some optical spectra. here we report results from a comparative study of ten hot jupiters covering the wavelength range 0.3-5 micrometres, which allows us to resolve both the optical scattering and infrared molecular absorption spectroscopically. our results reveal a diverse group of hot jupiters that exhibit a continuum from clear to cloudy atmospheres. we find that the difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types. the difference correlates with the spectral strength of water, so that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes. this result strongly suggests that primordial water depletion during formation is unlikely and that clouds and hazes are the cause of weaker spectral signatures.
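the optical-infrared radius difference used as a metric above is naturally expressed in pressure scale heights, h = k*t / (mu*g). the hot-jupiter values below (temperature, mean molecular weight, gravity, and the radius difference itself) are assumed round numbers for illustration, not parameters from the study.

```python
k_B = 1.380649e-23   # boltzmann constant, j k^-1
amu = 1.66054e-27    # atomic mass unit, kg

def scale_height(T=1500.0, mu=2.3, g=10.0):
    """pressure scale height in metres for an h2-dominated hot-jupiter
    atmosphere; all default parameter values are illustrative."""
    return k_B * T / (mu * amu * g)

H = scale_height()
delta_radius_km = 1500.0  # hypothetical optical-minus-infrared radius difference
print(f"h ~ {H / 1e3:.0f} km; radius difference ~ {delta_radius_km / (H / 1e3):.1f} scale heights")
```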
a continuum from clear to cloudy hot-jupiter exoplanets without primordial water depletion
the atlantic meridional overturning circulation (amoc)—one of earth's major ocean circulation systems—redistributes heat on our planet and has a major impact on climate. here, we compare a variety of published proxy records to reconstruct the evolution of the amoc since about ad 400. a fairly consistent picture of the amoc emerges: after a long and relatively stable period, there was an initial weakening starting in the nineteenth century, followed by a second, more rapid, decline in the mid-twentieth century, leading to the weakest state of the amoc occurring in recent decades.
current atlantic meridional overturning circulation weakest in last millennium
we present a narrative of the eruptive events culminating in the cataclysmic january 15, 2022 eruption of hunga tonga-hunga ha'apai volcano by synthesizing diverse preliminary seismic, volcanological, sound wave, and lightning data available within the first few weeks after the eruption occurred. the first hour of eruptive activity produced fast-propagating tsunami waves, long-period seismic waves, loud audible sound waves, infrasonic waves, exceptionally intense volcanic lightning and an unsteady volcanic plume that transiently reached the earth's mesosphere at an altitude of 58 km. energetic seismic signals were recorded worldwide and the globally stacked seismogram showed episodic seismic events within the most intense periods of phreatoplinian activity, and they correlated well with the infrasound pressure waveform recorded in fiji. gravity wave signals were strong enough to be observed over the entire planet in just the first few hours, with some circling the earth multiple times subsequently. these large-amplitude, long-wavelength atmospheric disturbances come from the earth's atmosphere being forced by the magmatic mixture of tephra, melt and gases emitted by the unsteady but quasi-continuous eruption from 0402±1-1800 utc on january 15, 2022. atmospheric forcing lasted much longer than rupturing from large earthquakes recorded on modern instruments, producing a type of shock wave that originated from the interaction between compressed air and the ambient (wavy) sea surface. this scenario differs from conventional ideas of earthquake slip, landslide, or caldera-collapse generated tsunami waves because of the enormous (∼1000x) volumetric change due to the supercritical nature of volatiles associated with the hot, volatile-rich phreatoplinian plume. the time series of plume altitude can be translated to volumetric discharge and mass flow rate.
for an eruption duration of ∼12 h, the eruptive volume and mass are estimated at 1.9 km3 and ∼2900 tg, respectively, corresponding to a vei of 5-6 for this event. the high frequency and intensity of lightning was enhanced by the production of fine ash due to magma-seawater interaction with concomitant high charge per unit mass and the high pre-eruptive concentration of dissolved volatiles. analysis of lightning flash frequencies provides a rapid metric for plume activity and eruption magnitude. many aspects of this eruption await further investigation by multidisciplinary teams. it represents a unique opportunity for fundamental research regarding the complex, non-linear behavior of highly energetic volcanic eruptions and attendant phenomena, with critical implications for hazard mitigation, volcano forecasting, and first-response efforts in future disasters.
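the mass estimate can be checked against the stated vei: the eruption magnitude scale is m = log10(erupted mass in kg) - 7, and vei is broadly comparable to m. this is a consistency check, not a calculation from the paper.

```python
import math

# magnitude implied by the abstract's ~2900 tg (2.9e12 kg) mass estimate
mass_kg = 2.9e12
magnitude = math.log10(mass_kg) - 7
print(f"magnitude ~ {magnitude:.1f}, consistent with the stated vei of 5-6")
```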
under the surface: pressure-induced planetary-scale waves, volcanic lightning, and gaseous clouds caused by the submarine eruption of hunga tonga-hunga ha'apai volcano
atmospheric greenhouse gas (ghg) concentrations are at unprecedented, record-high levels compared to the last 800 000 years. those elevated ghg concentrations warm the planet and, partially offset by the net cooling effects of aerosols, are largely responsible for the observed warming over the past 150 years. an accurate representation of ghg concentrations is hence important to understand and model recent climate change. so far, community efforts to create composite datasets of ghg concentrations with seasonal and latitudinal information have focused on marine boundary layer conditions and recent trends since the 1980s. here, we provide consolidated datasets of historical atmospheric concentrations (mole fractions) of 43 ghgs to be used in the climate model intercomparison project - phase 6 (cmip6) experiments. the presented datasets are based on agage and noaa networks, firn and ice core data, archived air data, and a large set of published studies. in contrast to previous intercomparisons, the new datasets are latitudinally resolved and include seasonality. we focus on the period 1850-2014 for historical cmip6 runs, but data are also provided for the last 2000 years. we provide consolidated datasets in various spatiotemporal resolutions for carbon dioxide (co2), methane (ch4) and nitrous oxide (n2o), as well as 40 other ghgs, namely 17 ozone-depleting substances, 11 hydrofluorocarbons (hfcs), 9 perfluorocarbons (pfcs), sulfur hexafluoride (sf6), nitrogen trifluoride (nf3) and sulfuryl fluoride (so2f2). in addition, we provide three equivalence species that aggregate concentrations of ghgs other than co2, ch4 and n2o, weighted by their radiative forcing efficiencies. for the year 1850, which is used for pre-industrial control runs, we estimate annual global-mean surface concentrations of co2 at 284.3 ppm, ch4 at 808.2 ppb and n2o at 273.0 ppb.
the data are available at https://esgf-node.llnl.gov/search/input4mips/ and http://www.climatecollege.unimelb.edu.au/cmip6. while the minimum cmip6 recommendation is to use the global- and annual-mean time series, modelling groups can also choose our monthly and latitudinally resolved concentrations, which imply a stronger radiative forcing in the northern hemisphere winter (due to the latitudinal gradient and seasonality).
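the 1850 baseline above can be put to use with the widely cited simplified expression delta_f = 5.35 * ln(c/c0) for co2 radiative forcing (myhre et al. 1998). this expression is an external convention for illustration, not necessarily the formulation applied with the cmip6 datasets.

```python
import math

# co2 radiative forcing of a present-day ~410 ppm relative to the
# 284.3 ppm pre-industrial value estimated above
c0, c = 284.3, 410.0
delta_f = 5.35 * math.log(c / c0)
print(f"co2 radiative forcing since 1850: ~ {delta_f:.2f} w m^-2")
```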
historical greenhouse gas concentrations for climate modelling (cmip6)
the transiting exoplanet survey satellite (tess) will conduct a search for earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with rp < 4 r⊕ and measure the masses of at least 50 of these small worlds. the science processing operations center (spoc) is being developed at nasa ames research center based on the kepler science pipeline and will generate calibrated pixels and light curves on the nasa advanced supercomputing division's pleiades supercomputer. the spoc will also search for periodic transit events and generate validation products for the transit-like features in the light curves. all tess spoc data products will be archived to the mikulski archive for space telescopes (mast).
the tess science processing operations center
what was the early atmosphere made of? we review what is known during the archean eon, 4 to 2.5 billion years ago. the atmosphere of the archean eon—one-third of earth's history—is important for understanding the evolution of our planet and earth-like exoplanets. new geological proxies combined with models constrain atmospheric composition. they imply surface o2 levels <10^-6 times present, n2 levels that were similar to today or possibly a few times lower, and co2 and ch4 levels ranging ~10 to 2500 and 10^2 to 10^4 times modern amounts, respectively. the greenhouse gas concentrations were sufficient to offset a fainter sun. climate moderation by the carbon cycle suggests average surface temperatures between 0° and 40°c, consistent with occasional glaciations. isotopic mass fractionation of atmospheric xenon through the archean until atmospheric oxygenation is best explained by drag of xenon ions by hydrogen escaping rapidly into space. these data imply that substantial loss of hydrogen oxidized the earth. despite these advances, detailed understanding of the coevolving solid earth, biosphere, and atmosphere remains elusive.
the archean atmosphere
the important roles of the planetary boundary layer (pbl) in climate, weather and air quality have long been recognized, but little is known about the pbl climatology in china. using fine-resolution sounding observations made across china and reanalysis data, we conducted a comprehensive investigation of the pbl in china from january 2011 to july 2015. the boundary layer height (blh) is found to be generally higher in spring and summer than in fall and winter. on average, seasonally averaged blhs derived from observations and reanalysis agree well, despite pronounced inconsistencies in some regions. the blh, derived from soundings conducted three or four times daily in summer, tends to peak in the early afternoon, and the diurnal amplitude of blh is higher in the northern and western subregions of china than in other subregions. the meteorological influence on the annual cycle of blh is investigated as well, showing that blh at most sounding sites is negatively associated with surface pressure and lower-tropospheric stability, but positively associated with near-surface wind speed and temperature. in addition, cloud tends to suppress the development of the pbl, particularly in the early afternoon. this indicates that meteorology plays a significant role in pbl processes. overall, the key findings from this study lay a solid foundation for gaining deeper insight into the fundamentals of the pbl in china and for understanding the roles the pbl plays in china's air pollution, weather and climate.
the climatology of planetary boundary layer height in china derived from radiosonde and reanalysis data
plate tectonics, involving a globally linked system of lateral motion of rigid surface plates, is a characteristic feature of our planet, but estimates of how long it has been the modus operandi of lithospheric formation and interactions range from the hadean to the neoproterozoic. in this paper, we review sedimentary, igneous and metamorphic proxies along with palaeomagnetic data to infer both the development of rigid lithospheric plates and their independent relative motion, and conclude that significant changes in earth behaviour occurred in the mid- to late archaean, between 3.2 ga and 2.5 ga. these data include: sedimentary rock associations inferred to have accumulated in passive continental margin settings, marking the onset of sea-floor spreading; the oldest foreland basin deposits associated with lithospheric convergence; a change from thin, new continental crust of mafic composition to thicker crust of intermediate composition, increased crustal reworking and the emplacement of potassic and peraluminous granites, indicating stabilization of the lithosphere; replacement of dome and keel structures in granite-greenstone terranes, which relate to vertical tectonics, by linear thrust imbricated belts; and the commencement of temporally paired systems of intermediate and high dt/dp gradients, with the former interpreted to represent subduction to collisional settings and the latter representing possible hinterland back-arc settings or ocean plateau environments. palaeomagnetic data from the kaapvaal and pilbara cratons for the interval 2780-2710 ma and from the superior, kaapvaal and kola-karelia cratons for 2700-2440 ma suggest significant relative movements. we consider these changes in the behaviour and character of the lithosphere to be consistent with a gestational transition from a non-plate tectonic mode, arguably with localized subduction, to the onset of sustained plate tectonics.
this article is part of a discussion meeting issue `earth dynamics and the development of plate tectonics'.
geological archive of the onset of plate tectonics
land use and related pressures have reduced local terrestrial biodiversity, but it is unclear how the magnitude of change relates to the recently proposed planetary boundary (“safe limit”). we estimate that land use and related pressures have already reduced local biodiversity intactness—the average proportion of natural biodiversity remaining in local ecosystems—beyond its recently proposed planetary boundary across 58.1% of the world’s land surface, where 71.4% of the human population live. biodiversity intactness within most biomes (especially grassland biomes), most biodiversity hotspots, and even some wilderness areas is inferred to be beyond the boundary. such widespread transgression of safe limits suggests that biodiversity loss, if unchecked, will undermine efforts toward long-term sustainable development.
has land use pushed terrestrial biodiversity beyond the planetary boundary? a global assessment
planetary warming may be exacerbated if it accelerates loss of soil carbon to the atmosphere. this carbon-cycle-climate feedback is included in climate projections. yet, despite ancillary data supporting a positive feedback, there is limited evidence for soil carbon loss under warming. the low confidence engendered in feedback projections is reduced further by the common representation in models of an outdated knowledge of soil carbon turnover. 'model-knowledge integration' -- representing in models an advanced understanding of soil carbon stabilization -- is the first step to build confidence. this will inform experiments that further increase confidence by resolving competing mechanisms that most influence projected soil-carbon stocks. improving feedback projections is an imperative for establishing greenhouse gas emission targets that limit climate change.
managing uncertainty in soil carbon feedbacks to climate change
libradtran is a widely used software package for radiative transfer calculations. it allows one to compute (polarized) radiances, irradiance, and actinic fluxes in the solar and thermal spectral regions. libradtran has been used for various applications, including remote sensing of clouds, aerosols and trace gases in the earth's atmosphere, climate studies, e.g., for the calculation of radiative forcing due to different atmospheric components, for uv forecasting, the calculation of photolysis frequencies, and for remote sensing of other planets in our solar system. the package has been described in mayer and kylling (2005). since then several new features have been included, for example polarization, raman scattering, a new molecular gas absorption parameterization, and several new parameterizations of cloud and aerosol optical properties. furthermore, a graphical user interface is now available, which greatly simplifies the usage of the model, especially for new users. this paper gives an overview of libradtran version 2.0.1 with a focus on new features. applications including these new features are provided as examples of use. a complete description of libradtran and all its input options is given in the user manual included in the libradtran software package, which is freely available at http://www.libradtran.org.
the libradtran software package for radiative transfer calculations (version 2.0.1)
environmentally transformative human use of land accelerated with the emergence of agriculture, but the extent, trajectory, and implications of these early changes are not well understood. an empirical global assessment of land use from 10,000 years before the present (yr b.p.) to 1850 ce reveals a planet largely transformed by hunter-gatherers, farmers, and pastoralists by 3000 years ago, considerably earlier than the dates in the land-use reconstructions commonly used by earth scientists. synthesis of knowledge contributed by more than 250 archaeologists highlighted gaps in archaeological expertise and data quality, which peaked for 2000 yr b.p. and in traditionally studied and wealthier regions. archaeological reconstruction of global land-use history illuminates the deep roots of earth’s transformation and challenges the emerging anthropocene paradigm that large-scale anthropogenic global environmental change is mostly a recent phenomenon.
archaeological assessment reveals earth’s early transformation through land use
the geisa database (gestion et etude des informations spectroscopiques atmosphériques: management and study of atmospheric spectroscopic information) has been developed and maintained by the group at http://ara.abct.lmd.polytechnique.fr. the "line parameters database" contains 52 molecular species (118 isotopologues) and transitions in the spectral range from 10^-6 to 35,877.031 cm^-1, representing 5,067,351 entries, against 3,794,297 in geisa-2011. among the previously existing molecules, 20 molecular species have been updated. a new molecule (so3) has been added. hdo, isotopologue of h2o, is now identified as an independent molecular species. seven new isotopologues have been added to the geisa-2015 database. the "cross section sub-database" has been enriched by the addition of 43 new molecular species in its infrared part; 4 molecules (ethane, propane, acetone, acetonitrile) are also updated, representing 3% of the update. a new section is added, in the near-infrared spectral region, involving 7 molecular species: ch3cn, ch3i, ch3o2, h2co, ho2, hono, nh3. the "microphysical and optical properties of atmospheric aerosols sub-database" has been updated for the first time since 2003. it contains more than 40 species originating from ncar and 20 from the archive at http://eodg.atm.ox.ac.uk/aria/introduction_nocol.html. as for the previous versions, this new release of geisa and associated management software facilities are implemented and freely accessible at http://cds-espri.ipsl.fr/ethertypo/?id=950.
the 2015 edition of the geisa spectroscopic database
the india-eurasia collision zone is the largest deforming region on the planet; direct measurements of present-day deformation from global positioning system (gps) have the potential to discriminate between competing models of continental tectonics. but the increasing spatial resolution and accuracy of observations have only led to increasingly complex realizations of competing models. here we present the most complete, accurate, and up-to-date velocity field for india-eurasia available, comprising 2576 velocities measured during 1991-2015. the core of our velocity field is from the crustal movement observation network of china-i/ii: 27 continuous stations observed since 1999; 56 campaign stations observed annually during 1998-2007; 1000 campaign stations observed in 1999, 2001, 2004, and 2007; 260 continuous stations operating since late 2010; and 2000 campaign stations observed in 2009, 2011, 2013, and 2015. we process these data and combine the solutions in a consistent reference frame with stations from the global strain rate model compilation, then invert for continuous velocity and strain rate fields. we update geodetic slip rates for the major faults (some vary along strike), and find that those along the major tibetan strike-slip faults are in good agreement with recent geological estimates. the velocity field shows several large undeforming areas, strain focused around some major faults, areas of diffuse strain, and dilation of the high plateau. we suggest that a new generation of dynamic models incorporating strength variations and strain-weakening mechanisms is required to explain the key observations. seismic hazard in much of the region is elevated, not just near the major faults.
crustal deformation in the india-eurasia collision zone from 25 years of gps measurements
the miocene epoch (23.03-5.33 ma) was a time interval of global warmth, relative to today. continental configurations and mountain topography transitioned toward modern conditions, and many flora and fauna evolved into the same taxa that exist today. miocene climate was dynamic: long periods of early and late glaciation bracketed a ∼2 myr greenhouse interval—the miocene climatic optimum (mco). floras, faunas, ice sheets, precipitation, pco2, and ocean and atmospheric circulation mostly (but not ubiquitously) covaried with these large changes in climate. with higher temperatures and moderately higher pco2 (∼400-600 ppm), the mco has been suggested as a particularly appropriate analog for future climate scenarios, and for assessing the predictive accuracy of numerical climate models—the same models that are used to simulate future climate. yet, miocene conditions have proved difficult to reconcile with models. this implies either missing positive feedbacks in the models, a lack of knowledge of past climate forcings, or the need for re-interpretation of proxies, which might mitigate the model-data discrepancy. our understanding of miocene climatic, biogeochemical, and oceanic changes on broad spatial and temporal scales is still developing. new records documenting the physical, chemical, and biotic aspects of the earth system are emerging, and together provide a more comprehensive understanding of this important time interval. here, we review the state-of-the-art in miocene climate, ocean circulation, biogeochemical cycling, ice sheet dynamics, and biotic adaptation research as inferred through proxy observations and modeling studies. plain language summary: during the miocene time period (∼23-5 million years ago), planet earth looked similar to today, with some important differences: the climate was generally warmer and highly variable, while atmospheric co2 was not much higher.
continental-sized ice sheets were only present on antarctica, but not in the northern hemisphere. the continents drifted to near their modern-day positions, and plants and animals evolved into the many (near) modern species. scientists study the miocene because present-day and projected future co2 levels are in the same range as those reconstructed for the miocene. therefore, if we can understand climate changes and their biotic responses from the miocene past, we are able to better predict current and future global changes. by comparing miocene climate reconstructions from fossil and chemical data to climate simulations produced by computer models, scientists are able to test their understanding of the earth system under higher co2 and warmer conditions than those of today. this helps in constraining future warming scenarios for the coming decades. in this study, we summarize the current understanding of the miocene world from data and models. we also identify gaps in our understanding that need further research attention in the future. key points: miocene floras, faunas, and paleogeography were similar to today and provide plausible analogs for future climatic warming; the miocene saw great dynamism in biotic and climate systems, but the reasons for these shifts are still not well understood; the pco2-temperature-ice relationships during major miocene climate oscillations and transitions warrant further research.
the miocene: the future of the past
interactions between crustal and mantle reservoirs dominate the surface inventory of volatile elements over geological time, moderating atmospheric composition and maintaining a life-supporting planet. while volcanoes expel volatile components into surface reservoirs, subduction of oceanic crust is responsible for replenishment of mantle reservoirs. many natural, ‘superdeep’ diamonds originating in the deep upper mantle and transition zone host mineral inclusions, indicating an affinity to subducted oceanic crust. here we show that the majority of slab geotherms will intersect a deep depression along the melting curve of carbonated oceanic crust at depths of approximately 300 to 700 kilometres, creating a barrier to direct carbonate recycling into the deep mantle. low-degree partial melts are alkaline carbonatites that are highly reactive with reduced ambient mantle, producing diamond. many inclusions in superdeep diamonds are best explained by carbonate melt-peridotite reaction. a deep carbon barrier may dominate the recycling of carbon in the mantle and contribute to chemical and isotopic heterogeneity of the mantle reservoir.
slab melting as a barrier to deep carbon subduction