| title | abstract | url | arxiv_id | date | category |
|---|---|---|---|---|---|
$20'$ Five-Point Function from $AdS_5\times S^5$ Supergravity
|
We develop new techniques to compute five-point correlation functions from
IIB supergravity on $AdS_5\times S^5$. Our methods rely entirely on symmetry
and general consistency conditions, and eschew detailed knowledge of the
supergravity effective action. We demonstrate our methods by computing the
five-point function of the $\mathbf{20'}$ operator, which is the superconformal
primary of the stress tensor multiplet. We also develop systematic methods to
compute the five-point conformal blocks in series expansions. Using the
explicit expressions of the conformal blocks, we perform a Euclidean OPE
analysis of the $\mathbf{20'}$ five-point function. We find expected agreement
with non-renormalized quantities and also extract new CFT data at strong
coupling.
|
http://arxiv.org/abs/1906.05305v2
|
1906.05305
|
2019-10-21
|
cybersecurity
|
20-fold Accelerated 7T fMRI Using Referenceless Self-Supervised Deep Learning Reconstruction
|
High spatial and temporal resolution across the whole brain is essential to accurately resolve neural activity in fMRI. Therefore, accelerated imaging techniques target improved coverage with high spatio-temporal resolution. Simultaneous multi-slice (SMS) imaging combined with in-plane acceleration is used in large studies that involve ultrahigh-field fMRI, such as the Human Connectome Project. However, at even higher acceleration rates, these methods cannot be reliably utilized due to aliasing and noise artifacts. Deep learning (DL) reconstruction techniques have recently gained substantial interest for improving highly-accelerated MRI. Supervised learning of DL reconstructions generally requires fully-sampled training datasets, which are not available for high-resolution fMRI studies. To tackle this challenge, self-supervised learning has been proposed for training DL reconstructions with only undersampled datasets, showing performance similar to supervised learning. In this study, we utilize a self-supervised physics-guided DL reconstruction on 5-fold SMS and 4-fold in-plane accelerated 7T fMRI data. Our results show that our self-supervised DL reconstruction produces high-quality images at this 20-fold acceleration, substantially improving on existing methods, while showing similar functional precision and temporal effects in the subsequent analysis compared to a standard 10-fold accelerated acquisition.
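The core self-supervised idea above — train on one subset of the acquired measurements and score on a held-out subset, never touching fully-sampled data — can be illustrated with a deliberately simplified toy. The sketch below substitutes a 1-D signal and a linear smoothness-regularized solver for the paper's physics-guided network, and uses the held-out measured samples only to pick the regularization weight; every name and number is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a smooth signal on n grid points.
n = 128
grid = np.arange(n)
x_true = np.sin(2 * np.pi * grid / n) + 0.5 * np.sin(6 * np.pi * grid / n)

# Undersampled acquisition: only a random half of the samples is measured.
omega = rng.permutation(n)[: n // 2]
y = x_true[omega]

# Self-supervised split of the *measured* indices: theta_idx feeds the solver,
# lam_idx is held out purely to score candidate regularization strengths.
k = len(omega) // 5
lam_idx, theta_idx = omega[:k], omega[k:]

# Second-difference operator acting as a smoothness prior.
D = np.diff(np.eye(n), 2, axis=0)

def reconstruct(idx, vals, lam):
    """Solve min ||A x - vals||^2 + lam ||D x||^2, with A selecting rows idx."""
    A = np.eye(n)[idx]
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ vals)

# Model selection without ground truth: score on held-out measured samples.
lams = [1e-4, 1e-2, 1e0, 1e2]
scores = [np.mean((reconstruct(theta_idx, x_true[theta_idx], lam)[lam_idx]
                   - x_true[lam_idx]) ** 2) for lam in lams]
best_lam = lams[int(np.argmin(scores))]

# Final reconstruction uses all measured samples with the selected weight.
x_rec = reconstruct(omega, y, best_lam)
print("best lam:", best_lam, " recon MSE:", np.mean((x_rec - x_true) ** 2))
```

The same split-the-measurements principle is what lets the paper train at 20-fold acceleration without any fully-sampled reference.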
|
https://arxiv.org/abs/2105.05827v1
|
2105.05827
|
2021-05-12
|
cybersecurity
|
20 GHz fiber-integrated femtosecond pulse and supercontinuum generation with a resonant electro-optic frequency comb
|
Frequency combs with mode spacing in the range of 10 to 20 gigahertz (GHz) are critical for increasingly important applications such as astronomical spectrograph calibration, high-speed dual-comb spectroscopy, and low-noise microwave generation. While electro-optic modulators and microresonators can provide narrowband comb sources at this repetition rate, a significant remaining challenge is a means to produce pulses with sufficient peak power to initiate nonlinear supercontinuum generation spanning hundreds of terahertz (THz) as required for self-referencing in these applications. Here, we provide a simple, robust, and universal solution to this problem using off-the-shelf polarization-maintaining (PM) amplification and nonlinear fiber components. This fiber-integrated approach for nonlinear temporal compression and supercontinuum generation is demonstrated with a resonant electro-optic frequency comb at 1550 nm. We show how to readily achieve pulses shorter than 60 fs at a repetition rate of 20 GHz and with peak powers in excess of 2 kW. The same technique can be applied to picosecond pulses at 10 GHz to demonstrate temporal compression by a factor of nine, yielding 50 fs pulses with peak power of 5.5 kW. These compressed pulses enable flat supercontinuum generation spanning more than 600 nm after propagation through multi-segment dispersion-tailored anomalous-dispersion highly nonlinear fiber (HNLF) or tantala waveguides. The same 10 GHz source can readily achieve an octave-spanning spectrum for self-referencing in dispersion-engineered silicon nitride waveguides. This simple all-fiber approach to nonlinear spectral broadening fills a critical gap for transforming any narrowband 10 to 20 GHz frequency comb into a broadband spectrum for a wide range of applications that benefit from the high pulse rate and require access to the individual comb modes.
|
https://arxiv.org/abs/2303.11523v1
|
2303.11523
|
2023-03-21
|
cybersecurity
|
20 GHz Low Noise LLRF System
|
A 20 GHz LLRF system is being built using a two-board (RF Front End +
ADC/DAC/FPGA) architecture. The RF Front End provides 8 down-converting
channels and 3 up-converting channels (5.5-20 GHz RF to 0.05-3 GHz IF).
Separate, phase-locked, low-noise input and output LOs are generated on-board
with an independent programmable frequency range of 4-20 GHz. A user input is
provided so that both LOs as well as all ADC, DAC, and FPGA clocks can be
locked to a supplied reference source with a frequency range from 100 MHz to
20 GHz. The IF is processed with a commercial board (HiTech Global ZRF8)
based on the Xilinx ZYNQ RFSoC FPGA. The RFSoC FPGA incorporates eight 4-GSPS
12-bit ADCs with a 4 GHz analog bandwidth and eight 6.4-GSPS 14-bit DACs. The
ZRF8 is a PCIe-standard board that provides low-noise ADC/DAC/FPGA clocking,
16 GB of memory, an FMC+ socket, and a 1 Gbps Ethernet port. The complete
system will be housed in a standard 2U 19" rack.
|
http://arxiv.org/abs/1910.11936v1
|
1910.11936
|
2019-10-23
|
cybersecurity
|
20 K superconductivity in heavily electron doped surface layer of FeSe bulk crystal
|
A superconducting transition temperature Tc as high as 100 K was recently
discovered in 1 monolayer (1ML) FeSe grown on SrTiO3 (STO). The discovery
immediately ignited efforts to identify the mechanism for the dramatically
enhanced Tc from its bulk value of 7 K. Currently, there are two main views on
the origin of the enhanced Tc; in the first view, the enhancement comes from an
interfacial effect while in the other it is from excess electrons with strong
correlation strength. The issue remains controversial, and there is evidence
supporting each view. Finding the origin of the Tc enhancement could be the key to
achieving even higher Tc and to identifying the microscopic mechanism for the
superconductivity in iron-based materials. Here, we report the observation of
20 K superconductivity in the electron doped surface layer of FeSe. The
electronic state of the surface layer possesses all the key spectroscopic
aspects of the 1ML FeSe on STO. Without any interface effect, the surface layer
state is found to have a moderate Tc of 20 K with a smaller gap opening of 4
meV. Our results clearly show that excess electrons with strong correlation
strength alone cannot induce the maximum Tc, which in turn strongly suggests
the need for an interfacial effect to reach the enhanced Tc found in 1ML FeSe/STO.
|
http://arxiv.org/abs/1511.07950v2
|
1511.07950
|
2015-12-15
|
cybersecurity
|
20-MAD -- 20 Years of Issues and Commits of Mozilla and Apache Development
|
Data from long-lived, high-profile projects is valuable for research on
successful software engineering in the wild. A dataset linking the different
software repositories of such projects enables deeper investigations. This
paper presents 20-MAD, a dataset linking the commit and issue data of Mozilla
and Apache projects. It includes over 20 years of information about 765
projects, 3.4M commits, 2.3M issues, and 17.3M issue comments, and its
compressed size is over 6 GB. The data contains all the typical information
about source code commits (e.g., lines added and removed, message, and commit
time) and issues (status, severity, votes, and summary). The issue comments
have been pre-processed for natural language processing and sentiment
analysis, including emoticons and valence and arousal scores. Linking code
repository and issue tracker information allows studying individuals across
the two types of repositories and provides more accurate time zone
information for issue trackers as well. To our knowledge, this is the largest
linked dataset in size and in project lifetime that is not based on GitHub.
|
http://arxiv.org/abs/2003.14015v1
|
2003.14015
|
2020-03-31
|
cybersecurity
|
20min-XD: A Comparable Corpus of Swiss News Articles
|
We present 20min-XD (20 Minuten cross-lingual document-level), a French-German, document-level comparable corpus of news articles, sourced from the Swiss online news outlet 20 Minuten/20 minutes. Our dataset comprises around 15,000 article pairs spanning 2015 to 2024, automatically aligned based on semantic similarity. We detail the data collection process and alignment methodology. Furthermore, we provide a qualitative and quantitative analysis of the corpus. The resulting dataset exhibits a broad spectrum of cross-lingual similarity, ranging from near-translations to loosely related articles, making it valuable for various NLP applications and broad linguistically motivated studies. We publicly release the dataset in document- and sentence-aligned versions and code for the described experiments.
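The abstract says the article pairs were "automatically aligned based on semantic similarity"; the exact method is not specified here, but a common recipe for that task is greedy one-to-one matching of document embeddings by cosine similarity. The sketch below is a hypothetical version of that recipe with hand-made 3-dimensional "embeddings"; a real pipeline would use multilingual sentence embeddings of the French and German articles.

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def align(src_docs, tgt_docs, threshold=0.5):
    """Greedy one-to-one document alignment by embedding cosine similarity."""
    scored = sorted(
        ((cosine(a, b), i, j)
         for i, a in enumerate(src_docs)
         for j, b in enumerate(tgt_docs)),
        reverse=True)
    used_src, used_tgt, matches = set(), set(), []
    for sim, i, j in scored:
        if sim >= threshold and i not in used_src and j not in used_tgt:
            matches.append((i, j, round(sim, 3)))
            used_src.add(i)
            used_tgt.add(j)
    return matches

# Hand-made 3-d "embeddings" standing in for multilingual sentence vectors.
fr = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.6]]
de = [[0.1, 0.9, 0.5], [0.8, 0.2, 0.1]]
matches = align(fr, de)
print(matches)  # the cross pairs (fr[0], de[1]) and (fr[1], de[0]) score highest
```

The similarity threshold is what would produce the "broad spectrum of cross-lingual similarity" the abstract mentions: lowering it admits loosely related pairs, raising it keeps only near-translations.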
|
https://arxiv.org/abs/2504.21677v1
|
2504.21677
|
2025-04-30
|
cybersecurity
|
20-Mode Universal Quantum Photonic Processor
|
Integrated photonics is an essential technology for optical quantum computing. Universal, phase-stable, reconfigurable multimode interferometers (quantum photonic processors) enable manipulation of photonic quantum states and are one of the main components of photonic quantum computers in various architectures. In this paper, we report the realization of the largest quantum photonic processor to date. The processor enables arbitrary unitary transformations on its 20 input modes with an amplitude fidelity of $F_{\text{Haar}} = 97.4\%$ and $F_{\text{Perm}} = 99.5\%$ for Haar-random and permutation matrices, respectively, an optical loss of 2.9 dB averaged over all modes, and high-visibility quantum interference with $V_{\text{HOM}}=98\%$. The processor is realized in $\mathrm{Si_3N_4}$ waveguides and is actively cooled by a Peltier element.
|
https://arxiv.org/abs/2203.01801v5
|
2203.01801
|
2022-03-03
|
cybersecurity
|
20 open questions about deformations of compactifiable manifolds
|
Deformation theory of complex manifolds is a classical subject with recent
new advances in the noncompact case using both algebraic and analytic methods.
In this note, we recall some concepts of the existing theory and introduce
new notions of deformations for manifolds with boundary, for compactifiable
manifolds, and for $q$-concave spaces. We highlight some of the possible
applications and give a list of open questions which we intend as a guide for
further research in this rich and beautiful subject.
|
http://arxiv.org/abs/2004.11299v1
|
2004.11299
|
2020-04-23
|
cybersecurity
|
20 ps Time Resolution with a Fully-Efficient Monolithic Silicon Pixel Detector without Internal Gain Layer
|
A second monolithic silicon pixel prototype was produced for the MONOLITH project. The ASIC contains a matrix of hexagonal pixels with 100 {\mu}m pitch, read out by low-noise and very fast SiGe HBT front-end electronics. Wafers with a 50 {\mu}m thick epilayer of 350 {\Omega}cm resistivity were used to produce a fully depleted sensor. Laboratory and testbeam measurements of the analog channels present in the pixel matrix show that the sensor has a 130 V wide bias-voltage operation plateau at which the efficiency is 99.8%. Although this prototype does not include an internal gain layer, the timing-optimised design of the sensor and the front-end electronics provides a time resolution of 20 ps.
|
https://arxiv.org/abs/2301.12244v1
|
2301.12244
|
2023-01-28
|
cybersecurity
|
20 T Dipole Magnet Based on Hybrid HTS/LTS Cos-Theta Coils with Stress Management
|
This paper presents the design concept of a dipole magnet with 50 mm aperture, 20 T nominal field, and 13% margin based on a six-layer cos-theta (CT) hybrid coil design. Due to the high stresses and strains in the coil at high field, Stress Management (SM) elements are implemented in the CT coil geometry. The results of the magnetic analysis of the magnet are presented and discussed. The key parameters of this design are compared with those of similar magnets based on block-type and canted cos-theta coils.
|
https://arxiv.org/abs/2305.06776v1
|
2305.06776
|
2023-05-11
|
cybersecurity
|
(2,0) theory on $S^5 \times S^1$ and quantum M2 branes
|
The superconformal index $Z$ of the 6d (2,0) theory on $S^5 \times S^1$ (which is related to the localization partition function of 5d SYM on $S^5$) should be captured at large $N$ by the quantum M2 brane theory in the dual M-theory background. Generalizing the type IIA string theory limit of this relation discussed in arXiv:2111.15493 and arXiv:2304.12340, we consider semiclassically quantized M2 branes in a half-supersymmetric 11d background which is a twisted product of thermal AdS$_7$ and $S^4$. We show that the leading non-perturbative term at large $N$ is reproduced precisely by the 1-loop partition function of an "instanton" M2 brane wrapped on $S^1\times S^2$ with $S^2\subset S^4$. Similarly, the (2,0) theory analog of the BPS Wilson loop expectation value is reproduced by the partition function of a "defect" M2 brane wrapped on thermal AdS$_3\subset$ AdS$_7$. We comment on a curious analogy of these results with similar computations in arXiv:2303.15207 and arXiv:2307.14112 of the partition function of quantum M2 branes in AdS$_4 \times S^7/\mathbb Z_k$ which reproduced the corresponding localization expressions in the ABJM 3d gauge theory.
|
https://arxiv.org/abs/2309.10786v4
|
2309.10786
|
2023-09-19
|
cybersecurity
|
20 Years of ACE Data: How Superposed Epoch Analyses Reveal Generic Features in Interplanetary CME Profiles
|
Interplanetary coronal mass ejections (ICMEs) are magnetic structures
propagating from the Sun's corona to the interplanetary medium. With over 20
years of observations at the L1 libration point, ACE offers hundreds of ICMEs
detected at different times during several solar cycles and with different
features such as the propagation speed. We investigate a revisited catalog of
more than 400 ICMEs using the superposed epoch method on the mean, median, and
the most probable values of the distribution of magnetic and plasma parameters.
We also investigate the effects of the speed of ICMEs relative to the solar
wind, the solar cycle, and the existence of a magnetic cloud on the generic
ICME profile. We find that fast-propagating ICMEs (relative to the solar
wind ahead of them) still show signs of compression at 1 au, as seen in the
compressed sheath and the asymmetric magnetic field profile. While the solar cycle
evolution does not impact the generic features of ICMEs, there are more extreme
events during the active part of the cycle, widening the distributions of all
parameters. Finally, we find that ICMEs with or without a detected magnetic
cloud show similar profiles, which confirms the hypothesis that ICMEs with no
detected magnetic clouds are crossed further away from the flux rope core. Such
a study provides a generic understanding of processes that shape the overall
features of ICMEs in the solar wind and can be extended with future missions at
different locations in the solar system.
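The superposed epoch method named above is easy to sketch: align every event on a common epoch (here t = 0 at ICME arrival), then take statistics across events at each epoch time so per-event noise cancels and the generic profile emerges. The profile shape and all numbers below are invented purely for illustration, not ACE data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hours relative to a common epoch (t = 0 marks each synthetic ICME arrival).
t = np.linspace(-24, 48, 73)

# Invented "generic" profile: quiet wind, then a jump at arrival that decays.
generic = 5.0 + 8.0 * np.exp(-np.maximum(t, 0.0) / 12.0) * (t >= 0)

# 400 synthetic events: the shared profile buried in per-event noise.
events = generic + rng.normal(0.0, 1.5, size=(400, t.size))

# Superposed epoch analysis: statistics across events at each epoch time.
mean_prof = events.mean(axis=0)
median_prof = np.median(events, axis=0)

# Stacking suppresses the noise, so the mean tracks the generic profile.
print("max deviation of stacked mean:", np.abs(mean_prof - generic).max())
```

With 400 events the standard error of the stacked mean is sigma/20, which is why the paper's catalog of 400+ ICMEs can resolve generic features invisible in any single event.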
|
http://arxiv.org/abs/2011.05050v1
|
2011.05050
|
2020-11-10
|
cybersecurity
|
20 Years of DDoS: a Call to Action
|
Botnet Distributed Denial of Service (DDoS) attacks are now 20 years old;
what has changed in that time? Their disruptive presence, their volume,
distribution across the globe, and the relative ease of launching them have all
been trending in favor of attackers. Our increases in network capacity and our
architectural design principles are making our online world richer, but are
favoring attackers at least as much as Internet services. DDoS mitigation
techniques have been evolving, but they are losing ground to the increasing
sophistication and diversification of the attacks that have moved from the
network to the application level, and we are operationally falling behind
attackers. It is time to ask fundamental questions: are there core design
issues in our network architecture that fundamentally enable DDoS attacks? How
can our network infrastructure be enhanced to address the principles that
enable the DDoS problem? How can we incentivize the development and deployment
of the necessary changes? In this article, we want to sound an alarm and issue
a call to action to the research community. We propose that basic research and
principled analyses are badly needed, because the status quo does not paint a
pretty picture for the future.
|
http://arxiv.org/abs/1904.02739v2
|
1904.02739
|
2019-04-21
|
cybersecurity
|
20 years of developments in optical frequency comb technology and applications
|
Optical frequency combs were developed nearly two decades ago to support the
world's most precise atomic clocks. Acting as precision optical synthesizers,
frequency combs enable the precise transfer of phase and frequency information
from a high-stability reference to hundreds of thousands of tones in the
optical domain. This versatility, coupled with near-continuous spectroscopic
coverage from the terahertz to the extreme ultra-violet, has enabled precision
measurement capabilities in both fundamental and applied contexts. This review
takes a tutorial approach to illustrate how 20 years of source development and
technology has facilitated the journey of optical frequency combs from the lab
into the field.
|
http://arxiv.org/abs/1909.05384v1
|
1909.05384
|
2019-09-11
|
cybersecurity
|
20 years of disk winds in 4U 1630-47 -- I. Long-term behavior and influence of hard X-rays
|
Highly ionized X-ray wind signatures have been found in the soft states of high-inclination Black Hole Low Mass X-ray Binaries (BHLMXBs) for more than two decades. Yet signs of a systematic evolution of the outflow itself along the outburst remain elusive, due to the limited sampling of individual sources and the necessity to consider the broad-band evolution of the Spectral Energy Distribution (SED). We perform a holistic analysis of archival X-ray wind signatures in the most observed wind-emitting transient BHLMXB to date, 4U 1630-47. The combination of Chandra, NICER, NuSTAR, Suzaku, and XMM-Newton, complemented in hard X-rays by Swift/BAT and INTEGRAL, spans more than 200 individual days over 9 individual outbursts, and provides a near complete broad-band coverage of the brighter portion of the outburst. Our results show that the hard X-rays allow us to define "soft" states with ubiquitous wind detections, and that their contribution is strongly correlated with the Equivalent Width (EW) of the lines. We then constrain the evolution of the outflow in a set of representative observations, using thermal stability curves and photoionization modeling. The former confirms that the switch to unstable SEDs occurs well after the wind signatures disappear, to the point where the last canonical hard states are thermally stable. The latter shows that intrinsic changes in the outflow are required to explain the main correlations of the line EWs, be it with luminosity or the hard X-rays. These behaviors are seen systematically over all outbursts and confirm individual links between the wind properties, the thermal disk, and the corona.
|
https://arxiv.org/abs/2504.00991v1
|
2504.00991
|
2025-04-01
|
cybersecurity
|
20 Years of Evolution from Cognitive to Intelligent Communications
|
It has been 20 years since the concept of cognitive radio (CR) was proposed
as an efficient approach to provide more access opportunities for connecting
massive numbers of wireless devices. To improve spectrum efficiency, CR
enables unlicensed usage of licensed spectrum resources, and it has been
regarded as the key enabler for intelligent communications. In this article,
we provide an overview of intelligent communications over the past two
decades to illustrate the evolution of their capability from cognition to
artificial intelligence (AI). In particular, this article starts with a
comprehensive review of typical spectrum sensing and sharing, followed by
recent achievements in AI-enabled intelligent radio. Moreover, research
challenges in future intelligent communications are discussed to show a path
to the real deployment of intelligent radio. After witnessing the glorious
development of CR in the past 20 years, we try to give readers a clear
picture of how intelligent radio could be further developed to smartly
utilize the limited spectrum resources as well as to optimally configure
wireless devices in future communication systems.
|
http://arxiv.org/abs/1909.11562v1
|
1909.11562
|
2019-09-25
|
cybersecurity
|
20 years of Greedy Randomized Adaptive Search Procedures with Path Relinking
|
This is a comprehensive review of the Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic and its hybridization with Path Relinking (PR) over the past two decades. GRASP with PR has become a widely adopted approach for solving hard optimization problems since its proposal in 1999. The paper covers the historical development of GRASP with PR and its theoretical foundations, as well as recent advances in its implementation and application. The review includes a critical analysis of variants of PR, including memory-based and randomized designs, with a total of ten different implementations. It describes these advanced designs both theoretically and practically on two well-known optimization problems, linear ordering and max-cut. The paper also explores the hybridization of GRASP with PR and other metaheuristics, such as Tabu Search and Scatter Search. Overall, this review provides valuable insights for researchers and practitioners seeking to utilize GRASP with PR for solving optimization problems.
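As a concrete illustration of the two components the review combines, here is a compact, hypothetical GRASP-with-PR sketch for max-cut (one of the two benchmark problems the paper uses): semi-greedy construction with a restricted candidate list (RCL), 1-flip local search, and a path-relinking walk between the current solution and the incumbent. The single-elite design and parameter names are my simplifications, not any of the paper's ten PR implementations.

```python
import random

def grasp_pr_maxcut(n, edges, iters=30, alpha=0.5, seed=0):
    """GRASP with Path Relinking sketch for max-cut (illustrative, single elite)."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def cut_value(side):
        return sum(w for u, v, w in edges if side[u] != side[v])

    def flip_gain(side, v):
        # change in cut value if vertex v switches sides
        return sum(w if side[v] == side[u] else -w for u, w in adj[v])

    def local_search(side):
        improved = True
        while improved:  # 1-flip local search to a local optimum
            improved = False
            for v in range(n):
                if flip_gain(side, v) > 0:
                    side[v] ^= 1
                    improved = True
        return side

    def construct():
        # Semi-greedy construction: each vertex joins a side drawn from the
        # restricted candidate list (RCL) of near-best greedy choices.
        side = [-1] * n
        for v in rng.sample(range(n), n):
            g = [0.0, 0.0]  # cut weight gained by placing v on side 0 / 1
            for u, w in adj[v]:
                if side[u] != -1:
                    g[1 - side[u]] += w
            hi, lo = max(g), min(g)
            rcl = [s for s in (0, 1) if g[s] >= hi - alpha * (hi - lo)]
            side[v] = rng.choice(rcl)
        return side

    def path_relink(src, guide):
        # Walk from src toward guide, flipping differing vertices greedily
        # and keeping the best intermediate solution on the path.
        cur, best, best_val = src[:], src[:], cut_value(src)
        diff = [v for v in range(n) if src[v] != guide[v]]
        while diff:
            v = max(diff, key=lambda x: flip_gain(cur, x))
            cur[v] ^= 1
            diff.remove(v)
            if cut_value(cur) > best_val:
                best, best_val = cur[:], cut_value(cur)
        return best

    best, best_val = None, -1.0
    for _ in range(iters):
        s = local_search(construct())
        if best is not None:
            s = local_search(path_relink(s, best))
        if cut_value(s) > best_val:
            best, best_val = s[:], cut_value(s)
    return best_val, best

# K_{2,2}: the optimum cut separates {0,1} from {2,3} with value 4.
edges = [(0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0), (1, 3, 1.0)]
print(grasp_pr_maxcut(4, edges)[0])
```

The `alpha` parameter interpolates between pure greedy (alpha = 0) and pure random (alpha = 1) construction, which is the knob GRASP tunes to diversify restarts.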
|
https://arxiv.org/abs/2312.12663v1
|
2312.12663
|
2023-12-19
|
cybersecurity
|
20 Years of Light Pentaquark Searches
|
In this paper, I pay tribute to my exceptional colleagues and friends Dmitri Diakonov, Victor Petrov, and Maxim Polyakov by examining the experimental progress and current status of the searches of the $\Theta^+$ pentaquark from its inception to the present.
|
https://arxiv.org/abs/2503.21545v2
|
2503.21545
|
2025-03-27
|
cybersecurity
|
20 Years of Mobility Modeling & Prediction: Trends, Shortcomings & Perspectives
|
In this paper, we present a comprehensive survey of human-mobility modeling
based on 1680 articles published between 1999 and 2019, which can serve as a
roadmap for research and practice in this area. Mobility modeling research has
accelerated the advancement of several fields of studies such as urban
planning, epidemic modeling, traffic engineering and contributed to the
development of location-based services. However, while the application of
mobility models in different domains has increased, the credibility of the
research results has decreased. We highlight two significant shortfalls
commonly observed in our reviewed studies: (1) data-agnostic model selection
resulting in a poor tradeoff between accuracy vs. complexity, and (2) failure
to identify the source of empirical gains, due to adoption of inaccurate
validation methodologies. We also observe troubling trends with respect to
application of Markov model variants for modeling mobility, despite the
questionable association of Markov processes and human-mobility dynamics. To
this end, we propose a data-driven mobility-modeling framework that quantifies
the characteristics of a dataset based on four mobility meta-attributes, in
order to select the most appropriate prediction algorithm. Experimental
evaluations on three real-world mobility datasets based on a rigorous
validation methodology demonstrate our framework's ability to correctly analyze
the model accuracy vs. complexity tradeoff. We offer these results to the
community along with the tools and the literature meta-data in order to improve
the reliability and credibility of human mobility modeling research.
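The Markov-model variants whose use the survey questions are simple to state: a first-order model predicts the next location as the most frequently observed successor of the current one. A minimal sketch with a toy trace (location labels are hypothetical):

```python
from collections import Counter, defaultdict

def train_markov(trace):
    """First-order Markov mobility model: successor counts per location."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most frequently observed successor (None if unseen)."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Hypothetical visit sequence; labels are illustrative only.
trace = ["home", "work", "home", "gym", "home", "work", "home", "work", "home"]
model = train_markov(trace)
print(predict_next(model, "home"))  # "work" (3 of 4 observed successors)
print(predict_next(model, "work"))  # "home" (always followed by home here)
```

The survey's point is precisely that such a model's accuracy depends on how Markovian a given dataset actually is, which is what its four mobility meta-attributes are meant to quantify before a model is chosen.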
|
http://arxiv.org/abs/1906.07451v1
|
1906.07451
|
2019-06-18
|
cybersecurity
|
20 years of network community detection
|
A fundamental technical challenge in the analysis of network data is the automated discovery of communities - groups of nodes that are strongly connected or that share similar features or roles. In this commentary we review progress in the field over the last 20 years.
|
https://arxiv.org/abs/2208.00111v2
|
2208.00111
|
2022-07-30
|
cybersecurity
|
20 years of ordinal patterns: Perspectives and challenges
|
In 2002, in a seminal article, Christoph Bandt and Bernd Pompe proposed a new methodology for the analysis of complex time series, now known as Ordinal Analysis. The ordinal methodology is based on the computation of symbols (known as ordinal patterns) which are defined in terms of the temporal ordering of data points in a time series, and whose probabilities are known as ordinal probabilities. With the ordinal probabilities, the Shannon entropy can be calculated, which is the permutation entropy. Since it was proposed, the ordinal method has found applications in fields as diverse as biomedicine and climatology. However, some properties of ordinal probabilities are still not fully understood, and how to combine the ordinal approach of feature extraction with machine learning techniques for model identification, time series classification or forecasting remains a challenge. The objective of this perspective article is to present some recent advances and to discuss some open problems.
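The pipeline the abstract describes — ordinal patterns from the temporal ordering of points, ordinal probabilities, then Shannon entropy as the permutation entropy — fits in a few lines. A minimal sketch with embedding dimension d = 3 and ties broken by temporal order (a common convention; Bandt and Pompe's paper assumes distinct values):

```python
import math
import random
from collections import Counter

def permutation_entropy(series, d=3):
    """Normalized permutation entropy (Bandt & Pompe, 2002).

    Each length-d window maps to its ordinal pattern: the permutation that
    sorts it (ties broken by temporal order). The Shannon entropy of the
    pattern probabilities, divided by log(d!), lies in [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1))
    total = sum(patterns.values())
    h = sum(-c / total * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(d))

print(permutation_entropy(list(range(100))))  # monotone series: 0.0
rng = random.Random(42)
noise = [rng.random() for _ in range(2000)]
print(round(permutation_entropy(noise), 2))  # iid noise: close to 1
```

A monotone series uses a single pattern (entropy 0), while iid noise spreads mass over all d! patterns (entropy near 1) — the two extremes that make the measure useful as a complexity feature.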
|
https://arxiv.org/abs/2204.12883v1
|
2204.12883
|
2022-04-27
|
cybersecurity
|
20 years of photometric microlensing events predicted by Gaia DR2: Potential planet-hosting lenses within 100 pc
|
Context. Gaia DR2 offers unparalleled precision on stars' parallaxes and
proper motions. This allows the prediction of microlensing events for which the
lens stars (and any planets they possess) are nearby and may be well studied
and characterised. Aims. We identify a number of potential microlensing events
that will occur before the year 2035.5, 20 years from the Gaia DR2 reference
epoch. Methods. We query Gaia DR2 for potential lenses within 100 pc, extract
parallaxes and proper motions of the lenses and background sources, and
identify potential lensing events. We estimate the lens masses from Priam
effective temperatures, and use these to calculate peak magnifications and the
size of the Einstein radii relative to the lens stars' habitable zones.
Results. We identify 7 future events with a probability > 10% of an alignment
within one Einstein radius. Of particular interest is DR2 5918299904067162240
(WISE J175839.20-583931.6), magnitude G = 14.9, which will lens a G = 13.9
background star in early 2030, with a median 23% net magnification. Other pairs
are typically fainter, hampering characterisation of the lens (if the lens is
faint) or the ability to accurately measure the magnification (if the source is
much fainter than the lens). Of timely interest is DR2 4116504399886241792
(2MASS J17392440-2327071), which will lens a background star in July 2020,
albeit with weak net magnification (0.03%). Median magnifications for the other
5 high-probability events range from 0.3% to 5.3%. The Einstein radii for these
lenses are 1-10 times the radius of the habitable zone, allowing these lensing
events to pick out cold planets around the ice line, and filling a gap between
transit and current microlensing detections of planets around very low-mass
stars. Conclusions. We provide a catalogue of the predicted events to aid
future characterisation efforts... [abridged]
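For reference, the two quantities this analysis revolves around — the angular Einstein radius and the point-lens magnification — can be computed directly from the standard lensing formulas; the lens/source configuration below is invented for illustration, not one of the catalogued events.

```python
import math

# SI constants (rounded)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
PC = 3.086e16          # parsec, m
MAS = math.radians(1 / 3.6e6)  # one milliarcsecond in radians

def einstein_radius_mas(m_lens_msun, d_lens_pc, d_source_pc):
    """theta_E = sqrt(4GM/c^2 * (D_S - D_L) / (D_L * D_S)), in mas."""
    d_l, d_s = d_lens_pc * PC, d_source_pc * PC
    theta = math.sqrt(4 * G * m_lens_msun * M_SUN / C**2 * (d_s - d_l) / (d_l * d_s))
    return theta / MAS

def point_lens_magnification(u):
    """Total point-lens magnification at impact parameter u (in Einstein radii)."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# Invented configuration: a 0.3 M_sun lens at 30 pc, source at 1 kpc.
print(round(einstein_radius_mas(0.3, 30, 1000), 1), "mas")
print(round(point_lens_magnification(1.0), 3))  # u = 1 gives A = 3/sqrt(5) ~ 1.342
```

The steep falloff of A(u) toward 1 at large u is why the paper's "net magnifications" of a few percent correspond to source passages at several Einstein radii.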
|
http://arxiv.org/abs/1805.11638v2
|
1805.11638
|
2018-07-10
|
cybersecurity
|
$^{210}$Pb measurements at the André E. Lalonde AMS Laboratory for the radioassay of materials used in rare event search detectors
|
Naturally occurring radionuclide $^{210}$Pb ($T_{1/2}$=22.2 y) is an important source of background in rare event searches, such as neutrinoless double-$\beta$ decay and dark matter direct detection experiments. When a sample mass of hundreds of grams is available, $\gamma$-counting measurements can be performed. However, there are other cases where only grams of sample can be used. For these cases, better sensitivities are required. In this paper, in collaboration with the Astroparticle Physics group at Carleton University, the capabilities of the A.E. Lalonde AMS Laboratory at the University of Ottawa for $^{210}$Pb measurements are discussed. PbF$_{2}$ and PbO targets were used, selecting in the low energy sector, respectively, (PbF$_{3}$)$^{-}$ or (PbO$_{2}$)$^{-}$ ions. For fluoride targets, the blank $^{210}$Pb/$^{206}$Pb ratio was in the 10$^{-14}$ to 10$^{-13}$ range, but current output was lower and less stable. For oxide targets, current output showed better stability, despite a significant difference in current output for commercial PbO and processed samples, and background studies suggested a background not much higher than that of the fluoride targets. Both target materials showed, therefore, good performance for $^{210}$Pb AMS assay. Measurements of Kapton films, an ultra-thin polymer material, where masses available are typically just several grams, were performed. 90% C.L. upper limits for the $^{210}$Pb specific activity in the range of 0.74-2.8 Bq/kg were established for several Kapton HN films.
|
https://arxiv.org/abs/2102.06776v2
|
2102.06776
|
2021-02-15
|
cybersecurity
|
$2^{1296}$ Exponentially Complex Quantum Many-Body Simulation via Scalable Deep Learning Method
|
For decades, researchers have been developing efficient numerical methods for solving the challenging quantum many-body problem, whose Hilbert space grows exponentially with the size of the problem. However, this journey is far from over, as previous methods all have serious limitations. Recently developed deep learning methods provide a promising new route to solve the long-standing quantum many-body problem. We report that a deep learning based simulation protocol can achieve a solution with state-of-the-art precision in a Hilbert space as large as $2^{1296}$ for spin systems and $3^{144}$ for fermion systems, using an HPC-AI hybrid framework on the new Sunway supercomputer. With high scalability up to 40 million heterogeneous cores, our applications measured 94% weak-scaling efficiency and 72% strong-scaling efficiency. This work opens the door to simulating spin models and fermion models on unprecedented lattice sizes with extremely high precision.
|
https://arxiv.org/abs/2204.07816v1
|
2204.07816
|
2022-04-16
|
cybersecurity
|
2-16 GHz Multifrequency X-Cut Lithium Niobate NEMS Resonators on a Single Chip
|
This work presents the design, fabrication, and testing of X-Cut Lithium Niobate (LN) acoustic nanoelectromechanical (NEMS) Laterally Vibrating Resonators (LVRs) and Degenerate LVRs (d-LVRs) operating in the S0 (YZ30) and SH0 (YZ-10) modes between 2 to 16 GHz range, monolithically fabricated on a single chip. The NEMS topology is optimized to extend the aforementioned fundamental modes in the C-, X-, and Ku-bands while preserving performance and mass manufacturability. The devices present acoustic wavelengths ({\lambda}) varying between 1800 and 400 nm and are fabricated on a 100 nm ultra-thin LN film on high resistivity silicon with a 3-mask process. Experimental results highlighted quality factor at resonance (Qs) and mechanical quality factors (Qm) as high as 477 and 1750, respectively, and electromechanical coupling (kt2) as high as 32.7%. Large kt2 (>10%) are recorded over a broad range of frequencies (2 - 8 GHz), while Qm exceeding 100 are measured up to 15 GHz. Further enhancement to performance and range of operation on the same chip can be achieved by decreasing {\lambda}, refining the fabrication process, and optimizing device topology. These additional steps can help pave the way for manufacturing high-performance resonators on a single chip covering the entire 1 - 25 GHz spectrum.
|
https://arxiv.org/abs/2405.05547v1
|
2405.05547
|
2024-05-09
|
cybersecurity
|
(216) Kleopatra, a low density critically rotating M-type asteroid
|
Context. The recent estimates of the 3D shape of the M/Xe-type triple asteroid system (216) Kleopatra indicated a density of 5 g cm$^{-3}$. Such a high density implies a high metal content and a low porosity which is not easy to reconcile with its peculiar dumbbell shape. Aims. Given the unprecedented angular resolution of the VLT/SPHERE/ZIMPOL camera, we aim to constrain the mass and the shape of Kleopatra with high accuracy, hence its density. Methods. We combined our new VLT/SPHERE observations of Kleopatra recorded in 2017 and 2018 with archival data, as well as lightcurve, occultation, and delay-Doppler images, to derive its 3D shape model using two different algorithms (ADAM, MPCD). Furthermore, an N-body dynamical model allowed us to retrieve the orbital elements of the two moons as explained in the accompanying paper. Results. The shape of Kleopatra is very close to an equilibrium dumbbell figure with two lobes and a thick neck. Its volume equivalent diameter (118.75$\pm$1.40) km and mass (2.97$\pm$0.32) 10$^{18}$ kg imply a bulk density of (3.38$\pm$0.50) g cm$^{-3}$. Such a low density for a supposedly metal-rich body indicates a substantial porosity within the primary. This porous structure along with its near-equilibrium shape is compatible with a formation scenario including a giant impact followed by reaccumulation. Kleopatra's current rotation period and dumbbell shape imply that it is in a critically rotating state. The low effective gravity along the equator of the body, together with the equatorial orbits of the moons and possibly rubble-pile structure, opens the possibility that the moons formed via mass shedding. Conclusions. Kleopatra is a puzzling multiple system due to the unique characteristics of the primary. It deserves particular attention in the future, with the Extremely Large Telescopes and possibly a dedicated space mission.
|
https://arxiv.org/abs/2108.07207v1
|
2108.07207
|
2021-08-16
|
cybersecurity
|
21 Balmer Jump Street: The Nebular Continuum at High Redshift and Implications for the Bright Galaxy Problem, UV Continuum Slopes, and Early Stellar Populations
|
We study, from both a theoretical and observational perspective, the physical origin and spectroscopic impact of extreme nebular emission in high-redshift galaxies. The nebular continuum, which can appear during extreme starbursts, is of particular importance as it tends to redden UV slopes and has a significant contribution to the UV luminosities of galaxies. Furthermore, its shape can be used to infer the gas density and temperature of the ISM. First, we provide a theoretical background, showing how different stellar populations (SPS models, IMFs, and stellar temperatures) and nebular conditions impact observed galaxy spectra. We demonstrate that, for systems with strong nebular continuum emission, 1) UV fluxes can increase by up to 0.7~magnitudes (or more in the case of hot/massive stars) above the stellar continuum, which may help reconcile the surprising abundance of bright high-redshift galaxies and the elevated UV luminosity density at $z>10$, 2) at high gas densities, UV slopes can redden from $\beta\lesssim-2.5$ to $\beta\sim-1$, 3) observational measurements of $\xi_{ion}$ are grossly underestimated, and 4) UV downturns from two-photon emission can masquerade as DLAs. Second, we present a dataset of 58 galaxies observed with NIRSpec on JWST at $2.5<z<9.0$ that are selected to have strong nebular continuum emission via the detection of the Balmer jump. Five of the 58 spectra are consistent with being dominated by nebular emission, exhibiting both a Balmer jump and a UV downturn consistent with two-photon emission. For some galaxies, this may imply the presence of hot massive stars and a top-heavy IMF. We conclude by exploring the properties of spectroscopically confirmed $z>10$ galaxies, finding that UV slopes and UV downturns are in some cases redder or steeper than expected from SPS models, which may hint at more exotic (e.g. hotter/more massive stars or AGN) ionizing sources.
|
https://arxiv.org/abs/2408.03189v2
|
2408.03189
|
2024-08-06
|
cybersecurity
|
21-cm Constraints on Dark Matter Annihilation after an Early Matter-Dominated Era
|
Although it is commonly assumed that relativistic particles dominate the energy density of the universe quickly after inflation, a variety of well-motivated scenarios predict an early matter-dominated era (EMDE) before the onset of Big Bang nucleosynthesis. Subhorizon dark matter density perturbations grow faster during an EMDE than during a radiation-dominated era, leading to the formation of "microhalos" far earlier than in standard models of structure formation. This enhancement of small-scale structure boosts the dark-matter annihilation rate, which contributes to the heating of the intergalactic medium (IGM). We compute how the dark matter annihilation rate evolves after an EMDE and forecast how well measurements of the 21-cm background can detect dark matter annihilation in cosmologies with EMDEs. We find that future measurements of the global 21-cm signal at a redshift of $z\sim 17$ are unlikely to improve on bounds derived from observations of the isotropic gamma-ray background, but measurements of the 21-cm power spectrum have the potential to detect dark matter annihilation following an EMDE. Moreover, dark matter annihilation and astrophysical X-rays produce distinct heating signatures in the 21-cm power spectrum at redshifts around 14, potentially allowing differentiation between these two IGM heating mechanisms.
|
https://arxiv.org/abs/2502.08719v1
|
2502.08719
|
2025-02-12
|
cybersecurity
|
21-cm constraints on spinning primordial black holes
|
Hawking radiation from primordial black holes (PBHs) can ionize and heat up neutral gas during the cosmic dark ages, leaving imprints on the global 21-cm signal of neutral hydrogen. We use the global 21-cm signal to constrain the abundance of spinning PBHs in the mass range of $[2 \times 10^{13}, 10^{18}]$ grams. We consider several extended PBH distribution models. Our results show that 21-cm data can set the most stringent PBH bounds in our mass window. Compared with constraints set by {\it Planck} cosmic microwave background (CMB) data, 21-cm limits are more stringent by about two orders of magnitude. PBHs with higher spin are typically more strongly constrained. Our 21-cm constraints for the monochromatic mass distribution rule out spinless PBHs with initial mass below $1.5 \times 10^{17}$ g, whereas extreme Kerr PBHs with reduced initial spin of $a_0=0.999$ are excluded as the dominant dark matter component for masses below $6 \times 10^{17}$ g. We also derive limits for the log-normal, power-law, and critical collapse PBH mass distributions.
|
https://arxiv.org/abs/2108.13256v2
|
2108.13256
|
2021-08-30
|
cybersecurity
|
21 cm cosmology and spin temperature reduction via spin-dependent dark matter interactions
|
The EDGES low-band experiment has measured an absorption feature in the cosmic microwave background radiation (CMB), corresponding to the 21 cm hyperfine transition of hydrogen at redshift $z \simeq 17$, before the era of cosmic reionization. The amplitude of this absorption is connected to the ratio of singlet and triplet hyperfine states in the hydrogen gas, which can be parametrized by a spin temperature. The EDGES result suggests that the spin temperature is lower than the expected temperatures of both the CMB and the hydrogen gas. A variety of mechanisms have been proposed in order to explain this signal, for example by lowering the kinetic temperature of the hydrogen gas via dark matter interactions. We introduce an alternative mechanism, by which a sub-GeV dark matter particle with spin-dependent coupling to nucleons or electrons can cause hyperfine transitions and lower the spin temperature directly, with negligible reduction of the kinetic temperature of the hydrogen gas. We consider a model with an asymmetric dark matter fermion and a light pseudo-vector mediator. Significant reduction of the spin temperature by this simple model is excluded, most strongly by coupling constant bounds coming from stellar cooling. Perhaps an alternative dark sector model, subject to different sets of constraints, can lower the spin temperature by the same mechanism.
|
https://arxiv.org/abs/1902.09552v2
|
1902.09552
|
2019-02-25
|
cybersecurity
|
21cmEMU: an emulator of 21cmFAST summary observables
|
Recent years have witnessed rapid progress in observations of the Epoch of Reionization (EoR). These have enabled high-dimensional inference of galaxy and intergalactic medium (IGM) properties during the first billion years of our Universe. However, even using efficient, semi-numerical simulations, traditional inference approaches that compute 3D lightcones on-the-fly can take $10^5$ core hours. Here we present 21cmEMU: an emulator of several summary observables from the popular 21cmFAST simulation code. 21cmEMU takes as input nine parameters characterizing EoR galaxies, and outputs the following summary statistics: (i) the IGM mean neutral fraction; (ii) the 21-cm power spectrum; (iii) the mean 21-cm spin temperature; (iv) the sky-averaged (global) 21-cm signal; (v) the ultraviolet (UV) luminosity functions (LFs); and (vi) the Thomson scattering optical depth to the cosmic microwave background (CMB). All observables are predicted with sub-percent median accuracy, with a reduction of the computational cost by a factor of over 10$^4$. After validating inference results, we showcase a few applications, including: (i) quantifying the relative constraining power of different observational datasets; (ii) seeing how recent claims of a late EoR impact previous inferences; and (iii) forecasting upcoming constraints from the sixth observing season of the Hydrogen Epoch of Reionization Array (HERA) telescope. 21cmEMU is publicly-available, and is included as an alternative simulator in the public 21CMMC sampler.
|
https://arxiv.org/abs/2309.05697v3
|
2309.05697
|
2023-09-11
|
cybersecurity
|
21cm Epoch of Reionisation Power Spectrum with Closure Phase using the Murchison Widefield Array
|
The radio interferometric closure phases can be a valuable tool for studying cosmological {H\scriptsize{I}}~from the early Universe. Closure phases have the advantage of being immune to element-based gains and associated calibration errors. Thus, calibration and errors therein, which are often sources of systematics limiting standard visibility-based approaches, can be avoided altogether in closure phase analysis. In this work, we present the first results of the closure phase power spectrum of {H\scriptsize{I}}~21-cm fluctuations using the Murchison Widefield Array (MWA), with $\sim 12$ hours of MWA-phase II observations centered around redshift $z\approx 6.79$, during the Epoch of Reionisation. On analysing three redundant classes of baselines -- 14~m, 24~m, and 28~m equilateral triads -- our estimates of the $2\sigma$ ($95\%$ confidence interval) 21-cm power spectra are $\lesssim (184)^2~pseudo~\mathrm{mK}^2$ at $k_{||} = 0.36$ $pseudo~h\,\mathrm{Mpc}^{-1}$ in the EoR1 field for the 14~m baseline triads, and $\lesssim (188)^2~pseudo~\mathrm{mK}^2$ at $k_{||} = 0.18$ $pseudo~h\,\mathrm{Mpc}^{-1}$ in the EoR0 field for the 24~m baseline triads. The ``$pseudo$'' units denote that the length scale and brightness temperature should be interpreted as close approximations. Our best estimates are still 3-4 orders of magnitude higher than the fiducial 21-cm power spectrum; however, our approach provides promising estimates of the power spectra even with a small amount of data. These data-limited estimates can be further improved if more datasets are included in the analysis. The evidence for excess noise has a possible origin in baseline-dependent systematics in the MWA data that will require careful baseline-based strategies to mitigate, even in standard visibility-based approaches.
|
https://arxiv.org/abs/2409.02906v1
|
2409.02906
|
2024-09-04
|
cybersecurity
|
21cmFAST: A Fast, Semi-Numerical Simulation of the High-Redshift 21-cm Signal
|
We introduce a powerful semi-numeric modeling tool, 21cmFAST, designed to
efficiently simulate the cosmological 21-cm signal. Our code generates 3D
realizations of evolved density, ionization, peculiar velocity, and spin
temperature fields, which it then combines to compute the 21-cm brightness
temperature. Although the physical processes are treated with approximate
methods, we compare our results to a state-of-the-art large-scale hydrodynamic
simulation, and find good agreement on scales pertinent to the upcoming
observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those
generated from the numerical simulation to within tens of percent, down to the
Nyquist frequency. We show results from a 1 Gpc simulation which tracks the
cosmic 21-cm signal down from z=250, highlighting the various interesting
epochs. Depending on the desired resolution, 21cmFAST can compute a redshift
realization on a single processor in just a few minutes. Our code is fast,
efficient, customizable and publicly available, making it a useful tool for
21-cm parameter studies.
|
http://arxiv.org/abs/1003.3878v1
|
1003.3878
|
2010-03-19
|
cybersecurity
|
21cmFAST v3: A Python-integrated C code for generating 3D realizations of the cosmic 21cm signal
|
This brief code paper presents a new Python-wrapped version of the popular
21cm cosmology simulator, 21cmFAST. The new version, v3+, maintains the same
core functionality of previous versions of 21cmFAST, but features a simple and
intuitive interface, and a great deal more flexibility. This evolution
represents the work of a formalized collaboration, and the new version,
available publicly on GitHub, provides a single point-of-reference for all
future upgrades and community-added features. In this paper, we describe simple
usage of 21cmFAST, some of its new features, and provide a simple performance
benchmark.
|
http://arxiv.org/abs/2010.15121v1
|
2010.15121
|
2020-10-28
|
cybersecurity
|
21cmFirstCLASS I. Cosmological tool for $\Lambda$CDM and beyond
|
In this work we present 21cmFirstCLASS, a modified version of 21cmFAST, the most popular code in the literature for computing the anisotropies of the 21-cm signal. Our code uses the public cosmic microwave background (CMB) Boltzmann code CLASS, to establish consistent initial conditions at recombination for any set of cosmological parameters and evolves them throughout the dark ages, cosmic dawn, the epoch of heating and reionization. We account for inhomogeneity in the temperature and ionization fields throughout the evolution, crucial for a robust calculation of both the global 21-cm signal and its fluctuations. We demonstrate how future measurements of the CMB and the 21-cm signal can be combined and analyzed with 21cmFirstCLASS to obtain constraints on both cosmological and astrophysical parameters and examine degeneracies between them. As an example application, we show how 21cmFirstCLASS can be used to study cosmological models that exhibit non-linearities already at the dark ages, such as scattering dark matter (SDM). For the first time, we present self-consistent calculations of the 21-cm power spectrum in the presence of SDM during the non-linear epoch of cosmic dawn. The code is publicly available at https://github.com/jordanflitter/21cmFirstCLASS.
|
https://arxiv.org/abs/2309.03942v4
|
2309.03942
|
2023-09-07
|
cybersecurity
|
21cmFirstCLASS II. Early linear fluctuations of the 21cm signal
|
In a companion paper we introduce 21cmFirstCLASS, a new code for computing the 21-cm anisotropies, assembled from the merger of the two popular codes 21cmFAST and CLASS. Unlike the standard 21cmFAST, which begins at $z=35$ with homogeneous temperature and ionization boxes, our code begins its calculations from recombination, evolves the signal through the dark ages, and naturally yields an inhomogeneous box at $z=35$. In this paper, we validate the output of 21cmFirstCLASS by developing a new theoretical framework which is simple and intuitive on the one hand, but is robust and precise on the other hand. As has been recently claimed, using consistent inhomogeneous initial conditions mitigates inaccuracies, which according to our analysis can otherwise reach the $\mathcal O\left(20\%\right)$ level. On top of that, we also show for the first time that 21cmFAST over-predicts the 21-cm power spectrum at $z\gtrsim20$ by another $\mathcal O\left(20\%\right)$, due to the underlying assumption that $\delta_b=\delta_c$, namely that the density fluctuations in baryons and cold dark matter are indistinguishable. We propose an elegant solution to this discrepancy by introducing an appropriate scale-dependent growth factor into the evolution equations. Our analysis shows that this modification will ensure sub-percent differences between 21cmFirstCLASS and the Boltzmann solver CAMB at $z\leq50$ for all scales between the horizon and the Jeans scale. This will enable 21cmFirstCLASS to consistently and reliably simulate the 21-cm anisotropies both in the dark ages and cosmic dawn, for any cosmology. The code is publicly available at https://github.com/jordanflitter/21cmFirstCLASS.
|
https://arxiv.org/abs/2309.03948v3
|
2309.03948
|
2023-09-07
|
cybersecurity
|
21cmfish: Fisher-matrix framework for fast parameter forecasts from the cosmic 21-cm signal
|
The 21-cm signal from neutral hydrogen in the early universe will provide unprecedented information about the first stars and galaxies. Extracting this information, however, requires accounting for many unknown astrophysical processes. Semi-numerical simulations are key for exploring the vast parameter space of said processes. These simulations use approximate techniques such as excursion-set and perturbation theory to model the 3D evolution of the intergalactic medium, at a fraction of the computational cost of hydrodynamic and/or radiative transfer simulations. However, exploring the enormous parameter space of the first galaxies can still be computationally expensive. Here we introduce 21cmfish, a Fisher-matrix wrapper for the semi-numerical simulation 21cmFAST. 21cmfish facilitates efficient parameter forecasts, scaling to significantly higher dimensionalities than MCMC approaches, assuming a multi-variate Gaussian posterior. Our method produces comparable parameter uncertainty forecasts to previous MCMC analyses but requires ~10$^4$x fewer simulations. This enables a rapid way to prototype analyses adding new physics and/or additional parameters. We carry out a forecast for HERA using the largest astrophysical parameter space to-date, with 10 free parameters, spanning both population II and III star formation. We find X-ray parameters for the first galaxies could be measured to sub-percent precision, and, though they are highly degenerate, the stellar-to-halo mass relation and ionizing photon escape fraction for population II and III galaxies can be constrained to ~10% precision (logarithmic quantities). Using a principal component analysis we find HERA is most sensitive to the product of the ionizing escape fraction and the stellar-to-halo mass fraction for population II galaxies.
|
https://arxiv.org/abs/2212.09797v2
|
2212.09797
|
2022-12-19
|
cybersecurity
|
21-cm fluctuations from primordial magnetic fields
|
The fluid forces associated with primordial magnetic fields (PMFs) generate small-scale fluctuations in the primordial density field, which add to the $\mathrm{\Lambda CDM}$ linear matter power spectrum on small scales. These enhanced small-scale fluctuations lead to earlier formation of galactic halos and stars and thus affect cosmic reionization. We study the consequences of these effects on 21 cm observables using the semi-numerical code 21cmFAST v3.1.3. We find the excess small-scale structure generates strong stellar radiation backgrounds in the early Universe, resulting in altered 21 cm global signals and power spectra commensurate with earlier reionization. We restrict the allowed PMF models using the CMB optical depth to reionization. Lastly, we probe parameter degeneracies and forecast experimental sensitivities with an information matrix analysis subject to the CMB optical depth bound. Our forecasts show that interferometers like HERA are sensitive to PMFs of order $\sim \mathrm{pG}$, nearly an order of magnitude stronger than existing and next-generation experiments.
|
https://arxiv.org/abs/2308.04483v2
|
2308.04483
|
2023-08-08
|
cybersecurity
|
21-cm foreground removal using AI and frequency-difference technique
|
The deep learning technique has been employed in removing foreground contaminants from 21 cm intensity mapping, but its effectiveness is limited by the large dynamic range of the foreground amplitude. In this study, we develop a novel foreground removal technique grounded in U-Net networks. The essence of this technique lies in introducing an innovative data preprocessing step: specifically, utilizing the temperature difference between neighboring frequency bands as input, which can substantially reduce the dynamic range of foreground amplitudes by approximately two orders of magnitude. This reduction proves to be highly advantageous for the U-Net foreground removal. We observe that the HI signal can be reliably recovered, as indicated by the cross-correlation power spectra showing unity agreement at the scale of $k < 0.3 h^{-1}$Mpc in the absence of instrumental effects. Moreover, accounting for the systematic beam effects, our reconstruction displays consistent auto-correlation and cross-correlation power spectrum ratios at the $1\sigma$ level across scales $k \lesssim 0.1 h^{-1}$Mpc, with only a 10% reduction observed in the cross-correlation power spectrum at $k\simeq0.2 h^{-1}$Mpc. The effects of redshift-space distortion are also reconstructed successfully, as evidenced by the quadrupole power spectra matching. In comparison, our method outperforms the traditional Principal Component Analysis method, whose derived cross-correlation ratios are underestimated by around 60%. We simulated various white noise levels in the map and found that the mean cross-correlation ratio $\bar{R}_\mathrm{cross} \gtrsim 0.8$ when the level of the thermal noise is smaller than or equal to that of the HI signal. We conclude that the proposed frequency-difference technique can significantly enhance network performance by reducing the amplitude range of foregrounds and aiding in the prevention of HI loss.
|
https://arxiv.org/abs/2310.06518v2
|
2310.06518
|
2023-10-10
|
cybersecurity
|
21cm foregrounds and polarization leakage: a user's guide on cleaning and mitigation strategies
|
The success of HI intensity mapping is largely dependent on how well 21cm foreground contamination can be controlled. In order to progress our understanding further, we present a range of simulated foreground data from four different $\sim3000$ deg$^2$ sky regions, with and without effects from polarization leakage. Combining these with underlying cosmological HI simulations creates a range of single-dish intensity mapping test cases that require different foreground treatments. This allows us to conduct the most generalized study to date into 21cm foregrounds and their cleaning techniques for the post-reionization era. We first provide a pedagogical review of the most commonly used blind foreground removal techniques (PCA/SVD, FASTICA, GMCA). We also trial a non-blind parametric fitting technique and discuss potential hybridization of methods. We highlight the similarities and differences in these techniques finding that the blind methods produce near equivalent results, and we explain the fundamental reasons for this. The simulations allow an exact decomposition of the resulting cleaned data and we analyse the contribution from foreground residuals. Our results demonstrate that polarized foreground residuals should be generally subdominant to HI on small scales ($k\gtrsim0.1\,h\,\text{Mpc}^{-1}$). However, on larger scales, results are more region dependent. In some cases, aggressive cleans severely damp HI power but still leave dominant foreground residuals. We also demonstrate the gain from cross-correlations with optical galaxy surveys, where extreme levels of residual foregrounds can be circumvented. However, these residuals still contribute to errors and we discuss the optimal balance between over- and under-cleaning.
|
https://arxiv.org/abs/2010.02907v2
|
2010.02907
|
2020-10-06
|
cybersecurity
|
21 cm Forest Constraints on Primordial Black Holes
|
Primordial black holes (PBHs) as part of the Dark Matter (DM) would modify the evolution of large-scale structures and the thermal history of the universe. Future 21 cm forest observations, sensitive to small scales and the thermal state of the Inter Galactic Medium (IGM), could probe the existence of such PBHs. In this article, we show that the shot noise isocurvature mode on small scales induced by the presence of PBHs can enhance the amount of low mass halos, or minihalos, and thus, the number of 21 cm absorption lines. However, if the mass of PBHs is as large as $M_{\rm PBH}\gtrsim 10 \, M_\odot$, with an abundant enough fraction of PBHs as DM, $f_{\rm PBH}$, the IGM heating due to accretion onto the PBHs counteracts the enhancement due to the isocurvature mode, reducing the number of absorption lines instead. The concurrence of both effects imprints distinctive signatures in the number of absorbers, allowing us to bound the abundance of PBHs. We compute the prospects for constraining PBHs with future 21 cm forest observations, finding achievable competitive upper limits on the abundance as low as $f_{\rm PBH} \sim 10^{-3}$ at $M_{\rm PBH}= 100 \, M_\odot$, or even lower at larger masses, in regions of the parameter space unexplored by current probes. The impact of astrophysical X-ray sources on the IGM temperature is also studied, which could potentially weaken the bounds.
|
https://arxiv.org/abs/2104.10695v1
|
2104.10695
|
2021-04-21
|
cybersecurity
|
21cm forest probes on the axion dark matter in the post-inflationary Peccei-Quinn symmetry breaking scenarios
|
We study the future prospects of the 21cm forest observations on the
axion-like dark matter when the spontaneous breaking of the global Peccei-Quinn
(PQ) symmetry occurs after the inflation. The large isocurvature perturbations
of order unity sourced from axion-like particles can result in the enhancement
of minihalo formation, and the subsequent hierarchical structure formation can
affect the minihalo abundance whose masses can exceed ${\cal O}(10^4)
M_{\odot}$ relevant for the 21cm forest observations. We show that the 21cm
forest observations are capable of probing the axion-like particle mass in the
range $10^{-18}\lesssim m_a \lesssim 10^{-12}$ eV for the temperature
independent axion mass. For the temperature dependent axion mass, the zero
temperature axion mass scale for which the 21cm forest measurements can be
affected is extended up to of order $10^{-6}$ eV.
|
http://arxiv.org/abs/2005.05589v3
|
2005.05589
|
2020-07-14
|
cybersecurity
|
21cm Forest with the SKA
|
An alternative to both the tomography technique and the power spectrum
approach is to search for the 21cm forest, that is, the 21cm absorption features
against high-z radio loud sources caused by the intervening cold neutral
intergalactic medium (IGM) and collapsed structures. Although the existence of
high-z radio loud sources has not been confirmed yet, SKA-low would be the
instrument of choice to find such sources as they are expected to have spectra
steeper than their lower-z counterparts. Since the strongest absorption
features arise from small scale structures (few tens of physical kpc, or even
lower), the 21cm forest can probe the HI density power spectrum on small scales
not amenable to measurements by any other means. Also, it can be a unique probe
of the heating process and the thermal history of the early universe, as the
signal is strongly dependent on the IGM temperature. Here we show what SKA1-low
could do in terms of detecting the 21cm forest in the redshift range z =
7.5-15.
|
http://arxiv.org/abs/1501.04425v1
|
1501.04425
|
2015-01-19
|
cybersecurity
|
21cm Global Signal Extraction: Extracting the 21cm Global Signal using Artificial Neural Networks
|
The study of the cosmic Dark Ages, Cosmic Dawn, and Epoch of Reionization (EoR) using the all-sky averaged redshifted HI 21cm signal is one of the key science goals of most ongoing or upcoming experiments, for example EDGES, SARAS, and the SKA. This signal can be detected by averaging over the entire sky, using a single radio telescope, in the form of a Global signal as a function of only redshifted HI 21cm frequencies. One of the major challenges faced while detecting this signal is the dominating, bright foreground. The success of such a detection lies in the accuracy of the foreground removal. The presence of instrumental gain fluctuations, a chromatic primary beam, radio frequency interference (RFI), and the Earth's ionosphere corrupts any observation of radio signals from the Earth. Here, we propose the use of Artificial Neural Networks (ANN) to extract the faint redshifted 21cm Global signal buried in a sea of bright Galactic foregrounds and contaminated by different instrumental models. The most striking advantage of using ANN is the fact that, when the corrupted signal is fed into a trained network, we can simultaneously extract the signal as well as foreground parameters very accurately. Our results show that ANN can detect the Global signal with $\gtrsim 92 \%$ accuracy even in cases of mock observations where the instrument has some residual time-varying gain across the spectrum.
|
https://arxiv.org/abs/1911.02580v1
|
1911.02580
|
2019-11-06
|
cybersecurity
|
21cm Intensity Mapping cross-correlation with galaxy surveys: current and forecasted cosmological parameters estimation for the SKAO
|
We present a comprehensive set of forecasts for the cross-correlation signal between 21cm intensity mapping and galaxy redshift surveys. We focus on the data sets that will be provided by the SKAO for the 21cm signal, and by DESI and Euclid for galaxy clustering. We build a likelihood which takes into account the effect of the beam for the radio observations, the Alcock-Paczynski effect, a simple parameterization of astrophysical nuisances, and fully exploits the tomographic power of such observations in the range $z=0.7-1.8$ at linear and mildly non-linear scales ($k<0.25 h/$Mpc). The forecasted constraints, obtained with Monte Carlo Markov Chain techniques in a Bayesian framework, in terms of the six base parameters of the standard $\Lambda$CDM model, are promising. The predicted signal-to-noise ratio for the cross-correlation can reach $\sim 50$ for $z\sim 1$ and $k\sim 0.1 h/$Mpc. When the cross-correlation signal is combined with current Cosmic Microwave Background (CMB) data from Planck, the error bars on $\Omega_{\rm c}\,h^2$ and $H_0$ are reduced by factors of 3 and 6, respectively, compared to CMB-only data, due to the measurement of matter clustering provided by the two observables. The cross-correlation signal has a constraining power that is comparable to the auto-correlation one, and combining all the clustering measurements a sub-percent error bar of 0.33% on $H_0$ can be achieved, which is about a factor of 2 better than the CMB-only measurement. Finally, as a proof-of-concept, we test the full pipeline on the real data measured by the MeerKAT collaboration (Cunnington et al. 2022), presenting some (weak) constraints on cosmological parameters.
|
https://arxiv.org/abs/2309.00710v2
|
2309.00710
|
2023-09-01
|
cybersecurity
|
21 cm Intensity Mapping with the DSA-2000
|
Line intensity mapping is a promising probe of the universe's large-scale structure. We explore the sensitivity of the DSA-2000, a forthcoming array consisting of over 2000 dishes, to the statistical power spectrum of neutral hydrogen's 21 cm emission line. These measurements would reveal the distribution of neutral hydrogen throughout the near-redshift universe without necessitating resolving individual sources. The success of these measurements relies on the instrument's sensitivity and resilience to systematics. We show that the DSA-2000 will have the sensitivity needed to detect the 21 cm power spectrum at z=0.5 and across power spectrum modes of 0.03-35.12 h/Mpc with 0.1 h/Mpc resolution. We find that supplementing the nominal array design with a dense core of 200 antennas will expand its sensitivity at low power spectrum modes and enable measurement of Baryon Acoustic Oscillations (BAOs). Finally, we present a qualitative discussion of the DSA-2000's unique resilience to sources of systematic error that can preclude 21 cm intensity mapping.
|
https://arxiv.org/abs/2311.00896v2
|
2311.00896
|
2023-11-01
|
cybersecurity
|
21cm Limits on Decaying Dark Matter and Primordial Black Holes
|
Recently the Experiment to Detect the Global Epoch of Reionization Signature
(EDGES) reported the detection of a 21cm absorption signal stronger than
astrophysical expectations. In this paper we study the impact of radiation from
dark matter (DM) decay and primordial black holes (PBH) on the 21cm radiation
temperature in the reionization epoch, and impose a constraint on the decaying
dark matter and PBH energy injection in the intergalactic medium, which can
heat up neutral hydrogen gas and weaken the 21cm absorption signal. We consider
decay channels DM$\rightarrow e^+e^-, \gamma\gamma$, $\mu^+\mu^-$, $b\bar{b}$
and the $10^{15-17}$g mass range for primordial black holes, and require the
heating of the neutral hydrogen does not negate the 21cm absorption signal. For
$e^+e^-$, $\gamma\gamma$ final states and PBH cases we find strong 21cm bounds
that can be more stringent than the current extragalactic diffuse photon
bounds. For the DM$\rightarrow e^+e^-$ channel, the lifetime bound is
$\tau_{\rm DM}> 10^{27}$s for sub-GeV dark matter. The bound is $\tau_{\rm
DM}\ge 10^{26}$s for sub-GeV DM$\rightarrow \gamma\gamma$ channel and reaches
$10^{27}$s at MeV DM mass. For $b\bar{b}$ and $\mu^+\mu^-$ cases, the 21 cm
constraint is better than all the existing constraints for $m_{\rm DM}<20$ GeV
where the bound on $\tau_{\rm DM}\ge10^{26}$s. For both DM decay and primordial
black hole cases, the 21cm bounds significantly improve over the CMB damping
limits from Planck data.
|
http://arxiv.org/abs/1803.09390v1
|
1803.09390
|
2018-03-26
|
cybersecurity
|
21-cm line Anomaly: A brief Status
|
In this short review I present the status of the global 21-cm signal detected
by EDGES in March 2018. It is organized in three parts. First, I present the
EDGES experiment and the fitting procedure used by the collaboration to extract
the tiny 21-cm signal from large foregrounds of galactic synchrotron emission.
Then, I review the physics behind the global 21-cm signature and I explain why
the measured absorption feature is anomalous with respect to the predictions
from standard astrophysics. I conclude with the implications for Beyond
Standard Model (BSM) physics coming from the EDGES discovery.
|
http://arxiv.org/abs/1907.13384v2
|
1907.13384
|
2019-09-13
|
cybersecurity
|
21 cm Line Astronomy and Constraining New Physics
|
The 21 cm signal appears to be a treasure trove, providing insight into the period when the first generation of luminous objects formed in the Universe. Hydrogen constitutes the predominant fraction of the total baryonic matter during the cosmic dawn (CD). Therefore, it is convenient and advantageous to study the physics of the CD using the 21 cm signal. Any exotic source of energy can inject energy into the intergalactic medium (IGM) and heat the gas. Subsequently, it can modify the absorption amplitude of the global 21 cm signal. This feature can provide a robust bound on such sources of energy injection into the IGM gas.
|
https://arxiv.org/abs/2301.02655v1
|
2301.02655
|
2023-01-06
|
cybersecurity
|
21 cm line signal from magnetic modes
|
The Lorentz term raises the linear matter power spectrum on small scales, which
leads to interesting signatures in the 21 cm signal. Numerical simulations of
the resulting nonlinear density field, the distribution of ionized hydrogen and
the 21 cm signal at different values of redshift are presented for magnetic
fields with field strength B=5 nG and spectral indices $n_B=-2.9, -2.2$ and
$-1.5$, together with the adiabatic mode for the best-fit data of Planck13+WP.
Comparing the averaged global 21 cm signal with the projected SKA1-LOW
sensitivities of the Square Kilometre Array (SKA), it might be possible to
constrain the magnetic field parameters.
|
http://arxiv.org/abs/1805.10943v1
|
1805.10943
|
2018-05-28
|
cybersecurity
|
21CMMC: an MCMC analysis tool enabling astrophysical parameter studies of the cosmic 21 cm signal
|
We introduce 21CMMC: a parallelized, Monte Carlo Markov Chain analysis tool,
incorporating the epoch of reionization (EoR) seminumerical simulation
21CMFAST. 21CMMC estimates astrophysical parameter constraints from 21 cm EoR
experiments, accommodating a variety of EoR models, as well as priors on model
parameters and the reionization history. To illustrate its utility, we consider
two different EoR scenarios, one with a single population of galaxies (with a
mass-independent ionizing efficiency) and a second, more general model with two
different, feedback-regulated populations (each with mass-dependent ionizing
efficiencies). As an example, combining three observations (z=8, 9 and 10) of
the 21 cm power spectrum with a conservative noise estimate and uniform model
priors, we find that interferometers with specifications like the Low Frequency
Array/Hydrogen Epoch of Reionization Array (HERA)/Square Kilometre Array 1
(SKA1) can constrain common reionization parameters: the ionizing efficiency
(or similarly the escape fraction), the mean free path of ionizing photons and
the log of the minimum virial temperature of star-forming haloes to within
45.3/22.0/16.7, 33.5/18.4/17.8 and 6.3/3.3/2.4 per cent, ~$1\sigma$ fractional
uncertainty, respectively. Instead, if we optimistically assume that we can
perfectly characterize the EoR modelling uncertainties, we can improve on these
constraints by up to a factor of ~few. Similarly, the fractional uncertainty on
the average neutral fraction can be constrained to within $\lesssim10$ per cent
for HERA and SKA1. By studying the resulting impact on astrophysical
constraints, 21CMMC can be used to optimize (i) interferometer designs; (ii)
foreground cleaning algorithms; (iii) observing strategies; (iv) alternative
statistics characterizing the 21 cm signal; and (v) synergies with other
observational programs.
|
http://arxiv.org/abs/1501.06576v2
|
1501.06576
|
2015-01-26
|
cybersecurity
|
21CMMC with a 3D light-cone: the impact of the co-evolution approximation on the astrophysics of reionisation and cosmic dawn
|
We extend 21CMMC, a Monte Carlo Markov Chain sampler of 3D reionisation
simulations, to perform parameter estimation directly on 3D light-cones of the
cosmic 21cm signal. This brings theoretical analysis closer to the tomographic
21-cm observations achievable with next generation interferometers like HERA
and the SKA. Parameter recovery can therefore account for modes which evolve
with redshift/frequency. Additionally, simulated data can be more easily
corrupted to resemble real data. Using the light-cone version of 21CMMC, we
quantify the biases in the recovered astrophysical parameters if we use the
21cm power spectrum from the co-evolution approximation to fit a 3D light-cone
mock observation. While ignoring the light-cone effect under most assumptions
will not significantly bias the recovered astrophysical parameters, it can lead
to an underestimation of the associated uncertainty. However, significant biases
($\sim$few -- 10 $\sigma$) can occur if the 21cm signal evolves rapidly (i.e.
the epochs of reionisation and heating overlap significantly) and: (i)
foreground removal is very efficient, allowing large physical scales
($k\lesssim0.1$~Mpc$^{-1}$) to be used in the analysis or (ii) theoretical
modelling is accurate to within $\sim10$ per cent in the power spectrum
amplitude.
|
http://arxiv.org/abs/1801.01592v1
|
1801.01592
|
2018-01-05
|
cybersecurity
|
21-cm observations and warm dark matter models
|
Observations of the redshifted 21-cm signal (in absorption or emission) allow
us to peek into the epoch of "dark ages" and the onset of reionization. These
data can provide a novel way to learn about the nature of dark matter, in
particular about the formation of small size dark matter halos. However, the
connection between the formation of structures and 21-cm signal requires
knowledge of stellar to total mass relation, escape fraction of UV photons, and
other parameters that describe star formation and radiation at early times.
This baryonic physics depends on the properties of dark matter and in
particular in warm-dark-matter (WDM) models, star formation may follow a
completely different scenario, as compared to the cold-dark-matter case. We use
the recent measurements by the EDGES [J. D. Bowman, A. E. E. Rogers, R. A.
Monsalve, T. J. Mozdzen, and N. Mahesh, An absorption profile centred at 78
megahertz in the sky-averaged spectrum, Nature (London) 555, 67 (2018)] to
demonstrate that when taking the above considerations into account, the robust
WDM bounds are in fact weaker than those given by the Lyman-$\alpha$ forest
method and other structure formation bounds. In particular, we show that
resonantly produced 7 keV sterile neutrino dark matter model is consistent with
these data. However, a holistic approach to modelling of the WDM universe holds
great potential and may in the future make 21-cm data our main tool to learn
about dark matter clustering properties.
|
http://arxiv.org/abs/1904.03097v2
|
1904.03097
|
2019-12-10
|
cybersecurity
|
21 cm observations: calibration, strategies, observables
|
This chapter aims to provide a review of the basics of 21 cm interferometric
observations and their methodologies.
interferometry and their connection with the 21 cm observables - power spectra
and images - is presented. I then provide a review of interferometric
calibration and its interplay with foreground separation, including the current
open challenges in calibration of 21 cm observations. Finally, a review of 21
cm instrument designs in the light of calibration choices and observing
strategies follows.
|
http://arxiv.org/abs/1909.11938v1
|
1909.11938
|
2019-09-26
|
cybersecurity
|
21-cm power spectrum and ionization bias as a probe of long-mode modulated non Gaussian sky
|
The observed hemispherical power asymmetry in cosmic microwave background
radiation can be explained by long wavelength mode (long-mode) modulation. In
this work we study the prospect of the detection of this effect in the angular
power spectrum of 21-cm brightness temperature. For this task, we study the
effect of the neutral Hydrogen distribution on the angular power spectrum. This
is done by formulating the bias parameter of ionized fraction to the underlying
matter distribution. We also discuss the possibility that the long mode
modulation is companied with a primordial non-Gaussianity of local type. In
this case, we obtain the angular power spectrum with two effects of primordial
non-Gaussianity and long mode modulation. Finally, we show that the primordial
non-Gaussianity enhances the long mode modulated power of 21-cm signal via the
non-Gaussian scale-dependent bias by up to four orders of magnitude. Accordingly,
the observation of the 21-cm signal with upcoming surveys such as the Square
Kilometer Array (SKA) is probably capable of detecting the hemispherical power
asymmetry in the context of the long-mode modulation.
|
http://arxiv.org/abs/1812.11150v3
|
1812.11150
|
2019-08-15
|
cybersecurity
|
21 cm Power Spectrum for Bimetric Gravity and its Detectability with SKA1-Mid Telescope
|
We consider a modified gravity theory through a special kind of ghost-free bimetric gravity, where one massive spin-2 field interacts with a massless spin-2 field. In this bimetric gravity, the late time cosmic acceleration is achievable. Alongside the background expansion of the Universe, we also study the first-order cosmological perturbations and probe the signature of the bimetric gravity on large cosmological scales. One possible probe is to study the observational signatures of the bimetric gravity through the 21 cm power spectrum. We consider upcoming SKA1-mid antenna telescope specifications to show the prospects of the detectability of the ghost-free bimetric gravity through the 21 cm power spectrum. Depending on the values of the model parameter, there is a possibility to distinguish the ghost-free bimetric gravity from the standard $\Lambda$CDM model with the upcoming SKA1-mid telescope specifications.
|
https://arxiv.org/abs/2306.03875v1
|
2306.03875
|
2023-06-06
|
cybersecurity
|
21 cm power spectrum in interacting cubic Galileon model
|
We show the detectability of interacting and non-interacting cubic Galileon models from the $\Lambda$CDM model through the 21 cm power spectrum. We show that the interferometric observations like the upcoming SKA1-mid can detect both the interacting and the non-interacting cubic Galileon model from the $\Lambda$CDM model depending on the parameter values.
|
https://arxiv.org/abs/2208.11560v1
|
2208.11560
|
2022-08-24
|
cybersecurity
|
21cmSense v2: A modular, open-source 21cm sensitivity calculator
|
The 21cm line of neutral hydrogen is a powerful probe of the high-redshift universe (Cosmic Dawn and the Epoch of Reionization), with an unprecedented potential to inform us about key processes of early galaxy formation, the first stars and even cosmology and structure formation, via intensity mapping. It is the subject of a number of current and upcoming low-frequency radio experiments. This paper presents 21cmSense v2.0, which is a Python package that provides a modular framework for calculating the sensitivity of these experiments, in order to enhance the process of their design and forecasting their power for parameter inference. Version 2.0 of 21cmSense has been re-written from the ground up to be more modular and extensible than its venerable predecessor (Pober et al., 2013, 2014), and to provide a more user-friendly interface. The package is freely available both to use and contribute towards at https://github.com/rasg-affiliates/21cmSense.
|
https://arxiv.org/abs/2406.02415v1
|
2406.02415
|
2024-06-04
|
cybersecurity
|
21-cm signal from cosmic dawn - II: Imprints of the light-cone effects
|
Details of various unknown physical processes during the cosmic dawn and the
epoch of reionization can be extracted from observations of the redshifted
21-cm signal. These observations, however, will be affected by the evolution of
the signal along the line-of-sight which is known as the "light-cone effect".
We model this effect by post-processing a dark matter $N$-body simulation with
a 1-D radiative transfer code. We find that the effect is much stronger and
more dramatic in the presence of inhomogeneous heating and Ly$\alpha$ coupling
compared to the case where these processes are not accounted for. One finds an
increase (decrease) in the spherically averaged power spectrum of up to a factor of 3 (0.6)
at large scales ($k \sim 0.05\, \rm Mpc^{-1}$) when the light-cone effect is
included, though these numbers are highly dependent on the source model. The
effect is particularly significant near the peak and dip-like features seen in
the power spectrum. The peaks and dips are suppressed and thus the power
spectrum can be smoothed out to a large extent if the width of the frequency
band used in the experiment is large. We argue that it is important to account
for the light-cone effect for any 21-cm signal prediction during cosmic dawn.
|
http://arxiv.org/abs/1504.05601v2
|
1504.05601
|
2015-08-12
|
cybersecurity
|
21cm signal from Dark Ages collapsing halos with detailed molecular cooling treatment
|
Context. In order to understand the formation of the first stars, which set the transition between the Dark Ages and Cosmic Dawn epochs, it is necessary to provide a detailed description of the physics at work within the first clouds of gas which, during their gravitational collapse, set the conditions for stars to form through the mechanism of thermal instability. Aims. Our objective is to study in detail the molecular cooling of the gas in the halos preceding the formation of the first stars. We furthermore assess the sensitivity of the 21cm hydrogen line to this cooling channel. Results. We present the CHEMFAST code, which we developed to compute the cosmological 21cm neutral hydrogen line inside collapsing matter overdensities. We precisely track the evolution of the abundances of ions, atoms and molecules through a network of chemical reactions. Computing the molecular thermal function due to the excitation of the rotational levels of the H2 molecule, we find that it strongly affects the gas temperature inside collapsing clouds of $10^8$ M$_\odot$. The gas temperature falls at the end of the collapse, when the molecular cooling takes over from the heating due to gravitation. Conclusions. We find that the 21cm brightness temperature inside the collapsing cloud presents an emission feature, different from the one predicted in the expansion scenario. It moreover follows the same behavior as the gas temperature, as it is also strongly affected by the molecular cooling. This makes it a promising probe for mapping the collapsing halos and the thermal processes at work inside them.
|
https://arxiv.org/abs/2404.08479v1
|
2404.08479
|
2024-04-12
|
cybersecurity
|
21-cm Signal from the Epoch of Reionization: A Machine Learning upgrade to Foreground Removal with Gaussian Process Regression
|
In recent years, a Gaussian Process Regression (GPR) based framework has been developed for foreground mitigation from data collected by the LOw-Frequency ARray (LOFAR), to measure the 21-cm signal power spectrum from the Epoch of Reionization (EoR) and Cosmic Dawn. However, it has been noted that through this method there can be a significant amount of signal loss if the EoR signal covariance is misestimated. To obtain better covariance models, we propose to use a kernel trained on the {\tt GRIZZLY} simulations using a Variational Auto-Encoder (VAE) based algorithm. In this work, we explore the abilities of this Machine Learning based kernel (VAE kernel) used with GPR, by testing it on mock signals from a variety of simulations, exploring noise levels corresponding to $\approx$10 nights ($\approx$141 hours) and $\approx$100 nights ($\approx$1410 hours) of observations with LOFAR. Our work suggests the possibility of successful extraction of the 21-cm signal within 2$\sigma$ uncertainty in most cases using the VAE kernel, with better recovery of both shape and power than with previously used covariance models. We also explore the role of the excess noise component identified in past applications of GPR and additionally analyse the possibility of redshift dependence on the performance of the VAE kernel. The latter allows us to prepare for future LOFAR observations at a range of redshifts, as well as compare with results from other telescopes.
|
https://arxiv.org/abs/2311.16633v2
|
2311.16633
|
2023-11-28
|
cybersecurity
|
21cm signal predictions at Cosmic Dawn and Reionization with coupled radiative-hydrodynamics
|
The process of heating and reionization of the Universe at high redshift links small-scale structure/galaxy formation and the large-scale properties of the intergalactic medium. Even if the first is difficult to observe, an observational window is opening on the second, with the promising development of current and future radio telescopes. They will permit observation of the 21cm brightness temperature global signal and its fluctuations. The need for large-scale simulations to understand the properties of the IGM that will be observed is therefore strong, but at the same time it is important to resolve the structures responsible for those processes. In this study, we introduce coupled hydro-radiative transfer simulations of the Cosmic Dawn and Reionization with a simple sub-grid star formation process developed and calibrated on the state-of-the-art simulation CoDaII. This scheme permits a consistent treatment of the evolution of dark matter, hydrodynamics and radiative transfer on large scales, while the sub-grid model bridges to the galaxy formation scale. We process the simulation to produce a 21cm signal as close as possible to the observations.
|
https://arxiv.org/abs/2103.03061v1
|
2103.03061
|
2021-03-04
|
cybersecurity
|
21cm Signal Recovery via the Robust Principle Component Analysis
|
The redshifted 21 cm signal from neutral hydrogen (HI) is potentially a very
powerful probe for cosmology, but a difficulty in its observation is that it is
much weaker than the foreground radiation from the Milky Way as well as from
extragalactic radio sources. The foreground radiation at different frequencies
is, however, coherent along a line of sight, and various methods of foreground
subtraction based on this property have been proposed. In this paper, we
present a new method based on Robust Principal Component Analysis (RPCA) to
subtract the foreground and extract the 21 cm signal, which explicitly uses both
the low-rank property of the frequency covariance matrix (i.e. frequency coherence)
of the foreground and the sparsity of the frequency covariance matrix of the
21 cm signal. The low-rank property of the foreground frequency covariance has
been exploited in many previous works on foreground subtraction, but to our
knowledge the sparsity of the frequency covariance of the 21 cm signal is
explored here for the first time. By exploiting both properties in the RPCA
method, the foreground and signal may in principle be separated without the
signal-loss problem. Our method is applicable both to small patches of sky with
the flat-sky approximation and to large areas of sky where sphericity has to be
considered. It can also easily be extended to deal with more complex conditions
such as sky maps with defects.
|
http://arxiv.org/abs/1801.04082v1
|
1801.04082
|
2018-01-12
|
cybersecurity
|
21cm signal sensitivity to dark matter decay
|
The redshifted 21cm signal from the Cosmic Dawn is expected to provide unprecedented insights into early Universe astrophysics and cosmology. Here we explore how dark matter can heat the intergalactic medium before the first galaxies, leaving a distinctive imprint in the 21cm power spectrum. We provide the first dedicated Fisher matrix forecasts on the sensitivity of the Hydrogen Epoch of Reionization Array (HERA) telescope to dark matter decays. We show that with 1000 hours of observation, HERA has the potential to improve current cosmological constraints on the dark matter decay lifetime by up to three orders of magnitude. Even in extreme scenarios with strong X-ray emission from early-forming, metal-free galaxies, the bounds on the decay lifetime would be improved by up to two orders of magnitude. Overall, HERA shall improve on existing limits for dark matter masses below $2$ GeV$/c^2$ for decays into $e^+e^-$ and below a few MeV$/c^2$ for decays into photons.
|
https://arxiv.org/abs/2308.16656v2
|
2308.16656
|
2023-08-31
|
cybersecurity
|
21-cm signature of the first sources in the Universe: Prospects of detection with SKA
|
Currently several low-frequency experiments are being planned to study the
nature of the first stars using the redshifted 21-cm signal from the cosmic
dawn and epoch of reionization. Using a one-dimensional radiative transfer
code, we model the 21-cm signal pattern around the early sources for different
source models, i.e., the metal-free Population III (PopIII) stars, primordial
galaxies consisting of Population II (PopII) stars, mini-QSOs and high-mass
X-ray binaries (HMXBs). We investigate the detectability of these sources by
comparing the 21-cm visibility signal with the system noise appropriate for a
telescope like the SKA1-low. Upon integrating the visibility around a typical
source over all baselines and over a frequency interval of 16 MHz, we find that
it will be possible to make a $\sim 9-\sigma$ detection of the isolated sources
like PopII galaxies, mini-QSOs and HMXBs at $z \sim 15$ with the SKA1-low in
1000 hours. The exact value of the signal to noise ratio (SNR) will depend on
the source properties, in particular on the mass and age of the source and the
escape fraction of ionizing photons. The predicted SNR decreases with
increasing redshift. We provide simple scaling laws to estimate the SNR for
different values of the parameters which characterize the source and the
surrounding medium. We also argue that it will be possible to achieve an SNR
$\sim 9$ even in the presence of the astrophysical foregrounds by subtracting
out the frequency-independent component of the observed signal. These
calculations will be useful in planning 21-cm observations to detect the first
sources.
|
http://arxiv.org/abs/1511.07448v2
|
1511.07448
|
2016-05-12
|
cybersecurity
|
21-cm signatures of residual HI inside cosmic HII regions during reionization
|
We investigate the impact of sinks of ionizing radiation on the
reionization-era 21-cm signal, focusing on 1-point statistics. We consider
sinks in both the intergalactic medium and inside galaxies. At a fixed filling
factor of HII regions, sinks will have two main effects on the 21-cm
morphology: (i) as inhomogeneous absorbers of ionizing photons they result in
smaller and more widespread cosmic HII patches; and (ii) as reservoirs of
neutral gas they contribute a non-zero 21-cm signal in otherwise ionized
regions. Both effects damp the contrast between neutral and ionized patches
during reionization, making detection of the epoch of reionization with 21-cm
interferometry more challenging. Here we systematically investigate these
effects using the latest semi-numerical simulations. We find that sinks
dramatically suppress the peak in the redshift evolution of the variance,
corresponding to the midpoint of reionization. As previously predicted,
skewness changes sign at midpoint, but the fluctuations in the residual HI
suppress a late-time rise. Furthermore, large levels of residual HI
dramatically alter the evolution of the variance, skewness and power spectrum
from that seen at lower levels. In general, the evolution of the large-scale
modes provides a better, cleaner, higher signal-to-noise probe of reionization.
|
http://arxiv.org/abs/1501.01970v2
|
1501.01970
|
2015-05-27
|
cybersecurity
|
21cmVAE: A Very Accurate Emulator of the 21-cm Global Signal
|
Considerable observational efforts are being dedicated to measuring the sky-averaged (global) 21-cm signal of neutral hydrogen from Cosmic Dawn and the Epoch of Reionization. Deriving observational constraints on the astrophysics of this era requires modeling tools that can quickly and accurately generate theoretical signals across the wide astrophysical parameter space. For this purpose artificial neural networks were used to create the only two existing global signal emulators, 21cmGEM and globalemu. In this paper we introduce 21cmVAE, a neural network-based global signal emulator, trained on the same dataset of ~30,000 global signals as the other two emulators, but with a more direct prediction algorithm that prioritizes accuracy and simplicity. Using neural networks, we compute derivatives of the signals with respect to the astrophysical parameters and establish the most important astrophysical processes that drive the global 21-cm signal at different epochs. 21cmVAE has a relative rms error of only 0.34% (equivalently 0.54 mK) on average, which is a significant improvement over the existing emulators, and a run time of 0.04 seconds per parameter set. The emulator, the code, and the processed datasets are publicly available at https://github.com/christianhbye/21cmVAE and through https://zenodo.org/record/5904939.
|
https://arxiv.org/abs/2107.05581v4
|
2107.05581
|
2021-07-12
|
cybersecurity
|
21-Component Compositionally Complex Ceramics: Discovery of Ultrahigh-Entropy Weberite and Fergusonite Phases and a Pyrochlore-Weberite Transition
|
Two new high-entropy ceramics (HECs) in the weberite and fergusonite structures, along with unexpected formation of ordered pyrochlore phases with ultrahigh-entropy compositions and an abrupt pyrochlore-weberite transition, are discovered in a 21-component oxide system. While the Gibbs phase rule allows 21 equilibrium phases, nine out of the 13 compositions examined possess single HEC phases (with ultrahigh ideal configurational entropies: ~2.7kB per cation or higher on one sublattice in most cases). Notably, (15RE1/15)(Nb1/2Ta1/2)O4 possess a single monoclinic fergusonite (C2/c) phase and (15RE1/15)3(Nb1/2Ta1/2)1O7 form a single orthorhombic (C2221) weberite phase, where 15RE1/15 represents Sc1/15Y1/15La1/15Pr1/15Nd1/15Sm1/15Eu1/15Gd1/15Tb1/15Dy1/15Ho1/15Er1/15Tm1/15Yb1/15Lu1/15. Moreover, a series of eight (15RE1/15)2+x(Ti1/4Zr1/4Ce1/4Hf1/4)2-2x(Nb1/2Ta1/2)xO7 specimens all exhibit single phases, where a pyrochlore-weberite transition occurs within 0.75 < x < 0.8125. This cubic-to-orthorhombic transition does not change the temperature-dependent thermal conductivity appreciably, as the amorphous limit may have already been achieved in the ultrahigh-entropy 21-component oxides. These discoveries expand the diversity and complexity of HECs, towards many-component compositionally complex ceramics (CCCs) and ultrahigh-entropy ceramics.
|
https://arxiv.org/abs/2112.15381v2
|
2112.15381
|
2021-12-31
|
cybersecurity
|
2+1d Compact Lifshitz Theory, Tensor Gauge Theory, and Fractons
|
The 2+1d continuum Lifshitz theory of a free compact scalar field plays a prominent role in a variety of quantum systems in condensed matter physics and high energy physics. It is known that in compact space, it has an infinite ground state degeneracy. In order to understand this theory better, we consider two candidate lattice regularizations of it using the modified Villain formalism. We show that these two lattice theories have significantly different global symmetries (including a dipole global symmetry), anomalies, ground state degeneracies, and dualities. In particular, one of them is self-dual. Given these theories and their global symmetries, we can couple them to corresponding gauge theories. These are two different $U(1)$ tensor gauge theories. The resulting models have excitations with restricted mobility, i.e., fractons. Finally, we give an exact lattice realization of the fracton/lineon-elasticity dualities for the Lifshitz theory, scalar and vector charge gauge theories.
|
https://arxiv.org/abs/2209.10030v2
|
2209.10030
|
2022-09-20
|
cybersecurity
|
2+1 d Georgi Glashow Model Near Critical Temperature
|
We study correlation functions of magnetic vortex $V$ and Polyakov loop
$P$ operators in the 2+1 dimensional Georgi-Glashow model in the vicinity of
the deconfining phase transition. In this regime the (dimensionally reduced)
model is mapped onto a free theory of two massive Majorana fermions. We utilize
this fermionic representation to explicitly calculate the expectation values of
$V$ and $P$ as well as their correlators. In particular we show that the $VV$
correlator is large, and thus the anomalous breaking of the magnetic $U(1)$
symmetry is an order-one effect in the near-critical region. We also calculate the
contribution of magnetic vortices to the entropy and the free energy of the
system.
|
http://arxiv.org/abs/1612.07267v2
|
1612.07267
|
2017-04-11
|
cybersecurity
|
$(2+1)$-dimensional AKNS($-N$) Systems: $ N=3,4$
|
In this work we continue the study of the negative AKNS($N$), that is, the
AKNS($-N$) system, for $N=3,4$. We obtain all possible local and nonlocal
reductions of these equations. We construct the Hirota bilinear forms of these
equations and find one-soliton solutions. From the reduction formulas we also
obtain one-soliton solutions of all the reduced equations.
|
http://arxiv.org/abs/1910.11298v1
|
1910.11298
|
2019-10-24
|
cybersecurity
|
$(2+1)$-Dimensional Black Holes in $f(R,φ)$ Gravity
|
We consider an $f(R)$ gravity theory in $(2+1)$ dimensions with a self-interacting scalar field non-minimally coupled to gravity. Without specifying the form of the $f(R)$ function, we solve the field equations and find that the Ricci scalar receives a non-linear correction term which breaks the conformal invariance and leads to a massless black hole solution. When the non-linear term decouples, we recover a well-known hairy black hole solution with the scalar field conformally coupled to gravity. We also find that the entropy of our black hole may be higher than that of the corresponding conformal black hole, which indicates that our solution may be thermodynamically preferred.
|
https://arxiv.org/abs/2201.00035v2
|
2201.00035
|
2021-12-31
|
cybersecurity
|
(2+1)-dimensional Chern-Simons bi-gravity with AdS Lie bialgebra as an interacting theory of two massless spin-2 fields
|
We introduce a new Lie bialgebra structure for the anti de Sitter (AdS) Lie
algebra in (2+1)-dimensional spacetime. By gauging the resulting \textit{AdS
Lie bialgebra}, we write a Chern-Simons gauge theory of bi-gravity involving
two dreibeins rather than two metrics, which describes two interacting massless
spin-2 fields. Our ghost-free bi-gravity model, which has no local degrees
of freedom, also has a suitable free-field limit. By solving its equations of
motion, we obtain a \textit{new black hole} solution which has two curvature
singularities and two horizons. We also study cosmological implications of this
massless bi-gravity model.
|
http://arxiv.org/abs/1706.02129v3
|
1706.02129
|
2018-08-27
|
cybersecurity
|
(2+1) dimensional cosmological models in f(R, T) gravity with $Λ(R, T)$
|
We intend to study a new class of cosmological models in $f(R, T)$ modified theories of gravity, and hence define the cosmological constant $\Lambda$ as a function of the trace of the stress-energy-momentum tensor $T$ and the Ricci scalar $R$; we name such a model "$\Lambda(R, T)$ gravity", where we have specified a certain form of $\Lambda(R, T)$. $\Lambda(R, T)$ is also defined in the perfect-fluid and dust cases. Some physical and geometric properties of the model are also discussed. The pressure, density and energy conditions are studied both when $\Lambda$ is a positive constant and when $\Lambda=\Lambda(t)$, i.e., a function of cosmological time $t$. We study the behaviour of some cosmological quantities such as the Hubble and deceleration parameters. The model is innovative in the sense that it has been described in terms of both $R$ and $T$ and provides a better understanding of the cosmological observations.
|
https://arxiv.org/abs/2003.11355v1
|
2003.11355
|
2020-03-19
|
cybersecurity
|
(2 +1)-dimensional Duffin-Kemmer-Petiau oscillator under a magnetic field in the presence of a minimal length in the noncommutative space
|
Using the momentum space representation, we study the (2 +1)-dimensional
Duffin-Kemmer-Petiau oscillator for spin 0 particle under a magnetic field in
the presence of a minimal length in the noncommutative space. The explicit form
of the energy eigenvalues is found, and the wave functions and the corresponding
probability density are reported in terms of the Jacobi polynomials.
We also discuss the special cases and depict the corresponding
numerical results.
|
http://arxiv.org/abs/1706.04298v1
|
1706.04298
|
2017-06-14
|
cybersecurity
|
2+1 dimensional Fermions on the low-buckled honey-comb structured lattice plane and classical Casimir-Polder force
|
We have calculated the Casimir-Polder interaction (CPI) of a micro-particle
with a sheet on the basis of the Klimchitskaya-Mostepanenko theory. We find the
result that for non-trivial susceptibility values of the sheet and
micro-particle, there is crossover between attractive and repulsive behavior.
The transition depends only on the impedance, involving permeability and
permittivity, apart from the ratio of the film thickness and the micro-particle
separation (D/d) and temperature. The approach to calculate CPI of a
micro-particle with a silicene sheet involves replacing the dielectric constant
of the sample by the static dielectric function obtained using the expressions
for the polarization function. The silicene is described by the low-energy
Liu-Yao-Feng-Ezawa (LYFE) model Hamiltonian involving the Dirac matrices in the
chiral representation obeying the Clifford algebra. We find that the collective
charge excitations at zero doping, i.e., intrinsic plasmons, in this system,
are absent in the Dirac limit. The valley-spin-split intrinsic plasmons,
however, come into being in the case of the massive Dirac particles with
characteristic frequency close to 10 THz. Furthermore, there is a longitudinal
electric-field-induced topological insulator (TI) to spin-valley-polarized metal
(SVPM) transition in silicene, which is also referred to as the topological
phase transition (TPT). The low-energy SVP carriers at TPT possess gap-less
(mass-less) and gapped (massive) energy spectra close to the two nodal points
in the Brillouin zone with maximum spin-polarization. We find that the
magnitude of the Casimir-Polder force at a given ratio of the film thickness
and the separation between the micro-particle and the film is greater at TPT
than at the topological insulator and trivial insulator phases.
|
http://arxiv.org/abs/1505.07036v3
|
1505.07036
|
2016-07-10
|
cybersecurity
|
$2+1$ dimensional Floquet systems and lattice fermions: Exact bulk spectral equivalence
|
A connection has recently been proposed between periodically driven systems known as Floquet insulators in continuous time and static fermion theories in discrete time. This connection has been established in a $(1+1)$-dimensional free theory, where an explicit mapping between the spectra of a Floquet insulator and a discrete-time Dirac fermion theory has been formulated. Here we investigate the potential of static discrete-time theories to capture Floquet physics in higher dimensions, where so-called anomalous Floquet topological insulators can emerge that feature chiral edge states despite having bulk bands with zero Chern number. Starting from a particular model of an anomalous Floquet system, we provide an example of a static discrete-time theory whose bulk spectrum is an exact analytic match for the Floquet spectrum. The spectra with open boundary conditions in a particular strip geometry also match up to finite-size corrections. However, the models differ in several important respects. The discrete-time theory is spatially anisotropic, so that the spectra do not agree for all lattice terminations, e.g. other strip geometries or on half spaces. This difference can be attributed to the fact that the static discrete-time model is quasi-one-dimensional in nature and therefore has a different bulk-boundary correspondence than the Floquet model.
|
https://arxiv.org/abs/2410.18226v2
|
2410.18226
|
2024-10-23
|
cybersecurity
|
(2+1)-dimensional f(R) gravity solutions via Hojman symmetry
|
In this paper, we use the Hojman symmetry approach to find new $(2+1)$-dimensional $f(R)$ gravity solutions, in comparison to the Noether symmetry approach. In the special case of the Hojman symmetry vector $X=R$, we recover the $(2+1)$-dimensional BTZ black hole and generalized $(2+1)$-dimensional BTZ black hole solutions obtained by the Noether symmetry approach, and the interesting point is that the cosmological constant appears as a direct manifestation of the Hojman symmetry.
|
https://arxiv.org/abs/2010.08424v3
|
2010.08424
|
2020-10-16
|
cybersecurity
|
2+1 dimensional gravity
|
It gives me great pleasure to review some of the joint work by Tullio Regge
and myself. We worked intensely on 2+1-dimensional gravity from 1989 for about
five years, and published 16 articles. I will present and review two of our
early articles, highlighting what I believe are the most important results,
some of them really surprising, and discuss later developments.
|
http://arxiv.org/abs/1804.08456v2
|
1804.08456
|
2018-04-24
|
cybersecurity
|
2+1-dimensional gravity coupled to a dust shell: quantization in terms of global phase space variables
|
We perform a canonical analysis of a model in which gravity is coupled to a
spherically symmetric dust shell in 2+1 spacetime dimensions. The result is a
reduced action depending on a finite number of degrees of freedom. The emphasis
is on finding canonical variables providing a global chart for the
entire phase space of the model. It turns out that all the distinct pieces of
momentum space can be assembled into a single manifold which has
AdS^{2} geometry, and the global chart for it is provided by the Euler angles.
This results in both non-commutativity and discreteness in coordinate space,
which allows one to resolve the central singularity. We also find the map between
the AdS^{2} momentum space obtained here and momentum space in Kuchar variables,
which could be helpful in extending the present results to 3+1 dimensions.
|
http://arxiv.org/abs/1812.11425v1
|
1812.11425
|
2018-12-29
|
cybersecurity
|
(2+1)-Dimensional Gravity in Weyl Integrable Spacetime
|
We investigate (2+1)-dimensional gravity in a Weyl integrable spacetime
(WIST). We show that, unlike general relativity, this scalar-tensor theory has
a Newtonian limit for any dimension and that in three dimensions the congruence
of world lines of particles of a pressureless fluid has a non-vanishing
geodesic deviation. We present and discuss a class of static vacuum solutions
generated by a circularly symmetric matter distribution that for certain values
of the parameter w corresponds to a space-time with a naked singularity at the
center of the matter distribution. We interpret all these results as being a
direct consequence of the space-time geometry.
|
http://arxiv.org/abs/1503.04186v1
|
1503.04186
|
2015-03-13
|
cybersecurity
|
(2+1)-dimensional interacting model of two massless spin-2 fields as a bi-gravity model
|
We propose a new group-theoretical (Chern-Simons) formulation for the
bi-metric theory of gravity in (2+1)-dimensional spacetime, which describes two
interacting massless spin-2 fields. Our model has been formulated in terms of
two dreibeins rather than two metrics. We obtain our Chern-Simons gravity model
by gauging {\it mixed AdS-AdS Lie algebra} and show that it has a two
dimensional conformal field theory (CFT) at the boundary of the anti de Sitter
(AdS) solution. We show that the central charge of the dual CFT is proportional
to the mass of the AdS solution. We also study cosmological implications of our
massless bi-gravity model.
|
http://arxiv.org/abs/1705.11042v4
|
1705.11042
|
2018-04-24
|
cybersecurity
|
(2+1)-dimensional interface dynamics: mixing time, hydrodynamic limit and Anisotropic KPZ growth
|
Stochastic interface dynamics serve as mathematical models for diverse
time-dependent physical phenomena: the evolution of boundaries between
thermodynamic phases, crystal growth, random deposition... Interesting limits
arise at large space-time scales: after suitable rescaling, the randomly
evolving interface converges to the solution of a deterministic PDE
(hydrodynamic limit) and the fluctuation process to a (in general non-Gaussian)
limit process. In contrast with the case of $(1+1)$-dimensional models, there
are very few mathematical results in dimension $(d+1), d\ge2$. As far as growth
models are concerned, the $(2+1)$-dimensional case is particularly interesting:
D. Wolf conjectured the existence of two different universality classes (called
KPZ and Anisotropic KPZ), with different scaling exponents. Here, we review
recent mathematical results on (both reversible and irreversible) dynamics of
some $(2+1)$-dimensional discrete interfaces, mostly defined through a mapping
to two-dimensional dimer models. In particular, in the irreversible case, we
discuss mathematical support and remaining open problems concerning Wolf's
conjecture on the relation between the Hessian of the growth velocity on one
side, and the universality class of the model on the other.
|
http://arxiv.org/abs/1711.05571v1
|
1711.05571
|
2017-11-15
|
cybersecurity
|
(2+1)-dimensional KdV, fifth-order KdV, and Gardner equations derived from the ideal fluid model. Soliton, cnoidal and superposition solutions
|
We study the problem of surface gravity waves for an ideal fluid model in the (2+1)-dimensional case. We apply a systematic procedure to derive the Boussinesq equations for a given relation between the orders of four expansion parameters: the amplitude parameter $\alpha$, the long-wavelength parameter $\beta$, the transverse wavelength parameter $\gamma$, and the bottom variation parameter $\delta$. We derive the only possible (2+1)-dimensional extensions of the Korteweg-de Vries equation, the fifth-order KdV equation, and the Gardner equation in three special cases of the relationship between these parameters. All these equations are non-local. When the bottom is flat, the (2+1)-dimensional KdV equation can be transformed to the Kadomtsev-Petviashvili equation in a fixed reference frame and then to the classical KP equation in a moving frame. We find soliton, cnoidal, and superposition solutions (essentially one-dimensional) to the (2+1)-dimensional Korteweg-de Vries equation and the Kadomtsev-Petviashvili equation.
|
https://arxiv.org/abs/2206.08964v3
|
2206.08964
|
2022-06-17
|
cybersecurity
|
2+1 dimensional loop quantum cosmology of Bianchi I models
|
We study the anisotropic Bianchi I loop quantum cosmology in 2+1 dimensions.
Both the $\bar\mu$ and $\bar\mu'$ schemes are considered in the present paper, and
the following expected results are established: (i) the massless scalar field
again plays the role of an emergent time variable and serves as an internal clock;
(ii) by imposing the fundamental discreteness of the length operator, the total
Hamiltonian constraint is obtained and gives rise to the evolution as a difference
equation; and (iii) the exact solutions of the Friedmann equation are constructed
rigorously at both the classical and effective levels. The investigation extends
the domain of validity of loop quantum cosmology beyond four dimensions.
|
http://arxiv.org/abs/1602.07478v1
|
1602.07478
|
2016-02-24
|
cybersecurity
|
$(2+1)$-dimensional regular black holes with nonlinear electrodynamics sources
|
On the basis of two requirements: the avoidance of the curvature singularity
and the Maxwell theory as the weak field limit of the nonlinear
electrodynamics, we find two restricted conditions on the metric function of
$(2+1)$-dimensional regular black hole in general relativity coupled with
nonlinear electrodynamics sources. By the use of the two conditions, we obtain
a general approach to construct $(2+1)$-dimensional regular black holes. In
this manner, we construct four $(2+1)$-dimensional regular black holes as
examples. We also study the thermodynamic properties of the regular black holes
and verify the first law of black hole thermodynamics.
|
http://arxiv.org/abs/1709.09473v1
|
1709.09473
|
2017-09-27
|
cybersecurity
|
$(2+1)$-dimensional sonic black hole from spin-orbit coupled Bose-Einstein condensate and its analogue Hawking radiation
|
We study the properties of a $2+1$-dimensional sonic black hole (SBH) that
can be realised in a quasi-two-dimensional two-component spin-orbit coupled
Bose-Einstein condensate (BEC). The corresponding equation for phase
fluctuations in the total density mode that describes phonon field in the
hydrodynamic approximation is described by a scalar field equation in $2+1$
dimension whose space-time metric is significantly different from that of the
SBH realised from a single-component BEC, which has been studied meticulously, both
experimentally and theoretically, in the literature. Given the breakdown of the
irrotationality constraint of the velocity field in such spin-orbit coupled
BEC, we study in detail how the time evolution of such condensate impacts the
various properties of the resulting SBH. By time evolving the condensate in a
suitably created laser-induced potential, we show that such a sonic black hole
is formed in an annular region bounded by inner and outer event horizons as
well as elliptical ergo-surfaces. We observe amplifying density modulation due
to the formation of such sonic horizons and show how they change the nature of
analogue Hawking radiation emitted from such sonic black hole by evaluating the
density-density correlation at different times, using the truncated Wigner
approximation (TWA) for different values of spin-orbit coupling parameters. We
finally investigate the thermal nature of such analogue Hawking radiation.
|
http://arxiv.org/abs/1810.04860v3
|
1810.04860
|
2020-07-29
|
cybersecurity
|
(2+1)-dimensional Static Cyclic Symmetric Traversable Wormhole: Quasinormal Modes and Causality
|
In this paper we study a static cyclic symmetric traversable wormhole in
$(2+1)-$dimensional gravity coupled to nonlinear electrodynamics in anti-de
Sitter spacetime. The solution is characterized by three parameters: mass $M$,
cosmological constant $\Lambda$ and one electromagnetic parameter,
$q_{\alpha}$. The causality of this spacetime is studied, determining its
maximal extension and constructing then the corresponding Kruskal-Szekeres and
Penrose diagrams. The quasinormal modes (QNMs) that result from considering a
massive scalar test field in the wormhole background are determined by solving
in exact form the Klein-Gordon equation; the effective potential resembles the
one of a harmonic oscillator shifted from its equilibrium position and,
consequently, the QNMs have a pure point spectrum.
|
http://arxiv.org/abs/1906.04360v2
|
1906.04360
|
2019-10-26
|
cybersecurity
|
2+1-dimensional traversable wormholes supported by positive energy
|
We revisit the shapes of the throats of wormholes, including thin-shell
wormholes (TSWs) in $2+1-$dimensions. In particular, in the case of TSWs this
is done in a flat $2+1-$dimensional bulk spacetime by using the standard method
of cut-and-paste. Upon departing from a pure time-dependent circular shape
i.e., $r=a(t)$ for the throat, we employ a $\theta$-dependent
closed loop of the form $r=R(t,\theta)$, and in terms of $R(t,\theta)$
we find the surface energy density $\sigma$ on the throat.
For the specific convex shapes we find that the total energy which supports the
wormhole is positive and finite. In addition to that we analyze the general
wormhole's throat. By considering a specific equation $r=R(\theta)$
instead of $r=r_{0}=\mathrm{const}$, and upon certain choices of functions
for $R(\theta)$, we find the total energy of the wormhole to be
positive.
|
http://arxiv.org/abs/1409.2686v2
|
1409.2686
|
2015-02-17
|
cybersecurity
|
2+1-dimensional wormhole from a doublet of scalar fields
|
We present a class of exact solutions in the framework of $2+1-$dimensional
Einstein gravity coupled minimally to a doublet of scalar fields. Our solution
can be interpreted upon tuning of parameters as an asymptotically flat wormhole
as well as a particle model in $2+1-$dimensions.
|
http://arxiv.org/abs/1507.07257v1
|
1507.07257
|
2015-07-26
|
cybersecurity
|
2+1D symmetry-topological-order from local symmetric operators in 1+1D
|
A generalized symmetry (defined by the algebra of local symmetric operators) can go beyond group or higher group description. A theory of generalized symmetry (up to holo-equivalence) was developed in terms of symmetry-TO -- a bosonic topological order (TO) with gappable boundary in one higher dimension. We propose a general method to compute the 2+1D symmetry-TO from the local symmetric operators in 1+1D systems. Our theory is based on the commutant patch operators, which are extended operators constructed as products and sums of local symmetric operators. A commutant patch operator commutes with all local symmetric operators away from its boundary. We argue that topological invariants associated with anyon diagrams in 2+1D can be computed as contracted products of commutant patch operators in 1+1D. In particular, we give concrete formulae for several topological invariants in terms of commutant patch operators. Topological invariants computed from patch operators include those beyond modular data, such as the link invariants associated with the Borromean rings and the Whitehead link. These results suggest that the algebra of commutant patch operators is described by 2+1D symmetry-TO. Based on our analysis, we also argue briefly that the commutant patch operators would serve as order parameters for gapped phases with finite symmetries.
|
https://arxiv.org/abs/2310.05790v1
|
2310.05790
|
2023-10-09
|
cybersecurity
|
(2+1)D topological phases with RT symmetry: many-body invariant, classification, and higher order edge modes
|
It is common in condensed matter systems for reflection ($R$) and time-reversal ($T$) symmetry to both be broken while the combination $RT$ is preserved. In this paper we study invariants that arise due to $RT$ symmetry. We consider many-body systems of interacting fermions with fermionic symmetry groups $G_f = \mathbb{Z}_2^f \times \mathbb{Z}_2^{RT}$, $U(1)^f \rtimes \mathbb{Z}_2^{RT}$, and $U(1)^f \times \mathbb{Z}_2^{RT}$. We show that (2+1)D invertible fermionic topological phases with these symmetries have a $\mathbb{Z} \times \mathbb{Z}_8$, $\mathbb{Z}^2 \times \mathbb{Z}_2$, and $\mathbb{Z}^2 \times \mathbb{Z}_4$ classification, respectively, which we compute using the framework of $G$-crossed braided tensor categories. We provide a many-body $RT$ invariant in terms of a tripartite entanglement measure, and which we show can be understood using an edge conformal field theory computation in terms of vertex states. For $G_f = U(1)^f \rtimes \mathbb{Z}_2^{RT}$, which applies to charged fermions in a magnetic field, the non-trivial value of the $\mathbb{Z}_2$ invariant requires strong interactions. For symmetry-preserving boundaries, the phases are distinguished by zero modes at the intersection of the reflection axis and the boundary. Additional invariants arise in the presence of translation or rotation symmetry.
|
https://arxiv.org/abs/2403.18887v1
|
2403.18887
|
2024-03-27
|
cybersecurity
|
$2+1$ Einstein-Klein-Gordon black holes by gravitational decoupling
|
In this work we study the 2+1 Einstein-Klein-Gordon system in the framework of Gravitational Decoupling. We associate the generic matter decoupling sector with a real scalar field, so we can obtain a constraint which allows us to close the system of differential equations. The constraint corresponds to a differential equation involving the decoupling functions and the metric of the seed sector, and is independent of the scalar field itself. We show that when the equation admits analytical solutions, the scalar field and the self-interacting potential can be obtained straightforwardly. We find that, in the cases under consideration, it is possible to express the potential as an explicit function of the scalar field only in certain particular cases corresponding to limiting values of the parameters involved.
|
https://arxiv.org/abs/2203.00661v2
|
2203.00661
|
2022-03-01
|
cybersecurity
|
2+1 Flavor Domain Wall Fermion QCD Lattices: Ensemble Production and (some) Properties
|
The RBC and UKQCD Collaborations continue to produce 2+1 flavor domain wall
fermion ensembles, currently focusing on an ensemble with a $96^3 \times 192$
volume on SUMMIT at ORNL with $1/a \approx 2.8$ GeV, and smaller ensembles at
stronger couplings. The $1/a \approx 2.8$ GeV ensemble uses the Exact One
Flavor Algorithm for the strange quark, along with the Multisplitting
Preconditioned Conjugate Gradient for solving the Dirac equation. We report on
our progress and experience to date with the evolution of this ensemble.
|
http://arxiv.org/abs/1912.13150v1
|
1912.13150
|
2019-12-31
|
cybersecurity
|
2+1 flavor fine lattice simulation at finite temperature with domain-wall fermions
|
Simulations for the thermodynamics of the 2+1 flavor QCD are performed employing chiral fermions. The use of M\"obius domain-wall fermions with stout-link smearing is more effective on the finer lattices where all the relevant chiral symmetries are realized more accurately. We report on the initial simulations near the (pseudo) critical point using the line of constant physics with an average $ud$ quark mass slightly heavier than physical at $a\lesssim 0.1$ fm.
|
https://arxiv.org/abs/2112.11771v1
|
2112.11771
|
2021-12-22
|
cybersecurity
|
2+1 flavor QCD simulation on a $96^4$ lattice
|
We generate $2+1$ flavor QCD configurations near the physical point on a
$96^4$ lattice employing the 6-APE stout smeared Wilson clover action with a
nonperturbative $c_{\rm SW}$ and the Iwasaki gauge action at $\beta=1.82$. The
physical point is estimated based on the chiral perturbation theory using
several data points generated by the reweighting technique from the simulation
point, where $m_\pi$, $m_K$ and $m_\Omega$ are used as physical inputs. The
physics results include the quark masses, the hadron spectrum, the pseudoscalar
meson decay constants and nucleon sigma terms, using the nonperturbative
renormalization factors evaluated with the Schrödinger functional method.
|
http://arxiv.org/abs/1511.09222v1
|
1511.09222
|
2015-11-30
|
cybersecurity
|
(2+1)-flavor QCD Thermodynamics from the Gradient Flow
|
Recently, we proposed a novel method to define and calculate the
energy-momentum tensor (EMT) in lattice gauge theory on the basis of the
Yang-Mills gradient flow [1]. In this proceedings, we summarize the basic idea
and technical steps to obtain the bulk thermodynamic quantities in lattice
gauge theory using this method for the quenched and $(2+1)$-flavor QCD. The
revised results for the interaction measure (trace anomaly) and entropy density of
the quenched QCD with corrected coefficients are shown. Furthermore, we also
show the flow time dependence of the parts of EMT including the dynamical
fermions. This work is based on a joint collaboration between FlowQCD and WHOT
QCD.
|
http://arxiv.org/abs/1511.03009v1
|
1511.03009
|
2015-11-10
|
cybersecurity
|
2+1 flavors QCD equation of state at zero temperature within Dyson-Schwinger equations
|
Within the framework of Dyson-Schwinger equations (DSEs), we discuss the
equation of state (EOS) and quark number densities of 2+1 flavors, that is to
say, $u$, $d$, and $s$ quarks. The chemical equilibrium and electric charge
neutrality conditions are used to constrain the chemical potential of different
quarks. The EOS in the cases of 2 flavors and 2+1 flavors are discussed, and
the quark number densities, the pressure, and energy density per baryon are
also studied. The results show that there is a critical chemical potential for
each flavor of quark, at which the quark number density becomes nonzero;
furthermore, the system with 2+1 flavors of quarks is more stable than
that with 2 flavors. These discussions may provide useful
information for research fields such as studies of QCD
phase transitions or compact stars.
|
http://arxiv.org/abs/1506.06846v1
|
1506.06846
|
2015-06-23
|
cybersecurity
|
(2+1) Lorentzian quantum cosmology from spin-foams: opportunities and obstacles for semi-classicality
|
We construct an effective cosmological spin-foam model for a (2+1) dimensional spatially flat universe, discretized on a hypercubical lattice, containing both space- and time-like regions. Our starting point is the recently proposed coherent state spin-foam model for (2+1) Lorentzian quantum gravity. The full amplitude is assumed to factorize into single vertex amplitudes with boundary data corresponding to Lorentzian 3-frusta. A stationary phase approximation is performed at each vertex individually, where the inverse square root of the Hessian determinant serves as a measure for the effective path integral. Additionally, a massive scalar field is coupled to the geometry, and we show that its mass renders the partition function convergent. For a single 3-frustum with time-like struts, we compute the expectation value of the bulk strut length and show that it generically agrees with the classical solutions and that it is a discontinuous function of the scalar field mass. Allowing the struts to be space-like introduces causality violations, which drive the expectation values away from the classical solutions due to the lack of an exponential suppression of these configurations. This is a direct consequence of the semi-classical amplitude only containing the real part of deficit angles, in contrast with the Lorentzian Regge action used in effective spin-foams. We give an outlook on how to evaluate the partition function on an extended discretization including a bulk spatial slice. This serves as a foundation for future investigations of physically interesting scenarios such as a quantum bounce or the viability of massive scalar field clocks. Our results demonstrate that the effective path integral in the causally regular sector serves as a viable quantum cosmology model, but that the agreement of expectation values with classical solutions is tightly bound to the path integral measure.
|
https://arxiv.org/abs/2411.08109v3
|
2411.08109
|
2024-11-12
|
cybersecurity
|