title
string
abstract
string
url
string
arxiv_id
string
date
string
category
string
2020 Global reassessment of the neutrino oscillation picture
We present an updated global fit of neutrino oscillation data in the simplest three-neutrino framework. In the present study we include up-to-date analyses from a number of experiments. Namely, we have included all T2K measurements as of December 2019, the most recent NO$\nu$A antineutrino statistics, and data collected by the Daya Bay and RENO reactor experiments. Concerning the atmospheric and solar sectors, we have also updated our analyses of DeepCore and SNO data, respectively. All in all, these new analyses result in more accurate measurements of $\theta_{13}$, $\theta_{12}$, $\Delta m_{21}^2$ and $|\Delta m_{31}^2|$. The best fit value for the atmospheric angle $\theta_{23}$ lies in the second octant, but first octant solutions remain allowed at $\sim2\sigma$. Regarding CP violation measurements, the preferred value of $\delta$ we obtain is 1.20$\pi$ (1.54$\pi$) for normal (inverted) neutrino mass ordering. These new results should be regarded as extremely robust due to the excellent agreement found between our Bayesian and frequentist approaches. Taking into account only oscillation data, there is a preference for the normal neutrino mass ordering at the $2.7\sigma$ level. While adding neutrinoless double beta decay from the latest Gerda, CUORE and KamLAND-Zen results barely modifies this picture, cosmological measurements raise the significance to $3.1\sigma$ within a conservative approach. A more aggressive data set combination of cosmological observations leads to a stronger preference for normal with respect to inverted mass ordering, at the $3.3\sigma$ level. This cosmological data set provides $2\sigma$ upper limits on the total neutrino mass corresponding to $\sum m_\nu<0.13$ ($0.15$) eV in the normal (inverted) neutrino mass ordering scenario. These bounds are among the most complete ones in the literature, as they include all currently available neutrino physics inputs.
http://arxiv.org/abs/2006.11237v1
2006.11237
2020-06-19
natural-language-processing
2020 Ian Snook Prize Problem : Three Routes to the Information Dimensions for a One-Dimensional Stochastic Random Walk and for an Equivalent Prototypical Two-Dimensional Baker Map
The $1000 Ian Snook Prize for 2020 will be awarded to the author(s) of the most interesting paper exploring a pair of relatively simple, but fractal, models of nonequilibrium systems, a dissipative time-reversible Baker Map and an equivalent stochastic random walk. The two-dimensional deterministic, time-reversible, chaotic, fractal, and dissipative Baker map is equivalent to the stochastic one-dimensional random walk model for which three distinct estimates for the information dimension, $\{0.7897,\ 0.7415,\ 0.7337\}$, have all been put forward. So far there is no cogent explanation for the differences among them. We describe the three routes to the information dimension, $D_I$: [1] iterated Cantor-like mappings, [2] mesh-based analyses of single-point iterations, and [3] the Kaplan-Yorke Lyapunov dimension, thought by many to be exact for these models. We encourage colleagues to address this Prize Problem by suggesting, testing, and analyzing mechanisms underlying these differing results.
http://arxiv.org/abs/1910.12642v3
1910.12642
2019-11-10
natural-language-processing
2020 NDSA Agenda for Digital Stewardship
The NDSA Agenda is a comprehensive overview of the state of global digital preservation. It casts its eye over current research trends, grants, projects, and various efforts spanning the preservation ecosystem. The agenda identifies successes and ongoing challenges, in addition to providing tangible recommendations to researchers and practitioners alike. As both an overview of and a comprehensive dive into digital preservation issues, it addresses an audience ranging from high-level readers to hands-on experts. Funders can use this report as a signpost for the overall state of the profession.
http://arxiv.org/abs/2005.05474v1
2005.05474
2020-05-11
natural-language-processing
2020 Nobel Prize for Physics: Black holes and the Milky Way's darkest secret
This article was written at the invitation of Current Science to explain the history and science behind this year's Nobel Prize in Physics. The article is aimed at a general audience and provides a popular account and perspective on the subject of black holes.
https://arxiv.org/abs/2011.06656v1
2011.06656
2020-11-12
natural-language-processing
2020 Physics Critique: Can a muon collider be operational within the next 30 years?
A 2020 physics critique report for Year 4 MSci Physics with Particle Physics & Cosmology at the University of Birmingham, School of Physics & Astronomy. Muon colliders are proposed next-generation particle accelerators that benefit from the muon's fundamental nature and relatively high mass to perform simultaneous high-precision, high-energy experiments. This critique reviews their physics potential and technological feasibility, then proposes a roadmap for how a 3 TeV muon collider could be built, concluding that it is experimentally possible for a muon collider to be operational within 30 years.
https://arxiv.org/abs/2502.13781v1
2502.13781
2025-02-19
natural-language-processing
2020 State of the Octoverse: Finding Balance Between Work and Play
Over the past year, many developers and other technology professionals have transitioned to a remote-first world, as COVID-19 pressed organizations to support working from home whenever possible. This shift quickly changed the routines and environments where we work and learn, redrawing the lines between personal and professional lives. How does this affect the ways we develop and deliver software, both at work and in our open source projects?
https://arxiv.org/abs/2110.10248v1
2110.10248
2021-10-19
natural-language-processing
2020 State of the Octoverse: Securing the World's Software
Open source is the connective tissue for much of the information economy. You would be hard-pressed to find a scenario where your data does not pass through at least one open source component. Many of the services and technology we all rely on, from banking to healthcare, also rely on open source software. The artifacts of open source code serve as critical infrastructure for much of the global economy, making the security of open source software mission-critical to the world.
https://arxiv.org/abs/2110.10246v1
2110.10246
2021-10-19
natural-language-processing
2020 UK Lockdown Cyber Narratives: the Secure, the Insecure and the Worrying
On the 23rd March 2020, the UK entered a period of lockdown in the face of a deadly pandemic. While some were unable to work from home, many organisations were forced to move their activities online. Here, we discuss the technologies they used, from a privacy and security perspective. We also mention the communication failures that have exacerbated uncertainty and anxiety during the crisis. An organisation could be driven to move their activities online by a range of disasters, of which a global pandemic is only one. We seek, in this paper, to highlight the need for organisations to have contingency plans in place for this kind of eventuality. The insecure usages and poor communications we highlight are a symptom of a lack of advance pre-pandemic planning. We hope that this paper will help organisations to plan more effectively for the future.
http://arxiv.org/abs/2006.06340v2
2006.06340
2020-06-19
natural-language-processing
2020 U.S. presidential election in swing states: Gender differences in Twitter conversations
Social media is commonly used by the public during election campaigns to express their opinions regarding different issues. Among various social media channels, Twitter provides an efficient platform for researchers and politicians to explore public opinion regarding a wide range of topics such as the economy and foreign policy. Current literature mainly focuses on analyzing the content of tweets without considering the gender of users. This research collects and analyzes a large number of tweets and uses computational, human coding, and statistical analyses to identify topics in more than 300,000 tweets posted during the 2020 U.S. presidential election and to compare female and male users regarding the average weight of the discussed topics. Our findings span a wide range of topics, such as tax, climate change, and the COVID-19 pandemic. Among these topics, there is a significant difference between female and male users for more than 70% of them.
https://arxiv.org/abs/2108.09416v2
2108.09416
2021-08-21
natural-language-processing
2020 Vision: Towards a Sustainable OIR System
Open-access telescopes of all apertures are needed to operate a competitive and efficient national science program. While larger facilities contribute light-gathering power and angular resolution, smaller ones dominate for field of view, time-resolution, and especially, total available observing time, thereby enabling our entire, diversely-expert community. Smaller aperture telescopes therefore play a critical and indispensable role in advancing science. Thus, the divestment of NSF support for modest-aperture (1 - 4 m) public telescopes poses a serious threat to U.S. scientific leadership, which is compounded by the unknown consequences of the shift from observations driven by individual investigators to survey-driven science. Given the much higher cost efficiency and dramatic science returns for investments in modest aperture telescopes, it is hard to justify funding only the most expensive facilities. We therefore urge the Astro2020 panel to explicitly make the case for modest aperture facilities, and to recommend enhancing this funding stream to support and grow this critical component of the OIR System. Further study is urgently needed to prioritize the numerous exciting potential capabilities of smaller facilities, and to establish sustainable, long-term planning for the System.
http://arxiv.org/abs/1907.06715v1
1907.06715
2019-07-15
natural-language-processing
2020福爾摩沙臺語語音辨識比賽之初步實驗 (A Preliminary Study of Formosa Speech Recognition Challenge 2020 – Taiwanese ASR)
null
https://aclanthology.org/2021.ijclclp-1.3
null
null
natural-language-processing
2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets
Transfer learning and meta-learning offer some of the most promising avenues to unlock the scalability of healthcare and consumer technologies driven by biosignal data. This is because current methods cannot generalise well across human subjects' data and handle learning from different heterogeneously collected data sets, thus limiting the scale of training data. On the other hand, developments in transfer learning would benefit significantly from a real-world benchmark with immediate practical application. Therefore, we pick electroencephalography (EEG) as an exemplar for what makes biosignal machine learning hard. We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI) that have to be solved in the face of low signal-to-noise ratios, major variability among subjects, differences in the data recording sessions and techniques, and even between the specific BCI tasks recorded in the data set. Task 1 is centred on the field of medical diagnostics, addressing automatic sleep stage annotation across subjects. Task 2 is centred on BCI, addressing motor imagery decoding across both subjects and data sets. The BEETL competition, with its over 30 competing teams and its 3 winning entries, brought attention to the potential of deep transfer learning and combinations of set theory and conventional machine learning techniques to overcome the challenges. The results set a new state of the art for the real-world BEETL benchmark.
https://arxiv.org/abs/2202.12950v1
2202.12950
2022-02-14
natural-language-processing
2021 Census of Interstellar, Circumstellar, Extragalactic, Protoplanetary Disk, and Exoplanetary Molecules
To date, 241 individual molecular species, comprised of 19 different elements, have been detected in the interstellar and circumstellar medium by astronomical observations. These molecules range in size from two atoms to seventy, and have been detected across the electromagnetic spectrum from cm-wavelengths to the ultraviolet. This census presents a summary of the first detection of each molecular species, including the observational facility, wavelength range, transitions, and enabling laboratory spectroscopic work, as well as listing tentative and disputed detections. Tables of molecules detected in interstellar ices, external galaxies, protoplanetary disks, and exoplanetary atmospheres are provided. A number of visual representations of this aggregate data are presented and briefly discussed in context.
https://arxiv.org/abs/2109.13848v2
2109.13848
2021-09-27
natural-language-processing
2021 Drexel Society of Artificial Intelligence Research Conference
The 2021 Drexel Society of Artificial Intelligence Research Conference highlights papers on a broad set of topics in machine learning. This was our organization's first annual conference. It was conducted virtually via Zoom. The highlights are currently posted on YouTube.
https://arxiv.org/abs/2110.05263v3
2110.05263
2021-08-25
natural-language-processing
2021 Effective Area calibration of the Nuclear Spectroscopic Telescope ARray (NuSTAR)
We present here the updated calibration of the Nuclear Spectroscopic Telescope ARray (NuSTAR), which was performed using data on the Crab accumulated over the last 9 years in orbit. The basis for this new calibration contains over 250 ks of focused Crab (imaged through the optics) and over 500 ks of stray-light Crab (not imaged through the optics). We measured an epoch-averaged Crab spectrum of the stray-light Crab data and define a canonical Crab spectrum of $\Gamma = 2.103 \pm 0.001$ and $N = 9.69 \pm 0.02$ keV$^{-1}$ cm$^{-2}$ s$^{-1}$ at 1 keV, which we use as our calibration standard. The new calibration, released in the CALDB update 20211020, provides significant updates to: 1) the detector absorption component, 2) the detector response function, and 3) the effective area vignetting function. The calibration improves agreement between FPMA and FPMB across detectors, with a standard deviation of 1.7% for repeat observations between off-axis angles of 1-4 arcmin, and the measured flux has increased by 5-15%: 5% below 1 arcmin off-axis angle, 10% between 1-2 arcmin, and 15% above 4 arcmin.
https://arxiv.org/abs/2110.11522v1
2110.11522
2021-10-21
natural-language-processing
2021-$H_0$ Odyssey: Closed, Phantom and Interacting Dark Energy Cosmologies
Up-to-date cosmological data analyses have shown that (a) a closed universe is preferred by the Planck data at more than $99\%$ CL, and (b) interacting scenarios offer a very compelling solution to the Hubble constant tension. In light of these two recent appealing scenarios, we consider here an interacting dark matter-dark energy model with a non-zero spatial curvature component and a freely varying dark energy equation of state in both the quintessential and phantom regimes. When considering Cosmic Microwave Background data only, a phantom and closed universe can perfectly alleviate the Hubble tension, without the necessity of a coupling among the dark sectors. Accounting for other possible cosmological observations compromises the viability of this very attractive scenario as a global solution to current cosmological tensions, either by spoiling its effectiveness concerning the $H_0$ problem, as in the case of Supernovae Ia data, or by introducing a strong disagreement in the preferred value of the spatial curvature, as in the case of Baryon Acoustic Oscillations.
https://arxiv.org/abs/2101.03129v3
2101.03129
2021-01-08
natural-language-processing
2021 occultations and transits of Linus orbiting (22) Kalliope: I. Polygonal and `cliptracing' algorithm
The satellite Linus orbiting the main-belt asteroid (22) Kalliope exhibited occultation and transit events in late 2021. A photometric campaign was organized and observations were taken by the TRAPPIST-South, SPECULOOS-Artemis, OWL-Net, and BOAO telescopes, with the goal to constrain models of this system. Our dynamical model is complex, with multipoles (up to the order $\ell = 2$), internal tides, and external tides. The model was constrained by astrometry (spanning 2001-2021), occultations, adaptive-optics imaging, calibrated photometry, as well as relative photometry. Our photometric model was substantially improved. A new precise (${<}\,0.1\,{\rm mmag}$) light curve algorithm was implemented, based on polygon intersections, which are computed exactly, including partial eclipses and partial visibility of polygons. Moreover, we implemented a `cliptracing' algorithm, based again on polygon intersections, in which partial contributions to individual pixels are computed exactly. Both synthetic light curves and synthetic images are then very smooth. Based on our combined solution, we confirmed the size of Linus, $(28\pm 1)\,{\rm km}$. However, this solution exhibits some tension between the light curves and the PISCO speckle-interferometry dataset. In most solutions, Linus is darker than Kalliope, with the albedos $A_{\rm w} = 0.40$ vs. $0.44$. This is confirmed on deconvolved images. A detailed revision of astrometric data also allowed us to revise the $J_2 \equiv -C_{20}$ value of Kalliope. Most importantly, a homogeneous body is excluded. For a differentiated body, two solutions exist: low-oblateness ($C_{20} \simeq -0.12$), with a spherical iron core, and alternatively, high-oblateness ($C_{20} \simeq -0.22$) with an elongated iron core. These correspond to the low- and high-energy collisions, respectively, studied by means of SPH simulations in our previous work.
https://arxiv.org/abs/2306.04768v1
2306.04768
2023-06-07
natural-language-processing
2021 superoutburst of WZ Sge-type dwarf nova V627 Pegasi lacks an early superhump phase
Superoutbursts in WZ Sge-type dwarf novae (DNe) are characterized by both early superhumps and ordinary superhumps originating from the 2:1 and 3:1 resonances, respectively. However, some WZ Sge-type DNe show a superoutburst lacking early superhumps; it is not well established how these differ from superoutbursts with an early superhump phase. We report time-resolved photometric observations of the WZ Sge-type DN V627 Peg during its 2021 superoutburst. The detection of ordinary superhumps before the superoutburst peak highlights that this 2021 superoutburst of V627 Peg, like that in 2014, did not feature an early superhump phase. The duration of stage B superhumps was slightly longer in the 2010 superoutburst accompanying early superhumps than that in the 2014 and 2021 superoutbursts which lacked early superhumps. This result suggests that an accretion disk experiencing the 2:1 resonance may have a larger mass at the inner part of the disk and hence take more time for the inner disk to become eccentric. The presence of a precursor outburst in the 2021 superoutburst suggests that the maximum disk radius should be smaller than that of the 2014 superoutburst, even though the duration of quiescence was longer than that before the 2021 superoutburst. This could be accomplished if the 2021 superoutburst was triggered as an inside-out outburst or if the mass transfer rate in quiescence changes by a factor of two, suggesting that the outburst mechanism and quiescence state of WZ Sge-type DNe may have more variety than previously thought.
https://arxiv.org/abs/2303.17960v1
2303.17960
2023-03-31
natural-language-processing
2021 Update on $\varepsilon_K$ with lattice QCD inputs
We present recent updates for $\varepsilon_K$ determined directly from the standard model (SM) with lattice QCD inputs such as $\hat{B}_K$, $|V_{cb}|$, $|V_{us}|$, $\xi_0$, $\xi_2$, $\xi_\text{LD}$, $f_K$, and $m_c$. We find that the standard model with exclusive $|V_{cb}|$ and other lattice QCD inputs describes only 66% of the experimental value of $|\varepsilon_K|$ and does not explain its remaining 34%, which leads to a strong tension in $|\varepsilon_K|$ at the $4.5\sigma \sim 3.7\sigma$ level between the SM theory and experiment. We also find that this tension disappears when we use the inclusive value of $|V_{cb}|$ obtained using the heavy quark expansion based on the QCD sum rule approach.
https://arxiv.org/abs/2202.11473v2
2202.11473
2022-02-23
natural-language-processing
2022 Flood Impact in Pakistan: Remote Sensing Assessment of Agricultural and Urban Damage
Pakistan was hit by the world's deadliest flood in June 2022, causing agriculture and infrastructure damage across the country. Remote sensing technology offers a cost-effective and efficient method for flood impact assessment. This study aims to assess the impact of flooding on crops and built-up areas. Landsat 9 imagery, European Space Agency Land Use/Land Cover (ESA-LULC) and Soil Moisture Active Passive (SMAP) data are used to identify and quantify the extent of flood-affected areas, crop damage, and built-up area destruction. The findings indicate that Sindh, a province in Pakistan, suffered the most. The flood destroyed most Kharif season crops, typically cultivated from March to November. Using the SMAP satellite data, it is assessed that the high soil moisture after the flood also caused a significant delay in the cultivation of Rabi crops. The findings of this study provide valuable information for decision-makers and stakeholders involved in flood risk management and disaster response.
https://arxiv.org/abs/2410.07126v1
2410.07126
2024-09-21
natural-language-processing
2022 Nobel Prize in Physics and the End of Mechanistic Materialism
The ideas and results behind the 2022 Nobel Prize in physics have had an immense impact on our understanding of reality. Therefore, it is crucial that their implications reach the general public, not only scientists in fields related to quantum mechanics. The purpose of this review is to elucidate these revolutionary changes in our worldview, eventually acknowledged by the Nobel committee, and to do so with very few references to mathematical details (which can be skipped without losing the take-away essence of the text). We first look into the foundational disputes between Einstein and Bohr about the nature of quantum mechanics, which culminated in the so-called EPR paradox, the main impetus for all the research that would ensue in this context. Next, we explain the statement of Bell's famous theorem, the theorem that relocated the Einstein-Bohr discussions from the realm of philosophy and metaphysics to hard-core physics verifiable by experiments (we also give a brief derivation of the theorem's proof). Then we overview the experimental work of last year's laureates, which had the final say about who was right in the debate. The outcome of these experiments forced us to profoundly revise our understanding of the universe. Finally, we discuss in more detail the implications of these outcomes, and the possible ways our worldviews can be modified to account for the experimental facts. As we will see, the standard mechanistic picture of the universe is no longer a viable option, and never can be again. We now know this with a certainty unusual for physics, one that only a strict mathematical theorem could provide.
https://arxiv.org/abs/2308.12297v2
2308.12297
2023-08-11
natural-language-processing
2022 report from the Auger-TA working group on UHECR arrival directions
After over 60 years, the powerful engines that accelerate ultra-high-energy cosmic rays (UHECRs) to the formidable energies at which we observe them from Earth remain mysterious. Assuming standard physics, we expect UHECR sources to lie within the local Universe (up to a few hundred Mpc). The distribution of matter in the local Universe is anisotropic, and we expect this anisotropy to be imprinted on the distribution of UHECR arrival directions. Even though intervening intergalactic and Galactic magnetic fields deflect charged UHECRs and can distort these anisotropies, some amount of information on the distribution of the sources is preserved. In this proceedings contribution, we present the results of the joint Pierre Auger Observatory and Telescope Array searches for (a) the largest-scale anisotropies (the harmonic dipole and quadrupole) and (b) correlations with a sample of nearby starburst galaxies and the 2MRS catalogue tracing stellar mass within 250 Mpc. This analysis updates our previous results with the most recent available data, notably with the addition of 3 years of new Telescope Array data. The main finding is a correlation between the arrival directions of $12.1\%_{-3.1\%}^{+4.5\%}$ of UHECRs detected with $E \geq 38$ EeV by Auger or with $E \gtrsim 49$ EeV by TA and the positions of nearby starburst galaxies on a $15.1^{+4.6}_{-3.0}$ deg angular scale, with a $4.7\sigma$ post-trial significance, up from the $4.2\sigma$ obtained in our previous study.
https://arxiv.org/abs/2302.04502v1
2302.04502
2023-02-09
natural-language-processing
2022 Review of Data-Driven Plasma Science
Data science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS). Large amounts of data and machine learning algorithms go hand in hand. Most plasma data, whether experimental, observational or computational, are generated or collected by machines today. It is now becoming impractical for humans to analyze all the data manually. Therefore, it is imperative to train machines to analyze and interpret (eventually) such data as intelligently as humans but far more efficiently in quantity. Despite the recent impressive progress in applications of data science to plasma science and technology, the emerging field of DDPS is still in its infancy. Fueled by some of the most challenging problems such as fusion energy, plasma processing of materials, and fundamental understanding of the universe through observable plasma phenomena, it is expected that DDPS will continue to benefit significantly from the interdisciplinary marriage between plasma science and data science into the foreseeable future.
https://arxiv.org/abs/2205.15832v1
2205.15832
2022-05-31
natural-language-processing
2022 Roadmap for Materials for Quantum Technologies
Quantum technologies are poised to move the foundational principles of quantum physics to the forefront of applications. This roadmap identifies some of the key challenges and provides insights on materials innovations underlying a range of exciting quantum technology frontiers. Over the past decades, hardware platforms enabling different quantum technologies have reached varying levels of maturity. This has allowed for first proof-of-principle demonstrations of quantum supremacy, for example quantum computers surpassing their classical counterparts, quantum communication with reliable security guaranteed by laws of quantum mechanics, and quantum sensors uniting the advantages of high sensitivity, high spatial resolution, and small footprints. In all cases, however, advancing these technologies to the next level of applications in relevant environments requires further development and innovations in the underlying materials. From a wealth of hardware platforms, we select representative and promising material systems in currently investigated quantum technologies. These include both the inherent quantum bit systems as well as materials playing supportive or enabling roles, and cover trapped ions, neutral atom arrays, rare earth ion systems, donors in silicon, color centers and defects in wide-band gap materials, two-dimensional materials and superconducting materials for single-photon detectors. Advancing these materials frontiers will require innovations from a diverse community of scientific expertise, and hence this roadmap will be of interest to a broad spectrum of disciplines.
https://arxiv.org/abs/2202.07309v1
2202.07309
2022-02-15
natural-language-processing
2022 Roadmap on Neuromorphic Computing and Engineering
Modern computation based on the von Neumann architecture is today a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with $10^{18}$ calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this Roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The Roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges. We hope that this Roadmap will be a useful resource to readers outside this field, for those who are just entering the field, and for those who are well established in the neuromorphic community. https://doi.org/10.1088/2634-4386/ac4a83
https://arxiv.org/abs/2105.05956v3
2105.05956
2021-05-12
natural-language-processing
2022 Update of the discoveries of nuclides
The 2022 update of the discovery of nuclide project is presented. It is the first update in four years, and 36 new nuclides were observed for the first time during 2019-2022. Isotopes that have so far only been published in conference proceedings or internal reports are also listed.
https://arxiv.org/abs/2303.01958v1
2303.01958
2023-03-03
natural-language-processing
2022 Update on $\varepsilon_K$ with lattice QCD inputs
We present recent updates for $\varepsilon_K$ determined directly from the standard model (SM) with lattice QCD inputs such as $\hat{B}_K$, $|V_{cb}|$, $|V_{us}|$, $\xi_0$, $\xi_2$, $\xi_\text{LD}$, $f_K$, and $m_c$. We find that the standard model with exclusive $|V_{cb}|$ and other lattice QCD inputs describes only 65% of the experimental value of $|\varepsilon_K|$ and does not explain its remaining 35%, which leads to a strong tension in $|\varepsilon_K|$ at the $5.1\sigma \sim 3.9\sigma$ level between the SM theory and experiment. We also find that this tension disappears when we use the inclusive value of $|V_{cb}|$ obtained using the heavy quark expansion based on the QCD sum rule approach, although a small tension ($\approx 1.4\sigma$) remains in the inclusive case and keeps increasing as time goes on.
https://arxiv.org/abs/2301.12375v2
2301.12375
2023-01-29
natural-language-processing
2022 Upgrade and Improved Low Frequency Camera Sensitivity for CMB Observation at the South Pole
Constraining the Galactic foregrounds with multi-frequency Cosmic Microwave Background (CMB) observations is an essential step towards ultimately reaching the sensitivity to measure primordial gravitational waves (PGWs), the signature of inflation after the Big Bang that would be imprinted on the CMB. The BICEP Array telescope is a set of multi-frequency cameras designed to constrain the energy scale of inflation through CMB B-mode searches while also controlling the polarized Galactic foregrounds. The lowest-frequency BICEP Array receiver (BA1) has been observing from the South Pole since 2020 and provides 30 GHz and 40 GHz data to characterize the Galactic synchrotron in our CMB maps. In this paper, we present the design of the BA1 detectors and the full optical characterization of the camera, including the on-sky performance at the South Pole. The paper also introduces the design challenges during the first observing season, including the effect of out-of-band photons on detector performance. It also describes the tests done to diagnose that effect and the new upgrade to minimize these photons, as well as the installation of more dichroic detectors during the 2022 deployment season to improve the BA1 sensitivity. We finally report background noise measurements of the detectors, with the goal of having photon-noise-dominated detectors in both optical channels. BA1 achieves an improvement in mapping speed compared to the previous deployment season.
https://arxiv.org/abs/2208.01080v1
2208.01080
2022-08-01
natural-language-processing
2023 Astrophotonics Roadmap: pathways to realizing multi-functional integrated astrophotonic instruments
Photonics offer numerous functionalities that can be used to realize astrophotonic instruments. The most spectacular example to date is the ESO Gravity instrument at the Very Large Telescope in Chile. Integrated astrophotonic devices stand to offer critical advantages for instrument development, including extreme miniaturization, as well as integration, superior thermal and mechanical stabilization owing to the small footprint, and high replicability offering cost savings. Numerous astrophotonic technologies have been developed to address shortcomings of conventional instruments to date, including for example the development of photonic lanterns, complex aperiodic fiber Bragg gratings, complex beam combiners to enable long baseline interferometry, and laser frequency combs for high precision spectral calibration of spectrometers. Despite these successes, the facility implementation of photonic solutions in astronomical instrumentation is currently limited because of (1) low throughputs from coupling to fibers, coupling fibers to chips, propagation and bend losses, device losses, etc, (2) difficulties with scaling to large channel count devices needed for large bandwidths and high resolutions, and (3) efficient integration of photonics with detectors, to name a few. In this roadmap, we identify 24 areas that need further development. We outline the challenges and advances needed across those areas covering design tools, simulation capabilities, fabrication processes, the need for entirely new components, integration and hybridization and the characterization of devices. To realize these advances the astrophotonics community will have to work cooperatively with industrial partners who have more advanced manufacturing capabilities. With the advances described herein, multi-functional instruments will be realized leading to novel observing capabilities for both ground and space platforms.
https://arxiv.org/abs/2311.00615v1
2311.00615
2023-11-01
natural-language-processing
2023 Low-Power Computer Vision Challenge (LPCVC) Summary
This article describes the 2023 IEEE Low-Power Computer Vision Challenge (LPCVC). Since 2015, LPCVC has been an international competition devoted to tackling the challenge of computer vision (CV) on edge devices. Most CV researchers focus on improving accuracy, at the expense of ever-growing sizes of machine models. LPCVC balances accuracy with resource requirements. Winners must achieve high accuracy with short execution time when their CV solutions run on an embedded device, such as a Raspberry Pi or Nvidia Jetson Nano. The vision problem for 2023 LPCVC is segmentation of images acquired by Unmanned Aerial Vehicles (UAVs, also called drones) after disasters. The 2023 LPCVC attracted 60 international teams that submitted 676 solutions during the submission window of one month. This article explains the setup of the competition and highlights the winners' methods that improve accuracy and shorten execution time.
https://arxiv.org/abs/2403.07153v1
2403.07153
2024-03-11
natural-language-processing
2023 Update of the Discovery of Nuclides
The 2023 update of the discovery of nuclides project is presented, covering a year in which thirteen nuclides were observed for the first time. In addition, a major update and revision of the isotope discovery project is described.
https://arxiv.org/abs/2403.17750v1
2403.17750
2024-03-26
natural-language-processing
2023 update of the extraction of the CKM matrix elements
I discuss the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements under the Standard Model (SM) framework from a global fit combining observables that satisfy the double requirement of being precisely known both experimentally and theoretically. The analysis shown here relies on the CKMfitter package, consisting of a frequentist approach that employs the Range fit (Rfit) scheme to handle theoretical uncertainties.
https://arxiv.org/abs/2405.08046v1
2405.08046
2024-05-13
natural-language-processing
2023 Update of $\varepsilon_K$ with lattice QCD inputs
We report recent progress on $\varepsilon_K$ evaluated directly from the standard model (SM) with lattice QCD inputs such as $\hat{B}_K$, $|V_{cb}|$, $|V_{us}|$, $|V_{ud}|$, $\xi_0$, $\xi_2$, $\xi_\text{LD}$, $f_K$, and $m_c$. We find that the standard model with exclusive $|V_{cb}|$ and lattice QCD inputs describes only 66\% of the experimental value of $|\varepsilon_K|$ and does not explain its remaining 34\%, which corresponds to a strong tension in $|\varepsilon_K|$ at the $4.9\sigma \sim 3.9\sigma$ level between the SM theory and experiment. We also find that this tension disappears when we use the inclusive value of $|V_{cb}|$ obtained using the heavy quark expansion based on the QCD sum rule approach.
https://arxiv.org/abs/2312.02986v2
2312.02986
2023-11-20
natural-language-processing
2024 California Community Earth Models for Seismic Hazard Assessments Workshop Report
The California Community Earth Models for Seismic Hazard Assessments Workshop (https://www.scec.org/workshops/2024/california-community-models, accessed December 16, 2024) was held online on March 4-5, 2024, with more than 200 participants over two days. In this report, we provide a summary of the key points from the presentations and discussions. We highlight three use cases that drive the development of community Earth models, present an inventory of existing community Earth models in California, summarize a few techniques for integrating and merging models, discuss potential connections with the Cascadia Region Earthquake Science Center (CRESCENT), and discuss what "community" means in community Earth models. Appendix B contains the workshop agenda and Appendix C contains a list of participants.
https://arxiv.org/abs/2503.11545v1
2503.11545
2025-03-14
natural-language-processing
2024 Google Scholar Research Interest Ranking for Top 3260 Computer Science Authors
Computer science research spans a diverse array of topics, with scholars exploring numerous subfields. This paper examines the self-reported research interests of the top 3,260 most cited computer science authors on Google Scholar. Using the scholarly Python library, we systematically retrieved and classified their interests into predefined categories based on the Computer Science Ontology (CSO). The analysis highlights a hierarchy of primary research areas, including Artificial Intelligence, Software Engineering, Data Mining, and Computer Systems. Additionally, it investigates the distribution of these interests, identifying emerging trends, established fields, and areas with relatively less attention. These findings provide a current snapshot of research priorities and serve as a foundation for guiding future studies in computer science.
https://arxiv.org/abs/2503.13451v1
2503.13451
2024-12-31
natural-language-processing
2024 'Key Reflections' on the 1824 Sadi Carnot's 'Reflexions' and 200 Year Legacy
This author is neither a philosopher nor a historian of science, but an engineering thermodynamicist. In that regard, and in addition to various philosophical "why & how" treatises and existing historical analyses, the physical and logical "what it is" reflections are presented as sequential Key Points, where each key step of Sadi Carnot's reasoning infers the next one, along with novel contributions and original generalizations. We need to keep in mind that in Sadi Carnot's time (early 1800s) steam engines were inefficient (below 5%, so the heat in and out were comparable within experimental uncertainty, as if caloric were conserved), the conservation of caloric flourished (perhaps a fortunate misconception, leading to the critical analogy with the waterwheel), and many critical thermal concepts, including the conservation of energy (The First Law), were not yet established. Since Clausius and Kelvin earned the title of "Fathers of thermodynamics," Sadi Carnot was 'the ingenious' "Forefather of thermodynamics-to-become".
https://arxiv.org/abs/2501.15787v1
2501.15787
2025-01-27
natural-language-processing
2024 roadmap on 2D topological insulators
2D topological insulators promise novel approaches towards electronic, spintronic, and quantum device applications. This is owing to unique features of their electronic band structure, in which bulk-boundary correspondence enforces the existence of 1D spin-momentum-locked metallic edge states - both helical and chiral - surrounding an electrically insulating bulk. Forty years since the first discoveries of topological phases in condensed matter, the abstract concept of band topology has sprung into realization with several materials now available in which sizable bulk energy gaps - up to a few hundred meV - promise to enable topology for applications even at room temperature. Further, the possibility of combining 2D TIs in heterostructures with functional materials such as multiferroics, ferromagnets, and superconductors vastly extends the range of applicability beyond their intrinsic properties. While 2D TIs remain a unique testbed for questions of fundamental condensed matter physics, proposals seek to control the topologically protected bulk or boundary states electrically, or even induce topological phase transitions to engender switching functionality. Induction of superconducting pairing in 2D TIs strives to realize non-Abelian quasiparticles, promising avenues towards fault-tolerant topological quantum computing. This roadmap aims to present a status update of the field, reviewing recent advances and remaining challenges in theoretical understanding, materials synthesis, physical characterization and, ultimately, device perspectives.
https://arxiv.org/abs/2406.14209v1
2406.14209
2024-06-20
natural-language-processing
2024 Roadmap on Magnetic Microscopy Techniques and Their Applications in Materials Science
Considering the growing interest in magnetic materials for unconventional computing, data storage, and sensor applications, there is active research not only on material synthesis but also characterisation of their properties. In addition to structural and integral magnetic characterisations, imaging of magnetization patterns, current distributions and magnetic fields at nano- and microscale is of major importance to understand the material responses and qualify them for specific applications. In this roadmap, we aim to cover a broad portfolio of techniques to perform nano- and microscale magnetic imaging using SQUIDs, spin center and Hall effect magnetometries, scanning probe microscopies, x-ray- and electron-based methods as well as magnetooptics and nanoMRI. The roadmap is aimed as a single access point of information for experts in the field as well as the young generation of students outlining prospects of the development of magnetic imaging technologies for the upcoming decade with a focus on physics, materials science, and chemistry of planar, 3D and geometrically curved objects of different material classes including 2D materials, complex oxides, semi-metals, multiferroics, skyrmions, antiferromagnets, frustrated magnets, magnetic molecules/nanoparticles, ionic conductors, superconductors, spintronic and spinorbitronic materials.
https://arxiv.org/abs/2401.04793v1
2401.04793
2024-01-09
natural-language-processing
2024 Update on $\varepsilon_K$ with lattice QCD inputs
We report recent progress on $\varepsilon_K$ evaluated directly from the standard model (SM) with lattice QCD inputs such as $\hat{B}_K$, exclusive $|V_{cb}|$, $|V_{us}|$, $|V_{ud}|$, $\xi_0$, $\xi_2$, $\xi_\text{LD}$, $f_K$, and $m_c$. We find that the standard model with exclusive $|V_{cb}|$ and lattice QCD inputs describes only $2/3 \cong 65\%$ of the experimental value of $|\varepsilon_K|$ and does not explain its remaining 35\%, which represents a strong tension in $|\varepsilon_K|$ at the $5.1\sigma \sim 4.1\sigma$ level between the SM theory and experiment. We also find that this tension disappears when we use the inclusive value of $|V_{cb}|$ obtained using the heavy quark expansion based on the QCD sum rule approach. We also report results for $|\varepsilon_K|$ obtained using the Brod-Gorbahn-Stamou (BGS) method for $\eta_i$ of $u-t$ unitarity, which leads to an even stronger tension of $5.7\sigma \sim 4.2\sigma$ with lattice QCD inputs.
https://arxiv.org/abs/2501.00215v2
2501.00215
2024-12-31
natural-language-processing
2025 Santorini-Amorgos crisis triggered by a transition from volcanic to regular tectonic activity
Fluid movement beneath volcanic regions can influence earthquake activity, but the processes linking seismic and volcanic systems are not fully understood. In early 2025, an unusual seismic sequence occurred close to Santorini, providing new insight into these interactions. Here we show that the sequence was likely initiated by the accumulation and migration of fluids beneath the volcanic complex. Seismic and ground deformation data reveal a progression from deep fluid buildup and microfracturing to the concentration of shallow earthquakes beneath Columbo volcano. This culminated in a four-day seismic episode that behaved like a single, slow-propagating rupture along a 16-kilometer fault, releasing energy equivalent to a magnitude 6.2 earthquake. The rupture was followed by a typical aftershock sequence. These observations suggest that fluid-driven processes can generate large earthquakes and redistribute stress in ways similar to tectonic mainshocks. This challenges conventional views on how seismic and volcanic hazards are connected and assessed.
https://arxiv.org/abs/2504.21371v1
2504.21371
2025-04-30
natural-language-processing
2025 TGRS A Self-Supervised Method for Seismic Random Noise Attenuation under Non-Pixelwise Independent Assumption
The attenuation of seismic field noise using self-supervised deep learning has gained attention due to its label-free training process. However, common self-supervised methods are limited by the pixelwise independence assumption, which does not align with field seismic noise characteristics, and suffer from signal leakage due to receptive fields containing inherent blind spots or traces. In this paper, we propose a self-supervised random noise attenuation method based on the non-pixelwise independence assumption. By considering the spatial correlation map of field noise, we extend the blind spot to a generalized blind neighborhood, ensuring that the prediction pixel is not influenced by neighboring pixels with noise correlation greater than zero. The blind neighborhood size controls how much spatial correlation is disrupted, allowing our method to handle random noise with varying spatial correlation. Since larger blind neighborhoods may lead to signal loss, we introduce an automatic trade-off between noise correlation disruption and signal preservation during training. Experiments on real seismic noise attenuation (including random and tracewise coherent noise) demonstrate the superiority of our method in destroying the spatial coherence of noise and preventing useful signal leakage.
https://ieeexplore.ieee.org/document/11007646
null
2025-05-11
natural-language-processing
2025 update on $\varepsilon_K$ in the Standard Model with lattice QCD inputs
We present theoretical results for the indirect CP violation parameter $|\varepsilon_K|$, calculated directly from the standard model using lattice QCD inputs such as $\hat{B}_K$, $|V_{cb}|$, $|V_{us}|$, $|V_{ud}|$, $\xi_0$, $\xi_2$, $F_K$, and $m_c$ (charm quark mass). We find a strong tension in $|\varepsilon_K|$ at the $\approx 5\sigma$ ($5.2\sigma \sim 4.6\sigma$) level between the experimental value and the theoretical value calculated directly from the standard model using lattice QCD inputs. The standard model with lattice QCD inputs describes only 65\% of the experimental value of $|\varepsilon_K|$, and does not explain its remaining 35\%. We also find that this tension disappears when we use inclusive $|V_{cb}|$, which comes from the heavy quark expansion and QCD sum rules. This tension is highly correlated with the discrepancy between exclusive $|V_{cb}|$ and inclusive $|V_{cb}|$. We also present results for $|\varepsilon_K|$ obtained using the Brod-Gorbahn-Stamou (BGS) method of $u-t$ unitarity, which leads to an even stronger tension at the $5.5\sigma \sim 4.9\sigma$ level with lattice QCD inputs.
https://arxiv.org/abs/2503.00351v3
2503.00351
2025-03-01
natural-language-processing
2026 ESPPU input from the ANUBIS Collaboration
It is imperative for us as a particle physics community to fully exploit the physics potential of the High-Luminosity LHC. This calls for us not to leave any stone unturned in the search for Beyond the Standard Model (BSM) physics. Many BSM models that address fundamental questions of physics, such as the particulate nature of dark matter, the matter-antimatter asymmetry in the Universe, and small but non-zero neutrino masses, predict Long-Lived Particles (LLPs) with macroscopic lifetimes of $\tau>10^{-10}$ s. The challenge in searching for BSM models with LLP signatures at the HL-LHC is that it requires the complementary interplay of general purpose detectors like ATLAS, CMS, and LHCb; dedicated detectors situated close to the beamline, including the proposed Forward Physics Facility (FPF); and dedicated detectors covering a large decay volume at a reasonable solid angle transverse to the beamline, i.e., a Transverse Physics Facility (TPF). Hence, it is of vital importance to realise a TPF in order to dramatically expand the physics coverage of long-lived particle searches and fully harvest the physics of the HL-LHC. A TPF may be composed of several experiments based at the HL-LHC. In this document, we propose that the community realise the ANUBIS experiment as part of a TPF.
https://arxiv.org/abs/2504.03195v1
2504.03195
2025-04-04
natural-language-processing
2048 is (PSPACE) Hard, but Sometimes Easy
We prove that a variant of 2048, a popular online puzzle game, is PSPACE-Complete. Our hardness result holds for a version of the problem where the player has oracle access to the computer player's moves. Specifically, we show that for an $n \times n$ game board $\mathcal{G}$, computing a sequence of moves to reach a particular configuration $\mathbb{C}$ from an initial configuration $\mathbb{C}_0$ is PSPACE-Complete. Our reduction is from Nondeterministic Constraint Logic (NCL). We also show that determining whether or not there exists a fixed sequence of moves $\mathcal{S} \in \{\Uparrow, \Downarrow, \Leftarrow, \Rightarrow\}^k$ of length $k$ that results in a winning configuration for an $n \times n$ game board is fixed-parameter tractable (FPT). We describe an algorithm to solve this problem in $O(4^k n^2)$ time.
https://arxiv.org/abs/1408.6315v1
1408.6315
2014-08-27
natural-language-processing
2060: Civilization, Energy, and Progression of Mankind on the Kardashev Scale
Energy has been propelling the development of human civilization for millennia, and technologies acquiring energy beyond human and animal power have been continuously advanced and transformed. In 1964, the Kardashev Scale was proposed to quantify the relationship between energy consumption and the development of civilizations. Human civilization presently stands at Type 0.7276 on this scale. Projecting the future energy consumption, estimating the change of its constituting structure, and evaluating the influence of possible technological revolutions are critical in the context of civilization development. In this study, we use two machine learning models, random forest (RF) and autoregressive integrated moving average (ARIMA), to simulate and predict energy consumption on a global scale. We further project the position of human civilization on the Kardashev Scale in 2060. The result shows that the global energy consumption is expected to reach 928-940 EJ in 2060, with a total growth of over 50% in the coming 40 years, and our civilization is expected to achieve Type 0.7474 on the Kardashev Scale, still far away from a Type 1 civilization. Additionally, we discuss the potential energy segmentation change before 2060 and present the influence of the advent of nuclear fusion in this context.
https://arxiv.org/abs/2208.12617v1
2208.12617
2022-08-10
natural-language-processing
2064 global population crisis scenario predicted by the most general dynamic model
There is currently no consensus on how the global population will evolve in the next decades and in the next century. The reason for this uncertainty is the absence of reliable population dynamic models. In this paper, we remedy this situation by reporting on a population dynamic model, a single nonlinear differential equation adapted from the physics of disordered systems, which is able to mathematically describe all the various regimes encountered in the global population recorded as a function of time, over the past 12000 years until now. Regimes of simple exponential growth (Malthus), logistic (Verhulst) plateaus, as well as stretched-exponential and compressed-exponential growth regimes, are all reliably described by this mathematical equation in its various limits. Besides showing that this is, indeed, the most general population dynamic model, we use it to explore its solutions projected into the future. In particular, two different scenarios are predicted. In one of them, which assumes that the future evolution would continue along a similar pattern as the past decades (hence without any major global ecological crisis affecting resource exploitation), a von Foerster-type doomsday scenario with a sudden rise of the global population to unsustainable levels could appear as early as 2078. In the opposite scenario, if a global ecological crisis were to set in today, affecting the ability to exploit resources, given the current estimates of the Earth's carrying capacity, the global population is forecast to halve by 2064. Furthermore, the proposed dynamic model provides a new aggregated parameter (K, in the model) that can be monitored and controlled so as to avoid the doomsday scenarios.
https://arxiv.org/abs/2502.19063v2
2502.19063
2025-02-26
natural-language-processing
20736-node Weighted Max-Cut Problem Solving by Quadrature Photonic Spatial Ising Machine
To tackle challenging combinatorial optimization problems, analog computing machines based on the nature-inspired Ising model are attracting increasing attention as a way to disruptively overcome the impending limitations of conventional electronic computers. The photonic spatial Ising machine has become a unique and primitive solution with all-to-all connections to solve large-scale Max-cut problems. However, spin configuration and flipping require two independent sets of spatial light modulators (SLMs) for amplitude and phase modulation, which leads to tremendous engineering difficulty in optical alignment and coupling. We report a novel quadrature photonic spatial-Euler Ising machine to realize large-scale and flexible spin-interaction configuration and spin-flip in a single spatial light modulator, and develop a noise enhancement approach by adding digital white noise onto the detected optical signals. We experimentally show that this proposal accelerates solving (un)weighted, (non)fully connected, 20736-node Max-cut problems, offering obvious advantages over simulation and heuristic algorithm results on digital computers.
https://arxiv.org/abs/2301.04651v2
2301.04651
2023-01-11
natural-language-processing
207 New Open Star Clusters within 1 kpc from Gaia Data Release 2
We conducted a survey of open clusters within 1 kpc from the Sun using the astrometric and photometric data of the Gaia Data Release 2. We found 655 cluster candidates by visual inspection of the stellar distributions in proper motion space and spatial distributions in l-b space. All of the 655 cluster candidates have a well-defined main sequence except for two candidates, if we consider that the main sequence of very young clusters is somewhat broad due to differential extinction. Cross-matching of our 653 open clusters with known open clusters in various catalogs resulted in 207 new open clusters. We present the physical properties of the newly discovered open clusters. The majority of the newly discovered open clusters are of young to intermediate age and have fewer than ~50 member stars.
http://arxiv.org/abs/1907.06872v3
1907.06872
2019-10-15
natural-language-processing
(208) Lacrimosa: A case that missed the Slivan state?
The largest asteroids in the Koronis family (sizes $\geq 25$ km) have very peculiar rotation state properties, with the retrograde- and prograde-rotating objects being distinctly different. A recent re-analysis of observations suggests that one of the asteroids formerly thought to be retrograde-rotating, 208~Lacrimosa, in reality exhibits prograde rotation, yet other properties of this object are discrepant with other members of this group. We seek to understand whether the new spin solution of Lacrimosa invalidates the previously proposed model of the large Koronis members or simply reveals more possibilities for the long-term evolutionary paths, including some that have not yet been explored. We confirm and substantiate the previously suggested prograde rotation of Lacrimosa. Its spin vector has an ecliptic longitude and latitude of $(\lambda,\beta)=(15^\circ \pm 2^\circ, 67^\circ\pm 2^\circ)$ and a sidereal rotation period $P=14.085734\pm 0.000007$ hr. The thermal and occultation data allow us to calibrate a volume-equivalent size of $D=44\pm 2$ km for Lacrimosa. The observations also constrain the shape model relatively well. Assuming uniform density, the dynamical ellipticity is $\Delta=0.35\pm 0.05$. Unlike other large prograde-rotating Koronis members, Lacrimosa's spin is not captured in the Slivan state. We propose that Lacrimosa differed from this group in that it initially had a slightly larger obliquity and a longer rotation period. With those parameters, it jumped over the Slivan state instead of being captured, and slowly evolved into the present spin configuration. In the future, it is likely to be captured in the Slivan state corresponding to the proper (instead of forced) mode of the orbital plane precession in inertial space.
https://arxiv.org/abs/2103.12480v1
2103.12480
2021-03-23
natural-language-processing
$^{208}$Pb nuclear charge radius revisited: closing the fine-structure-anomaly gap
A comprehensive reevaluation of the root-mean-square nuclear charge radius is presented for the doubly magic $^{208}$Pb extracted from muonic spectroscopy measurements. By integrating rigorous theoretical quantum electrodynamics calculations, state-of-the-art numerical methods, and a systematic reanalysis of the uncertainties, we reduced the long-standing muonic fine-structure anomaly and improved the goodness of fit by a factor of twenty. The resulting value of 5.5062(5) fm is fairly consistent with the previously reported muonic spectroscopy value, and three standard deviations larger than the commonly used compilation data, which indicates that the current value and its uncertainty could be significantly underestimated. Our study paves a new path for systematic reevaluation of all rms radii based on muonic spectroscopy.
https://arxiv.org/abs/2504.19977v1
2504.19977
2025-04-28
natural-language-processing
$20'$ Five-Point Function from $AdS_5\times S^5$ Supergravity
We develop new techniques to compute five-point correlation functions from IIB supergravity on $AdS_5\times S^5$. Our methods rely entirely on symmetry and general consistency conditions, and eschew detailed knowledge of the supergravity effective action. We demonstrate our methods by computing the five-point function of the $\mathbf{20'}$ operator, which is the superconformal primary of the stress tensor multiplet. We also develop systematic methods to compute the five-point conformal blocks in series expansions. Using the explicit expressions of the conformal blocks, we perform a Euclidean OPE analysis of the $\mathbf{20'}$ five-point function. We find expected agreement with non-renormalized quantities and also extract new CFT data at strong coupling.
http://arxiv.org/abs/1906.05305v2
1906.05305
2019-10-21
natural-language-processing
20-fold Accelerated 7T fMRI Using Referenceless Self-Supervised Deep Learning Reconstruction
High spatial and temporal resolution across the whole brain is essential to accurately resolve neural activities in fMRI. Therefore, accelerated imaging techniques target improved coverage with high spatio-temporal resolution. Simultaneous multi-slice (SMS) imaging combined with in-plane acceleration is used in large studies that involve ultrahigh field fMRI, such as the Human Connectome Project. However, for even higher acceleration rates, these methods cannot be reliably utilized due to aliasing and noise artifacts. Deep learning (DL) reconstruction techniques have recently gained substantial interest for improving highly-accelerated MRI. Supervised learning of DL reconstructions generally requires fully-sampled training datasets, which are not available for high-resolution fMRI studies. To tackle this challenge, self-supervised learning has been proposed for training of DL reconstruction with only undersampled datasets, showing similar performance to supervised learning. In this study, we utilize a self-supervised physics-guided DL reconstruction on 5-fold SMS and 4-fold in-plane accelerated 7T fMRI data. Our results show that our self-supervised DL reconstruction produces high-quality images at this 20-fold acceleration, substantially improving on existing methods, while showing similar functional precision and temporal effects in the subsequent analysis compared to a standard 10-fold accelerated acquisition.
https://arxiv.org/abs/2105.05827v1
2105.05827
2021-05-12
natural-language-processing
20 GHz fiber-integrated femtosecond pulse and supercontinuum generation with a resonant electro-optic frequency comb
Frequency combs with mode spacing in the range of 10 to 20 gigahertz (GHz) are critical for increasingly important applications such as astronomical spectrograph calibration, high-speed dual-comb spectroscopy, and low-noise microwave generation. While electro-optic modulators and microresonators can provide narrowband comb sources at this repetition rate, a significant remaining challenge is a means to produce pulses with sufficient peak power to initiate nonlinear supercontinuum generation spanning hundreds of terahertz (THz) as required for self-referencing in these applications. Here, we provide a simple, robust, and universal solution to this problem using off-the-shelf polarization-maintaining (PM) amplification and nonlinear fiber components. This fiber-integrated approach for nonlinear temporal compression and supercontinuum generation is demonstrated with a resonant electro-optic frequency comb at 1550 nm. We show how to readily achieve pulses shorter than 60 fs at a repetition rate of 20 GHz and with peak powers in excess of 2 kW. The same technique can be applied to picosecond pulses at 10 GHz to demonstrate temporal compression by a factor of 9x yielding 50 fs pulses with peak power of 5.5 kW. These compressed pulses enable flat supercontinuum generation spanning more than 600 nm after propagation through multi-segment dispersion-tailored anomalous-dispersion highly nonlinear fiber (HNLF) or tantala waveguides. The same 10 GHz source can readily achieve an octave-spanning spectrum for self-referencing in dispersion-engineered silicon nitride waveguides. This simple all-fiber approach to nonlinear spectral broadening fills a critical gap for transforming any narrowband 10 to 20 GHz frequency comb into a broadband spectrum for a wide range of applications that benefit from the high pulse rate and require access to the individual comb modes.
https://arxiv.org/abs/2303.11523v1
2303.11523
2023-03-21
natural-language-processing
20 GHZ Low Noise LLRF System
A 20 GHz LLRF system is being built using a two-board (RF Front End + ADC/DAC/FPGA) architecture. The RF Front End provides 8 down-converting channels and 3 up-converting channels (5.5-20 GHz RF to 0.05-3 GHz IF). Separate, phase-locked, low-noise input and output LOs are generated on-board with an independent programmable frequency range of 4-20 GHz. A user input is provided so that both LOs as well as all ADC, DAC, and FPGA clocks can be locked to a supplied reference source with a frequency range from 100 MHz to 20 GHz. The IF is processed with a commercial board (HiTech Global ZRF8) based on the Xilinx ZYNQ RFSoC FPGA. The RFSoC FPGA incorporates eight 4-GSPS 12-bit ADCs with a 4 GHz analog bandwidth and eight 6.4-GSPS 14-bit DACs. The ZRF8 is a PCIe-standard board that provides low-noise ADC/DAC/FPGA clocking, 16 GB of memory, an FMC+ socket, and a 1 Gbps Ethernet port. The complete system will be housed in a standard 2U 19" rack.
http://arxiv.org/abs/1910.11936v1
1910.11936
2019-10-23
natural-language-processing
20 K superconductivity in heavily electron doped surface layer of FeSe bulk crystal
A superconducting transition temperature Tc as high as 100 K was recently discovered in 1 monolayer (1ML) FeSe grown on SrTiO3 (STO). The discovery immediately ignited efforts to identify the mechanism for the dramatically enhanced Tc from its bulk value of 7 K. Currently, there are two main views on the origin of the enhanced Tc; in the first view, the enhancement comes from an interfacial effect, while in the other it is from excess electrons with strong correlation strength. The issue is controversial, and there is evidence that supports each view. Finding the origin of the Tc enhancement could be the key to achieving even higher Tc and to identifying the microscopic mechanism for the superconductivity in iron-based materials. Here, we report the observation of 20 K superconductivity in the electron-doped surface layer of FeSe. The electronic state of the surface layer possesses all the key spectroscopic aspects of the 1ML FeSe on STO. Without any interface effect, the surface layer state is found to have a moderate Tc of 20 K with a smaller gap opening of 4 meV. Our results clearly show that excess electrons with strong correlation strength alone cannot induce the maximum Tc, which in turn strongly suggests the need for an interfacial effect to reach the enhanced Tc found in 1ML FeSe/STO.
http://arxiv.org/abs/1511.07950v2
1511.07950
2015-12-15
natural-language-processing
20-MAD -- 20 Years of Issues and Commits of Mozilla and Apache Development
Data of long-lived and high-profile projects is valuable for research on successful software engineering in the wild. A dataset that links the different software repositories of such projects enables deeper investigations. This paper presents 20-MAD, a dataset linking the commit and issue data of Mozilla and Apache projects. It includes over 20 years of information about 765 projects, 3.4M commits, 2.3M issues, and 17.3M issue comments, and its compressed size is over 6 GB. The data contains all the typical information about source code commits (e.g., lines added and removed, message, and commit time) and issues (status, severity, votes, and summary). The issue comments have been pre-processed for natural language processing and sentiment analysis; this includes emoticons and valence and arousal scores. Linking code repository and issue tracker information allows studying individuals across the two types of repositories and also provides more accurate time zone information for issue trackers. To our knowledge, this is the largest linked dataset in size and in project lifetime that is not based on GitHub.
http://arxiv.org/abs/2003.14015v1
2003.14015
2020-03-31
natural-language-processing
20min-XD: A Comparable Corpus of Swiss News Articles
We present 20min-XD (20 Minuten cross-lingual document-level), a French-German, document-level comparable corpus of news articles, sourced from the Swiss online news outlet 20 Minuten/20 minutes. Our dataset comprises around 15,000 article pairs spanning 2015 to 2024, automatically aligned based on semantic similarity. We detail the data collection process and alignment methodology. Furthermore, we provide a qualitative and quantitative analysis of the corpus. The resulting dataset exhibits a broad spectrum of cross-lingual similarity, ranging from near-translations to loosely related articles, making it valuable for various NLP applications and broad linguistically motivated studies. We publicly release the dataset in document- and sentence-aligned versions and code for the described experiments.
https://arxiv.org/abs/2504.21677v1
2504.21677
2025-04-30
natural-language-processing
20-Mode Universal Quantum Photonic Processor
Integrated photonics is an essential technology for optical quantum computing. Universal, phase-stable, reconfigurable multimode interferometers (quantum photonic processors) enable manipulation of photonic quantum states and are one of the main components of photonic quantum computers in various architectures. In this paper, we report the realization of the largest quantum photonic processor to date. The processor enables arbitrary unitary transformations on its 20 input modes with an amplitude fidelity of $F_{\text{Haar}} = 97.4\%$ and $F_{\text{Perm}} = 99.5\%$ for Haar-random and permutation matrices, respectively, an optical loss of 2.9 dB averaged over all modes, and high-visibility quantum interference with $V_{\text{HOM}}=98\%$. The processor is realized in $\mathrm{Si_3N_4}$ waveguides and is actively cooled by a Peltier element.
https://arxiv.org/abs/2203.01801v5
2203.01801
2022-03-03
natural-language-processing
20 open questions about deformations of compactifiable manifolds
Deformation theory of complex manifolds is a classical subject with recent new advances in the noncompact case using both algebraic and analytic methods. In this note, we recall some concepts of the existing theory and introduce new notions of deformations for manifolds with boundary, for compactifiable manifolds, and for $q$-concave spaces. We highlight some of the possible applications and give a list of open questions which we intend as a guide for further research in this rich and beautiful subject.
http://arxiv.org/abs/2004.11299v1
2004.11299
2020-04-23
natural-language-processing
20 ps Time Resolution with a Fully-Efficient Monolithic Silicon Pixel Detector without Internal Gain Layer
A second monolithic silicon pixel prototype was produced for the MONOLITH project. The ASIC contains a matrix of hexagonal pixels with 100 {\mu}m pitch, read out by low-noise and very fast SiGe HBT front-end electronics. Wafers with a 50 {\mu}m thick epilayer of 350 {\Omega}cm resistivity were used to produce a fully depleted sensor. Laboratory and testbeam measurements of the analog channels present in the pixel matrix show that the sensor has a 130 V wide bias-voltage operation plateau at which the efficiency is 99.8%. Although this prototype does not include an internal gain layer, the timing-optimised design of the sensor and the front-end electronics provides a time resolution of 20 ps.
https://arxiv.org/abs/2301.12244v1
2301.12244
2023-01-28
natural-language-processing
20 T Dipole Magnet Based on Hybrid HTS/LTS Cos-Theta Coils with Stress Management
This paper presents the design concept of a dipole magnet with a 50 mm aperture, a 20 T nominal field, and a 13% margin, based on a six-layer cos-theta (CT) hybrid coil design. Due to the high stresses and strains in the coil at high field, Stress Management (SM) elements are implemented in the CT coil geometry. The results of the magnet magnetic analysis are presented and discussed. The key parameters of this design are compared with those of similar magnets based on block-type and canted cos-theta coils.
https://arxiv.org/abs/2305.06776v1
2305.06776
2023-05-11
natural-language-processing
(2,0) theory on $S^5 \times S^1$ and quantum M2 branes
The superconformal index $Z$ of the 6d (2,0) theory on $S^5 \times S^1$ (which is related to the localization partition function of 5d SYM on $S^5$) should be captured at large $N$ by the quantum M2 brane theory in the dual M-theory background. Generalizing the type IIA string theory limit of this relation discussed in arXiv:2111.15493 and arXiv:2304.12340, we consider semiclassically quantized M2 branes in a half-supersymmetric 11d background which is a twisted product of thermal AdS$_7$ and $S^4$. We show that the leading non-perturbative term at large $N$ is reproduced precisely by the 1-loop partition function of an "instanton" M2 brane wrapped on $S^1\times S^2$ with $S^2\subset S^4$. Similarly, the (2,0) theory analog of the BPS Wilson loop expectation value is reproduced by the partition function of a "defect" M2 brane wrapped on thermal AdS$_3\subset$ AdS$_7$. We comment on a curious analogy of these results with similar computations in arXiv:2303.15207 and arXiv:2307.14112 of the partition function of quantum M2 branes in AdS$_4 \times S^7/\mathbb Z_k$ which reproduced the corresponding localization expressions in the ABJM 3d gauge theory.
https://arxiv.org/abs/2309.10786v4
2309.10786
2023-09-19
natural-language-processing
20 Years of ACE Data: How Superposed Epoch Analyses Reveal Generic Features in Interplanetary CME Profiles
Interplanetary coronal mass ejections (ICMEs) are magnetic structures propagating from the Sun's corona to the interplanetary medium. With over 20 years of observations at the L1 libration point, ACE offers hundreds of ICMEs detected at different times during several solar cycles and with different features such as the propagation speed. We investigate a revisited catalog of more than 400 ICMEs using the superposed epoch method on the mean, median, and the most probable values of the distribution of magnetic and plasma parameters. We also investigate the effects of the speed of ICMEs relative to the solar wind, the solar cycle, and the existence of a magnetic cloud on the generic ICME profile. We find that fast-propagating ICMEs (relative to the solar wind in front) still show signs of compression at 1 au, as seen by the compressed sheath and the asymmetric profile of the magnetic field. While the solar cycle evolution does not impact the generic features of ICMEs, there are more extreme events during the active part of the cycle, widening the distributions of all parameters. Finally, we find that ICMEs with or without a detected magnetic cloud show similar profiles, which confirms the hypothesis that ICMEs with no detected magnetic clouds are crossed further away from the flux rope core. Such a study provides a generic understanding of processes that shape the overall features of ICMEs in the solar wind and can be extended with future missions at different locations in the solar system.
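The superposed epoch method described in this abstract reduces, at its core, to aligning many event time series on a common epoch grid and summarizing each epoch bin. A minimal sketch, assuming the events have already been resampled onto that grid (`superposed_epoch` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def superposed_epoch(profiles):
    """Superposed epoch analysis: each row of `profiles` is one event's
    parameter time series resampled onto a common epoch grid; stack the
    events and summarize every epoch bin by its mean and median across
    events (the mode/most-probable value is omitted for brevity)."""
    stacked = np.asarray(profiles, dtype=float)
    return stacked.mean(axis=0), np.median(stacked, axis=0)

# Two toy "events": features common to both survive, opposing trends cancel.
mean_prof, med_prof = superposed_epoch([[1, 2, 3], [3, 2, 1]])
print(mean_prof)  # [2. 2. 2.]
```

The cancellation in the toy example is the point of the technique: event-specific fluctuations average out, while features generic to the event class remain in the superposed profile.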
http://arxiv.org/abs/2011.05050v1
2011.05050
2020-11-10
natural-language-processing
20 Years of DDoS: a Call to Action
Botnet Distributed Denial of Service (DDoS) attacks are now 20 years old; what has changed in that time? Their disruptive presence, their volume, distribution across the globe, and the relative ease of launching them have all been trending in favor of attackers. Our increases in network capacity and our architectural design principles are making our online world richer, but are favoring attackers at least as much as Internet services. The DDoS mitigation techniques have been evolving but they are losing ground to the increasing sophistication and diversification of the attacks that have moved from the network to the application level, and we are operationally falling behind attackers. It is time to ask fundamental questions: are there core design issues in our network architecture that fundamentally enable DDoS attacks? How can our network infrastructure be enhanced to address the principles that enable the DDoS problem? How can we incentivize the development and deployment of the necessary changes? In this article, we want to sound an alarm and issue a call to action to the research community. We propose that basic research and principled analyses are badly needed, because the status quo does not paint a pretty picture for the future.
http://arxiv.org/abs/1904.02739v2
1904.02739
2019-04-21
natural-language-processing
20 years of developments in optical frequency comb technology and applications
Optical frequency combs were developed nearly two decades ago to support the world's most precise atomic clocks. Acting as precision optical synthesizers, frequency combs enable the precise transfer of phase and frequency information from a high-stability reference to hundreds of thousands of tones in the optical domain. This versatility, coupled with near-continuous spectroscopic coverage from the terahertz to the extreme ultra-violet, has enabled precision measurement capabilities in both fundamental and applied contexts. This review takes a tutorial approach to illustrate how 20 years of source development and technology has facilitated the journey of optical frequency combs from the lab into the field.
http://arxiv.org/abs/1909.05384v1
1909.05384
2019-09-11
natural-language-processing
20 years of disk winds in 4U 1630-47 -- I. Long-term behavior and influence of hard X-rays
Highly ionized X-ray wind signatures have been found in the soft states of high-inclination Black Hole Low Mass X-ray Binaries (BHLMXBs) for more than two decades. Yet signs of a systematic evolution of the outflow itself along the outburst remain elusive, due to the limited sampling of individual sources and the necessity to consider the broad-band evolution of the Spectral Energy Distribution (SED). We perform a holistic analysis of archival X-ray wind signatures in the most observed wind-emitting transient BHLMXB to date, 4U 1630-47. The combination of Chandra, NICER, NuSTAR, Suzaku, and XMM-Newton, complemented in hard X-rays by Swift/BAT and INTEGRAL, spans more than 200 individual days over 9 individual outbursts, and provides near-complete broad-band coverage of the brighter portion of the outburst. Our results show that the hard X-rays allow us to define "soft" states with ubiquitous wind detections, and that their contribution is strongly correlated with the Equivalent Width (EW) of the lines. We then constrain the evolution of the outflow in a set of representative observations, using thermal stability curves and photoionization modeling. The former confirms that the switch to unstable SEDs occurs well after the wind signatures disappear, to the point where the last canonical hard states are thermally stable. The latter shows that intrinsic changes in the outflow are required to explain the main correlations of the line EWs, be it with luminosity or the hard X-rays. These behaviors are seen systematically over all outbursts and confirm individual links between the wind properties, the thermal disk, and the corona.
https://arxiv.org/abs/2504.00991v1
2504.00991
2025-04-01
natural-language-processing
20 Years of Evolution from Cognitive to Intelligent Communications
It has been 20 years since the concept of cognitive radio (CR) was proposed, which is an efficient approach to provide more access opportunities to connect massive wireless devices. To improve spectrum efficiency, CR enables unlicensed usage of licensed spectrum resources. It has been regarded as the key enabler for intelligent communications. In this article, we provide an overview of intelligent communications in the past two decades to illustrate the evolution of its capability from cognition to artificial intelligence (AI). In particular, this article starts from a comprehensive review of typical spectrum sensing and sharing, followed by the recent achievements on AI-enabled intelligent radio. Moreover, research challenges in future intelligent communications will be discussed to show a path to the real deployment of intelligent radio. After witnessing the glorious developments of CR in the past 20 years, we try to provide readers with a clear picture of how intelligent radio could be further developed to smartly utilize the limited spectrum resources as well as to optimally configure wireless devices in future communication systems.
http://arxiv.org/abs/1909.11562v1
1909.11562
2019-09-25
natural-language-processing
20 years of Greedy Randomized Adaptive Search Procedures with Path Relinking
This is a comprehensive review of the Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic and its hybridization with Path Relinking (PR) over the past two decades. GRASP with PR has become a widely adopted approach for solving hard optimization problems since its proposal in 1999. The paper covers the historical development of GRASP with PR and its theoretical foundations, as well as recent advances in its implementation and application. The review includes a critical analysis of variants of PR, including memory-based and randomized designs, with a total of ten different implementations. It describes these advanced designs both theoretically and practically on two well-known optimization problems, linear ordering and max-cut. The paper also explores the hybridization of GRASP with PR and other metaheuristics, such as Tabu Search and Scatter Search. Overall, this review provides valuable insights for researchers and practitioners seeking to utilize GRASP with PR for solving optimization problems.
https://arxiv.org/abs/2312.12663v1
2312.12663
2023-12-19
natural-language-processing
20 Years of Light Pentaquark Searches
In this paper, I pay tribute to my exceptional colleagues and friends Dmitri Diakonov, Victor Petrov, and Maxim Polyakov by examining the experimental progress and current status of searches for the $\Theta^+$ pentaquark from its inception to the present.
https://arxiv.org/abs/2503.21545v2
2503.21545
2025-03-27
natural-language-processing
20 Years of Mobility Modeling & Prediction: Trends, Shortcomings & Perspectives
In this paper, we present a comprehensive survey of human-mobility modeling based on 1680 articles published between 1999 and 2019, which can serve as a roadmap for research and practice in this area. Mobility modeling research has accelerated the advancement of several fields of study, such as urban planning, epidemic modeling, and traffic engineering, and has contributed to the development of location-based services. However, while the application of mobility models in different domains has increased, the credibility of the research results has decreased. We highlight two significant shortfalls commonly observed in our reviewed studies: (1) data-agnostic model selection resulting in a poor tradeoff between accuracy vs. complexity, and (2) failure to identify the source of empirical gains, due to the adoption of inaccurate validation methodologies. We also observe troubling trends with respect to the application of Markov model variants for modeling mobility, despite the questionable association of Markov processes and human-mobility dynamics. To this end, we propose a data-driven mobility-modeling framework that quantifies the characteristics of a dataset based on four mobility meta-attributes, in order to select the most appropriate prediction algorithm. Experimental evaluations on three real-world mobility datasets based on a rigorous validation methodology demonstrate our framework's ability to correctly analyze the model accuracy vs. complexity tradeoff. We offer these results to the community along with the tools and the literature meta-data in order to improve the reliability and credibility of human-mobility modeling research.
http://arxiv.org/abs/1906.07451v1
1906.07451
2019-06-18
natural-language-processing
20 years of network community detection
A fundamental technical challenge in the analysis of network data is the automated discovery of communities - groups of nodes that are strongly connected or that share similar features or roles. In this commentary we review progress in the field over the last 20 years.
https://arxiv.org/abs/2208.00111v2
2208.00111
2022-07-30
natural-language-processing
20 years of ordinal patterns: Perspectives and challenges
In 2002, in a seminal article, Christoph Bandt and Bernd Pompe proposed a new methodology for the analysis of complex time series, now known as Ordinal Analysis. The ordinal methodology is based on the computation of symbols (known as ordinal patterns) which are defined in terms of the temporal ordering of data points in a time series, and whose probabilities are known as ordinal probabilities. With the ordinal probabilities, the Shannon entropy can be calculated, which is the permutation entropy. Since it was proposed, the ordinal method has found applications in fields as diverse as biomedicine and climatology. However, some properties of ordinal probabilities are still not fully understood, and how to combine the ordinal approach of feature extraction with machine learning techniques for model identification, time series classification or forecasting remains a challenge. The objective of this perspective article is to present some recent advances and to discuss some open problems.
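The ordinal construction this abstract describes (ordinal patterns from the temporal ordering of data points, their probabilities, then the Shannon entropy of that distribution, i.e. the permutation entropy) can be sketched in a few lines. This is a minimal illustration of the Bandt-Pompe recipe, not code from the article:

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(series, order=3):
    """Permutation entropy (Bandt & Pompe, 2002): map each window of
    `order` consecutive points to its ordinal pattern (the argsort of
    the window), estimate the pattern probabilities, and return the
    Shannon entropy of that distribution, normalized to [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda i: series[t + i]))
        for t in range(len(series) - order + 1)
    )
    n = sum(patterns.values())
    h = sum(-(c / n) * log(c / n) for c in patterns.values())
    return h / log(factorial(order))  # normalize by log(order!)

# A monotone series realizes a single ordinal pattern, so its entropy is 0.
print(permutation_entropy([1, 2, 3, 4, 5, 6], order=3))  # → 0.0
```

Because the symbols depend only on the relative ordering of values, the measure is invariant under monotone transformations of the data, which is one reason the method travels well across fields as different as biomedicine and climatology.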
https://arxiv.org/abs/2204.12883v1
2204.12883
2022-04-27
natural-language-processing
20 years of photometric microlensing events predicted by Gaia DR2: Potential planet-hosting lenses within 100 pc
Context. Gaia DR2 offers unparalleled precision on stars' parallaxes and proper motions. This allows the prediction of microlensing events for which the lens stars (and any planets they possess) are nearby and may be well studied and characterised. Aims. We identify a number of potential microlensing events that will occur before the year 2035.5, 20 years from the Gaia DR2 reference epoch. Methods. We query Gaia DR2 for potential lenses within 100 pc, extract parallaxes and proper motions of the lenses and background sources, and identify potential lensing events. We estimate the lens masses from Priam effective temperatures, and use these to calculate peak magnifications and the size of the Einstein radii relative to the lens stars' habitable zones. Results. We identify 7 future events with a probability > 10% of an alignment within one Einstein radius. Of particular interest is DR2 5918299904067162240 (WISE J175839.20-583931.6), magnitude G = 14.9, which will lens a G = 13.9 background star in early 2030, with a median 23% net magnification. Other pairs are typically fainter, hampering characterisation of the lens (if the lens is faint) or the ability to accurately measure the magnification (if the source is much fainter than the lens). Of timely interest is DR2 4116504399886241792 (2MASS J17392440-2327071), which will lens a background star in July 2020, albeit with weak net magnification (0.03%). Median magnifications for the other 5 high-probability events range from 0.3% to 5.3%. The Einstein radii for these lenses are 1-10 times the radius of the habitable zone, allowing these lensing events to pick out cold planets around the ice line, and filling a gap between transit and current microlensing detections of planets around very low-mass stars. Conclusions. We provide a catalogue of the predicted events to aid future characterisation efforts... [abridged]
http://arxiv.org/abs/1805.11638v2
1805.11638
2018-07-10
natural-language-processing
$^{210}$Pb measurements at the André E. Lalonde AMS Laboratory for the radioassay of materials used in rare event search detectors
Naturally occurring radionuclide $^{210}$Pb ($T_{1/2}$=22.2 y) is an important source of background in rare event searches, such as neutrinoless double-$\beta$ decay and dark matter direct detection experiments. When a sample mass of hundreds of grams is available, $\gamma$-counting measurements can be performed. However, there are other cases where only grams of sample can be used. For these cases, better sensitivities are required. In this paper, in collaboration with the Astroparticle Physics group at Carleton University, the capabilities of the A.E. Lalonde AMS Laboratory at the University of Ottawa for $^{210}$Pb measurements are discussed. PbF$_{2}$ and PbO targets were used, selecting in the low energy sector, respectively, (PbF$_{3}$)$^{-}$ or (PbO$_{2}$)$^{-}$ ions. For fluoride targets, the blank $^{210}$Pb/$^{206}$Pb ratio was in the 10$^{-14}$ to 10$^{-13}$ range, but current output was lower and less stable. For oxide targets, current output showed better stability, despite a significant difference in current output for commercial PbO and processed samples, and background studies suggested a background not much higher than that of the fluoride targets. Both target materials showed, therefore, good performance for $^{210}$Pb AMS assay. Measurements of Kapton films, an ultra-thin polymer material, where masses available are typically just several grams, were performed. 90% C.L. upper limits for the $^{210}$Pb specific activity in the range of 0.74-2.8 Bq/kg were established for several Kapton HN films.
https://arxiv.org/abs/2102.06776v2
2102.06776
2021-02-15
natural-language-processing
$2^{1296}$ Exponentially Complex Quantum Many-Body Simulation via Scalable Deep Learning Method
For decades, people have been developing efficient numerical methods for solving the challenging quantum many-body problem, whose Hilbert space grows exponentially with the size of the problem. However, this journey is far from over, as previous methods all have serious limitations. The recently developed deep learning methods provide a very promising new route to solve the long-standing quantum many-body problems. We report that a deep learning based simulation protocol can achieve the solution with state-of-the-art precision in a Hilbert space as large as $2^{1296}$ for spin systems and $3^{144}$ for fermion systems, using a HPC-AI hybrid framework on the new Sunway supercomputer. With high scalability up to 40 million heterogeneous cores, our applications have measured 94% weak scaling efficiency and 72% strong scaling efficiency. The accomplishment of this work opens the door to simulating spin and fermion models on unprecedented lattice sizes with extremely high precision.
https://arxiv.org/abs/2204.07816v1
2204.07816
2022-04-16
natural-language-processing
2-16 GHz Multifrequency X-Cut Lithium Niobate NEMS Resonators on a Single Chip
This work presents the design, fabrication, and testing of X-Cut Lithium Niobate (LN) acoustic nanoelectromechanical (NEMS) Laterally Vibrating Resonators (LVRs) and Degenerate LVRs (d-LVRs) operating in the S0 (YZ30) and SH0 (YZ-10) modes in the 2 to 16 GHz range, monolithically fabricated on a single chip. The NEMS topology is optimized to extend the aforementioned fundamental modes into the C-, X-, and Ku-bands while preserving performance and mass manufacturability. The devices present acoustic wavelengths ({\lambda}) varying between 1800 and 400 nm and are fabricated on a 100 nm ultra-thin LN film on high-resistivity silicon with a 3-mask process. Experimental results highlighted quality factors at resonance (Qs) and mechanical quality factors (Qm) as high as 477 and 1750, respectively, and electromechanical coupling (kt2) as high as 32.7%. Large kt2 (>10%) are recorded over a broad range of frequencies (2 - 8 GHz), while Qm exceeding 100 are measured up to 15 GHz. Further enhancements to performance and range of operation on the same chip can be achieved by decreasing {\lambda}, refining the fabrication process, and optimizing the device topology. These additional steps can help pave the way for manufacturing high-performance resonators on a single chip covering the entire 1 - 25 GHz spectrum.
https://arxiv.org/abs/2405.05547v1
2405.05547
2024-05-09
natural-language-processing
(216) Kleopatra, a low density critically rotating M-type asteroid
Context. The recent estimates of the 3D shape of the M/Xe-type triple asteroid system (216) Kleopatra indicated a density of 5 g.cm$^{-3}$. Such a high density implies a high metal content and a low porosity which is not easy to reconcile with its peculiar dumbbell shape. Aims. Given the unprecedented angular resolution of the VLT/SPHERE/ZIMPOL camera, we aim to constrain the mass and the shape of Kleopatra with high accuracy, hence its density. Methods. We combined our new VLT/SPHERE observations of Kleopatra recorded in 2017 and 2018 with archival data, as well as lightcurve, occultation, and delay-Doppler images, to derive its 3D shape model using two different algorithms (ADAM, MPCD). Furthermore, an N-body dynamical model allowed us to retrieve the orbital elements of the two moons as explained in the accompanying paper. Results. The shape of Kleopatra is very close to an equilibrium dumbbell figure with two lobes and a thick neck. Its volume equivalent diameter (118.75$\pm$1.40) km and mass (2.97$\pm$0.32) 10$^{18}$ kg imply a bulk density of (3.38$\pm$0.50) g cm$^{-3}$. Such a low density for a supposedly metal-rich body indicates a substantial porosity within the primary. This porous structure along with its near-equilibrium shape is compatible with a formation scenario including a giant impact followed by reaccumulation. Kleopatra's current rotation period and dumbbell shape imply that it is in a critically rotating state. The low effective gravity along the equator of the body, together with the equatorial orbits of the moons and possibly rubble-pile structure, opens the possibility that the moons formed via mass shedding. Conclusions. Kleopatra is a puzzling multiple system due to the unique characteristics of the primary. It deserves particular attention in the future, with the Extremely Large Telescopes and possibly a dedicated space mission.
https://arxiv.org/abs/2108.07207v1
2108.07207
2021-08-16
natural-language-processing
21 Balmer Jump Street: The Nebular Continuum at High Redshift and Implications for the Bright Galaxy Problem, UV Continuum Slopes, and Early Stellar Populations
We study, from both a theoretical and observational perspective, the physical origin and spectroscopic impact of extreme nebular emission in high-redshift galaxies. The nebular continuum, which can appear during extreme starbursts, is of particular importance as it tends to redden UV slopes and has a significant contribution to the UV luminosities of galaxies. Furthermore, its shape can be used to infer the gas density and temperature of the ISM. First, we provide a theoretical background, showing how different stellar populations (SPS models, IMFs, and stellar temperatures) and nebular conditions impact observed galaxy spectra. We demonstrate that, for systems with strong nebular continuum emission, 1) UV fluxes can increase by up to 0.7~magnitudes (or more in the case of hot/massive stars) above the stellar continuum, which may help reconcile the surprising abundance of bright high-redshift galaxies and the elevated UV luminosity density at $z>10$, 2) at high gas densities, UV slopes can redden from $\beta\lesssim-2.5$ to $\beta\sim-1$, 3) observational measurements of $\xi_{ion}$ are grossly underestimated, and 4) UV downturns from two-photon emission can masquerade as DLAs. Second, we present a dataset of 58 galaxies observed with NIRSpec on JWST at $2.5<z<9.0$ that are selected to have strong nebular continuum emission via the detection of the Balmer jump. Five of the 58 spectra are consistent with being dominated by nebular emission, exhibiting both a Balmer jump and a UV downturn consistent with two-photon emission. For some galaxies, this may imply the presence of hot massive stars and a top-heavy IMF. We conclude by exploring the properties of spectroscopically confirmed $z>10$ galaxies, finding that UV slopes and UV downturns are in some cases redder or steeper than expected from SPS models, which may hint at more exotic (e.g. hotter/more massive stars or AGN) ionizing sources.
https://arxiv.org/abs/2408.03189v2
2408.03189
2024-08-06
natural-language-processing
21-cm Constraints on Dark Matter Annihilation after an Early Matter-Dominated Era
Although it is commonly assumed that relativistic particles dominate the energy density of the universe quickly after inflation, a variety of well-motivated scenarios predict an early matter-dominated era (EMDE) before the onset of Big Bang nucleosynthesis. Subhorizon dark matter density perturbations grow faster during an EMDE than during a radiation-dominated era, leading to the formation of "microhalos" far earlier than in standard models of structure formation. This enhancement of small-scale structure boosts the dark-matter annihilation rate, which contributes to the heating of the intergalactic medium (IGM). We compute how the dark matter annihilation rate evolves after an EMDE and forecast how well measurements of the 21-cm background can detect dark matter annihilation in cosmologies with EMDEs. We find that future measurements of the global 21-cm signal at a redshift of $z\sim 17$ are unlikely to improve on bounds derived from observations of the isotropic gamma-ray background, but measurements of the 21-cm power spectrum have the potential to detect dark matter annihilation following an EMDE. Moreover, dark matter annihilation and astrophysical X-rays produce distinct heating signatures in the 21-cm power spectrum at redshifts around 14, potentially allowing differentiation between these two IGM heating mechanisms.
https://arxiv.org/abs/2502.08719v1
2502.08719
2025-02-12
natural-language-processing
21-cm constraints on spinning primordial black holes
Hawking radiation from primordial black holes (PBHs) can ionize and heat up neutral gas during the cosmic dark ages, leaving imprints on the global 21-cm signal of neutral hydrogen. We use the global 21-cm signal to constrain the abundance of spinning PBHs in the mass range $[2 \times 10^{13}, 10^{18}]$ grams. We consider several extended PBH distribution models. Our results show that the 21-cm signal can set the most stringent PBH bounds in our mass window. Compared with constraints set by {\it Planck} cosmic microwave background (CMB) data, the 21-cm limits are more stringent by about two orders of magnitude. PBHs with higher spin are typically more strongly constrained. Our 21-cm constraints for the monochromatic mass distribution rule out spinless PBHs with initial mass below $1.5 \times 10^{17}\,\mathrm{g}$, whereas extreme Kerr PBHs with reduced initial spin of $a_0=0.999$ are excluded as the dominant dark matter component for masses below $6 \times 10^{17}\,\mathrm{g}$. We also derive limits for the log-normal, power-law and critical collapse PBH mass distributions.
https://arxiv.org/abs/2108.13256v2
2108.13256
2021-08-30
natural-language-processing
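The constraining power in this mass window comes from the steep inverse scaling of the Hawking temperature with PBH mass. As a minimal illustration (not the paper's pipeline, which also treats spin and extended mass functions), the Schwarzschild formula $T_H = \hbar c^3/(8\pi G M k_B)$ can be evaluated directly; the function name and unit choices here are our own:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton constant, m^3 kg^-1 s^-2
KB = 1.380649e-23        # Boltzmann constant, J/K
KEV = 1.602176634e-16    # joules per keV

def hawking_temperature_kev(mass_grams):
    """Hawking temperature of a non-rotating (Schwarzschild) PBH, in keV."""
    m_kg = mass_grams * 1e-3
    t_kelvin = HBAR * C**3 / (8 * math.pi * G * m_kg * KB)
    return KB * t_kelvin / KEV

# lighter PBHs are hotter: roughly GeV-scale emission at 1e13 g,
# falling to ~100 keV at 1e17 g (nonzero spin lowers T_H)
```

For $M \sim 10^{13}$-$10^{18}$ g this gives photon energies from the GeV range down to tens of keV, exactly the radiation that ionizes and heats the neutral gas during the dark ages.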
21 cm cosmology and spin temperature reduction via spin-dependent dark matter interactions
The EDGES low-band experiment has measured an absorption feature in the cosmic microwave background radiation (CMB), corresponding to the 21 cm hyperfine transition of hydrogen at redshift $z \simeq 17$, before the era of cosmic reionization. The amplitude of this absorption is connected to the ratio of singlet and triplet hyperfine states in the hydrogen gas, which can be parametrized by a spin temperature. The EDGES result suggests that the spin temperature is lower than the expected temperatures of both the CMB and the hydrogen gas. A variety of mechanisms have been proposed in order to explain this signal, for example by lowering the kinetic temperature of the hydrogen gas via dark matter interactions. We introduce an alternative mechanism, by which a sub-GeV dark matter particle with spin-dependent coupling to nucleons or electrons can cause hyperfine transitions and lower the spin temperature directly, with negligible reduction of the kinetic temperature of the hydrogen gas. We consider a model with an asymmetric dark matter fermion and a light pseudo-vector mediator. Significant reduction of the spin temperature by this simple model is excluded, most strongly by coupling constant bounds coming from stellar cooling. Perhaps an alternative dark sector model, subject to different sets of constraints, can lower the spin temperature by the same mechanism.
https://arxiv.org/abs/1902.09552v2
1902.09552
2019-02-25
natural-language-processing
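The link between absorption depth and spin temperature described above can be made concrete with the standard approximate expression for the sky-averaged 21-cm brightness temperature, $\delta T_b \approx 27\, x_{\rm HI}\, (1 - T_{\rm CMB}/T_S)\sqrt{(1+z)/10}$ mK. A minimal sketch (the 27 mK prefactor assumes Planck-like cosmological parameters; the function names are illustrative):

```python
import math

T_CMB0 = 2.725  # CMB temperature today, K

def delta_tb_mk(z, t_spin, x_hi=1.0):
    """Approximate sky-averaged 21-cm brightness temperature in mK.
    The 27 mK prefactor absorbs Planck-like cosmological parameters."""
    t_cmb = T_CMB0 * (1 + z)
    return 27.0 * x_hi * (1.0 - t_cmb / t_spin) * math.sqrt((1 + z) / 10.0)

def spin_temp_for_depth(z, target_mk):
    """Spin temperature needed to produce a given absorption depth (x_HI = 1)."""
    t_cmb = T_CMB0 * (1 + z)
    prefac = 27.0 * math.sqrt((1 + z) / 10.0)
    return t_cmb / (1.0 - target_mk / prefac)

# at z = 17 a fully coupled gas with T_s ~ 7 K gives ~ -220 mK;
# an EDGES-like ~ -500 mK depth would require T_s ~ 3.3 K
```

This is why the measured depth implies a spin temperature below both the CMB temperature ($\approx 49$ K at $z=17$) and the expected adiabatic gas temperature.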
21cmEMU: an emulator of 21cmFAST summary observables
Recent years have witnessed rapid progress in observations of the Epoch of Reionization (EoR). These have enabled high-dimensional inference of galaxy and intergalactic medium (IGM) properties during the first billion years of our Universe. However, even using efficient, semi-numerical simulations, traditional inference approaches that compute 3D lightcones on-the-fly can take $10^5$ core hours. Here we present 21cmEMU: an emulator of several summary observables from the popular 21cmFAST simulation code. 21cmEMU takes as input nine parameters characterizing EoR galaxies, and outputs the following summary statistics: (i) the IGM mean neutral fraction; (ii) the 21-cm power spectrum; (iii) the mean 21-cm spin temperature; (iv) the sky-averaged (global) 21-cm signal; (v) the ultraviolet (UV) luminosity functions (LFs); and (vi) the Thomson scattering optical depth to the cosmic microwave background (CMB). All observables are predicted with sub-percent median accuracy, with a reduction of the computational cost by a factor of over 10$^4$. After validating inference results, we showcase a few applications, including: (i) quantifying the relative constraining power of different observational datasets; (ii) seeing how recent claims of a late EoR impact previous inferences; and (iii) forecasting upcoming constraints from the sixth observing season of the Hydrogen Epoch of Reionization Array (HERA) telescope. 21cmEMU is publicly-available, and is included as an alternative simulator in the public 21CMMC sampler.
https://arxiv.org/abs/2309.05697v3
2309.05697
2023-09-11
natural-language-processing
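One of the summary observables emulated above, the Thomson scattering optical depth, is a simple line-of-sight integral, $\tau = \sigma_T c \int n_e(z)\,(1+z)^2/H(z)\,dz$. A minimal sketch for instantaneous reionization (Planck-like parameters assumed; 21cmEMU itself predicts $\tau$ from its EoR histories, not from this toy model):

```python
import math

H0 = 67.4 * 1000 / 3.086e22   # Hubble rate, s^-1
OM, OL = 0.315, 0.685         # matter and Lambda densities
SIGMA_T = 6.652e-29           # Thomson cross-section, m^2
C = 2.998e8                   # speed of light, m/s
N_E0 = 0.206                  # free-electron density today if H and He are ionized, m^-3

def tau(z_re, steps=10000):
    """Thomson optical depth for instantaneous reionization at z_re (midpoint rule)."""
    dz = z_re / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        hub = H0 * math.sqrt(OM * (1 + z) ** 3 + OL)
        total += (1 + z) ** 2 / hub * dz
    return SIGMA_T * C * N_E0 * total

print(round(tau(7.7), 4))   # ~0.05 for these assumed parameters
```

The toy model ignores helium double reionization and residual high-z ionization, but reproduces the familiar $\tau \approx 0.05$ scale for reionization ending near $z \approx 7.7$.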
21cm Epoch of Reionisation Power Spectrum with Closure Phase using the Murchison Widefield Array
Radio interferometric closure phases can be a valuable tool for studying cosmological {H\scriptsize{I}}~from the early Universe. Closure phases have the advantage of being immune to element-based gains and associated calibration errors. Thus, calibration and the errors therein, which are often sources of systematics limiting standard visibility-based approaches, can be avoided altogether in closure phase analysis. In this work, we present the first results of the closure phase power spectrum of {H\scriptsize{I}}~21-cm fluctuations using the Murchison Widefield Array (MWA), with $\sim 12$ hours of MWA-phase II observations centered around redshift $z\approx 6.79$, during the Epoch of Reionisation. Analysing three redundant classes of baselines -- 14~m, 24~m, and 28~m equilateral triads -- we estimate $2\sigma$ ($95\%$ confidence interval) 21-cm power spectra of $\lesssim (184)^2~\mathrm{pseudo}~\mathrm{mK}^2$ at $k_{||} = 0.36$ $\mathrm{pseudo}~h\,{\rm Mpc^{-1}}$ in the EoR1 field for the 14~m baseline triads, and $\lesssim (188)^2~\mathrm{pseudo}~\mathrm{mK}^2$ at $k_{||} = 0.18$ $\mathrm{pseudo}~h\,{\rm Mpc^{-1}}$ in the EoR0 field for the 24~m baseline triads. The ``pseudo'' units denote that the length scale and brightness temperature should be interpreted as close approximations. Our best estimates are still 3-4 orders of magnitude higher than the fiducial 21-cm power spectrum; however, our approach provides promising estimates of the power spectra even with a small amount of data. These data-limited estimates can be further improved if more datasets are included in the analysis. The evidence for excess noise has a possible origin in baseline-dependent systematics in the MWA data that will require careful baseline-based strategies to mitigate, even in standard visibility-based approaches.
https://arxiv.org/abs/2409.02906v1
2409.02906
2024-09-04
natural-language-processing
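The immunity of closure phases to element-based gains, which the analysis above exploits, follows from the fact that per-element gain phases cancel around any closed triangle of baselines: $V'_{12}V'_{23}V'_{31} = |g_1|^2|g_2|^2|g_3|^2\, V_{12}V_{23}V_{31}$, a positive real rescaling that leaves the phase untouched. A minimal numerical check (the visibilities and gains here are made up):

```python
import cmath
import random

def closure_phase(v12, v23, v31):
    # phase of the triple product around a closed baseline triangle
    return cmath.phase(v12 * v23 * v31)

random.seed(0)
# hypothetical true visibilities on an equilateral triad
true_vis = [cmath.rect(1.0, random.uniform(-3, 3)) for _ in range(3)]
# per-element complex gains (uncalibrated amplitude and phase errors)
g = [cmath.rect(random.uniform(0.5, 2.0), random.uniform(-3, 3)) for _ in range(3)]
# corrupted visibilities: V'_ij = g_i * conj(g_j) * V_ij
v12 = g[0] * g[1].conjugate() * true_vis[0]
v23 = g[1] * g[2].conjugate() * true_vis[1]
v31 = g[2] * g[0].conjugate() * true_vis[2]

# the closure phase of the corrupted data equals that of the true sky
assert abs(closure_phase(v12, v23, v31) - closure_phase(*true_vis)) < 1e-12
```

Each gain enters once unconjugated and once conjugated around the loop, so only a real positive factor survives, which is why calibration errors drop out.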
21cmFAST: A Fast, Semi-Numerical Simulation of the High-Redshift 21-cm Signal
We introduce a powerful semi-numeric modeling tool, 21cmFAST, designed to efficiently simulate the cosmological 21-cm signal. Our code generates 3D realizations of evolved density, ionization, peculiar velocity, and spin temperature fields, which it then combines to compute the 21-cm brightness temperature. Although the physical processes are treated with approximate methods, we compare our results to a state-of-the-art large-scale hydrodynamic simulation, and find good agreement on scales pertinent to the upcoming observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those generated from the numerical simulation to within tens of percent, down to the Nyquist frequency. We show results from a 1 Gpc simulation which tracks the cosmic 21-cm signal down from z=250, highlighting the various interesting epochs. Depending on the desired resolution, 21cmFAST can compute a redshift realization on a single processor in just a few minutes. Our code is fast, efficient, customizable and publicly available, making it a useful tool for 21-cm parameter studies.
http://arxiv.org/abs/1003.3878v1
1003.3878
2010-03-19
natural-language-processing
21cmFAST v3: A Python-integrated C code for generating 3D realizations of the cosmic 21cm signal
This brief code paper presents a new Python-wrapped version of the popular 21cm cosmology simulator, 21cmFAST. The new version, v3+, maintains the same core functionality of previous versions of 21cmFAST, but features a simple and intuitive interface, and a great deal more flexibility. This evolution represents the work of a formalized collaboration, and the new version, available publicly on GitHub, provides a single point-of-reference for all future upgrades and community-added features. In this paper, we describe simple usage of 21cmFAST, some of its new features, and provide a simple performance benchmark.
http://arxiv.org/abs/2010.15121v1
2010.15121
2020-10-28
natural-language-processing
21cmFirstCLASS I. Cosmological tool for $\Lambda$CDM and beyond
In this work we present 21cmFirstCLASS, a modified version of 21cmFAST, the most popular code in the literature for computing the anisotropies of the 21-cm signal. Our code uses the public cosmic microwave background (CMB) Boltzmann code CLASS, to establish consistent initial conditions at recombination for any set of cosmological parameters and evolves them throughout the dark ages, cosmic dawn, the epoch of heating and reionization. We account for inhomogeneity in the temperature and ionization fields throughout the evolution, crucial for a robust calculation of both the global 21-cm signal and its fluctuations. We demonstrate how future measurements of the CMB and the 21-cm signal can be combined and analyzed with 21cmFirstCLASS to obtain constraints on both cosmological and astrophysical parameters and examine degeneracies between them. As an example application, we show how 21cmFirstCLASS can be used to study cosmological models that exhibit non-linearities already at the dark ages, such as scattering dark matter (SDM). For the first time, we present self-consistent calculations of the 21-cm power spectrum in the presence of SDM during the non-linear epoch of cosmic dawn. The code is publicly available at https://github.com/jordanflitter/21cmFirstCLASS.
https://arxiv.org/abs/2309.03942v4
2309.03942
2023-09-07
natural-language-processing
21cmFirstCLASS II. Early linear fluctuations of the 21cm signal
In a companion paper we introduce 21cmFirstCLASS, a new code for computing the 21-cm anisotropies, assembled from the merger of the two popular codes 21cmFAST and CLASS. Unlike the standard 21cmFAST, which begins at $z=35$ with homogeneous temperature and ionization boxes, our code begins its calculations from recombination, evolves the signal through the dark ages, and naturally yields an inhomogeneous box at $z=35$. In this paper, we validate the output of 21cmFirstCLASS by developing a new theoretical framework which is simple and intuitive on the one hand, but is robust and precise on the other hand. As has been recently claimed, using consistent inhomogeneous initial conditions mitigates inaccuracies, which according to our analysis can otherwise reach the $\mathcal O\left(20\%\right)$ level. On top of that, we also show for the first time that 21cmFAST over-predicts the 21-cm power spectrum at $z\gtrsim20$ by another $\mathcal O\left(20\%\right)$, due to the underlying assumption that $\delta_b=\delta_c$, namely that the density fluctuations in baryons and cold dark matter are indistinguishable. We propose an elegant solution to this discrepancy by introducing an appropriate scale-dependent growth factor into the evolution equations. Our analysis shows that this modification will ensure sub-percent differences between 21cmFirstCLASS and the Boltzmann solver CAMB at $z\leq50$ for all scales between the horizon and the Jeans scale. This will enable 21cmFirstCLASS to consistently and reliably simulate the 21-cm anisotropies both in the dark ages and cosmic dawn, for any cosmology. The code is publicly available at https://github.com/jordanflitter/21cmFirstCLASS.
https://arxiv.org/abs/2309.03948v3
2309.03948
2023-09-07
natural-language-processing
21cmfish: Fisher-matrix framework for fast parameter forecasts from the cosmic 21-cm signal
The 21-cm signal from neutral hydrogen in the early universe will provide unprecedented information about the first stars and galaxies. Extracting this information, however, requires accounting for many unknown astrophysical processes. Semi-numerical simulations are key for exploring the vast parameter space of said processes. These simulations use approximate techniques such as excursion-set and perturbation theory to model the 3D evolution of the intergalactic medium, at a fraction of the computational cost of hydrodynamic and/or radiative transfer simulations. However, exploring the enormous parameter space of the first galaxies can still be computationally expensive. Here we introduce 21cmfish, a Fisher-matrix wrapper for the semi-numerical simulation 21cmFAST. 21cmfish facilitates efficient parameter forecasts, scaling to significantly higher dimensionalities than MCMC approaches, assuming a multi-variate Gaussian posterior. Our method produces parameter uncertainty forecasts comparable to previous MCMC analyses but requires $\sim 10^4\times$ fewer simulations. This enables a rapid way to prototype analyses adding new physics and/or additional parameters. We carry out a forecast for HERA using the largest astrophysical parameter space to date, with 10 free parameters, spanning both population II and III star formation. We find X-ray parameters for the first galaxies could be measured to sub-percent precision, and, though they are highly degenerate, the stellar-to-halo mass relation and ionizing photon escape fraction for population II and III galaxies can be constrained to ~10% precision (in logarithmic quantities). Using a principal component analysis we find HERA is most sensitive to the product of the ionizing escape fraction and the stellar-to-halo mass fraction for population II galaxies.
https://arxiv.org/abs/2212.09797v2
2212.09797
2022-12-19
natural-language-processing
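The Fisher-matrix machinery that 21cmfish wraps around 21cmFAST can be sketched on a toy observable: finite-difference the model with respect to each parameter, form $F_{ij} = \sum_k \partial_i m_k\, \partial_j m_k / \sigma_k^2$, and read marginalized uncertainties off the inverse. Everything below (the power-law model, noise level, and step size) is illustrative, not 21cmfish's actual interface:

```python
import math

def model(theta, xs):
    # toy observable: a power-law "power spectrum" A * k^n
    # (a stand-in for an expensive 21cmFAST summary statistic)
    amp, slope = theta
    return [amp * x**slope for x in xs]

def fisher(theta, xs, sigma, step=1e-4):
    # central finite-difference derivatives of the model w.r.t. each parameter
    derivs = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += step
        dn[i] -= step
        mu, md = model(up, xs), model(dn, xs)
        derivs.append([(a - b) / (2 * step) for a, b in zip(mu, md)])
    p = len(theta)
    return [[sum(derivs[i][k] * derivs[j][k] / sigma**2 for k in range(len(xs)))
             for j in range(p)] for i in range(p)]

def invert2x2(f):
    det = f[0][0] * f[1][1] - f[0][1] * f[1][0]
    return [[f[1][1] / det, -f[0][1] / det],
            [-f[1][0] / det, f[0][0] / det]]

xs = [0.1 * (i + 1) for i in range(20)]     # hypothetical k bins
F = fisher((1.0, -2.0), xs, sigma=0.05)     # fiducial amplitude 1, slope -2
cov = invert2x2(F)
sigma_A, sigma_n = math.sqrt(cov[0][0]), math.sqrt(cov[1][1])
```

As expected for a Gaussian forecast, the marginalized error $\sqrt{(F^{-1})_{ii}}$ is never smaller than the conditional error $1/\sqrt{F_{ii}}$, the gap encoding the parameter degeneracy.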
21-cm fluctuations from primordial magnetic fields
The fluid forces associated with primordial magnetic fields (PMFs) generate small-scale fluctuations in the primordial density field, which add to the $\mathrm{\Lambda CDM}$ linear matter power spectrum on small scales. These enhanced small-scale fluctuations lead to earlier formation of galactic halos and stars and thus affect cosmic reionization. We study the consequences of these effects on 21 cm observables using the semi-numerical code 21cmFAST v3.1.3. We find the excess small-scale structure generates strong stellar radiation backgrounds in the early Universe, resulting in altered 21 cm global signals and power spectra commensurate with earlier reionization. We restrict the allowed PMF models using the CMB optical depth to reionization. Lastly, we probe parameter degeneracies and forecast experimental sensitivities with an information matrix analysis subject to the CMB optical depth bound. Our forecasts show that interferometers like HERA are sensitive to PMFs of order $\sim \mathrm{pG}$, nearly an order of magnitude stronger than existing and next-generation experiments.
https://arxiv.org/abs/2308.04483v2
2308.04483
2023-08-08
natural-language-processing
21-cm foreground removal using AI and frequency-difference technique
The deep learning technique has been employed in removing foreground contaminants from 21 cm intensity mapping, but its effectiveness is limited by the large dynamic range of the foreground amplitude. In this study, we develop a novel foreground removal technique based on U-Net networks. The essence of this technique lies in an innovative data preprocessing step: specifically, using the temperature difference between neighboring frequency bands as input, which can reduce the dynamic range of foreground amplitudes by approximately two orders of magnitude. This reduction proves to be highly advantageous for U-Net foreground removal. We observe that the HI signal can be reliably recovered, as indicated by the cross-correlation power spectra showing unity agreement at the scale of $k < 0.3 h^{-1}$Mpc in the absence of instrumental effects. Moreover, accounting for systematic beam effects, our reconstruction displays consistent auto-correlation and cross-correlation power spectrum ratios at the $1\sigma$ level across scales $k \lesssim 0.1 h^{-1}$Mpc, with only a 10% reduction observed in the cross-correlation power spectrum at $k\simeq0.2 h^{-1}$Mpc. The effects of redshift-space distortion are also reconstructed successfully, as evidenced by the matching quadrupole power spectra. In comparison, our method outperforms the traditional Principal Component Analysis method, whose derived cross-correlation ratios are underestimated by around 60%. We simulated various white noise levels in the map and found that the mean cross-correlation ratio $\bar{R}_\mathrm{cross} \gtrsim 0.8$ when the level of the thermal noise is smaller than or equal to that of the HI signal. We conclude that the proposed frequency-difference technique can significantly enhance network performance by reducing the amplitude range of foregrounds and helping to prevent HI loss.
https://arxiv.org/abs/2310.06518v2
2310.06518
2023-10-10
natural-language-processing
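The core preprocessing idea above, differencing neighboring frequency channels to shrink the foreground dynamic range before the network sees the data, can be checked on a toy spectrum: a smooth power-law foreground plus mK-level fluctuations (the amplitudes, band, and spectral index below are made up, not the paper's simulation setup):

```python
import random

random.seed(1)
freqs = [100.0 + 0.1 * i for i in range(201)]          # 100-120 MHz channels
fg = [1000.0 * (f / 100.0) ** -2.7 for f in freqs]     # smooth synchrotron-like foreground, K
hi = [random.gauss(0.0, 1e-3) for _ in freqs]          # mock HI fluctuations, ~mK level
sky = [a + b for a, b in zip(fg, hi)]

# frequency-difference preprocessing: neighboring-channel differences
diff = [sky[i + 1] - sky[i] for i in range(len(sky) - 1)]

# a smooth foreground changes little between adjacent channels,
# so the amplitude range collapses by roughly two orders of magnitude
reduction = max(abs(t) for t in sky) / max(abs(t) for t in diff)
```

Because the foreground varies smoothly with frequency while the HI signal fluctuates channel to channel, differencing suppresses the former far more than the latter, which is the advantage the U-Net exploits.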
21cm foregrounds and polarization leakage: a user's guide on cleaning and mitigation strategies
The success of HI intensity mapping is largely dependent on how well 21cm foreground contamination can be controlled. In order to progress our understanding further, we present a range of simulated foreground data from four different $\sim3000$\,deg$^2$ sky regions, with and without effects from polarization leakage. Combining these with underlying cosmological HI simulations creates a range of single-dish intensity mapping test cases that require different foreground treatments. This allows us to conduct the most generalized study to date into 21cm foregrounds and their cleaning techniques for the post-reionization era. We first provide a pedagogical review of the most commonly used blind foreground removal techniques (PCA/SVD, FASTICA, GMCA). We also trial a non-blind parametric fitting technique and discuss potential hybridization of methods. We highlight the similarities and differences in these techniques finding that the blind methods produce near equivalent results, and we explain the fundamental reasons for this. The simulations allow an exact decomposition of the resulting cleaned data and we analyse the contribution from foreground residuals. Our results demonstrate that polarized foreground residuals should be generally subdominant to HI on small scales ($k\gtrsim0.1\,h\,\text{Mpc}^{-1}$). However, on larger scales, results are more region dependent. In some cases, aggressive cleans severely damp HI power but still leave dominant foreground residuals. We also demonstrate the gain from cross-correlations with optical galaxy surveys, where extreme levels of residual foregrounds can be circumvented. However, these residuals still contribute to errors and we discuss the optimal balance between over- and under-cleaning.
https://arxiv.org/abs/2010.02907v2
2010.02907
2020-10-06
natural-language-processing
21 cm Forest Constraints on Primordial Black Holes
Primordial black holes (PBHs) as part of the Dark Matter (DM) would modify the evolution of large-scale structures and the thermal history of the universe. Future 21 cm forest observations, sensitive to small scales and the thermal state of the intergalactic medium (IGM), could probe the existence of such PBHs. In this article, we show that the shot noise isocurvature mode on small scales induced by the presence of PBHs can enhance the number of low-mass halos, or minihalos, and thus the number of 21 cm absorption lines. However, if the mass of PBHs is as large as $M_{\rm PBH}\gtrsim 10 \, M_\odot$, with an abundant enough fraction of PBHs as DM, $f_{\rm PBH}$, the IGM heating due to accretion onto the PBHs counteracts the enhancement due to the isocurvature mode, reducing the number of absorption lines instead. The concurrence of both effects imprints distinctive signatures in the number of absorbers, allowing us to bound the abundance of PBHs. We compute the prospects for constraining PBHs with future 21 cm forest observations, finding achievable competitive upper limits on the abundance as low as $f_{\rm PBH} \sim 10^{-3}$ at $M_{\rm PBH}= 100 \, M_\odot$, or even lower at larger masses, in regions of the parameter space unexplored by current probes. The impact of astrophysical X-ray sources on the IGM temperature is also studied, which could potentially weaken the bounds.
https://arxiv.org/abs/2104.10695v1
2104.10695
2021-04-21
natural-language-processing
21cm forest probes on the axion dark matter in the post-inflationary Peccei-Quinn symmetry breaking scenarios
We study the future prospects of 21cm forest observations for probing axion-like dark matter when the spontaneous breaking of the global Peccei-Quinn (PQ) symmetry occurs after inflation. The large isocurvature perturbations of order unity sourced by axion-like particles can result in enhanced minihalo formation, and the subsequent hierarchical structure formation can affect the abundance of minihalos whose masses can exceed ${\cal O}(10^4) M_{\odot}$, the range relevant for 21cm forest observations. We show that 21cm forest observations are capable of probing axion-like particle masses in the range $10^{-18}\lesssim m_a \lesssim 10^{-12}$ eV for a temperature-independent axion mass. For a temperature-dependent axion mass, the zero-temperature axion mass scale for which the 21cm forest measurements can be affected extends further, up to of order $10^{-6}$ eV.
http://arxiv.org/abs/2005.05589v3
2005.05589
2020-07-14
natural-language-processing
21cm Forest with the SKA
An alternative to both the tomography technique and the power spectrum approach is to search for the 21cm forest, that is, the 21cm absorption features against high-z radio loud sources caused by the intervening cold neutral intergalactic medium (IGM) and collapsed structures. Although the existence of high-z radio loud sources has not yet been confirmed, SKA-low would be the instrument of choice to find such sources, as they are expected to have spectra steeper than their lower-z counterparts. Since the strongest absorption features arise from small-scale structures (a few tens of physical kpc, or even smaller), the 21cm forest can probe the HI density power spectrum on small scales not amenable to measurement by any other means. It can also be a unique probe of the heating process and the thermal history of the early universe, as the signal is strongly dependent on the IGM temperature. Here we show what SKA1-low could do in terms of detecting the 21cm forest in the redshift range z = 7.5-15.
http://arxiv.org/abs/1501.04425v1
1501.04425
2015-01-19
natural-language-processing
21cm Global Signal Extraction: Extracting the 21cm Global Signal using Artificial Neural Networks
The study of the cosmic Dark Ages, Cosmic Dawn, and Epoch of Reionization (EoR) using the all-sky averaged redshifted HI 21cm signal is one of the key science goals of most ongoing or upcoming experiments, for example EDGES, SARAS, and the SKA. This signal can be detected by averaging over the entire sky, using a single radio telescope, in the form of a global signal as a function of redshifted HI 21cm frequency alone. One of the major challenges faced while detecting this signal is the bright, dominant foreground. The success of such a detection lies in the accuracy of the foreground removal. The presence of instrumental gain fluctuations, a chromatic primary beam, radio frequency interference (RFI) and the Earth's ionosphere corrupts any observation of radio signals from the Earth. Here, we propose the use of Artificial Neural Networks (ANN) to extract the faint redshifted 21cm global signal buried in a sea of bright Galactic foregrounds and contaminated by different instrumental models. The most striking advantage of using ANN is that, when the corrupted signal is fed into a trained network, we can simultaneously extract the signal as well as the foreground parameters very accurately. Our results show that ANN can detect the global signal with $\gtrsim 92 \%$ accuracy even in cases of mock observations where the instrument has some residual time-varying gain across the spectrum.
https://arxiv.org/abs/1911.02580v1
1911.02580
2019-11-06
natural-language-processing
21cm Intensity Mapping cross-correlation with galaxy surveys: current and forecasted cosmological parameters estimation for the SKAO
We present a comprehensive set of forecasts for the cross-correlation signal between 21cm intensity mapping and galaxy redshift surveys. We focus on the data sets that will be provided by the SKAO for the 21cm signal, and by DESI and Euclid for galaxy clustering. We build a likelihood which takes into account the effect of the beam for the radio observations, the Alcock-Paczynski effect, a simple parameterization of astrophysical nuisances, and fully exploits the tomographic power of such observations in the range $z=0.7-1.8$ at linear and mildly non-linear scales ($k<0.25 h/$Mpc). The forecasted constraints, obtained with Markov Chain Monte Carlo techniques in a Bayesian framework, in terms of the six base parameters of the standard $\Lambda$CDM model, are promising. The predicted signal-to-noise ratio for the cross-correlation can reach $\sim 50$ for $z\sim 1$ and $k\sim 0.1 h/$Mpc. When the cross-correlation signal is combined with current Cosmic Microwave Background (CMB) data from Planck, the error bars on $\Omega_{\rm c}\,h^2$ and $H_0$ are reduced by factors of 3 and 6, respectively, compared to CMB-only data, due to the measurement of matter clustering provided by the two observables. The cross-correlation signal has a constraining power comparable to that of the auto-correlation, and combining all the clustering measurements, a sub-percent error bar of 0.33% on $H_0$ can be achieved, which is about a factor of 2 better than the CMB-only measurement. Finally, as a proof-of-concept, we test the full pipeline on the real data measured by the MeerKAT collaboration (Cunnington et al. 2022), presenting some (weak) constraints on cosmological parameters.
https://arxiv.org/abs/2309.00710v2
2309.00710
2023-09-01
natural-language-processing
21 cm Intensity Mapping with the DSA-2000
Line intensity mapping is a promising probe of the universe's large-scale structure. We explore the sensitivity of the DSA-2000, a forthcoming array consisting of over 2000 dishes, to the statistical power spectrum of neutral hydrogen's 21 cm emission line. These measurements would reveal the distribution of neutral hydrogen throughout the low-redshift universe without needing to resolve individual sources. The success of these measurements relies on the instrument's sensitivity and resilience to systematics. We show that the DSA-2000 will have the sensitivity needed to detect the 21 cm power spectrum at z=0.5 and across power spectrum modes of 0.03-35.12 h/Mpc with 0.1 h/Mpc resolution. We find that supplementing the nominal array design with a dense core of 200 antennas will expand its sensitivity at low power spectrum modes and enable measurement of Baryon Acoustic Oscillations (BAOs). Finally, we present a qualitative discussion of the DSA-2000's unique resilience to sources of systematic error that can preclude 21 cm intensity mapping.
https://arxiv.org/abs/2311.00896v2
2311.00896
2023-11-01
natural-language-processing
21cm Limits on Decaying Dark Matter and Primordial Black Holes
Recently the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) reported the detection of a 21cm absorption signal stronger than astrophysical expectations. In this paper we study the impact of radiation from dark matter (DM) decay and primordial black holes (PBH) on the 21cm radiation temperature in the reionization epoch, and impose a constraint on the decaying dark matter and PBH energy injection in the intergalactic medium, which can heat up neutral hydrogen gas and weaken the 21cm absorption signal. We consider decay channels DM$\rightarrow e^+e^-, \gamma\gamma$, $\mu^+\mu^-$, $b\bar{b}$ and the $10^{15-17}$g mass range for primordial black holes, and require the heating of the neutral hydrogen does not negate the 21cm absorption signal. For $e^+e^-$, $\gamma\gamma$ final states and PBH cases we find strong 21cm bounds that can be more stringent than the current extragalactic diffuse photon bounds. For the DM$\rightarrow e^+e^-$ channel, the lifetime bound is $\tau_{\rm DM}> 10^{27}$s for sub-GeV dark matter. The bound is $\tau_{\rm DM}\ge 10^{26}$s for sub-GeV DM$\rightarrow \gamma\gamma$ channel and reaches $10^{27}$s at MeV DM mass. For $b\bar{b}$ and $\mu^+\mu^-$ cases, the 21 cm constraint is better than all the existing constraints for $m_{\rm DM}<20$ GeV where the bound on $\tau_{\rm DM}\ge10^{26}$s. For both DM decay and primordial black hole cases, the 21cm bounds significantly improve over the CMB damping limits from Planck data.
http://arxiv.org/abs/1803.09390v1
1803.09390
2018-03-26
natural-language-processing
21-cm line Anomaly: A brief Status
In this short review I present the status of the global 21-cm signal detected by EDGES in March 2018. It is organized in three parts. First, I present the EDGES experiment and the fitting procedure used by the collaboration to extract the tiny 21-cm signal from large foregrounds of galactic synchrotron emission. Then, I review the physics behind the global 21-cm signature and I explain why the measured absorption feature is anomalous with respect to the predictions from standard astrophysics. I conclude with the implications for Beyond Standard Model (BSM) physics coming from the EDGES discovery.
http://arxiv.org/abs/1907.13384v2
1907.13384
2019-09-13
natural-language-processing
21 cm Line Astronomy and Constraining New Physics
The 21 cm signal appears to be a treasure trove providing insight into the period when the first generation of luminous objects formed in the Universe. Hydrogen constitutes the predominant fraction of the total baryonic matter during cosmic dawn (CD). Therefore, it is convenient and advantageous to study physics during CD using the 21 cm signal. Any exotic source of energy can inject energy into the intergalactic medium (IGM) and heat the gas. Subsequently, it can modify the absorption amplitude of the global 21 cm signal. This feature can provide a robust bound on such sources of energy injection into the IGM gas.
https://arxiv.org/abs/2301.02655v1
2301.02655
2023-01-06
natural-language-processing