This large-scale study, consisting of 21.3 million hand hygiene opportunities
from 19 distinct facilities in 10 different states, uses linear predictive
models to expose factors that may affect hand hygiene compliance. We examine
the use of features such as temperature, relative humidity, influenza severity,
day/night shift, federal holidays and the presence of new medical residents in
predicting daily hand hygiene compliance; the investigation is undertaken using
both a "global" model to glean general trends, and facility-specific models to
elicit facility-specific insights. The results suggest that colder temperatures
and federal holidays have an adverse effect on hand hygiene compliance rates,
and that facilities exhibit their own distinct cultures and attitudes regarding
hand hygiene.
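
A minimal sketch of the kind of "global" linear model described above; the feature names, data layout, and values are hypothetical placeholders, not the study's data or code.

```python
# Minimal sketch of a "global" linear predictive model for daily hand hygiene
# compliance. All feature names and values are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 365
X = np.column_stack([
    rng.normal(15, 10, n_days),       # daily temperature (deg C)
    rng.uniform(20, 90, n_days),      # relative humidity (%)
    rng.uniform(0, 10, n_days),       # influenza severity index
    rng.integers(0, 2, n_days),       # night shift indicator
    rng.integers(0, 2, n_days),       # federal holiday indicator
    rng.integers(0, 2, n_days),       # new medical residents present
])
y = rng.uniform(0.6, 0.95, n_days)    # daily compliance rate (placeholder)

model = LinearRegression().fit(X, y)
# The sign of each coefficient gives the direction of the association; e.g. a
# positive temperature coefficient would mean colder days -> lower compliance,
# matching the adverse cold-weather effect reported above.
print(dict(zip(["temp", "humidity", "flu", "night", "holiday", "residents"],
               model.coef_.round(4))))
```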
|
http://arxiv.org/abs/1801.09546v1
|
We present 21 new long-term variable radio sources found commensally in two years of weekly MeerKAT monitoring of the low-mass X-ray binary GX 339-4. The new sources vary on time scales of weeks to months and have a variety of light curve shapes and spectral index properties. Three of the new variable sources are coincident with multi-wavelength counterparts, and one of these is coincident with an optical source in deep MeerLICHT images. For most sources, we cannot eliminate refractive scintillation of active galactic nuclei as the cause of the variability. These new variable sources represent $2.2\pm0.5$ per cent of the unresolved sources in the field, which is consistent with the 1-2 per cent variability found in past radio variability surveys. In addition to these 21 new long-term variable sources, we expect the field to contain short-term variable sources as well. We present the radio light curves and spectral index variability of the new variable sources, as well as the absolute astrometry and matches to coincident sources at other wavelengths.
|
https://arxiv.org/abs/2203.09806v1
|
The first results presented in our article are precise definitions of both
intrinsic and extrinsic discrete curvature in terms of holonomy and the
plane-angle representation, their relation to the corresponding deficit angles,
and their geometrical interpretation in first-order discrete geometry.
The second results are the discrete versions of the Bianchi identity and the
Gauss-Codazzi equation, together with their geometrical interpretations. It
turns out that the discrete Bianchi identity and Gauss-Codazzi equation, at
least in three dimensions, can be derived from the dihedral angle formula of a
tetrahedron, while the dihedral angle relation itself is the spherical law of
cosines in disguise. Furthermore, the continuous infinitesimal curvature 2-form,
the standard Bianchi identity, and Gauss-Codazzi equation could be recovered in
the continuum limit.
|
http://arxiv.org/abs/1709.08373v1
|
Modeling globally averaged climate forcing from the land surface temperature
data, the sea surface temperatures (SST), and an empirically determined
relationship between changes in SST and the turbulent diffusion of heat into
the upper ocean demonstrates a consistent link. The modeling is accurate
throughout the 20th century despite the different phases of the Interdecadal
Pacific Oscillation (IPO) and the strong divergence between land and ocean
surface warming. It fails only during the last 15 years, when SST drops well
below the trend. This finding reinforces the view that the slower global
warming over the past 15 years is not caused by a negative phase of the IPO
or by variations in upper-ocean (top 700 m) warming, but results from a
change in ocean behavior leading to increased heat transfer into the deeper
ocean.
|
http://arxiv.org/abs/1507.04809v1
|
Because most technology and computer architecture innovations were
(intentionally) invisible to higher layers, application and other software
developers could reap the benefits of this progress without engaging in it.
Higher performance has both made more computationally demanding applications
feasible (e.g., virtual assistants, computer vision) and made less demanding
applications easier to develop by enabling higher-level programming
abstractions (e.g., scripting languages and reusable components). Improvements
in computer system cost-effectiveness enabled value creation that could never
have been imagined by the field's founders (e.g., distributed web search
sufficiently inexpensive so as to be covered by advertising links).
The wide benefits of computer performance growth are clear. Recently,
Danowitz et al. apportioned computer performance growth roughly equally between
technology and architecture, with architecture credited with ~80x improvement
since 1985. As semiconductor technology approaches its "end-of-the-road" (see
below), computer architecture will need to play an increasing role in enabling
future ICT innovation. But instead of asking, "How can I make my chip run
faster?," architects must now ask, "How can I enable the 21st century
infrastructure, from sensors to clouds, adding value from performance to
privacy, but without the benefit of near-perfect technology scaling?". The
challenges are many, but with appropriate investment, opportunities abound.
Underlying these opportunities is a common theme that future architecture
innovations will require the engagement of and investments from innovators in
other ICT layers.
|
http://arxiv.org/abs/1609.06756v1
|
Many regions across the globe broke their surface temperature records in recent years, further sparking concerns about the impending arrival of "tipping points" later in the 21st century. This study analyzes observed global surface temperature trends in three target latitudinal regions: the Arctic Circle, the Tropics, and the Antarctic Circle. We show that global warming is accelerating unevenly across the planet, with the Arctic warming at approximately three times the global average rate. We further analyze the reliability of latitude-dependent surface temperature simulations from a suite of Coupled Model Intercomparison Project Phase 6 models and their multi-model mean, and find that GISS-E2-1-G and FGOALS-g3 are the best-performing models based on their statistical ability to reproduce observed, latitude-dependent data. Surface temperatures are projected from ensemble simulations of the Shared Socioeconomic Pathway 2-4.5 (SSP2-4.5), and we estimate when the climate will warm by 1.5, 2.0, and 2.5 degrees C relative to the preindustrial period, globally and regionally. GISS-E2-1-G projects that global surface temperature anomalies will reach 1.5, 2.0, and 2.5 degrees C in 2024 (+/-1.34), 2039 (+/-2.83), and 2057 (+/-5.03) respectively, while FGOALS-g3 predicts these "tipping points" will arrive in 2024 (+/-2.50), 2054 (+/-7.90), and 2087 (+/-10.55) respectively. Our results reaffirm a dramatic upward trend in projected climate warming acceleration, with upward concavity in 21st-century projections for the Arctic, which could lead to catastrophic consequences across the Earth. Further studies are necessary to determine the most efficient solutions for reducing global warming acceleration and maintaining a low SSP, both globally and regionally.
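
A hedged sketch of the threshold-crossing estimate described above: find the first year a smoothed anomaly series reaches each target. The trend-plus-noise series is synthetic, not CMIP6/SSP2-4.5 model output.

```python
# Sketch: first year a smoothed temperature-anomaly series crosses a threshold.
import numpy as np

years = np.arange(2015, 2101)
rng = np.random.default_rng(1)
anomaly = 1.2 + 0.022 * (years - 2015) + rng.normal(0, 0.05, years.size)
smooth = np.convolve(anomaly, np.ones(11) / 11, mode="same")  # 11-yr running mean

for threshold in (1.5, 2.0, 2.5):
    crossed = np.nonzero(smooth >= threshold)[0]
    if crossed.size:
        print(f"{threshold:.1f} C first reached in {years[crossed[0]]}")
```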
|
https://arxiv.org/abs/2210.03245v2
|
For explaining electrical breakdown, field electron emission (FE) is a mechanism of interest. In the period 2006 to 2010 there were significant developments in basic FE theory, but these have not yet fully entered general thinking in technological FE areas, which are often still based on 1960s thinking or (in some contexts) 1920s thinking about FE theory. This paper outlines the history of FE theory and provides an overview of modern developments and of some related topics, in so far as these affect the interpretation of experiments and the explanation of physical phenomena. The paper concentrates on principles, with references given where details can be found. Some suggestions are made about moving to the use of "21st-Century" FE theory. In addition, an error in Feynman's treatment of the electrostatics of pointed conductors is displayed, and it is found that Zener tunneling is implausible as a primary cause of vacuum breakdown from a CuO overlayer.
|
https://arxiv.org/abs/2107.08801v2
|
Modern astronomy has been rapidly increasing our ability to see deeper into
the universe, acquiring enormous samples of cosmic populations. Gaining
astrophysical insights from these datasets requires a wide range of
sophisticated statistical and machine learning methods. Long-standing problems
in cosmology include characterization of galaxy clustering and estimation of
galaxy distances from photometric colors. Bayesian inference, central to
linking astronomical data to nonlinear astrophysical models, addresses problems
in solar physics, properties of star clusters, and exoplanet systems.
Likelihood-free methods are growing in importance. Detection of faint signals
in complicated noise is needed to find periodic behaviors in stars and detect
explosive gravitational wave events. Open issues concern treatment of
heteroscedastic measurement errors and understanding probability distributions
characterizing astrophysical systems. The field of astrostatistics needs
increased collaboration with statisticians in the design and analysis stages of
research projects, and joint development of new statistical methodologies.
Together, astronomers and statisticians will gain deeper astrophysical insights
into astronomical populations and the cosmos itself.
|
http://arxiv.org/abs/2005.13025v1
|
This chapter examines the motivations and imperatives for modernizing how statistical agencies approach statistical disclosure limitation for official data product releases. It discusses the implications for agencies' broader data governance and decision-making, and it identifies challenges that agencies will likely face along the way. In conclusion, the chapter proposes some principles and best practices that we believe can help guide agencies in navigating the transformation of their confidentiality programs.
|
https://arxiv.org/abs/2303.00845v1
|
Between the 4th and 6th of September 2024, the Astronomy & Astrophysics group at the University of Warwick held a meeting to celebrate 21 years of astronomy at Warwick and the scientific legacy of the late Prof. Tom Marsh, the group founder. More than a hundred people attended the meeting, with about half of the attendees being external delegates and coming from as far afield as the USA and South Africa. Tom Marsh moved to the University of Warwick from Southampton in 2003, after the Department of Physics decided to expand the scope of its research. From its humble beginnings with only two staff members, Tom himself and Boris G\"ansicke, one postdoc and a couple of PhD students, the group has now grown to more than 95 members, including 25 staff. Tom pioneered the development of Doppler tomography, led key discoveries in the field of double-degenerate binary systems and made extensive contributions to instrumentation, primarily to developing the high-speed imaging photometers ULTRACAM, ULTRASPEC and HiPERCAM. This article provides a summary of Tom's legacy and Warwick's history as presented in the 21 years of Astronomy at Warwick meeting.
|
https://arxiv.org/abs/2504.20954v1
|
Timing results for the black-widow pulsar J2051-0827 are presented, using a
21-year dataset from four European Pulsar Timing Array telescopes and the
Parkes radio telescope. This dataset, which is the longest published to date
for a black-widow system, allows for an improved analysis that addresses
previously unknown biases. While secular variations, as identified in previous
analyses, are recovered, short-term variations are detected for the first time.
Concurrently, a significant decrease of $\sim2.5\times10^{-3}$ cm$^{-3}$ pc in
the dispersion measure associated with PSR J2051-0827 is measured for the first
time, and improvements are also made to estimates of the proper motion. Finally,
PSR J2051-0827 is shown to have entered a relatively stable state suggesting
the possibility of its eventual inclusion in pulsar timing arrays.
|
http://arxiv.org/abs/1607.04167v1
|
Terahertz (THz) communication technology is regarded as a promising enabler for achieving ultra-high data rate transmission in next-generation communication systems. To mitigate the high path loss in THz systems, the transmitting beams are typically narrow and highly directional, which makes it difficult for a single beam to serve multiple users simultaneously. To address this challenge, reconfigurable intelligent surfaces (RIS), which can dynamically manipulate the wireless propagation environment, have been integrated into THz communication systems to extend coverage. Existing works, however, mostly remain at the level of theoretical analysis and simulation, and prototype validation of RIS-assisted THz communication systems is scarce. In this paper, we designed a liquid-crystal-based RIS operating at 220 GHz that supports both single-user and multi-user communication scenarios, followed by a RIS-aided THz communication system prototype. To enhance the system performance, we developed a beamforming method with real-time power feedback control that is compatible with both single-beam and multi-beam modes. To support simultaneous multi-user transmission, we designed an OFDM-based resource allocation scheme. In our experiments, the received power gain with the RIS is no less than 10 dB in the single-beam mode and no less than 5 dB in the multi-beam mode. With the assistance of the RIS, the achievable rate of the system reaches 2.341 Gbps with 3 users sharing 400 MHz of bandwidth, and the bit error rate (BER) of the system decreases sharply. Finally, an image transmission experiment was conducted to vividly show that the receiver can recover the transmitted information correctly with the help of the RIS. The experimental results also demonstrate that the received signal quality is enhanced through power feedback adjustments.
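
The power-feedback beamforming is described only at a high level; one plausible realization (our sketch, with a toy channel standing in for the 220 GHz hardware) is a per-element phase search driven by the received-power measurement:

```python
# Hedged sketch of power-feedback RIS beamforming: each element tries a small
# set of phase states and keeps the one that maximizes received power.
import numpy as np

rng = np.random.default_rng(0)
n_elements = 64
channel = rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)  # toy cascaded channel
phase_states = np.exp(1j * 2 * np.pi * np.arange(4) / 4)  # 2-bit phase control

def received_power(phases):
    """Stand-in for the real-time power measurement at the receiver."""
    return np.abs(np.sum(channel * phases)) ** 2

phases = np.ones(n_elements, dtype=complex)
for i in range(n_elements):
    # Greedy per-element search: keep the phase state with the highest power.
    phases[i] = max(phase_states, key=lambda s: received_power(
        np.concatenate([phases[:i], [s], phases[i + 1:]])))

print("optimized power:", received_power(phases))
```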
|
https://arxiv.org/abs/2502.16970v1
|
Owing to its abundant bandwidth resources, the Terahertz (THz) band (0.1-10~THz) is envisioned as a key technology for realizing ultra-high-speed communications in 6G and beyond wireless networks. Propagation analysis and channel characterization for reliable THz communications in urban microcell (UMi) environments are still insufficient. In this paper, channel measurement campaigns are conducted in a UMi scenario at 220~GHz, using a correlation-based time domain channel sounder. 24 positions are measured along a road on the university campus, with distances ranging from 34~m to 410~m. Based on the measurement results, the spatial consistency and the interaction of THz waves with the surrounding environment are analyzed. Moreover, the additional loss due to foliage blockage is calculated, and an average value of 16.7~dB is observed. Furthermore, a full portrait of channel characteristics, including path loss, shadow fading, K-factor, delay and angular spreads, as well as cluster parameters, is calculated and analyzed. Specifically, an average K-factor of 17.5 dB is measured in the line-of-sight (LoS) case, nearly twice as large as the values extrapolated from the 3GPP standard, revealing weak multipath effects in the THz band. Additionally, 2.5 clusters on average are observed in the LoS case, around one fifth of what is defined in the 3GPP model, which uncovers the strong sparsity of THz UMi channels. The results and analysis in this work can offer guidance for system design in future THz UMi networks.
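
For concreteness, the Ricean K-factor quoted above can be estimated from a power delay profile as the ratio of the dominant path power to the total remaining multipath power; the tap powers below are hypothetical, not the measured 220 GHz data.

```python
# Hedged sketch of a Ricean K-factor estimate from a power delay profile.
import numpy as np

tap_powers_dbm = np.array([-60.0, -78.0, -80.0, -83.0])  # hypothetical multipath taps
p = 10 ** (tap_powers_dbm / 10)                          # dBm -> linear (mW)
k_linear = p.max() / (p.sum() - p.max())                 # dominant path vs. the rest
k_db = 10 * np.log10(k_linear)
print(f"K-factor = {k_db:.1f} dB")                       # large K -> weak multipath
```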
|
https://arxiv.org/abs/2408.15772v1
|
In this work, the $^{222}$Rn contamination mechanisms on acrylic surfaces
have been investigated. $^{222}$Rn can represent a significant background
source for low-background experiments, and acrylic is a suitable material for
detector design thanks to its purity and transparency. Four acrylic samples
have been exposed to a $^{222}$Rn rich environment for different time periods,
being contaminated by $^{222}$Rn and its progenies. Subsequently, the time
evolution of the radiocontaminant activity on the samples has been evaluated
with $\alpha$ and $\gamma$ measurements, highlighting the role of different
decay modes in the contamination process. A detailed analysis of the alpha
spectra allowed us to quantify the implantation depth of the contaminants.
Moreover, a combined study of the $\alpha$ and $\gamma$ measurements revealed
$^{222}$Rn diffusion inside the samples.
|
http://arxiv.org/abs/1911.04836v1
|
The activity of $^{222}$Rn and its daughter isotopes was measured in the air of several underground laboratories of the Baksan Neutrino Observatory at various distances from the entrance. The measurements were carried out with the help of a cylindrical ionization air chamber. We found that, within the measurement accuracy, the radon content in the ventilated airflow does not depend on the distance travelled along the adit. In addition, we observed that the radon content increases abruptly in those locations where underground gases and water are released. Accordingly, we review various mechanisms of air enrichment with radon. We also outline our research methodology and present the results of our measurements of radon release from the rocky walls of the underground laboratory. Finally, we present the results of the measurements of the radon content of various ground and underground water sources.
|
https://arxiv.org/abs/2110.15289v2
|
The selection of low-radioactive construction materials is of utmost
importance for the success of low-energy rare event search experiments. Besides
radioactive contaminants in the bulk, the emanation of radioactive radon atoms
from material surfaces attains increasing relevance in the effort to further
reduce the background of such experiments. In this work, we present the
$^{222}$Rn emanation measurements performed for the XENON1T dark matter
experiment. Together with the bulk impurity screening campaign, the results
enabled us to select the radio-purest construction materials, targeting a
$^{222}$Rn activity concentration of 10 $\mu$Bq/kg in 3.2 t of xenon. The
knowledge of the distribution of the $^{222}$Rn sources allowed us to
selectively eliminate critical components in the course of the experiment. The
predictions from the emanation measurements were compared to data of the
$^{222}$Rn activity concentration in XENON1T. The final $^{222}$Rn activity
concentration of (4.5 $\pm$ 0.1) $\mu$Bq/kg in the target of XENON1T is the
lowest ever achieved in a xenon dark matter experiment.
|
http://arxiv.org/abs/2009.13981v2
|
After nearly fifty years of searching, the vacuum ultraviolet $^{229}$Th nuclear isomeric transition has recently been directly laser excited [1,2] and measured with high spectroscopic precision [3]. Nuclear clocks based on this transition are expected to be more robust [4,5] than and may outperform [6,7] current optical atomic clocks. They also promise sensitive tests for new physics beyond the standard model [5,8,9]. In light of these important advances and applications, a dramatic increase in the need for $^{229}$Th spectroscopy targets in a variety of platforms is anticipated. However, the growth and handling of high-concentration $^{229}$Th-doped crystals [5] used in previous measurements [1-3,10] are challenging due to the scarcity and radioactivity of the $^{229}$Th material. Here, we demonstrate a potentially scalable solution to these problems: laser excitation of the nuclear transition in $^{229}$ThF$_4$ thin films grown with a physical vapor deposition process, consuming only micrograms of $^{229}$Th material. The $^{229}$ThF$_4$ thin films are intrinsically compatible with photonics platforms and nanofabrication tools for integration with laser sources and detectors, paving the way for an integrated and field-deployable solid-state nuclear clock with radioactivity up to three orders of magnitude smaller than typical $^{229}$Th-doped crystals [1-3,10]. The high nuclear emitter density in $^{229}$ThF$_4$ also potentially enables quantum optics studies in a new regime. Finally, we describe the operation and present an estimate of the performance of a nuclear clock based on a defect-free ThF$_4$ crystal.
|
https://arxiv.org/abs/2410.01753v1
|
Ultraviolet (UV) light emission at 229 nm wavelength from diode structures
based on AlN/Al$_{0.77}$Ga$_{0.23}$N quantum wells and using p-type Si to significantly
increase hole injection was reported. Both electrical and optical
characteristics were measured. Owing to the large concentration of holes from
p-Si and efficient hole injection, no efficiency droop was observed up to a
current density of 76 A/cm2 under continuous wave operation and without
external thermal management. An optical output power of 160 $\mu$W was obtained
with corresponding external quantum efficiency of 0.027%. This study
demonstrates that by adopting p-type Si nanomembrane contacts as hole injector,
practical levels of hole injection can be realized in UV light-emitting diodes
with very high Al composition AlGaN quantum wells, enabling emission
wavelengths and power levels that were previously inaccessible using
traditional p-i-n structures with poor hole injection efficiency.
|
http://arxiv.org/abs/1708.03973v1
|
The recent laser excitation of the 229Th isomeric transition in a solid-state host opens the door for a portable solid-state nuclear optical clock. However, at present the vacuum-ultraviolet laser systems required for clock operation are not conducive to a fieldable form factor. Here, we propose a possible solution to this problem by using 229Th-doped nonlinear optical crystals, which would allow clock operation without a vacuum-ultraviolet laser system and without the need of maintaining the crystal under vacuum.
|
https://arxiv.org/abs/2410.23364v1
|
The half-BPS boundary conditions preserving $\mathcal{N}=(2,2)$ and
$\mathcal{N}=(0,4)$ supersymmetry in 3d $\mathcal{N}=4$ supersymmetric gauge
theories are examined. The BPS equations admit decomposition of the bulk
supermultiplets into specific boundary supermultiplets of preserved
supersymmetry. Nahm-like equations arise in the vector multiplet BPS boundary
condition preserving $\mathcal{N}=(0,4)$ supersymmetry and Robin-type boundary
conditions appear for the hypermultiplet coupled to vector multiplet when
$\mathcal{N}=(2,2)$ supersymmetry is preserved. The half-BPS boundary
conditions are realized in the brane configurations of Type IIB string theory.
|
http://arxiv.org/abs/1608.05363v4
|
The representation of parallax in virtual environments is still an open problem. Common algorithms, such as Bump Mapping, Parallax Mapping and Displacement Mapping, treat this problem for small disparities between a real object and a simplified model. This work introduces a new texture structure and one possible rendering algorithm able to display parallax for large disparities. The approach is based on the four-dimensional representation of the Light Field and is designed for positive parallax and for displaying the surfaces on the inside of our simplified model. These conditions are imposed to allow the free movement of an observer; if the observer's movement is restricted, they may be loosened. It is a high-storage, low-processing approach suitable for real-time systems. As an example, we develop a scene with several objects and simplify them by a single sphere that encloses them all; our system was able to run this scene at about 180 fps.
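
A rough illustration of the underlying idea, not the paper's actual texture structure: a 4D light field in the classic two-plane parameterization, sampled per ray with nearest-neighbor lookup. Resolutions and plane placement are simplifying assumptions.

```python
# Heavily simplified sketch of a 4D light-field lookup (two-plane
# parameterization); the paper's sphere-based structure is more involved.
import numpy as np

res_uv, res_st = 8, 64
light_field = np.random.default_rng(0).random((res_uv, res_uv, res_st, res_st, 3))

def sample(origin, direction):
    """Intersect a ray with the planes z=0 (uv) and z=1 (st), then fetch the
    nearest stored radiance sample."""
    t_uv = (0.0 - origin[2]) / direction[2]
    t_st = (1.0 - origin[2]) / direction[2]
    u, v = origin[:2] + t_uv * direction[:2]
    s, t = origin[:2] + t_st * direction[:2]
    to_idx = lambda x, res: int(np.clip((x + 1) / 2 * (res - 1), 0, res - 1))
    return light_field[to_idx(u, res_uv), to_idx(v, res_uv),
                       to_idx(s, res_st), to_idx(t, res_st)]

print(sample(np.array([0.1, 0.2, -1.0]), np.array([0.0, 0.0, 1.0])))
```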
|
https://arxiv.org/abs/2402.16815v1
|
We provide bounds on the compression size of the solutions to 22 problems in computer science. For each problem, we show that solutions exist with high probability, for some simple probability measure. Once this is proven, derandomization can be used to prove the existence of a simple solution.
|
https://arxiv.org/abs/2208.11562v4
|
Using gauge theory, we describe how to construct generalized Kahler
geometries with (2,2) two-dimensional supersymmetry, which are analogues of
familiar examples like projective spaces and Calabi-Yau manifolds. For special
cases, T-dual descriptions can be found which are squashed Kahler spaces. We
explore the vacuum structure of these gauge theories by studying the Coulomb
branch, which usually encodes the quantum cohomology ring. Some models without
Kahler dual descriptions possess unusual Coulomb branches. Specifically, there
appear to be an infinite number of supersymmetric vacua.
|
http://arxiv.org/abs/1810.01388v2
|
Quadratic gravity illustrates how a replacement for black holes can emerge from a UV completion of gravity. 2-2-holes are extremely compact horizonless objects with an entropy $S_{22}$ due to trapped normal matter, and in this way they are conceptually easy to understand. But the field equations are cumbersome and the numerical analysis has so far been restricted to relatively small size solutions. Here we show how the properties of arbitrarily large 2-2-holes can be found, including the time delay for gravitational wave echoes and the result $T_\infty S_{22}=M/2$. The starting point is to formulate the metric in terms of the tortoise coordinate, and to have one of the two metric functions be a conformal factor. A large conformally-related volume becomes associated with the interior of a 2-2-hole. We also discuss implications for the weak gravity conjecture.
|
https://arxiv.org/abs/2202.08442v2
|
The Target Absorbers for Neutrals (TANs) are located in a high-intensity radiation environment inside the tunnel of the Large Hadron Collider (LHC). TANs are positioned about $140$ m downstream from the beam interaction points. Seven $40$ cm long fused silica rods with different dopant specifications were irradiated in the TAN by the Beam RAte of Neutrals (BRAN) detector group during $p$+$p$ data taking from 2016 to 2018 at the LHC. The peak dose delivered to the fused silica rods was $18$ MGy. We report measurements of the $^{22}$Na activation of the fused silica rods carried out at the University of Illinois at Urbana-Champaign and Argonne National Laboratory. At the end of the irradiation campaign, the maximum $^{22}$Na activity observed was $A=21$ kBq$/{\rm cm^3}$ corresponding to a density, $\rho= 2.5\times 10^{12} /{\rm cm^3}$, of $^{22}$Na nuclei. FLUKA Monte Carlo simulations have been performed by the CERN FLUKA team to estimate $^{22}$Na activities for the irradiated BRAN rod samples. The simulations reproduce the $^{22}$Na activity profile measured along the rods, while underestimating the measured activities by 35%.
|
https://arxiv.org/abs/2204.01937v2
|
We investigate the impact of the new LUNA rate for the nuclear reaction
$^{22}$Ne$(p,\gamma)^{23}$Na on the chemical ejecta of intermediate-mass stars,
with particular focus on the thermally-pulsing asymptotic giant branch (TP-AGB)
stars that experience hot-bottom burning. To this aim we use the PARSEC and
COLIBRI codes to compute the complete evolution, from the pre-main sequence up
to the termination of the TP-AGB phase, of a set of stellar models with initial
masses in the range $3.0\,M_{\odot} - 6.0\,M_{\odot}$, and metallicities
$Z_{\rm i}=0.0005$, $Z_{\rm i}=0.006$, and $Z_{\rm i} = 0.014$. We find that
the new LUNA measurements have greatly reduced the nuclear uncertainties of the
$^{22}$Ne and $^{23}$Na AGB ejecta, which drop from factors of $\simeq 10$ to
only a factor of a few for the lowest-metallicity models.
recent estimations for the destruction rate of $^{23}$Na, the uncertainties
that still affect the $^{22}$Ne and $^{23}$Na AGB ejecta are mainly dominated
by evolutionary aspects (efficiency of mass-loss, third dredge-up, convection).
Finally, we discuss how the LUNA results impact the hypothesis that invokes
massive AGB stars as the main agents of the observed O-Na anti-correlation in
Galactic globular clusters. We derive quantitative indications on the
efficiencies of key physical processes (mass loss, third dredge-up, sodium
destruction) in order to simultaneously reproduce both the Na-rich, O-poor
extreme of the anti-correlation, and the observational constraints on the CNO
abundance. Results for the corresponding chemical ejecta are made publicly
available.
|
http://arxiv.org/abs/1611.07742v1
|
The precise astrometric measurements of the Gaia Data Release 2 have opened the door to detailed tests of the predictions of white dwarf cooling models. Significant discrepancies between theory and observations have been identified, the most striking affecting ultramassive white dwarfs. Cheng et al. (2019) found that a small fraction of white dwarfs on the so-called Q branch must experience an extra cooling delay of $\sim 8\,$Gyr not predicted by current models. $^{22}$Ne phase separation in a crystallizing C/O white dwarf can lead to a distillation process that efficiently transports $^{22}$Ne toward its center, thereby releasing a considerable amount of gravitational energy. Using state-of-the-art Monte Carlo simulations, we show that this mechanism can largely resolve the ultramassive cooling anomaly if the delayed population consists of white dwarfs with moderately above-average $^{22}$Ne abundances. We also argue that $^{22}$Ne phase separation can account for the smaller cooling delay currently missing for models of white dwarfs with more standard compositions.
|
https://arxiv.org/abs/2103.12892v1
|
In view of recent progress in studying matrix model-2D gravity duality, we reexamine some features of the $(2,2p+1)$ minimal string. After reviewing both sides of the proposed correspondence in this case, a previously unnoted identification between correlation numbers of tachyon operators in a certain domain of parameter space and "$p$-deformed volumes", which are certain integral transforms of topological recursion data, is described and clarified. This identification allows us to efficiently study correlation numbers at finite matter central charge. In particular, we obtain an intersection-theoretic formula and the simplest recurrent equations for them, analogous to the ones recently derived for the Virasoro minimal string. These formulas might be useful in establishing a more thorough connection between worldsheet and matrix model approaches.
|
https://arxiv.org/abs/2403.02305v4
|
Analytic continuation from Minkowski space to $(2,2)$ split signature spacetime has proven to be a powerful tool for the study of scattering amplitudes. Here we show that, under this continuation, null infinity becomes the product of a null interval with a celestial torus (replacing the celestial sphere) and has only one connected component. Spacelike and timelike infinity are time-periodic quotients of AdS$_3$. These three components of infinity combine to an $S^3$ represented as a toric fibration over the interval. Privileged scattering states of scalars organize into $SL(2,\mathbb{R})_L \times SL(2,\mathbb{R})_R$ conformal primary wave functions and their descendants with real integral or half-integral conformal weights, giving the normally continuous scattering problem a discrete character.
|
https://arxiv.org/abs/2101.09591v1
|
We find a simple relation between two-dimensional BPS N=2 superconformal
blocks and bosonic Virasoro conformal blocks, which allows us to analyze the
crossing equations for BPS 4-point functions in unitary (2,2) superconformal
theories numerically with semidefinite programming. We constrain gaps in the
non-BPS spectrum through the operator product expansion of BPS operators, in
ways that depend on the moduli of exactly marginal deformations through chiral
ring coefficients. In some cases, our bounds on the spectral gaps are observed
to be saturated by free theories, by N=2 Liouville theory, and by certain
Landau-Ginzburg models.
|
http://arxiv.org/abs/1610.05371v1
|
Using continuous observations for 22 years from ground-based network GONG and
space-borne instruments MDI onboard {\it SoHO} and HMI onboard {\it SDO}, we
report both global and local properties of the convection zone and their
variations with time.
|
http://arxiv.org/abs/1805.05371v1
|
Flaky tests are tests that can non-deterministically pass or fail, even in the absence of code changes. Despite being a source of false alarms, flaky tests often remain in test suites once they are detected, as they also may be relied upon to detect true failures. Hence, a key open problem in flaky test research is: how to quickly determine whether a test failed due to flakiness or detected a bug? The state of the practice is for developers to re-run failing tests: if a test fails and then passes, it is flaky by definition; if the test persistently fails, it is likely a true failure. However, this approach can be both ineffective and inefficient. An alternate approach that developers may already use for triaging test failures is failure de-duplication, which matches newly discovered test failures to previously witnessed flaky and true failures. However, because flaky test failure symptoms might resemble those of true failures, there is a risk of misclassifying a true test failure as a flaky failure to be ignored. Using a dataset of 498 flaky tests from 22 open-source Java projects, we collect a large dataset of 230,439 failure messages (both flaky and not), allowing us to empirically investigate the efficacy of failure de-duplication. We find that for some projects, this approach is extremely effective (with 100\% specificity), while for other projects, the approach is entirely ineffective. By analyzing the characteristics of these flaky and non-flaky failures, we provide useful guidance on how developers should rely on this approach.
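
A minimal sketch of the failure de-duplication idea under our own assumptions about message normalization (the paper's matching procedure may differ):

```python
# Hedged sketch: match a new failure message against previously labeled flaky
# failure signatures; unmatched failures are triaged as potential true bugs.
import re

def normalize(message: str) -> str:
    """Strip volatile details (hex addresses, numbers, temp paths) so that
    recurrences of the same failure produce identical signatures."""
    message = re.sub(r"0x[0-9a-fA-F]+", "<addr>", message)
    message = re.sub(r"\d+", "<num>", message)
    return re.sub(r"/tmp/\S+", "<tmpfile>", message).strip()

known_flaky = {normalize("Timeout after 30000 ms waiting on port 8080")}

def triage(new_failure: str) -> str:
    return "likely flaky" if normalize(new_failure) in known_flaky else "possible true failure"

print(triage("Timeout after 45000 ms waiting on port 9090"))  # likely flaky
print(triage("AssertionError: expected [1, 2] but was [1]"))  # possible true failure
```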
|
https://arxiv.org/abs/2401.15788v1
|
We report on 230 GHz (1.3 mm) VLBI observations of M87 with the Event Horizon
Telescope using antennas on Mauna Kea in Hawaii, Mt. Graham in Arizona and
Cedar Flat in California. For the first time, we have acquired 230 GHz VLBI
interferometric phase information on M87 through measurement of closure phase
on the triangle of long baselines. Most of the measured closure phases are
consistent with 0$^{\circ}$ as expected by physically-motivated models for 230
GHz structure such as jet models and accretion disk models. The brightness
temperature of the event-horizon-scale structure is $\sim 1 \times 10^{10}$ K
derived from the compact flux density of $\sim 1$ Jy and the angular size of
$\sim 40 $ $\rm \mu$as $\sim$ 5.5 $R_{{\rm s}}$, which is broadly consistent
with the peak brightness of the radio cores at 1-86 GHz located within $\sim
10^2$ $R_{{\rm s}}$. Our observations occurred in the middle of an enhancement
in very-high-energy (VHE) $\rm \gamma$-ray flux, presumably originating in the
vicinity of the central black hole. Our measurements, combined with results of
multi-wavelength observations, favor a scenario in which the VHE region has an
extended size of $\sim$20-60 $R_{{\rm s}}$.
|
http://arxiv.org/abs/1505.03545v3
|
We introduce a systematic method to classify the Standard Model Effective Field Theory (SMEFT) operators based on their CP properties with the Hilbert series techniques. Our method makes it possible to enumerate operators violating CP symmetry in a few seconds. We present the complete classification of dimension-eight operators under CP transformation, and the number of CP-odd or CP-violating operators is listed up to dimension 14. We also provide a companion code in FORM that allows anybody to reproduce our results.
|
https://arxiv.org/abs/2212.02413v3
|
This paper reports the first piezoelectric acoustic filter in periodically poled piezoelectric film (P3F) lithium niobate (LiNbO3) at 23.8 GHz, with a low insertion loss (IL) of 1.52 dB and a 3-dB fractional bandwidth (FBW) of 19.4%. The filter features a compact footprint of 0.64 mm$^2$. The third-order ladder filter is implemented with electrically coupled resonators in a 150 nm bi-layer P3F 128° rotated Y-cut LiNbO3 thin film, operating in the second-order symmetric (S2) Lamb mode. The record-breaking performance is enabled by the P3F LiNbO3 platform, where piezoelectric thin films of alternating orientations are transferred successively, facilitating efficient higher-order Lamb mode operation with simultaneously high quality factor (Q) and coupling coefficient (k$^2$) at millimeter-wave (mmWave). Also, the multi-layer P3F stack promises smaller footprints and better nonlinearity than single-layer counterparts, thanks to the higher capacitance density and lower thermal resistance. Upon further development, the reported P3F LiNbO3 platform is promising for compact filters at mmWave.
|
https://arxiv.org/abs/2402.12194v2
|
We present a detailed nuclear magnetic resonance (NMR) study of ${}^{239}$Pu
in bulk and powdered single-crystal plutonium tetraboride (PuB$_4$), which has
recently been investigated as a potential correlated topological insulator.
This study constitutes the second-ever observation of the ${}^{239}$Pu NMR
signal, and provides unique on-site sensitivity to the rich $f$-electron
physics and insight into the bulk gap-like behavior in PuB$_4$. The
${}^{239}$Pu NMR spectra are consistent with axial symmetry of the shift tensor
showing for the first time that ${}^{239}$Pu NMR can be observed in an
anisotropic environment and up to room temperature. The temperature dependence
of the ${}^{239}$Pu shift, combined with a relatively long spin-lattice
relaxation time ($T_1$), indicates that PuB$_4$ adopts a non-magnetic state with
gap-like behavior consistent with our density functional theory (DFT)
calculations. The temperature dependencies of the NMR Knight shift and
$T_1^{-1}$--microscopic quantities sensitive only to bulk states--imply bulk
gap-like behavior confirming that PuB$_4$ is a good candidate topological
insulator. The large contrast between the ${}^{239}$Pu orbital shifts in the
ionic insulator PuO$_2$ ($\sim$~+24.7~\%) and PuB$_4$ ($\sim$~-0.5~\%) provides
a new tool to investigate the nature of chemical bonding in plutonium
materials.
|
http://arxiv.org/abs/1812.09202v1
|
We further develop the massive constructive theory of the Standard Model and
use it to calculate the amplitude and squared amplitude for all two-body
decays, a collection of weak three-body decays, as well as Higgs decay to four
neutrinos. We compare our results with those from Feynman diagrams and find
complete agreement. We show that in all the cases considered here, the
amplitudes of massive constructive theories are significantly simpler than
those resulting from Feynman diagrams. In fact, a naive counting of the number
of calculations required for a matrix-element generator to compute a
phase-space point is orders of magnitude smaller for the result coming from the
constructive method, suggesting that these generators might benefit from this
method in the future, even in the case of massive weak amplitudes. We also
anticipate that our simpler expressions will be numerically more stable.
|
http://arxiv.org/abs/1909.09164v2
|
Recent progress in genomics is bringing genetic testing to the masses.
Participatory public initiatives are underway to sequence the genome of
millions of volunteers, and a new market is booming with a number of companies
like 23andMe and AncestryDNA offering affordable tests directly to consumers.
Consequently, news, experiences, and views on genetic testing are increasingly
shared and discussed online and on social networks like Twitter. In this paper,
we present a large-scale analysis of Twitter discourse on genetic testing. We
collect 302K tweets from 113K users, posted over 2.5 years, by using thirteen
keywords related to genetic testing companies and public initiatives as search
keywords. We study both the tweets and the users posting them along several
axes, aiming to understand who tweets about genetic testing, what they talk
about, and how they use Twitter for that. Among other things, we find that
tweets about genetic testing originate from accounts that overall appear to be
interested in digital health and technology. Also, marketing efforts as well as
announcements, such as the FDA's suspension of 23andMe's health reports,
influence the type and the nature of user engagement. Finally, we report on
users who share screenshots of their results, and raise a few ethical and
societal questions as we find evidence of groups associating genetic testing
with racist ideologies.
|
http://arxiv.org/abs/1801.09946v2
|
The global influence of Big Data is not only growing but seemingly endless.
The trend is leaning towards knowledge that is attained easily and quickly from
massive pools of Big Data. Today we are living in the technological world that
Dr. Usama Fayyad and his distinguished research fellows predicted nearly two
decades ago in their introductory explanations of Knowledge Discovery in
Databases (KDD). Indeed, they were precise in their outlook on Big Data
analytics. The continued interplay of machine learning, statistics, and
database building and querying has fused to create this increasingly popular
science: Data Mining and Knowledge Discovery. The
next generation computational theories are geared towards helping to extract
insightful knowledge from even larger volumes of data at higher rates of speed.
As the trend increases in popularity, the need for a highly adaptive solution
for knowledge discovery will be necessary. In this research paper, we are
introducing the investigation and development of 23 bit-questions for a
Metaknowledge template for Big Data Processing and clustering purposes. This
research aims to demonstrate the construction of this methodology and to prove
its validity and the benefits it brings to Knowledge Discovery from Big Data.
|
http://arxiv.org/abs/1503.00244v1
|
In this article we investigate the existence of (2,3)-cordial labelings of oriented hypercubes. In this investigation, we determine that there exists a (2,3)-cordial oriented hypercube for any dimension divisible by 3. Next, we provide examples of (2,3)-cordial oriented hypercubes of dimension not divisible by 3 and state a conjecture on existence for dimension 3k + 1. We close by presenting the only 3D oriented hypercubes up to isomorphism that are not (2,3)-cordial.
|
https://arxiv.org/abs/2012.11091v3
|
Recently L. B. Beasley introduced $(2,3)$-cordial labelings of directed graphs in [1]. He made two conjectures which we resolve in this article: that every orientation of a path of length at least five is $(2,3)$-cordial, and that every tree of maximum degree $3$ has a $(2,3)$-cordial orientation. We show these two conjectures to be false. We also discuss the $(2,3)$-cordiality of orientations of the Petersen graph, and establish an upper bound on the number of edges a graph can have and still be $(2,3)$-cordial. An application of $(2,3)$-cordial labelings is also presented.
|
https://arxiv.org/abs/2012.10591v2
|
Coordinating the motion of robots with high degrees of freedom (DoF) to grasp objects gives rise to many challenges. In this paper, we propose a novel imitation learning approach to learn a policy that directly predicts 23 DoF grasp trajectories from a partial point cloud provided by a single, fixed camera. At the core of the approach is a second-order geometric-based model of behavioral dynamics. This Neural Geometric Fabric (NGF) policy predicts accelerations directly in joint space. We show that our policy is capable of generalizing to novel objects, and combine our policy with a geometric fabric motion planner in a loop to generate stable grasping trajectories. We evaluate our approach on a set of three different objects, compare different policy structures, and run ablation studies to understand the importance of different object encodings for policy learning.
|
https://arxiv.org/abs/2411.14400v1
|
Formation evaluation of unconventional reservoirs is challenging due to the
coexistence of different phases such as kerogen, bitumen, movable and bound
light hydrocarbon and water. Current low-frequency (0.05 T) nuclear magnetic
resonance (NMR) laboratory and logging methods are incapable of quantitatively
separating the different phases. We demonstrate the utility of high-field (9 T)
NMR 2D T1-T2 measurements for separating hydrocarbon and the clay-interacting
aqueous phases in shale based on the difference in the frequency dependence of
the spin-lattice relaxation time. Furthermore, we demonstrate 23Na NMR as a
promising complementary technique to conventional 1H NMR for shale fluid
typing, taking advantage of the fact that sodium ions are only present in the
aqueous phase. We validate high-field (9 T) 23Na-1H NMR relaxometry for
assessing brine-filled porosity and brine salinity in various porous materials,
including porous glass, conventional rocks, clays, and shale, and apply it for
differentiating hydrocarbon versus aqueous components and also the
clay-associated versus free water in Eagle Ford shale cores. This work lays the
groundwork for developing future downhole 23Na-1H NMR logging techniques.
|
http://arxiv.org/abs/1604.07731v1
|
We report on $^{23}$Na NMR studies of the honeycomb-lattice antiferromagnet
Na$_2$Ni$_2$TeO$_6$ using $^{23}$Na nuclear spin-echo techniques. The $^{23}$Na
nuclear spin-lattice relaxation rate $1/^{23}T_1$ exhibits critical divergence
near the Neel temperature $T_N = 26$ K, a narrow critical region, and a
critical exponent $w = 0.34$ in $1/^{23}T_1 = a\,(T/T_N - 1)^{-w}$ for
Na$_2$Ni$_2$TeO$_6$, while $T_N = 18$ K for
Na$_2$(Ni$_{0.5}$Cu$_{0.5}$)$_2$TeO$_6$. Although the uniform magnetic
susceptibility of Na$_2$Ni$_2$TeO$_6$ exhibits a broad maximum at 35 K,
characteristic of low-dimensional spin systems, the NMR results indicate a
three-dimensional critical phenomenon around the Neel temperature.
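
For illustration, the critical exponent can be extracted by a log-log fit of the power law; the $1/T_1$ values below are synthetic, not the measured data:

```python
# Hedged sketch: extract w from 1/T1 = a*(T/TN - 1)^(-w) via a log-log fit.
import numpy as np

TN = 26.0                                   # K, Neel temperature from the abstract
T = np.array([27.0, 28.0, 30.0, 34.0, 40.0])
rate = 5.0 * (T / TN - 1) ** -0.34          # synthetic 1/T1 with w = 0.34

x = np.log(T / TN - 1)
slope, intercept = np.polyfit(x, np.log(rate), 1)
print(f"w = {-slope:.2f}, a = {np.exp(intercept):.2f}")   # recovers w = 0.34
```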
|
http://arxiv.org/abs/1504.03003v1
|
In this article, we present a measurement of flow rate, yield and effusion
time of a $^{23}$Ne production and transport system. We used an
accelerator-driven Li(d,n) neutron source to produce neutrons up to 20 MeV. The
radioactive atoms were produced by a $^{23}$Na(n,p) reaction at a NaCl target.
Later, the atoms were diffused out from the NaCl crystals and effused from the
production chamber via a 10 m hose to a measurement cell and their decay
products were detected using high purity germanium (HPGe) and plastic
scintillator detectors. The resulting flow rate was $6.9\pm0.5\cdot
10^4\sfrac{atoms}{sec}$ and the total yield was
$3.2\pm0.4\cdot10^{-9}\sfrac{atoms}{deuteron}$. We summarize our methods and
estimates of efficiencies, rates of production and effusion.
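
As a consistency check on the quoted numbers (our own arithmetic, not stated in the abstract), dividing the flow rate by the per-deuteron yield gives the implied deuteron rate on target:

$$
\frac{(6.9\pm0.5)\times10^{4}\ \mathrm{atoms/s}}{(3.2\pm0.4)\times10^{-9}\ \mathrm{atoms/deuteron}}
\approx 2.2\times10^{13}\ \mathrm{deuterons/s},
$$

equivalent to a beam current of roughly $3.5\ \mu$A.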
|
http://arxiv.org/abs/2006.00478v1
|
We show that the growth of the principal M\"obius function on the permutation
poset is exponential. This improves on previous work, which has shown that the
growth is at least polynomial. We define a method of constructing a permutation
from a smaller permutation which we call "ballooning". We show that if $\beta$
is a 2413-balloon, and $\pi$ is the 2413-balloon of $\beta$, then $\mu[1, \pi]
= 2 \mu[1, \beta]$. This allows us to construct a sequence of permutations
$\pi_1, \pi_2, \pi_3\ldots$ with lengths $n, n+4, n+8, \ldots$ such that $\mu
[1, \pi_{i+1}] = 2 \mu [1, \pi_{i}]$, and this gives us exponential growth.
Further, our construction method gives permutations that lie within a
hereditary class with finitely many simple permutations. We also find an
expression for the value of $\mu[1, \pi]$, where $\pi$ is a 2413-balloon, with
no restriction on the permutation being ballooned.
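
Spelling out the growth rate implied by the doubling relation above (a restatement of the abstract's argument, with $C$ our shorthand):

$$
\mu[1,\pi_k] = 2^{\,k-1}\,\mu[1,\pi_1],\qquad |\pi_k| = n + 4(k-1)
\;\Longrightarrow\;
\mu[1,\pi_k] = C\cdot 2^{|\pi_k|/4},\quad C = \mu[1,\pi_1]\,2^{-n/4},
$$

so the principal Möbius function grows at least like $2^{n/4}\approx(1.19)^{n}$ along this sequence.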
|
http://arxiv.org/abs/1812.05064v3
|
The reaction $^{243}$Am(n,2n) populates either the ground state $^{242g}$Am ($J=1$, $T_{1/2}=16$ hours) or the isomeric state $^{242m}$Am ($J=5$, $T_{1/2}=141$ years). The ground state $^{242g}$Am mostly beta-decays to $^{242}$Cm, or transmutes to $^{242}$Pu via electron capture. The calculated absolute yield of $^{242g}$Am is compatible with the measured data, estimated from the alpha activity of $^{242}$Cm by Norris in 1983. The branching ratio is defined as the ratio of the populations of the lowest intrinsic states of $^{242}$Am. Calculated yields of the ground state $^{242g}$Am and the isomeric state $^{242m}$Am of the residual nucleus $^{242}$Am are used to predict the relative yield of the isomer. These populations are determined by the gamma decay of the excited states, described by the standard kinetic equation. The ordering of the low- and high-spin states differs between the $^{236}$Np and $^{242}$Am nuclei, which explains the different shapes of the relative yields near the (n,2n) reaction threshold, even though the excitation-energy dependences are similar. Data on $^{243}$Am(n,F) at 5 MeV and 15 MeV by Drapchinsky in 2004 support the calculated $^{243}$Am(n,xnf) prefission neutron contribution to prompt fission neutron spectra and the calculated exclusive neutron spectra of $^{243}$Am(n,2n) feeding the $^{242g}$Am ground and $^{242m}$Am isomeric states.
|
https://arxiv.org/abs/2406.15445v1
|
We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data.
|
http://openaccess.thecvf.com/content_cvpr_2015/html/Torii_247_Place_Recognition_2015_CVPR_paper.html
|
A defective $k$-coloring is a coloring of the vertices of a graph using colors $1,2, \dots, k$ in which adjacent vertices may share the same color. A $(d_1,d_2)$-\emph{coloring} of a graph $G$ is a defective $2$-coloring of $G$ such that any vertex colored $i$ has at most $d_i$ adjacent vertices of the same color, where $i\in\{1,2\}$. A graph $G$ is said to be $(d_1,d_2)$-\emph{colorable} if it admits a $(d_1,d_2)$-coloring. Defective $2$-coloring of planar graphs without $3$-cycles, $4$-cycles, and $6$-cycles has been investigated by Dross and Ochem, as well as Sittitrai and Pimpasalee, who showed that such graphs are $(0,6)$-colorable and $(3,3)$-colorable, respectively. In this paper, we prove that these graphs are also $(2,4)$-colorable.
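
A small illustrative checker (our own sketch) for the defining condition of a $(d_1,d_2)$-coloring:

```python
# Verify that a 2-coloring is a (d1, d2)-coloring: every vertex colored i has
# at most d_i neighbors of the same color.
def is_d1_d2_coloring(adj, color, d):
    """adj: dict vertex -> iterable of neighbors; color: dict vertex -> 1 or 2;
    d: (d1, d2) defect bounds."""
    for v, neighbors in adj.items():
        same = sum(1 for u in neighbors if color[u] == color[v])
        if same > d[color[v] - 1]:
            return False
    return True

# A triangle admits a (2,4)-coloring trivially, but not a (0,0)-coloring
# (which would be a proper 2-coloring of an odd cycle).
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
col = {0: 1, 1: 1, 2: 2}
print(is_d1_d2_coloring(triangle, col, (2, 4)))  # True
print(is_d1_d2_coloring(triangle, col, (0, 0)))  # False
```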
|
https://arxiv.org/abs/2501.07129v1
|
The fluxonium qubit is a promising building block for quantum information processing due to its long coherence time and strong anharmonicity. In this paper, we realize a 60 ns direct CNOT gate on two inductively-coupled fluxonium qubits using the selective darkening approach, resulting in a gate fidelity as high as 99.94%. The fidelity remains above 99.9% for 24 days without any recalibration between randomized benchmarking measurements. Compared with the 99.96% fidelity of a 60 ns identity gate, our data brings the investigation of the non-decoherence-related errors during gate operations down to $2 \times 10^{-4}$. The present result adds a simple and robust two-qubit gate to the still relatively small family of "beyond three nines" demonstrations on superconducting qubits.
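
The error budget behind the last two figures can be made explicit (our notation; this subtraction is the comparison the abstract describes):

$$
\epsilon_{\mathrm{CNOT}} = 1 - 0.9994 = 6\times10^{-4},\qquad
\epsilon_{\mathrm{idle}} = 1 - 0.9996 = 4\times10^{-4},
$$

$$
\epsilon_{\mathrm{non\text{-}decoherence}} \approx \epsilon_{\mathrm{CNOT}} - \epsilon_{\mathrm{idle}} = 2\times10^{-4}.
$$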
|
https://arxiv.org/abs/2407.15783v2
|
In order to provide a cloud service for optical quantum computing, it is essential to stabilize the optical system for many hours. It is advantageous to construct a fiber-based system, which does not require spatial alignment. However, fiber-based systems are instead subject to fiber-specific instabilities: phase drifts due to ambient temperature changes and external disturbances, and polarization fluctuations due to the finite polarization extinction ratio of fiber components. Here, we report the successful measurement of squeezed light with a fiber system over 24 hours. To do this, we introduce stabilization mechanisms to suppress fluctuations in the fiber system and an integrated controller to automatically align the entire system. The squeezed light at a wavelength of 1545.3 nm is measured every 2 minutes, with automated alignments inserted every 30 minutes. Squeezing levels averaging -4.42 dB are recorded with an extremely small standard deviation of 0.08 dB over 24 hours.
|
https://arxiv.org/abs/2401.17533v1
|
Bit commitment is a fundamental cryptographic primitive in which a party
wishes to commit a secret bit to another party. Perfect security between
mistrustful parties is unfortunately impossible to achieve through the
asynchronous exchange of classical and quantum messages. Perfect security can
nonetheless be achieved if each party splits into two agents exchanging
classical information at times and locations satisfying strict relativistic
constraints. A relativistic multi-round protocol to achieve this was previously
proposed and used to implement a 2~millisecond commitment time. Much longer
durations were initially thought to be insecure, but recent theoretical
progress showed that this is not so. In this letter, we report on the
implementation of a 24-hour bit commitment based on timed high-speed optical
communication and fast data processing only, with all agents located within the
city of Geneva. This duration is more than six orders of magnitude longer than
before, and we argue that it could be extended to one year and allow much more
flexibility on the locations of the agents. Our implementation offers a
practical and viable solution for use in applications such as digital
signatures, secure voting and honesty-preserving auctions.
|
http://arxiv.org/abs/1605.07442v1
|
The $^{24}$Mg($p$, $\alpha$)$^{21}$Na reaction was measured at the Holifield
Radioactive Ion Beam Facility at Oak Ridge National Laboratory in order to
better constrain spins and parities of energy levels in $^{21}$Na for the
astrophysically important $^{17}$F($\alpha, p$)$^{20}$Ne reaction rate
calculation. 31 MeV proton beams from the 25-MV tandem accelerator and enriched
$^{24}$Mg solid targets were used. Recoiling $^{4}$He particles from the
$^{24}$Mg($p$, $\alpha$)$^{21}$Na reaction were detected by a highly segmented
silicon detector array which measured the yields of $^{4}$He particles over a
range of angles simultaneously. A new level at 6661 $\pm$ 5 keV was observed in
the present work. The extracted angular distributions for the first four levels
of $^{21}$Na and Distorted Wave Born Approximation (DWBA) calculations were
compared to verify and extract angular momentum transfer.
|
http://arxiv.org/abs/1508.02128v1
|
Given $d \in \mathbb{N}$, we prove that all smooth K3 surfaces (over any field of characteristic $p$ other than 2, 3) of degree greater than $84d^2$ contain at most 24 rational curves of degree at most $d$. In the exceptional characteristics, the same bounds hold for non-unirational K3 surfaces, and we develop analogous results in the unirational case. For $d \geq 3$, we also construct K3 surfaces of any degree greater than $4d(d+1)$ with 24 rational curves of degree exactly $d$, thus attaining the above bounds.
|
https://arxiv.org/abs/1907.04182v3
|
Aims: We present 24 synoptic maps of solar filaments, in which the average unambiguous magnetic field vectors of 296 prominences were determined with Pic-du-Midi observations between 1974 and 1982. This was the ascending phase of cycle 21. Methods: The magnetic field was determined by interpreting the Hanle effect, which is observed in the \ion{He}{i} D$_3$ line. Previous results for the prominence field polarity and prominence chirality were applied to solve the fundamental ambiguity. The measurements were averaged in each prominence for accuracy reasons. Results: The result is twofold. First, alternating field directions can be observed from one neutral line to the next. Second, a general field alignment is found along a solar north-south field that is distorted by the differential rotation effect. The numerical data for the prominences and their magnetic field coordinates are provided as online material associated with this paper.
|
https://arxiv.org/abs/2007.08219v4
|
We demonstrate a spectrally-sliced single-polarization optical coherent receiver with a record 2.4-THz bandwidth, using a 200-GHz tantalum pentoxide photonic crystal microring resonator as the local oscillator frequency comb.
|
https://arxiv.org/abs/2407.04060v1
|
Massively parallel multi-object spectrographs are on the leading edge of cosmology instrumentation. The highly successful Dark Energy Spectroscopic Instrument (DESI), which began survey operations in May 2021, for example, has 5,000 robotically-actuated multimode fibers, which deliver light from thousands of individual galaxies and quasars simultaneously to an array of high-resolution spectrographs off-telescope. The redshifts are individually measured, thus providing 3D maps of the Universe in unprecedented detail, and enabling precise measurement of dark energy expansion and other key cosmological parameters. Here we present new work in the design and prototyping of the next generation of fiber-positioning robots. At 6.2 mm center-to-center pitch, with 1-2 $\mu$m positioning precision, and in a scalable form factor, these devices will enable the next generation of cosmology instruments, scaling up to instruments with 10,000 to 25,000 fiber robots.
|
https://arxiv.org/abs/2212.07908v1
|
The nitrate radical NO$_3$ plays an important role in atmospheric chemistry, yet many aspects of its coupled and anharmonic vibronic structure remain elusive. Here, using an accurate, coupled full-dimensional diabatic potential that includes five electronic states, we revisit the vibronic spectrum associated with the electronic $\tilde X ^2A_2'$ state. Using recently developed tensor network state methods, we are able to compute more than 2500 vibronic states, thereby increasing the number of computed full-dimensional states by a factor of 50, compared to previous work. While we obtain good agreement with experiment for most of the assigned vibronic levels, for several others, we observe striking disagreement. Further, for the antisymmetric bending motion we find remarkably large symmetry-induced level splittings that are larger than the zero-order reference. We discuss non-negligible nonadiabatic effects and show that the Born-Oppenheimer approximation leads to significant errors in the spectrum.
|
https://arxiv.org/abs/2407.03398v2
|
Spatio-temporal scene-graph approaches to video-based reasoning tasks, such as video question-answering (QA), typically construct such graphs for every video frame. These approaches often ignore the fact that videos are essentially sequences of 2D "views" of events happening in a 3D space, and that the semantics of the 3D scene can thus be carried over from frame to frame. Leveraging this insight, we propose a (2.5+1)D scene graph representation to better capture the spatio-temporal information flows inside the videos. Specifically, we first create a 2.5D (pseudo-3D) scene graph by transforming every 2D frame to have an inferred 3D structure using an off-the-shelf 2D-to-3D transformation module, following which we register the video frames into a shared (2.5+1)D spatio-temporal space and ground each 2D scene graph within it. Such a (2.5+1)D graph is then segregated into a static sub-graph and a dynamic sub-graph, corresponding to whether the objects within them usually move in the world. The nodes in the dynamic graph are enriched with motion features capturing their interactions with other graph nodes. Next, for the video QA task, we present a novel transformer-based reasoning pipeline that embeds the (2.5+1)D graph into a spatio-temporal hierarchical latent space, where the sub-graphs and their interactions are captured at varied granularity. To demonstrate the effectiveness of our approach, we present experiments on the NExT-QA and AVSD-QA datasets. Our results show that our proposed (2.5+1)D representation leads to faster training and inference, while our hierarchical model showcases superior performance on the video QA task versus the state of the art.
|
https://arxiv.org/abs/2202.09277v2
|
If a biconnected graph stays connected after the removal of an arbitrary
vertex and an arbitrary edge, then it is called 2.5-connected. We prove that
every biconnected graph has a canonical decomposition into 2.5-connected
components. These components are arranged in a tree-structure. We also discuss
the connection between 2.5-connected components and triconnected components and
use this to present a linear-time algorithm which computes the 2.5-connected
components of a graph. We show that every critical 2.5-connected graph other
than K4 can be obtained from critical 2.5-connected graphs of smaller order
using simple graph operations. Furthermore, we demonstrate applications of
2.5-connected components in the context of cycle decompositions and cycle
packings.
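As an aside for readers, the definition above admits a direct brute-force check. Below is a minimal sketch using networkx (illustrative only; the paper's contribution is a linear-time algorithm, which this is not):

```python
import networkx as nx

def is_2_5_connected(G: nx.Graph) -> bool:
    """Brute-force test of the definition: G must be biconnected and must
    stay connected after deleting any one vertex together with any one
    remaining edge."""
    if not nx.is_biconnected(G):
        return False
    for v in list(G.nodes):
        H = G.copy()
        H.remove_node(v)
        for e in list(H.edges):
            K = H.copy()
            K.remove_edge(*e)
            if not nx.is_connected(K):
                return False
    return True

print(is_2_5_connected(nx.complete_graph(4)))  # True: K4 is 2.5-connected
print(is_2_5_connected(nx.cycle_graph(4)))     # False: biconnected only
```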
|
http://arxiv.org/abs/2003.01498v2
|
While Model Based Iterative Reconstruction (MBIR) of CT scans has been shown
to have better image quality than Filtered Back Projection (FBP), its use has
been limited by its high computational cost. More recently, deep convolutional
neural networks (CNN) have shown great promise in both denoising and
reconstruction applications. In this research, we propose a fast reconstruction
algorithm, which we call Deep Learning MBIR (DL-MBIR), for approximating MBIR
using a deep residual neural network. The DL-MBIR method is trained to produce
reconstructions that approximate true MBIR images using a 16 layer residual
convolutional neural network implemented on multiple GPUs using Google
TensorFlow. In addition, we propose 2D, 2.5D and 3D variations on the DL-MBIR
method and show that the 2.5D method achieves similar quality to the fully 3D
method, but with reduced computational cost.
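A minimal sketch of the 2.5D idea as we read it (our own illustration, not the authors' released code; the layer width and the residual formulation are assumptions): neighbouring FBP slices are stacked as input channels to a 2D residual network that regresses the central MBIR slice, which keeps the cost close to 2D while exposing through-plane context.

```python
import tensorflow as tf

def dl_mbir_2p5d(n_slices=5, width=64, depth=16):
    """2.5D residual denoiser: n_slices neighbouring FBP slices go in as
    channels; the network predicts a residual added to the central slice."""
    x_in = tf.keras.Input(shape=(None, None, n_slices))
    h = x_in
    for _ in range(depth - 1):
        h = tf.keras.layers.Conv2D(width, 3, padding="same", activation="relu")(h)
    res = tf.keras.layers.Conv2D(1, 3, padding="same")(h)   # predicted residual
    mid = n_slices // 2
    center = tf.keras.layers.Lambda(lambda t: t[..., mid:mid + 1])(x_in)
    return tf.keras.Model(x_in, tf.keras.layers.Add()([center, res]))
```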
|
http://arxiv.org/abs/1812.08367v1
|
We consider the problem of robotic grasping using depth + RGB information
sampled from a real sensor. We design an encoder-decoder neural network to
predict the grasp policy in real time. This method fuses the advantages of the
depth image and the RGB image simultaneously and is robust to grasp and
observation height. We evaluate our method on a physical robotic system and
propose an open-loop algorithm to realize the robotic grasp operation. We
analyze the experimental results from multiple perspectives; they show that our
method is competitive with the state of the art in grasp performance, real-time
performance, and model size. The video is available at
https://youtu.be/Wxw_r5a8qV0
|
http://arxiv.org/abs/1905.13675v1
|
Safe navigation in uneven terrain is an important problem in robotics research. In this paper we propose a 2.5D navigation system which consists of elevation map building, path planning, and local path following with obstacle avoidance. For local path following we use the Model Predictive Path Integral (MPPI) control method. We propose novel cost functions for MPPI in order to adapt it to elevation maps and motion over uneven terrain. We evaluate our system on multiple synthetic tests and in a simulated environment with different types of obstacles and rough surfaces.
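For readers unfamiliar with MPPI, one update of the generic algorithm looks as follows (a sketch; the elevation-aware cost terms that are this paper's contribution would enter through the `cost` callable, and all names here are illustrative):

```python
import numpy as np

def mppi_step(x0, U, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0):
    """Sample noisy control sequences around the nominal plan U, roll
    them out through the dynamics, and re-weight by exponentiated cost."""
    H, m = U.shape                                   # horizon, control dim
    noise = sigma * np.random.randn(n_samples, H, m)
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(H):
            x = dynamics(x, U[t] + noise[k, t])
            costs[k] += cost(x)                      # e.g. roughness/elevation penalty
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return U + np.einsum("k,khm->hm", w, noise)      # softmin-weighted noise average
```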
|
https://arxiv.org/abs/2209.07252v1
|
In the present paper, using MPI-AMRVAC, we perform a 2.5-D numerical MHD simulation of the dynamics and associated thermodynamical evolution of an initially force-free Harris current sheet subjected to an external velocity perturbation under the condition of uniform resistivity. The amplitude of the magnetic field is taken to be 10 Gauss, typical of the solar corona. We impose a Gaussian velocity pulse across this current sheet, mimicking the interaction of fast magnetoacoustic waves with a current sheet in the corona. This leads to a variety of dynamics and plasma processes in the current sheet, which is initially quasi-static. The initial pulse interacts with the current sheet and splits into a pair of counter-propagating wavefronts, which forms a rarefied region and leads to inflow and a thinning of the current sheet. The thinning results in Petschek-type magnetic reconnection followed by tearing instability and plasmoid formation. The reconnection outflows containing outward-moving plasmoids have accelerated motions with velocities ranging from 105 to 303 km/s. The average temperature and density of the plasmoids are found to be 8 MK and twice the background density of the solar corona, respectively. These estimates of velocity, temperature and density of plasmoids are similar to values reported from various solar coronal observations. Therefore, we infer that the external triggering of a quasi-static current sheet by a single velocity pulse is capable of initiating magnetic reconnection and plasmoid formation in the absence of a localized enhancement of resistivity in the solar corona.
|
https://arxiv.org/abs/2401.07048v1
|
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation hazards to patients and healthcare providers. Reducing the tracer injection dose and eliminating the CT acquisition for attenuation correction can reduce the overall radiation dose, but often results in PET with high noise and bias. Thus, it is desirable to develop 3D methods to translate the non-attenuation-corrected low-dose PET (NAC-LDPET) into attenuation-corrected standard-dose PET (AC-SDPET). Recently, diffusion models have emerged as a new state-of-the-art deep learning method for image-to-image translation, better than traditional CNN-based methods. However, due to the high computation cost and memory burden, it is largely limited to 2D applications. To address these challenges, we developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation with application on NAC-LDPET to AC-SDPET translation. Specifically, MADM employs separate diffusion models for axial, coronal, and sagittal views, whose outputs are averaged in each sampling step to ensure the 3D generation quality from multiple views. To accelerate the 3D sampling process, we also proposed a strategy to use the CNN-based 3D generation as a prior for the diffusion model. Our experimental results on human patient studies suggested that MADM can generate high-quality 3D translation images, outperforming previous CNN-based and Diffusion-based baseline methods.
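Schematically, the per-step multi-view averaging can be pictured as below. This is a sketch under stated assumptions: `model.denoise` is a hypothetical per-slice reverse-diffusion call, and the real MADM update also involves the noise schedule and the CNN-based prior described above.

```python
import numpy as np

def madm_average_step(x_t, t, models, axes=(0, 1, 2)):
    """One reverse step: each 2.5D model denoises the volume slice-wise
    along its own anatomical axis; the three candidates are averaged."""
    candidates = []
    for model, axis in zip(models, axes):            # axial, coronal, sagittal
        vol = np.moveaxis(x_t, axis, 0)
        den = np.stack([model.denoise(s, t) for s in vol])  # hypothetical API
        candidates.append(np.moveaxis(den, 0, axis))
    return np.mean(candidates, axis=0)
```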
|
https://arxiv.org/abs/2406.08374v2
|
Dedicated, after acceptance and publication, in memory of the late Vassos
Soteriou. For the first time, we leverage the 2.5D interposer technology to
establish system-level security in the face of hardware- and software-centric
adversaries. More specifically, we integrate chiplets (i.e., third-party hard
intellectual property of complex functionality, like microprocessors) using a
security-enforcing interposer. Such hardware organization provides a robust
2.5D root of trust for trustworthy, yet powerful and flexible, computation
systems. The security paradigms for our scheme, employed firmly by design and
construction, are: 1) stringent physical separation of trusted from untrusted
components, and 2) runtime monitoring. The system-level activities of all
untrusted commodity chiplets are checked continuously against security policies
via physically separated security features. Aside from the security promises,
the good economics of outsourced supply chains are still maintained; the system
vendor is free to procure chiplets from the open market, while only producing
the interposer and assembling the 2.5D system oneself. We showcase our scheme
using the Cortex-M0 core and the AHB-Lite bus by ARM, building a secure 64-core
system with shared memories. We evaluate our scheme through hardware
simulation, considering different threat scenarios. Finally, we devise a
physical-design flow for 2.5D systems, based on commercial-grade design tools,
to demonstrate and evaluate our 2.5D root of trust.
|
http://arxiv.org/abs/2009.02412v2
|
X-ray computed tomography (XCT) is a key tool in non-destructive evaluation of additively manufactured (AM) parts, allowing for internal inspection and defect detection. Despite its widespread use, obtaining high-resolution CT scans can be extremely time consuming. This issue can be mitigated by performing scans at lower resolutions; however, reducing the resolution compromises spatial detail, limiting the accuracy of defect detection. Super-resolution algorithms offer a promising solution for overcoming resolution limitations in XCT reconstructions of AM parts, enabling more accurate detection of defects. While 2D super-resolution methods have demonstrated state-of-the-art performance on natural images, they tend to under-perform when directly applied to XCT slices. On the other hand, 3D super-resolution methods are computationally expensive, making them infeasible for large-scale applications. To address these challenges, we propose a 2.5D super-resolution approach tailored for XCT of AM parts. Our method enhances the resolution of individual slices by leveraging multi-slice information from neighboring 2D slices without the significant computational overhead of full 3D methods. Specifically, we use neighboring low-resolution slices to super-resolve the center slice, exploiting inter-slice spatial context while maintaining computational efficiency. This approach bridges the gap between 2D and 3D methods, offering a practical solution for high-throughput defect detection in AM parts.
|
https://arxiv.org/abs/2412.04525v1
|
Fast and reliable monitoring of volumetric heat distribution during MRI-guided tumor ablation is an urgent clinical need. In this work, we introduce a method for generating 2.5D thermometry maps from uniformly distributed 2D MRI phase images rotated around the applicator's main axis. The images can be fetched directly from the MR device, reducing the delay between image acquisition and visualization. For reconstruction, we use a weighted interpolation on a cylindrical coordinate representation to calculate the heat value of voxels in a region of interest. A pilot study on 13 ex vivo bio protein phantoms with flexible tubes to simulate a heat-sink effect was conducted to evaluate our method. After thermal ablation, we compared the measured coagulation zone extracted from the post-treatment MR data set with the output of the 2.5D thermometry map. The results show a mean Dice score of 0.75 ± 0.07, a sensitivity of 0.77 ± 0.03, and a reconstruction time of 18.02 ms ± 5.91 ms. Future steps should address improving temporal resolution and accuracy, e.g., incorporating advanced bioheat transfer simulations.
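A rough reconstruction of the interpolation step (our own reading of the abstract; the plane indexing, angle convention, and nearest-pixel sampler are hypothetical):

```python
import numpy as np

def sample_plane(plane, r, z):
    # nearest-pixel lookup in one 2D map, assumed indexed as plane[z, r]
    return plane[int(round(z)), int(round(r))]

def voxel_heat(x, y, z, plane_maps, plane_angles):
    """Map the voxel to cylindrical coordinates and blend the two planes
    nearest in angle, weighted inversely by angular distance."""
    r, phi = np.hypot(x, y), np.arctan2(y, x) % np.pi           # planes span [0, pi)
    d = np.abs((np.asarray(plane_angles) - phi + np.pi / 2) % np.pi - np.pi / 2)
    i, j = np.argsort(d)[:2]                                    # two nearest planes
    wi = d[j] / (d[i] + d[j] + 1e-12)                           # nearer plane weighs more
    return wi * sample_plane(plane_maps[i], r, z) + (1 - wi) * sample_plane(plane_maps[j], r, z)
```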
|
https://arxiv.org/abs/2108.05734v1
|
The dimensional transition in turbulent jets of a shear-thinning fluid is studied via direct numerical simulations. Our findings reveal that under vertical confinement, the flow exhibits a unique mixed-dimensional (or 2.5D) state, where large-scale two-dimensional and small-scale three-dimensional structures coexist. This transition from three-dimensional turbulence near the inlet to two-dimensional dynamics downstream is dictated by the level of confinement: weak confinement guarantees turbulence to remain three-dimensional, whereas strong confinement forces the transition to two-dimensions; the mixed-dimensional state is observed for moderate confinement and it emerges as soon as flow scales are larger than the vertical length. In this scenario, we observed that the mixed-dimensional state is an overall more energetic state and it shows a multi-cascade process, where the direct cascade of energy at small scales and the direct cascade of enstrophy at large scales coexist. The results provide insights into the complex dynamics of confined turbulent flows, relevant in both natural and industrial settings.
|
https://arxiv.org/abs/2407.01038v4
|
Cryo-electron tomography (cryoET) is a crucial technique for unveiling the structure of protein complexes. Automatically analyzing tomograms captured by cryoET is an essential step toward understanding cellular structures. In this paper, we introduce the 4th place solution from the CZII - CryoET Object Identification competition, which was organized to advance the development of automated tomogram analysis techniques. Our solution adopted a heatmap-based keypoint detection approach, utilizing an ensemble of two different types of 2.5D U-Net models with depth reduction. Despite its highly unified and simple architecture, our method achieved 4th place, demonstrating its effectiveness.
|
https://arxiv.org/abs/2502.13484v1
|
It is well understood that in ADAS applications, a good estimate of the pose of the vehicle is required. This paper proposes a metaphorically named 2.5D odometry, whereby the planar odometry derived from the yaw-rate sensor and four wheel-speed sensors is augmented by a linear model of the suspension. While the core of the planar odometry is a yaw-rate model that is already understood in the literature, we augment this by fitting a quadratic to the incoming signals, enabling interpolation, extrapolation, and a finer integration of the vehicle position. We show, by experimental results with a DGPS/IMU reference, that this model provides highly accurate odometry estimates compared with existing methods. Utilising sensors that return the change in height of vehicle reference points with changing suspension configurations, we define a planar model of the vehicle suspension, thus augmenting the odometry model. We present an experimental framework and evaluation criteria by which the goodness of the odometry is evaluated and compared with existing methods. This odometry model has been designed to support low-speed surround-view camera systems that are well known. Thus, we present some application results that show a performance boost for viewing and computer vision applications using the proposed odometry.
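The quadratic-fit trick can be sketched in a few lines (illustrative; the window of the three most recent samples is an assumption):

```python
import numpy as np

def interpolate_signal(t_samples, values, t_query):
    """Fit a parabola to the latest samples of a wheel-speed or yaw-rate
    signal so it can be interpolated or extrapolated between messages."""
    coeffs = np.polyfit(t_samples[-3:], values[-3:], deg=2)
    return np.polyval(coeffs, t_query)
```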
|
https://arxiv.org/abs/2111.08398v1
|
Visual 2.5D perception involves understanding the semantics and geometry of a scene through reasoning about object relationships with respect to the viewer in an environment. However, existing works in visual recognition primarily focus on the semantics. To bridge this gap, we study 2.5D visual relationship detection (2.5VRD), in which the goal is to jointly detect objects and predict their relative depth and occlusion relationships. Unlike general VRD, 2.5VRD is egocentric, using the camera's viewpoint as a common reference for all 2.5D relationships. Unlike depth estimation, 2.5VRD is object-centric and not only focuses on depth. To enable progress on this task, we create a new dataset consisting of 220k human-annotated 2.5D relationships among 512K objects from 11K images. We analyze this dataset and conduct extensive experiments including benchmarking multiple state-of-the-art VRD models on this task. Our results show that existing models largely rely on semantic cues and simple heuristics to solve 2.5VRD, motivating further research on models for 2.5D perception. The new dataset is available at https://github.com/google-research-datasets/2.5vrd.
|
https://arxiv.org/abs/2104.12727v1
|
Binaural audio provides a listener with 3D sound sensation, allowing a rich
perceptual experience of the scene. However, binaural recordings are scarcely
available and require nontrivial expertise and equipment to obtain. We propose
to convert common monaural audio into binaural audio by leveraging video. The
key idea is that visual frames reveal significant spatial cues that, while
explicitly lacking in the accompanying single-channel audio, are strongly
linked to it. Our multi-modal approach recovers this link from unlabeled video.
We devise a deep convolutional neural network that learns to decode the
monaural (single-channel) soundtrack into its binaural counterpart by injecting
visual information about object and scene configurations. We call the resulting
output 2.5D visual sound---the visual stream helps "lift" the flat single
channel audio into spatialized sound. In addition to sound generation, we show
the self-supervised representation learned by our network benefits audio-visual
source separation. Our video results:
http://vision.cs.utexas.edu/projects/2.5D_visual_sound/
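The mono-plus-difference decomposition behind this "lifting" can be stated in a few lines (one common convention, not necessarily the paper's exact formulation; the visual network that predicts the difference channel from frames is not shown):

```python
import numpy as np

def mono_to_binaural(mono, diff):
    """Recover the two ears from the mixture and a predicted difference
    signal, assuming mono = left + right and diff = left - right."""
    left = (mono + diff) / 2.0
    right = (mono - diff) / 2.0
    return np.stack([left, right])
```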
|
http://arxiv.org/abs/1812.04204v4
|
Hard x-ray imaging is indispensable across diverse fields owing to its high penetrability. However, the resolution of traditional x-ray imaging modalities, such as computed tomography (CT) systems, is constrained by factors including beam properties, the absence of optical components, and detection resolution. As a result, typical resolution in commercial imaging systems is limited to a few hundred microns. This study advances high-photon-energy imaging by extending the concept of computational ghost imaging to multipixel ghost imaging with x-rays. We demonstrate a remarkable enhancement in resolution from 500 microns to approximately 20 microns for an image spanning 0.9 by 1 cm^2, comprised of 400,000 pixels and involving only 1000 realizations. Furthermore, we present a high-resolution CT reconstruction using our method, revealing enhanced visibility and resolution. Our achievement is facilitated by an innovative x-ray lithography technique and the computed tiling of images captured by each detector pixel. Importantly, this method can be scaled up for larger images without sacrificing the short measurement time, thereby opening intriguing possibilities for noninvasive high-resolution imaging of small features that are invisible with the present modalities.
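For context, the classic computational ghost imaging estimator correlates the known illumination patterns with the bucket signals; the multipixel scheme described above in effect runs this once per detector pixel and tiles the results (a generic sketch, not the authors' code):

```python
import numpy as np

def ghost_reconstruct(patterns, bucket):
    """G(x) = <I_i(x) b_i> - <I(x)><b>, estimated from n realizations."""
    patterns = np.asarray(patterns, dtype=float)   # (n, H, W) known patterns
    bucket = np.asarray(bucket, dtype=float)       # (n,) bucket measurements
    return np.tensordot(bucket - bucket.mean(),
                        patterns - patterns.mean(axis=0), axes=1) / len(bucket)
```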
|
https://arxiv.org/abs/2402.14023v1
|
We summarize observations of around a thousand solar energetic particle (SEP)
events since 1967 that include ~25 MeV protons, made by various near-Earth
spacecraft (IMPs 4, 5, 7, 8, ISEE 3, SOHO), that encompass Solar Cycle 20 to
the current cycle (24). We also discuss recent observations of similar SEP
events in Cycle 24 made by the STEREO spacecraft. The observations show, for
example, that the time distribution of ~25 MeV proton events varies from cycle
to cycle. In particular, the time evolution of the SEP occurrence rate in Cycle
24 is strongly asymmetric between the northern and southern solar hemispheres,
and tracks the sunspot number in each hemisphere, whereas Cycle 23 was more
symmetric. There was also an absence of 25 MeV proton events during the solar
minimum preceding Cycle 24 (other minima show occasional, often reasonably
intense events). So far, events comparable to the exceptionally intense events
detected in Cycles 22 and 23 have not been observed at Earth in Cycle 24,
though Cycle 21 (the largest of the cycles considered here) also apparently
lacked such events. We note a correlation between the rates of intense 25 MeV
proton events and "ground level enhancements" (GLEs) observed by neutron
monitors, since 1967, and conclude that the number of "official" GLEs observed
to date in Cycle 24 appears to be significantly lower than expected (5 to
7 ± 1) based on the rate of intense 25 MeV proton events in this cycle.
|
http://arxiv.org/abs/1604.07873v2
|
We study the star-disc interaction in the presence of the strong magnetic field ($B_\star = 6.2$ kG) of a slowly rotating star. This situation describes a post-merger of spectral type B and has not been previously investigated. We perform a set of resistive and viscous 2.5D magnetohydrodynamical simulations using the PLUTO code. Based on our previous work, we consider the initial gas disc density $\rho_{d0}=10^{-13}\mathrm{gcm}^{-3}$ since it describes the conditions around IRAS 17449+2320 well. We find that the fall of gas towards the star occurs in the mid-plane and, remarkably, intermittent backflow takes place in the mid-plane in all of our models for $R\geq10R_\star$. However, we do not rule out that the funnel effect may occur and cause accretion closer to the poles. Also, when larger values of viscosity ($\alpha_\nu=1$) and stellar rotation rate ($\delta_\star=0.2$) are considered, we find that the disc exhibits a thickening which is characteristic of FS~CMa-type stellar objects. Additionally, we find that the poloidal magnetic field lines twist over short periods of time, leading to magnetic reconnection causing coronal heating that could explain the presence of the Raman lines found observationally in several FS~CMa stars. Lastly, we find the formation of several knots in the magnetic field lines near and in the mid-plane of the disc, which produce perturbations in the density and velocity components, as well as the formation of shallow gaps whose position depends on the inflation of the magnetic field lines.
|
https://arxiv.org/abs/2402.00720v2
|
We investigate the dynamic evolution of the gaseous region around FS~CMa post-mergers. Due to the slow rotation of the central B-type star, the dynamics is driven mainly by the magnetic field of the central star. Recent observations have allowed us to set realistic initial conditions, such as the magnetic field value ($B_\star\approx6\times10^{3}G$), the mass of the central star ($M_\star=6M_\odot$), and the initial disc density $\rho_{d0}\in[10^{-13}\mathrm{g\,cm^{-3}},10^{-11}\mathrm{g \, cm^{-3}}]$. We use the PLUTO code to perform 2.5D-MHD simulations of thin and thick disc models. Especially relevant for the interpretation of the observed properties of FS~CMa post-mergers are the results for low-density discs, in which we find the formation of a jet emerging from the inner edge of the disc, as well as the formation of a so-called "hot plasmoid" in the corona region. Jets are probably detected as discrete absorption components in the resonance lines of FS~CMa stars. Moreover, the magnetic field configuration in the low-density plasma region favors the appearance of magnetocentrifugal winds from the disc. The currents toward the star created by the magnetic field may explain the occasionally observed material infall. The disc structure is significantly changed due to the presence of the magnetic field. The magnetic field is also responsible for the formation of a hot corona, as observed in several FS~CMa stars through the Raman lines. Our results are valid for all magnetic stars surrounded by a low-density plasma, i.e., some of the stars showing the B[e] phenomenon.
|
https://arxiv.org/abs/2306.16073v1
|
We present 25 open questions about moduli spaces of vector bundles and related topics and discuss some longstanding conjectures. We hope to inspire young researchers to engage in this area of research.
|
https://arxiv.org/abs/2106.06434v1
|
We provide the post-Newtonian (PN) waveform for binary systems in motion along generic planar orbits at 2.5PN accuracy, in terms of the dynamical variables of the effective one-body (EOB) formalism. In addition to the calculation of the higher order terms for all the contributions to the waveform that have been already considered in previous avatars of EOB models, we also compute the EOB expression of the oscillatory memory terms. These are purely non-circular contributions, first appearing at 1.5PN order, that have been so far neglected in the EOB literature. This should foster their inclusion in EOB models and the definitive assessment of their role in shaping gravitational wave signals at infinity. To further promote the application of our results, we also derive associated non-circular factors according to the waveform factorization prescription of the non-circular EOB model TEOBResumS-DALI; the result is a set of ready-to-use non-circular factors that can be directly implemented as extra non-circular corrections in the waveform of TEOBResumS-DALI.
|
https://arxiv.org/abs/2305.14440v3
|
$\beta$-decay spectroscopy provides valuable information on exotic nuclei and
a stringent test for nuclear theories beyond the stability line. We search for
new $\beta$-delayed protons and $\gamma$ rays of $^{25}$Si to investigate the
properties of $^{25}$Al excited states. $^{25}$Si $\beta$ decays were measured
by using the Gaseous Detector with Germanium Tagging system at the National
Superconducting Cyclotron Laboratory. The protons and $\gamma$ rays emitted in
the decay were detected simultaneously. A Monte Carlo method was used to model
the Doppler broadening of $^{24}$Mg $\gamma$-ray lines caused by nuclear recoil
from proton emission. Shell-model calculations using two newly-developed
\textit{sd}-shell Hamiltonians, USDC and USDI, were performed. The most precise
$^{25}$Si half-life to date has been determined. A new proton branch at
724(4)~keV and new proton-$\gamma$-ray coincidences have been identified. Three
$^{24}$Mg $\gamma$-ray lines and eight $^{25}$Al $\gamma$-ray lines are
observed for the first time in $^{25}$Si decay. The first measurement of the
$^{25}$Si $\beta$-delayed $\gamma$ ray intensities through the $^{25}$Al
unbound states is reported. All the bound states of $^{25}$Al are observed to
be populated in the $\beta$ decay of $^{25}$Si. Several inconsistencies between
the previous measurements have been resolved, and new information on the
$^{25}$Al level scheme is provided. An enhanced decay scheme has been
constructed and compared to the mirror decay of $^{25}$Na and the shell-model
calculations. The measured excitation energies, $\gamma$-ray and proton
branchings, log~$ft$ values, and Gamow-Teller transition strengths for the
states of $^{25}$Al populated in the $\beta$ decay of $^{25}$Si are in good
agreement with the shell model calculations, offering gratifyingly consistent
insights into the fine nuclear structure of $^{25}$Al.
|
http://arxiv.org/abs/2009.00825v2
|
Predicting personality is essential for social applications supporting
human-centered activities, yet prior modeling methods based on users' written
text require too much input data to be used realistically in the context of
social media. In this work, we aim to drastically reduce the data requirement
for personality modeling and develop a model that is applicable to most users
on Twitter. Our model integrates word embedding features with Gaussian Process
regression. Based on the evaluation of over 1.3K users on Twitter, we find that
our model achieves comparable or better accuracy than state-of-the-art
techniques with eight times less data.
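A minimal sketch of this recipe with scikit-learn (illustrative; `embed` is a hypothetical word-embedding lookup, and the paper's exact kernel and feature details may differ):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_personality_model(user_texts, trait_scores, embed):
    """Represent each user as the mean embedding of the words in their
    tweets, then regress a personality trait with a GP."""
    X = np.array([
        np.mean([embed(w) for text in texts for w in text.split()], axis=0)
        for texts in user_texts
    ])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    return gp.fit(X, np.asarray(trait_scores))
```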
|
http://arxiv.org/abs/1704.05513v1
|
Compared to image-text pair data, interleaved corpora enable Vision-Language Models (VLMs) to understand the world more naturally, like humans. However, existing datasets of this kind are crawled from webpages, facing challenges like low knowledge density, loose image-text relations, and poor logical coherence between images. On the other hand, the internet hosts vast instructional videos (e.g., online geometry courses) that are widely used by humans to learn foundational subjects, yet these valuable resources remain underexplored in VLM training. In this paper, we introduce a high-quality \textbf{multimodal textbook} corpus with richer foundational knowledge for VLM pretraining. It collects over 2.5 years of instructional videos, totaling 22,000 class hours. We first use an LLM-proposed taxonomy to systematically gather instructional videos. Then we progressively extract and refine visual (keyframes), audio (ASR), and textual knowledge (OCR) from the videos, and organize them as an image-text interleaved corpus based on temporal order. Compared to its counterparts, our video-centric textbook offers more coherent context, richer knowledge, and better image-text alignment. Experiments demonstrate its superb pretraining performance, particularly in knowledge- and reasoning-intensive tasks like ScienceQA and MathVista. Moreover, VLMs pre-trained on our textbook exhibit outstanding interleaved context awareness, leveraging visual and textual cues in their few-shot context for task solving. Our code is available at https://github.com/DAMO-NLP-SG/multimodal_textbook.
|
https://arxiv.org/abs/2501.00958v4
|
We try to determine the progress made by convolutional neural networks over
the past 25 years in classifying images into abstract classes. For this purpose,
we compare the performance of LeNet to that of GoogLeNet at classifying
randomly generated images which are differentiated by an abstract property
(e.g., one class contains two objects of the same size, the other class two
objects of different sizes). Our results show that there is still work to do in
order to solve vision problems humans are able to solve without much
difficulty.
|
http://arxiv.org/abs/1607.08366v1
|
Twenty-five years ago, Dunkelmann and Radons (1994) proposed that neural
networks should self-organize to a critical state. In models, criticality
offers a number of computational advantages. Thus this hypothesis, and in
particular the experimental work by Beggs and Plenz (2003), has triggered an
avalanche of research, with thousands of studies referring to it. Nonetheless,
experimental results are still contradictory. How is it possible, that a
hypothesis has attracted active research for decades, but nonetheless remains
controversial? We discuss the experimental and conceptual controversy, and then
present a parsimonious solution that (i) unifies the contradictory experimental
results, (ii) avoids disadvantages of a critical state, and (iii) enables
rapid, adaptive tuning of network properties to task requirements.
|
http://arxiv.org/abs/1903.05129v1
|
Introduced by the late Per Bak and his colleagues, self-organized criticality
(SOC) has been one of the most stimulating concepts to come out of statistical
mechanics and condensed matter theory in the last few decades, and has played a
significant role in the development of complexity science. SOC, and more
generally fractals and power laws, have attracted much comment, ranging from the
very positive to the polemical. The other papers in this special issue
(Aschwanden et al, 2014; McAteer et al, 2014; Sharma et al, 2015) showcase the
considerable body of observations in solar, magnetospheric and fusion plasma
inspired by the SOC idea, and expose the fertile role the new paradigm has
played in approaches to modeling and understanding multiscale plasma
instabilities. This very broad impact, and the necessary process of adapting a
scientific hypothesis to the conditions of a given physical system, has meant
that SOC as studied in these fields has sometimes differed significantly from
the definition originally given by its creators. In Bak's own field of
theoretical physics there are significant observational and theoretical open
questions, even 25 years on (Pruessner, 2012). One aim of the present review is
to address the dichotomy between the great reception SOC has received in some
areas, and its shortcomings, as they became manifest in the controversies it
triggered. Our article tries to clear up what we think are misunderstandings of
SOC in fields more remote from its origins in statistical mechanics, condensed
matter and dynamical systems by revisiting Bak, Tang and Wiesenfeld's original
papers.
|
http://arxiv.org/abs/1504.04991v1
|
The detection and characterization of self-organized criticality (SOC), in
both real and simulated data, has undergone many significant revisions over the
past 25 years. The explosive advances in the many numerical methods available
for detecting, discriminating, and ultimately testing, SOC have played a
critical role in developing our understanding of how systems experience and
exhibit SOC. In this article, methods of detecting SOC are reviewed: from
correlations to complexity to critical quantities. A description of the basic
autocorrelation method leads into a detailed analysis of application-oriented
methods developed in the last 25 years. In the second half of this manuscript
space-based, time-based and spatial-temporal methods are reviewed and the
prevalence of power laws in nature is described, with an emphasis on event
detection and characterization. The search for numerical methods to clearly and
unambiguously detect SOC in data often leads us outside the comfort zone of our
own disciplines - the answers to these questions are often obtained by studying
the advances made in other fields of study. In addition, numerical detection
methods often provide the optimum link between simulations and experiments in
scientific research. We seek to explore this boundary where the rubber meets
the road, to review this expanding field of research of numerical detection of
SOC systems over the past 25 years, and to iterate forwards so as to provide
some foresight and guidance into developing breakthroughs in this subject over
the next quarter of a century.
|
http://arxiv.org/abs/1506.08142v1
|
Wireless communication technology has progressed dramatically over the past 25 years, in terms of societal adoption as well as technical sophistication. In 1998, mobile phones were still in the process of becoming compact and affordable devices that could be widely utilized in both developed and developing countries. There were "only" 300 million mobile subscribers in the world [1]. Cellular networks were among the first privatized telecommunication markets, and competition turned the devices into fashion accessories with attractive designs that could be individualized. The service was circumscribed to telephony and text messaging, but it was groundbreaking in that, for the first time, telecommunication was between people rather than locations. Wireless networks have changed dramatically over the past few decades, enabling this revolution in service provisioning and making it possible to accommodate the ensuing dramatic growth in traffic. There are many contributing components, including new air interfaces for faster transmission, channel coding for enhanced reliability, improved source compression to remove redundancies, and leaner protocols to reduce overheads. Signal processing is at the core of these improvements, but nowhere has it played a bigger role than in the development of multiantenna communication. This article tells the story of how major signal processing advances have transformed the early multiantenna concepts into mainstream technology over the past 25 years. The story therefore begins somewhat arbitrarily in 1998. A broad account of the state-of-the-art signal processing techniques for wireless systems by 1998 can be found in [2], and its contrast with recent textbooks such as [3]-[5] reveals the dramatic leap forward that has taken place in the interim.
|
https://arxiv.org/abs/2304.02677v1
|
Sgr A* is currently very faint. However, X-ray radiation reflected by the Sgr A complex, a group of nearby molecular clouds, suggests that it went through one or more periods of high activity some hundreds of years ago. We aim to determine whether previously proposed physical scenarios are consistent with the observed X-ray variability over the past 25 years, and to characterize the spatial distribution, shape, and internal structure of the clouds. We exploit the full set of XMM-Newton observations, extending the previously studied dataset on variability by at least 12 years. Starting from the recent IXPE result that places the so-called Bridge cloud 26 pc behind Sgr A*, we reconstruct the LOS position of the other clouds, assuming that they were illuminated by a single flare. Additionally, we derive the probability density function (PDF) of the molecular density. We also study the 3D geometry of the complex in case two flares illuminate the clouds. As of spring 2024, the lightfront is still illuminating the Sgr A complex, with the Bridge currently being the brightest cloud. The other clouds in the complex have faded significantly. In the single flare scenario, the Sgr A complex is located $\simeq$ 25 pc behind Sgr A*. In the past 25 years, the illuminated region spans 10-15 pc along the LOS. The derived PDF is roughly log-normal, consistent with previous Chandra results, with a possible high-density excess. Both a single and a multiple flares scenario can explain the observed X-ray variability. Previous concerns about the single flare scenario, raised by shorter monitoring, are now overcome in the 25 years of monitoring. If two flares illuminate the clouds, they must be separated by at least $\sim$ 30 years. We speculate that these clouds are closer to Sgr A* than the nuclear molecular ring at $\simeq$ 100-200 pc and possibly drifting from the ring to the inner region of the Galaxy.
|
https://arxiv.org/abs/2501.09737v1
|
Low Gain Avalanche Detectors (LGADs) are silicon semiconductor sensors with an implanted thin p-doped multiplication layer that is designed to provide low gain. Most importantly, LGADs are specifically engineered to provide excellent spatial and temporal resolution simultaneously. The technology shows promising prospects of fulfilling the 4D tracking requirements of future high energy physics experiments. Micron Semiconductor Ltd. has fabricated LGADs with an active thickness of 50 $\mu$m. The electrical and timing performance has been measured and compared with devices fabricated at IMB-CNM for reference. 50 $\mu$m thin LGADs by Micron Semiconductor Ltd. were measured to have a timing resolution in the region of 30 ps using a dedicated setup involving minimum ionising particles produced by Sr-90. Specifically, the best timing resolution of 26.5 ps was measured at a bias voltage of 200 V at -30{\deg}C.
|
https://arxiv.org/abs/2310.06183v1
|
The presence of radioactive $^{26}$Al at 1.8 MeV reflects ongoing nucleosynthesis in the Milky Way. Diffuse emission from its decay can be measured with gamma-ray telescopes in space. The intensity, line shape, and spatial distribution of the $^{26}$Al emission allow a study of these nucleosynthesis sources. The line parameters trace massive-star feedback in the interstellar medium due to its 1~My lifetime. We aim to deepen previous studies of the $^{26}$Al emission in the Milky Way, using all gamma-ray data including single and double events as collected with SPI on INTEGRAL from 2003 until 2020. We apply improved spectral response and background as evaluated from tracing spectral details over the entire mission. The exposure for Galactic $^{26}$Al emission is enhanced using all event types measured within SPI. We re-determine the intensity of Galactic $^{26}$Al emission across the entire sky, through maximum likelihood fits of simulated and model-built sky distributions to SPI spectra for single and for double detector hits. We find an all-sky flux of (1.84$\pm$0.03)$\times$10$^{-3}$~ph~cm$^{-2}$s$^{-1}$ in the 1.809~MeV line from $^{26}$Al, determined from fits to sky distributions from previous observations with COMPTEL. Significant emission from higher latitudes indicates an origin from nearby massive-star groups and superbubbles, also supported by a bottom-up population synthesis model. The line centroid is found at (1809.83$\pm$0.04)~keV, and the line broadening from source kinematics integrated over the sky is (0.62$\pm$0.3)~keV (FWHM).
|
https://arxiv.org/abs/2212.11228v1
|
High energy resolution spectroscopy of the 1.8 MeV radioactive decay line of
26Al with the SPI instrument on board the INTEGRAL satellite has recently
revealed that diffuse 26Al has large velocities in comparison to other
components of the interstellar medium in the Milky Way. 26Al shows Galactic
rotation in the same sense as the stars and other gas tracers, but reaches
excess velocities up to 300 km/s. We investigate if this result can be
understood in the context of superbubbles, taking into account the statistics
of young star clusters and H I supershells, as well as the association of young
star clusters with spiral arms. We derive energy output and 26Al mass of star
clusters as a function of the cluster mass via population synthesis from
stellar evolutionary tracks of massive stars. [...] We link this to the size
distribution of HI supershells and assess the properties of likely
26Al-carrying superbubbles. 26Al is produced by star clusters of all masses
above about 200 solar masses, roughly equally contributed over a logarithmic
star cluster mass scale, and strongly linked to the injection of feedback
energy. The observed superbubble size distribution cannot be related to the
star cluster mass function in a straight forward manner. In order to avoid that
the added volume of all superbubbles exceeds the volume of the Milky Way,
individual superbubbles have to merge frequently. If any two superbubbles
merge, or if 26Al is injected off-centre in a bigger HI supershell we expect
the hot 26Al-carrying gas to obtain velocities of the order of the typical
sound speed in superbubbles, about 300 km/s before decay. [...]
|
http://arxiv.org/abs/1504.03120v1
|
A carrier envelope phase stable near-single cycle mid-infrared laser based on
optical parametric chirped pulse amplification and hollow-core-fiber
compression is demonstrated. 4 {\mu}m laser pulses with 11.8 mJ energy are
delivered from a KTA-based OPCPA at a 100 Hz repetition rate and compressed to
~105 fs by a two-grating compressor with an efficiency over 50%. Subsequently,
the pulse spectrum is broadened in a krypton gas-filled hollow-core fiber
(HCF). The pulse duration is then further compressed to 21.5 fs through CaF2
bulk material, with an energy of 2.6 mJ and a stability of 0.9% RMS,
corresponding to about 1.6 optical cycles for a 4 {\mu}m laser pulse. The CEP
of the near-single-cycle 4 {\mu}m pulse is passively stabilized to ~370 mrad,
based on injection from a CEP-stable 4 {\mu}m OPA.
|
http://arxiv.org/abs/1712.07327v1
|
In the context of whether a massive compact object recently observed in the GW190814 event is a neutron star (NS) or not, we have studied the role of the parameters $\kappa$ and $\Lambda_c$ of the Eddington-inspired Born-Infeld (EiBI) gravity theory on the NS mass-radius relation, moment of inertia, and tidal deformability. The results are compared to recent observation constraints extracted from the analysis of NS observation data. The NS core equation of state (EoS) is calculated using the relativistic mean-field model with the G3 parameter set. In the hyperon sector, the SU(3) and hyperon potential depths are used to determine the hyperon coupling constants. For the inner and outer crusts, we use the crust EoS from Miyatsu et al. (2013). We also maintain the sound speed to not exceed $c$/$\sqrt{3}$ at high densities. We have found that, in general, the NS mass significantly depends on the value of $\kappa$, and the radius $R$ is sensitive to the value of $\Lambda_c$. Moreover, as $\Lambda_c$ is equal to zero or less than the accepted bound of the cosmological constant, the NS within the EiBI theory is compatible with observation constraints, including $2.0 M_\odot$ mass, canonical radius $R_{1.4 M_{\odot}}$, moment of inertia, and tidal deformation. Our investigation also reveals that the $2.6 M_\odot$ mass compact object and current observational constraint of canonical radius $R_{1.4 M_{\odot}}$ can simultaneously be satisfied only when the $\Lambda_c$ value is unphysically too large and negative. Therefore, within the specific EoS employed in this work, we conclude that the secondary object with $2.6 M_\odot$ observed in the GW190814 event is not likely a static (or a slowly rotating) NS within the EiBI gravity theory.
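The causal bound invoked above (sound speed kept below $c/\sqrt{3}$) is straightforward to impose on a tabulated EoS; a sketch in units of c = 1, with array conventions assumed:

```python
import numpy as np

def capped_sound_speed(pressure, energy_density):
    """c_s^2 = dp/d(eps) from tabulated EoS arrays, clipped at 1/3."""
    cs2 = np.gradient(pressure, energy_density)
    return np.clip(cs2, 0.0, 1.0 / 3.0)
```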
|
https://arxiv.org/abs/2109.05718v1
|
With the rapid emergence of a spectrum of high-end mobile devices, many
applications that formerly required desktop-level computation capability can
now run on these devices without any problem. However, without careful
optimization, executing deep neural networks (a key building block of the
real-time video stream processing that is the foundation of many popular
applications) is still challenging, specifically if extremely low-latency or
high-accuracy inference is needed. This work presents CADNN, a programming
framework to efficiently execute DNNs on mobile devices with the help of
advanced model compression (sparsity) and a set of thorough architecture-aware
optimizations. The evaluation demonstrates that CADNN outperforms
state-of-the-art dense DNN execution frameworks such as TensorFlow Lite and
TVM.
|
http://arxiv.org/abs/1905.00571v1
|
A pre-requisite for the design of wireless systems is the understanding of the propagation channel. While a wealth of propagation knowledge exists for bands below 6 GHz, the same can not be said for bands approaching millimeter-wave frequencies. In this paper, we present the design, implementation and measurement-based verification of a re-configurable 27.5-29.5 GHz channel sounder for measuring dynamic directional channels. Based on the switched array principle, our design is capable of characterizing 128$\times$256 dual-polarized channels with snapshot times of around 600 ms. This is in sharp contrast to measurement times on the order of tens-of-minutes with rotating horn antenna sounders. Our design lends itself to high angular resolution at both link ends with calibrated antenna arrays sampled at 2$^\circ$ and 5$^\circ$ intervals in the azimuth and elevation domains. This is complemented with a bandwidth of up to 2 GHz, enabling nanosecond-level delay resolution. The short measurement times and stable radio frequency design facilitates real-time processing and averaging of the received wavefronts to gain measurement signal-to-noise ratio and dynamic range. After disclosing the sounder design and implementation, we demonstrate its capabilities by presenting dynamic and static measurements at 28 GHz over a 1 GHz bandwidth in an office corridor environment.
|
https://arxiv.org/abs/2105.10712v1
|
Since 2014, NASA's K2 mission has observed large portions of the ecliptic
plane in search of transiting planets and has detected hundreds of planet
candidates. With observations planned until at least early 2018, K2 will
continue to identify more planet candidates. We present here 275 planet
candidates observed during Campaigns 0-10 of the K2 mission that are orbiting
stars brighter than 13 mag (in Kepler band) and for which we have obtained
high-resolution spectra (R = 44,000). These candidates are analyzed using the
VESPA package (Morton 2012, 2015b) in order to calculate their false-positive
probabilities (FPP). We find that 149 candidates are validated with an FPP
lower than 0.1%, 39 of which were previously only candidates and 56 of which
were previously undetected. The processes of data reduction, candidate
identification, and statistical validation are described, and the demographics
of the candidates and newly validated planets are explored. We show tentative
evidence of a gap in the planet radius distribution of our candidate sample.
Comparing our sample to the Kepler candidate sample investigated by Fulton et
al. (2017), we conclude that more planets are required to quantitatively
confirm the gap with K2 candidates or validated planets. This work, in addition
to increasing the population of validated K2 planets by nearly 50% and
providing new targets for follow-up observations, will also serve as a
framework for validating candidates from upcoming K2 campaigns and the
Transiting Exoplanet Survey Satellite, expected to launch in 2018.
|
http://arxiv.org/abs/1802.05277v2
|
In medical-data driven learning, 3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks, proving the added value of 3D spatial information in feature representation. However, the difficulty in collecting more training samples to converge, more computational resources and longer execution time make this approach less applied. Also, applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models. To tackle these issues, we proposed a novel 2D strategical representation of volumetric data, namely 2.75D. In this work, the spatial information of 3D images is captured in a single 2D view by a spiral-spinning technique. As a result, 2D CNN networks can also be used to learn volumetric information. Besides, we can fully leverage pre-trained 2D CNNs for downstream vision problems. We also explore a multi-view 2.75D strategy, 2.75D 3 channels (2.75Dx3), to boost the advantage of 2.75D. We evaluated the proposed methods on three public datasets with different modalities or organs (Lung CT, Breast MRI, and Prostate MRI), against their 2D, 2.5D, and 3D counterparts in classification tasks. Results show that the proposed methods significantly outperform other counterparts when all methods were trained from scratch on the lung dataset. Such performance gain is more pronounced with transfer learning or in the case of limited training data. Our methods also achieved comparable performance on other datasets. In addition, our methods achieved a substantial reduction in time consumption of training and inference compared with the 2.5D or 3D method.
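Our guess at what a "spiral-spinning" sampler might look like (the exact trajectory and interpolation are not specified in the abstract; shell spacing, turn count, and nearest-voxel lookup are assumptions):

```python
import numpy as np

def spiral_view(volume, n_shells=64, n_samples=256, turns=8):
    """Unroll a 3D volume into a 2D image: each row samples a spherical
    spiral (pole to pole, spinning in azimuth) on a shell of growing radius."""
    c = (np.array(volume.shape) - 1) / 2.0
    theta = np.linspace(0.0, np.pi, n_samples)          # polar sweep
    phi = 2.0 * np.pi * turns * theta / np.pi           # spinning azimuth
    dirs = np.stack([np.cos(theta),
                     np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi)], axis=1)
    img = np.empty((n_shells, n_samples))
    for i, r in enumerate(np.linspace(0.0, min(volume.shape) / 2.0 - 1.0, n_shells)):
        idx = np.rint(c + r * dirs).astype(int)         # nearest-voxel sampling
        img[i] = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return img
```

The resulting (n_shells × n_samples) image can then be fed to a pretrained 2D CNN, or generated in three variants for the 2.75Dx3 setting.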
|
https://arxiv.org/abs/2002.04251v3
|
The quadrupole coupling constant $C_{\text{Q}}$ and the asymmetry parameter $\eta$ of the aluminium nuclei in two polymorphs of the complex aluminium hydride CsAlH4 are determined from both $^{27}$Al MAS NMR spectra and $^{27}$Al NMR spectra recorded for stationary samples by using the Solomon echo sequence. The accuracy with which these parameters can be determined from the static spectra (CsAlH4(o): $C_{\text{Q}}=(1.42\pm0.01)$ MHz, $\eta=(0.62\pm0.01)$ and CsAlH4(t): $C_{\text{Q}}=(1.43\pm0.02)$ MHz, $\eta<0.03$) seems to be slightly higher than via the MAS approach. The experimentally determined parameters ($\delta_{\text{iso}}$, $C_{\text{Q}}$ and $\eta$) are compared with those obtained from DFT-GIPAW (density functional theory - gauge-including projected augmented wave) calculations. When using DFT-optimized structures, the magnitude of the quadrupole coupling constant is overestimated by about 45% for both polymorphs. Further calculations in which the geometry of the AlH4 tetrahedra was varied show a high sensitivity of $C_{\text{Q}}$ on the H--Al--H angles in particular. Modest changes in the angles on the order of one to three degrees are sufficient to achieve near-perfect agreement between GIPAW calculations and experiment. The deviations found for the DFT-optimized structures are explained with the neglect of thermal motion, which typically leads to a reduction of distortions of the AlH4 tetrahedra. From a broader perspective, the uncertainty in the positions of the hydrogen atoms renders the accurate reproduction or prediction of quadrupole coupling constants for aluminium hydrides challenging.
|
https://arxiv.org/abs/2410.07731v1
|
Motivated by the recent observations of electronic correlation effect [M. Corasaniti \textit{et al}., Phys. Rev. B \textbf{104}, L121112 (2021)] and topology-stabilized magnetic fluctuations [N. Drucker \textit{et al}., Nat. Commun. \textbf{14}, 5182 (2023)] in the noncentrosymmetric magnetic Weyl semimetal candidate CeAlGe, we performed systematic studies on the local static and dynamic spin susceptibilities by $^{27}$Al nuclear magnetic resonance. Due to the large spin susceptibility from Ce-$4f$ electrons, the theoretically predicted responses from Weyl fermions are overwhelmed. A Knight-shift anomaly is observed below $T^*\sim50$ K, a signature of the onset of coherent Kondo coupling. In addition, an anomalous peak is found in $1/T_1T$ near 15 K, well above the magnetic ordering temperature $T_N \approx 5$ K, which probably is a consequence of topology-stabilized magnetic fluctuations. These results highlight the interplay among electronic correlation, magnetism and band topology in this family of Kondo Weyl semimetals.
|
https://arxiv.org/abs/2403.06476v2
|
An NMR study has been performed on the S = 1/2 antiferromagnet KCu6AlBiO4(SO4)5Cl on the square-kagome lattice, which has three slightly inequivalent nearest-neighbor interactions. Because of the geometrical frustration inherited from triangles within the square-kagome lattice and of the low dimensionality, long-range magnetic order is strongly suppressed; its absence has so far been confirmed at low temperatures down to the dilution-refrigerator region. 27Al-NMR spectra and the longitudinal relaxation time T1 were measured by a conventional pulsed spectrometer on a powder sample under several magnetic fields between 3 and 10 T and at low temperatures down to 0.35 K. The NMR line width due to inhomogeneous broadening increased with lowering temperature and leveled off below 3 K, where the FWHM reached a value as large as 0.1 T, implying that the ground state is a magnetic one, consistent with previous reports. On the other hand, the longitudinal nuclear spin relaxation rate 1/T1 obeyed the Arrhenius law with a thermal activation energy $\Delta$ = 2 K at low temperatures, suggesting that a small gap is formed in the spin excitation spectrum.
|
https://arxiv.org/abs/2402.18125v1
|