| text | source |
|---|---|
We theoretically propose that hexagonal silicon-based crystals, $P6/m$-Si$_6$
and $P6/m$-NaSi$_6$, are topological Dirac semimetals with superconducting
critical temperatures of 12 K and 13 K, respectively, at ambient pressure. Band
inversion occurs with the Fu-Kane topological invariant $\mathbb{Z}_2=1$, even
in the absence of spin-orbit coupling. The Dirac nodes protected by $C_6$
crystal rotational symmetry remain gapless with spin-orbit coupling. Using
first-principles calculations, we find pressure-induced topological phase
transitions for $P6/m$-Si$_6$ and $P6/m$-NaSi$_6$ with critical external
pressures of 11.5 GPa and 14.9 GPa, respectively. Above the critical pressures,
the Dirac bands are gapped with $\mathbb{Z}_2=0$, while the superconducting
states and the crystal symmetries are retained. Our results may shed light on
the search for silicon-based topological materials with superconductivity.
|
http://arxiv.org/abs/2303.17953v1
|
Memory interference may heavily inflate task execution times in Heterogeneous
Systems-on-Chips (HeSoCs). Knowing worst-case interference is consequently
fundamental for supporting the correct execution of time-sensitive
applications. In most of the literature, worst-case interference is assumed to
be generated by, and is therefore estimated through, read-intensive synthetic
workloads with no caching. Yet, as the general results reported in this work
show, these workloads do not always generate worst-case interference. By
testing on multiple architectures, we determined that the traffic pattern
generating the highest interference is actually hardware dependent, and that
making such assumptions can lead to a severe underestimation of the worst case
(in our case, by more than 9x).
|
http://arxiv.org/abs/2309.12864v1
|
Nontrivial dark sector physics continues to be an interesting avenue in our
quest to understand the nature of dark matter. In this paper, we study the
cosmological signatures of mass-varying dark matter, where the mass changes
from zero to a nonzero value in the early Universe. We compute the changes in
various observables, such as the linear matter power spectrum and the cosmic
microwave
background anisotropy power spectrum. We explain the origin of the effects and
point out a qualitative similarity between this model and a warm dark matter
cosmology with no sudden mass transition. Finally, we perform a simple
analytical study to estimate the constraints on the parameters of this model
from the Lyman-$\alpha$ forest data.
|
http://arxiv.org/abs/2303.17947v2
|
The problem of transitioning smoothly from one audio clip to another arises
in many music consumption scenarios, especially as music consumption has moved
from professionally curated and live-streamed radios to personal playback
devices and services. We present the first steps toward a new method of
automatically transitioning from one audio clip to another by discretizing the
frequency spectrum into bins and then finding transition times for each bin. We
phrase the problem as one of graph flow optimization; specifically
min-cut/max-flow.
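As an illustration of this formulation, here is a minimal sketch (ours, not the paper's code) that phrases per-bin transition-time selection as an s-t min-cut on a seam-carving-style grid; the grid sizes, cost values, and smoothness penalty are illustrative assumptions.

```python
import numpy as np
import networkx as nx

# Toy grid: cutting the horizontal arc (b, t) -> (b, t+1) means that, in
# frequency bin b, playback switches from clip A to clip B at frame t.
rng = np.random.default_rng(0)
n_bins, n_frames = 8, 20
switch_cost = rng.random((n_bins, n_frames - 1))  # hypothetical mismatch costs
smoothness = 0.5  # penalty for adjacent bins transitioning at different times

G = nx.DiGraph()
for b in range(n_bins):
    G.add_edge("src", (b, 0), capacity=float("inf"))
    G.add_edge((b, n_frames - 1), "snk", capacity=float("inf"))
    for t in range(n_frames - 1):
        G.add_edge((b, t), (b, t + 1), capacity=switch_cost[b, t])
for b in range(n_bins - 1):
    for t in range(n_frames):
        # finite inter-bin capacities make jagged cuts expensive
        G.add_edge((b, t), (b + 1, t), capacity=smoothness)
        G.add_edge((b + 1, t), (b, t), capacity=smoothness)

cut_value, (side_a, side_b) = nx.minimum_cut(G, "src", "snk")
# first frame of each bin on the sink side = that bin's transition time
transition = {b: min(t for t in range(n_frames) if (b, t) in side_b)
              for b in range(n_bins)}
print(cut_value, transition)
```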
|
http://arxiv.org/abs/2301.13380v1
|
The phase diagram and the equation of state of QCD are investigated in the
presence of weak background electric fields by means of continuum extrapolated
lattice simulations. The complex action problem at nonzero electric field is
circumvented by a novel Taylor expansion, enabling the determination of the
linear response of the thermal QCD medium to constant electric fields -- in
contrast to simulations at imaginary electric fields, which, as we demonstrate,
involve an infrared singularity. Besides the electric susceptibility of QCD
matter, we determine the dependence of the Polyakov loop on the field strength
to leading order. Our results indicate a plasma-type behavior with a negative
susceptibility at all temperatures, as well as an increase in the transition
temperature as the electric field grows.
|
http://arxiv.org/abs/2309.07058v1
|
3D hand tracking methods based on monocular RGB videos are easily affected by
motion blur, while event cameras, sensors with high temporal resolution and
dynamic range, are naturally suited to this task thanks to their sparse output
and low power consumption. However, obtaining 3D annotations of fast-moving
hands is
difficult, which hampers the construction of event-based hand-tracking
datasets. In this paper, we propose an event-based speed-adaptive hand tracker
(ESAHT) to solve the hand tracking problem with event cameras. We enable a CNN
model trained on a slow-motion hand tracking dataset, which allows the model
to leverage the knowledge of RGB-based hand tracking solutions, to work on
fast hand tracking tasks. To realize our solution, we constructed the first 3D
hand tracking dataset captured by an event camera in a real-world environment,
devised two data augmentation methods to narrow the domain gap between slow-
and fast-motion data, developed a speed-adaptive event stream segmentation
method to handle hand movements at different speeds, and introduced a new
event-to-frame representation method adaptive to event streams of different
lengths. Experiments showed that our solution outperformed RGB-based as well as
previous event-based solutions in fast hand tracking tasks, and our codes and
dataset will be publicly available.
|
http://arxiv.org/abs/2302.14430v1
|
Methods of continuation of holomorphic functions of several complex variables
are investigated within the axiomatic framework of Araki, Haag, and Kastler in
local quantum field theory. The motivation comes from the analysis of a mass
gap in an energy-momentum spectrum without a vacuum vector. The main
conclusion is a certain non-restrictedness property in the mass gap situation.
Prior to that, some results on holomorphic functions related to a mass gap
situation are obtained and investigated.
|
http://arxiv.org/abs/2309.06346v1
|
In this paper we present the results of a new kaonic helium-4 measurement
with a 1.37 g/l gaseous target by the SIDDHARTA-2 experiment at the DA$\Phi$NE
collider. We measured, for the first time, the energies and yields of three
transitions belonging to the M series. Moreover, we improved by a factor of
about three the statistical precision of the 2p level energy shift and width
induced by the strong interaction, obtaining the most precise measurement for
gaseous kaonic helium, and measured the yield of the L$\alpha$ transition at
the employed density, providing a new experimental input to investigate the
density dependence of kaonic atom transition yields.
|
http://arxiv.org/abs/2310.20584v1
|
In this note, we propose several unsolved problems concerning the
irrotational oscillation of a water droplet under zero gravity. We will derive
the governing equation of this physical model, and convert it to a quasilinear
dispersive partial differential equation defined on the sphere, which formally
resembles the capillary water waves equation but describes oscillation defined
on a curved manifold instead. Three types of unsolved mathematical problems
related to this model will be discussed in view of hydrodynamical experiments
under zero gravity: (1) Strichartz-type inequalities for the linearized
problem; (2) existence of periodic solutions; (3) normal form reduction and
generic lifespan estimate. It is pointed out that all of these problems are
closely related to certain Diophantine equations, especially the third one.
|
http://arxiv.org/abs/2301.00115v2
|
In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved
error reduction framework for the derandomization of regular read-once
branching programs (ROBPs). Their result is based on a clever modification to
the inverse Laplacian perspective of space-bounded derandomization, which was
originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and
Vadhan (FOCS 2020).
In this work, we give an alternative error reduction framework for regular
ROBPs. Our new framework is based on a binary recursive formula from the work
of Chattopadhyay and Liao (CCC 2020), that they used to construct weighted
pseudorandom generators (WPRGs) for general ROBPs.
Based on our new error reduction framework, we give alternative proofs to the
following results for regular ROBPs of length $n$ and width $w$, both of which
were proved in the work of Chen et al. using their error reduction:
$\bullet$ There is a WPRG with error $\varepsilon$ that has seed length
$\tilde{O}(\log(n)(\sqrt{\log(1/\varepsilon)}+\log(w))+\log(1/\varepsilon)).$
$\bullet$ There is a (non-black-box) deterministic algorithm which estimates
the expectation of any such program within error $\pm\varepsilon$ with space
complexity $\tilde{O}(\log(nw)\cdot\log\log(1/\varepsilon)).$ (This was first
proved in the work of Ahmadinejad et al., but the proof by Chen et al. is
simpler.)
Because of the binary recursive nature of our new framework, both of our
proofs are based on a straightforward induction that is arguably simpler than
the Laplacian-based proof in the work of Chen et al.
|
http://arxiv.org/abs/2309.04551v2
|
Understanding the nematic phase observed in the iron-chalcogenide materials
is crucial for describing their superconducting pairing. Experiments on
FeSe$_{1-x}$S$_x$ showed that one of the slow Shubnikov--de Haas quantum
oscillation frequencies disappears when tuning the material out of the nematic
phase via chemical substitution or pressure, which has been interpreted as a
Lifshitz transition [Coldea et al., npj Quant Mater 4, 2 (2019), Reiss et al.,
Nat. Phys. 16, 89-94 (2020)]. Here, we present a generic, alternative scenario
for a nematicity-induced sharp quantum oscillation frequency which disappears
in the tetragonal phase and is not connected to an underlying Fermi surface
pocket. We show that different microscopic interband scattering mechanisms -
for example, orbital-selective scattering - in conjunction with nematic order
can give rise to this quantum oscillation frequency beyond the standard Onsager
relation. We discuss implications for iron-chalcogenides and the interpretation
of quantum oscillations in other correlated materials.
|
http://arxiv.org/abs/2309.04237v1
|
By the Rabinowitsch trick, Hilbert's Nullstellensatz follows from the weak
Nullstellensatz (Rabinowitsch 1929). The weak version can be shown with
elimination theory. Hilbert's original proof is also based on successive
elimination. Lasker obtained a new proof using primary decomposition. We
describe these early proofs and place them in the development of commutative
algebra up to the appearance of van der Waerden's Moderne Algebra. We also
explain Hentzelt's Nullstellensatz.
|
http://arxiv.org/abs/2309.14024v1
|
Maintaining factual consistency is a critical issue in abstractive text
summarisation; however, it cannot be assessed by the traditional automatic
metrics used for evaluating text summarisation, such as ROUGE scoring. Recent
efforts
have been devoted to developing improved metrics for measuring factual
consistency using pre-trained language models, but these metrics have
restrictive token limits, and are therefore not suitable for evaluating long
document text summarisation. Moreover, there is limited research and resources
available for evaluating whether existing automatic evaluation metrics are fit
for purpose when applied in long document settings. In this work, we evaluate
the efficacy of automatic metrics for assessing the factual consistency of long
document text summarisation. We create a human-annotated data set for
evaluating automatic factuality metrics, LongSciVerify, which contains
fine-grained factual consistency annotations for long document summaries from
the scientific domain. We also propose a new evaluation framework,
LongDocFACTScore, which is suitable for evaluating long document summarisation.
This framework allows metrics to be efficiently extended to any length document
and outperforms existing state-of-the-art metrics in its ability to correlate
with human measures of factuality when used to evaluate long document
summarisation data sets. We make our code and LongSciVerify data set publicly
available: https://github.com/jbshp/LongDocFACTScore.
|
http://arxiv.org/abs/2309.12455v2
|
We revisit the problem of classification and explicit construction of the
conformal three-point correlation functions of currents of arbitrary integer
spin in arbitrary dimensions. For the conserved currents, we set up the
equations for the conservation conditions and solve them completely for some
values of spins, confirming the earlier counting of the number of independent
structures matching them with the higher-spin cubic vertices in one higher
dimension. We defer the general solution for the correlators of conserved
currents to a follow-up work.
|
http://arxiv.org/abs/2309.05129v2
|
In reverberant conditions with multiple concurrent speakers, each microphone
acquires a mixture signal of multiple speakers at a different location. In
over-determined conditions, where the microphones outnumber the speakers, we
can
narrow down the solutions to speaker images and realize unsupervised speech
separation by leveraging each mixture signal as a constraint (i.e., the
estimated speaker images at a microphone should add up to the mixture).
Equipped with this insight, we propose UNSSOR, an algorithm for
$\textbf{u}$nsupervised $\textbf{n}$eural $\textbf{s}$peech
$\textbf{s}$eparation by leveraging $\textbf{o}$ver-determined training
mixtu$\textbf{r}$es. At each training step, we feed an input mixture to a deep
neural network (DNN) to produce an intermediate estimate for each speaker,
linearly filter the estimates, and optimize a loss so that, at each microphone,
the filtered estimates of all the speakers can add up to the mixture to satisfy
the above constraint. We show that this loss can promote unsupervised
separation of speakers. The linear filters are computed in each sub-band based
on the mixture and DNN estimates through the forward convolutive prediction
(FCP) algorithm. To address the frequency permutation problem incurred by using
sub-band FCP, a loss term based on minimizing intra-source magnitude scattering
is proposed. Although UNSSOR requires over-determined training mixtures, we can
train DNNs to achieve under-determined separation (e.g., unsupervised monaural
speech separation). Evaluation results on two-speaker separation in reverberant
conditions show the effectiveness and potential of UNSSOR.
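To make the mixture-constraint loss concrete, the following numpy sketch (our illustration, not the authors' implementation) filters each source estimate per frequency with a least-squares FCP-style filter and scores the residual against each microphone's mixture; the shapes, tap count K, and loop structure are assumptions.

```python
import numpy as np

def fcp_filter(est, mix, K=4):
    """Least-squares filter (K past taps, one frequency) mapping a source
    estimate to its image in one microphone's mixture (FCP-style)."""
    T = est.shape[0]
    X = np.stack([np.concatenate([np.zeros(k, complex), est[:T - k]])
                  for k in range(K)], axis=1)   # delayed copies of estimate
    g, *_ = np.linalg.lstsq(X, mix, rcond=None)
    return X @ g                                # inferred source image

def mixture_constraint_loss(estimates, mixtures):
    """estimates: (n_src, F, T) DNN outputs; mixtures: (n_mic, F, T) STFTs."""
    loss = 0.0
    for mix in mixtures:                  # each microphone
        for f in range(mix.shape[0]):     # each frequency (sub-band FCP)
            imgs = [fcp_filter(est[f], mix[f]) for est in estimates]
            # filtered estimates should add up to the observed mixture
            loss += np.sum(np.abs(mix[f] - sum(imgs)) ** 2)
    return loss
```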
|
http://arxiv.org/abs/2305.20054v2
|
The ultra-luminous X-ray source CXO~J133815.6+043255 is a strong candidate
for a bona fide intermediate-mass black hole residing in the outskirts of
NGC~5252. We present 22~GHz radio observations of this source obtained
serendipitously in an ongoing high-frequency imaging survey of radio-quiet
Active Galactic Nuclei (AGN), and use this new data point to construct the
broad-band radio spectral energy distribution (SED). We find that the SED
exhibits a spectral slope of $\alpha=-0.66\pm0.02$, consistent with a steep
spectrum from optically-thin synchrotron emission from an unresolved jet. We
also find that the $L_R / L_X$ ratio is approximately $10^{-3}$, inconsistent
with radio-quiet AGN and many ULXs but consistent with low-luminosity AGN
(LLAGN) and radio-loud quasars. Together, these observations support the
conclusion that CXO~J133815.6+043255 is an intermediate-mass black hole
producing a low-mass analog of radio jets seen in classical quasars.
|
http://arxiv.org/abs/2309.00051v1
|
In clinical settings, intracranial hemorrhages (ICH) are routinely diagnosed
using non-contrast CT (NCCT) for severity assessment. Accurate automated
segmentation of ICH lesions is the initial and essential step, immensely useful
for such assessment. However, compared to other structural imaging modalities
such as MRI, in NCCT images ICH appears with very low contrast and poor SNR.
Over recent years, deep learning (DL)-based methods have shown great potential,
however, training them requires a huge amount of manually annotated
lesion-level labels, with sufficient diversity to capture the characteristics
of ICH. In this work, we propose a novel weakly supervised DL method for ICH
segmentation on NCCT scans, using image-level binary classification labels,
which are less time-consuming and labor-intensive to obtain than the manual
labeling of individual ICH lesions. Our method initially determines the
approximate location of ICH using class activation maps from a classification
network, which is trained to learn dependencies across contiguous slices. We
further refine the ICH segmentation using pseudo-ICH masks obtained in an
unsupervised manner. The method is flexible and uses a computationally light
architecture during testing. On the validation data of the MICCAI 2022
INSTANCE challenge, our method achieves a Dice value of 0.55, comparable with
that of an existing weakly supervised method (Dice value of 0.47), despite
being trained on much less data.
|
http://arxiv.org/abs/2309.16627v1
|
The nature of the first Pop III stars is still a mystery, and the energy
distribution of the first supernovae is completely unexplored. For the first
time we account simultaneously for the unknown initial mass function (IMF),
stellar mixing, and energy distribution function (EDF) of Pop III stars in the
context of a cosmological model for the formation of an MW analogue. Our
data-calibrated semi-analytic model is based on an N-body simulation and
follows
the formation and evolution of both Pop III and Pop II/I stars in their proper
timescales. We discover degeneracies between the adopted Pop III unknowns, in
the predicted metallicity and carbonicity distribution functions and the
fraction of C-enhanced stars. Nonetheless, we are able to provide the first
available constraints on the EDF, $dN/dE_\star \propto E_{\star}^{-\alpha_e}$
with $1\leq \alpha_e \leq2.5$. In addition, the characteristic mass of the Pop
III IMF should be $m_{\rm ch}<100\:{\rm M_\odot}$, assuming a mass range
consistent with hydrodynamical simulations (0.1-1000$\:{\rm M_\odot}$).
Independent of the assumed Pop III properties, we find that all [C/Fe]>+0.7
stars (with [Fe/H]<-2.8) have been enriched by Pop III supernovae at a $>20\%$
level, and all [C/Fe]>+2 stars at a $>95\%$ level. All very metal-poor stars
with $\rm [C/Fe]<0$ are predicted to be predominantly enriched by Pop III
hypernovae and/or pair instability supernovae. To better constrain the
primordial EDF, it is absolutely crucial to have a complete and accurate
determination of the metallicity distribution function, and the properties of
C-enhanced metal-poor stars (frequency and [C/Fe]) in the Galactic halo.
|
http://arxiv.org/abs/2309.00045v1
|
This short note provides explicit solutions to the linearized Boussinesq
equations around the stably stratified Couette flow posed on
$\mathbb{T}\times\mathbb{R}$. We consider the long-time behavior of such
solutions and prove inviscid damping of the perturbed density and velocity
field for any positive Richardson number, with optimal rates. The explicit
solution is obtained through the limiting absorption principle whereas the
inviscid damping is proved using oscillatory integral methods.
|
http://arxiv.org/abs/2309.08419v2
|
System-level testing of healthcare Internet of Things (IoT) applications
requires creating a test infrastructure with integrated medical devices and
third-party applications. A significant challenge in creating such test
infrastructure is that healthcare IoT applications evolve continuously with the
addition of new medical devices from different vendors and new services offered
by different third-party organizations following different architectures.
Moreover, creating test infrastructure with a large number of different types
of medical devices is time-consuming, financially expensive, and practically
infeasible. Oslo City's healthcare department faced these challenges while
working with various healthcare IoT applications. To address these challenges,
this paper presents a real-world test infrastructure software architecture
(HITA) designed for healthcare IoT applications. We evaluated HITA's digital
twin (DT) generation component implemented using model-based and machine
learning (ML) approaches in terms of DT fidelity, scalability, and time cost of
generating DTs. Results show that the fidelity of DTs created using the
model-based and ML approaches reaches 94% and 95%, respectively. Results from
operating 100
DTs concurrently show that the DT generation component is scalable and ML-based
DTs have a higher time cost.
|
http://arxiv.org/abs/2309.04223v3
|
Meta-analysis aims to combine effect measures from several studies. For
continuous outcomes, the most popular effect measures use simple or
standardized differences in sample means. However, a number of applications
focus on the absolute values of these effect measures (i.e., unsigned magnitude
effects). We provide statistical methods for meta-analysis of magnitude effects
based on standardized mean differences. We propose a suitable statistical model
for random-effects meta-analysis of absolute standardized mean differences
(ASMD), investigate a number of statistical methods for point and interval
estimation, and provide practical recommendations for choosing among them.
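One natural way to formalize such a model (our notation, not necessarily the authors' exact specification): under a normal random-effects model for the signed SMD, the ASMD follows a folded normal distribution, whose mean is known in closed form.

```latex
% One natural random-effects model for magnitude effects (notation ours):
% the signed SMD is normal, so the ASMD is folded normal.
\[
  d_i \sim \mathcal{N}\!\left(\theta,\; \sigma_i^2 + \tau^2\right), \qquad
  m_i = |d_i|,
\]
\[
  \mathbb{E}[m_i]
  = s_i \sqrt{\frac{2}{\pi}} \, e^{-\theta^2/(2 s_i^2)}
  + \theta \left[ 1 - 2\Phi\!\left(-\frac{\theta}{s_i}\right) \right],
  \qquad s_i^2 = \sigma_i^2 + \tau^2,
\]
% where \Phi is the standard normal CDF (mean of a folded normal).
```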
|
http://arxiv.org/abs/2310.00126v1
|
Mobile robots often have limited battery life and need to recharge
periodically. This paper presents an RRT-based path-planning algorithm that
addresses battery power management. A path is generated continuously from the
robot's current position to its recharging station. The robot decides if a
recharge is needed based on the energy required to travel on that path and the
robot's current power. RRT* is used to generate the first path, and then
subsequent paths are made using information from previous trees. Finally, the
presented algorithm was compared with the Extended Rapidly-exploring Random
Tree (ERRT) algorithm.
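A minimal sketch of the recharge decision described above, assuming a simple distance-proportional energy model with a safety factor; none of the constants or function names come from the paper.

```python
# Illustrative energy model: consumption proportional to path length,
# padded by a safety factor. All constants are assumptions, not the paper's.
def path_length(path):
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def needs_recharge(path_to_charger, battery_wh, wh_per_meter=0.05, safety=1.2):
    """True if the remaining energy barely covers the trip to the charger."""
    return battery_wh <= wh_per_meter * path_length(path_to_charger) * safety

# path continuously replanned (e.g., by RRT*) from the robot to its charger
path = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]        # 10 m long
print(needs_recharge(path, battery_wh=0.7))        # 0.7 > 0.6 Wh -> False
```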
|
http://arxiv.org/abs/2310.20590v1
|
Radiation therapy is a critical component of cancer treatment. However, the
delivery of radiation poses inherent challenges, particularly in minimizing
radiation exposure to healthy organs surrounding the tumor site. One
significant contributing factor to this challenge is the patient's respiration,
which introduces uncertainties in the precise targeting of radiation. Managing
these uncertainties during radiotherapy is essential to ensure effective tumor
treatment while minimizing the adverse effects on healthy tissues. This
research addresses the crucial objective of achieving a balanced dose
distribution during radiation therapy under conditions of respiration
uncertainty. To tackle this issue, we begin by developing a motion uncertainty
model employing probability density functions that characterize breathing
motion patterns. This model forms the foundation for our efforts to optimize
radiation dose delivery. Next, we employ three bio-inspired optimization
techniques: cuckoo search optimization (CSO), the flower pollination algorithm
(FPA), and bat search optimization (BSO). Our research evaluates the dose
distribution in Gy on both the tumor and healthy organs by applying these
bio-inspired optimization methods to identify the most effective approach. This
research ultimately aids in refining the strategies used in radiation therapy
planning under the challenging conditions posed by respiration uncertainty.
Through the application of bio-inspired optimization techniques and a
comprehensive evaluation of dose distribution, we seek to improve the precision
and safety of radiation therapy, thereby advancing cancer treatment outcomes.
|
http://arxiv.org/abs/2309.15448v1
|
While resonant modes do not exist within band gaps in infinite periodic
materials, they may appear as in-gap localized edge modes once the material is
truncated to form a finite periodic structure. Here, we provide an analysis
framework that reveals the topological origins of truncation resonances,
elucidating formally the conditions that influence their existence and
properties. Elastic beams with sinusoidal and step-wise property modulations
are considered as classical examples of periodic structures. Their non-trivial
topological characteristics stem from the consideration of a phason parameter
that produces spatial shifts of the property modulation while continuously
varying how the boundaries are truncated. In this context, non-trivial band
gaps are characterized by an integer topological invariant, the Chern number,
which is equal to the number of truncation resonances that traverse a band gap
as the phason is varied. We highlight the existence of multiple chiral edge
states that may be localized at opposite boundaries, and illustrate how these
can be independently tuned by modified boundary-specific phason parameters.
Furthermore, we show that the frequency location of a truncation resonance is
influenced by the modulation volume fraction, boundary conditions, and number
of cells comprising the finite structure, thus quantifying its robustness to
these factors. Non-topological in-gap resonances induced by a defect are also
demonstrated, showing that these can be coupled with topological modes when the
defect is located at an edge. Finally, experimental investigations on
bi-material phononic-crystal beams are conducted to support these findings. The
tunability of truncation resonances by material-property modulation may be
exploited in applications ranging from vibration attenuation and thermal
conductivity reduction to filtering and flow control by phononic subsurfaces.
|
http://arxiv.org/abs/2301.00101v1
|
Recent advances in diffusion models have led to a quantum leap in the quality
of generative visual content. However, quantification of realism of the content
is still challenging. Existing evaluation metrics, such as Inception Score and
Fr\'echet inception distance, fall short on benchmarking diffusion models due
to the versatility of the generated images. Moreover, they are not designed to
quantify realism of an individual image. This restricts their application in
forensic image analysis, which is becoming increasingly important in the
emerging era of generative models. To address that, we first propose a metric,
called Image Realism Score (IRS), computed from five statistical measures of a
given image. This non-learning-based metric not only efficiently quantifies
the realism of generated images but is also readily usable to classify a given
image as real or fake. We experimentally establish the model- and
data-agnostic nature of the proposed IRS by successfully detecting fake images
generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
We further leverage this attribute of our metric to minimize an IRS-augmented
generative loss of SDM, and demonstrate a convenient yet considerable quality
improvement of the SDM-generated content with our modification. Our efforts
have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes
generated by four high-quality models. We will release the dataset and code.
|
http://arxiv.org/abs/2309.14756v1
|
Ultrabroadband frequency combs coherently unite distant portions of the
electromagnetic spectrum. They underpin discoveries in ultrafast science and
serve as the building blocks of modern photonic technologies. Despite
tremendous progress in integrated sources of frequency combs, achieving
multi-octave operation on chip has remained elusive mainly because of the
energy demand of typical spectral broadening processes. Here we break this
barrier and demonstrate multi-octave frequency comb generation using an optical
parametric oscillator (OPO) in nanophotonic lithium niobate with only
femtojoules of pump energy. The energy-efficient and robust coherent spectral
broadening occurs far above the oscillation threshold of the OPO and detuned
from its linear synchrony with the pump. We show that the OPO can undergo a
temporal self-cleaning mechanism by transitioning from an incoherent operation
regime, which is typical for operation far above threshold, to an ultrabroad
coherent regime, corresponding to the nonlinear phase compensating the OPO
cavity detuning. Such a temporal self-cleaning mechanism and the subsequent
multi-octave coherent spectrum has not been explored in previous OPO designs
and features a relaxed requirement for the quality factor and relatively narrow
spectral coverage of the cavity. We achieve orders of magnitude reduction in
the energy requirement compared to the other techniques, confirm the coherence
of the comb, and present a path towards more efficient and wider spectral
broadening. Our results pave the way for ultrashort-pulse and ultrabroadband
on-chip nonlinear photonic systems for numerous applications.
|
http://arxiv.org/abs/2309.04545v1
|
The simultaneous laser-driven acceleration and angular manipulation of the
fast electron beam is experimentally demonstrated. The bunch of multi-MeV
energy charged particles is generated during the propagation of the femtosecond
laser pulse through the near-critical plasma slab accompanied by plasma
channeling. Plasma is formed by the controlled breakdown of a thin-tape target
by a powerful nanosecond prepulse. The electron beam pointing approach is based
on the refraction of a laser pulse in the presence of a strong radial density
gradient in the breakdown of the tape with a small displacement of the
femtosecond laser beam relative to the breakdown symmetry axis. A shift of
several micrometers makes it possible to achieve beam deflection by an angle up
to 10 degrees with acceptable beam charge and spectrum conservation. This opens
up opportunities for in-situ applications for scanning objects with an electron
beam and the multistage electron beam energy gain in consecutive laser
accelerators without bulk magnetic optics for particles. Experimental findings
are supported by numerical Particle-In-Cell calculations of laser-plasma
acceleration and hydrodynamic simulations.
|
http://arxiv.org/abs/2309.10530v2
|
This paper describes the full end-to-end design of our primary scoring agent
in an aerial autonomous robotics competition from April 2023. As open-ended
robotics competitions become more popular, we wish to begin documenting
successful team designs and approaches. The intended audience of this paper is
not only any future or potential participant in this particular national Defend
The Republic (DTR) competition, but rather anyone thinking about designing
their first robot or system to be entered in a competition with clear goals.
Future DTR participants can and should either build on the ideas here, or find
new alternate strategies that can defeat the most successful design last time.
For students interested in robotics competitions beyond DTR, identifying the
minimum viable system needed to be competitive is still important in helping
manage time and prioritize the tasks that are most crucial to competition
success.
|
http://arxiv.org/abs/2309.06352v1
|
Anomalous cancellation of fractions is a mathematically invalid procedure in
which cancelling the digits shared by the numerator and denominator
nevertheless yields the correctly reduced fraction; for example, striking the
6s in 16/64 gives the correct value 1/4. While it appears to be accidentally
successful, the property of anomalous cancellation is intricately connected to
the number of digits of the denominator as well as the base in which the
fraction is represented. Previous work has mostly concerned three-digit
solutions or specific properties of the same. This paper seeks to obtain
general results regarding the structure of
numbers that follow the cancellation property (denoted by $P^*_{\ell; k}$) and
an estimate of the total number of solutions possible in a given base
representation. In particular, interesting properties regarding the saturation
of the number of solutions in general and $p^n$ bases (where $p$ is a prime)
have been studied in detail.
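For concreteness, a short script enumerating all two-digit anomalous cancellations of the 16/64 type in a given base (our illustration; the paper's notation $P^*_{\ell; k}$ covers more general digit patterns):

```python
from fractions import Fraction

def anomalous_pairs(base=10):
    """Two-digit fractions (ab)/(bc) that stay correct after striking b."""
    sols = []
    for a in range(1, base):
        for b in range(1, base):
            for c in range(1, base):
                num, den = a * base + b, b * base + c
                if num < den and Fraction(num, den) == Fraction(a, c):
                    sols.append((num, den))
    return sols

print(anomalous_pairs())  # base 10: [(16, 64), (19, 95), (26, 65), (49, 98)]
```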
|
http://arxiv.org/abs/2302.00479v1
|
We develop a reliable parameter-free analytic continuation method for quantum
many-body calculations. Our method is based on a kernel grid, a causal spline,
a regularization using the second-derivative roughness penalty, and the L-curve
criterion. We also develop the L-curve averaged deviation to estimate the
precision of our analytic continuation. To deal with statistically obtained
data more efficiently, we further develop a bootstrap-averaged analytic
continuation method. In a test using an exact imaginary-frequency Green's
function with added statistical error, our method produces a spectral function
that converges systematically to the exact one as the statistical
error decreases. As an application, we simulate the two-orbital Hubbard model
for various electron numbers with dynamical mean-field theory in imaginary
time and obtain the real-frequency self-energy with our analytic continuation
method, clearly identifying non-Fermi liquid behavior as the electron number
approaches half filling from quarter filling. Our
analytic continuation can be used widely and it will facilitate drawing clear
conclusions from imaginary-time quantum many-body calculations.
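A schematic numpy sketch of the regularization-plus-L-curve ingredients named above (ours; the paper's kernel grid, causal spline, and causality handling are omitted, and the corner-finding heuristic is an assumption):

```python
import numpy as np

def solve(K, G, lam, D2):
    # minimise ||K a - G||^2 + lam ||D2 a||^2 via the normal equations
    A = K.conj().T @ K + lam * D2.T @ D2
    return np.linalg.solve(A, K.conj().T @ G)

def pick_lambda(K, G, D2, lams):
    """Scan lambda and return the value nearest the L-curve corner."""
    pts = []
    for lam in lams:
        a = solve(K, G, lam, D2)
        pts.append((np.log(np.linalg.norm(K @ a - G)),    # misfit
                    np.log(np.linalg.norm(D2 @ a))))      # roughness
    p = np.array(pts)
    curv = np.abs(np.diff(p[:, 1], 2))  # crude corner heuristic (assumption)
    return lams[1 + int(np.argmax(curv))]
```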
|
http://arxiv.org/abs/2301.00129v1
|
We use the two-flavor linear sigma model with quarks to study the phase
structure of isospin-asymmetric matter at zero temperature. The meson degrees
of freedom provide the mean-field chiral and isospin condensates on top of
which we compute the effective potential, accounting for constituent quark
fluctuations at one-loop order. Using the renormalizability of the model, we
absorb the ultraviolet divergences into suitable counter-terms that are added
respecting the original structure of the theory. These counter-terms are
determined from the stability conditions which require the effective potential
to have minima in the condensates directions at the classical values, as well
as the transition from the non-condensed to the condensed phase to be smooth as
a function of the isospin chemical potential. We use the model to study the
evolution of the condensates as well as the pressure, energy and isospin
densities and the sound velocity as functions of the isospin chemical
potential. The approach provides a good average description up to isospin
chemical potential values that are not too large compared to the vacuum pion
mass.
|
http://arxiv.org/abs/2301.13633v2
|
We study a challenging problem of unsupervised discovery of object landmarks.
Many recent methods rely on bottlenecks to generate 2D Gaussian heatmaps;
however, these are limited in generating informed heatmaps while training,
presumably due to the lack of effective structural cues. Also, it is assumed
that all predicted landmarks are semantically relevant despite having no ground
truth supervision. In the current work, we introduce a consistency-guided
bottleneck in an image reconstruction-based pipeline that leverages landmark
consistency, a measure of compatibility score with the pseudo-ground truth to
generate adaptive heatmaps. We propose obtaining pseudo-supervision via forming
landmark correspondence across images. The consistency then modulates the
uncertainty of the discovered landmarks in the generation of adaptive heatmaps
which rank consistent landmarks above their noisy counterparts, providing
effective structural information for improved robustness. Evaluations on five
diverse datasets including MAFL, AFLW, LS3D, Cats, and Shoes demonstrate
excellent performance of the proposed approach compared to the existing
state-of-the-art methods. Our code is publicly available at
https://github.com/MamonaAwan/CGB_ULD.
|
http://arxiv.org/abs/2309.10518v1
|
In the dynamic landscape of digital forensics, the integration of Artificial
Intelligence (AI) and Machine Learning (ML) stands as a transformative
technology, poised to amplify the efficiency and precision of digital forensics
investigations. However, the use of ML and AI in digital forensics is still in
its nascent stages. As a result, this paper gives a thorough and in-depth
analysis that goes beyond a simple survey and review. The goal is to look
closely at how AI and ML techniques are used in digital forensics and incident
response. This research explores cutting-edge research initiatives that cross
domains such as data collection and recovery, the intricate reconstruction of
cybercrime timelines, robust big data analysis, pattern recognition,
safeguarding the chain of custody, and orchestrating responsive strategies to
hacking incidents. This endeavour digs far beneath the surface to unearth the
intricate ways AI-driven methodologies are shaping these crucial facets of
digital forensics practice. While the promise of AI in digital forensics is
evident, the challenges arising from increasing database sizes and evolving
criminal tactics necessitate ongoing collaborative research and refinement
within the digital forensics profession. This study examines the contributions,
limitations, and gaps in the existing research, shedding light on the potential
and limitations of AI and ML techniques. By exploring these different research
areas, we highlight the critical need for strategic planning, continual
research, and development to unlock AI's full potential in digital forensics
and incident response. Ultimately, this paper underscores the significance of
AI and ML integration in digital forensics, offering insights into their
benefits, drawbacks, and broader implications for tackling modern cyber
threats.
|
http://arxiv.org/abs/2309.07064v2
|
Unlike perfect information games, where all elements are known to every
player, imperfect information games emulate the real-world complexities of
decision-making under uncertain or incomplete information. GPT-4, the recent
breakthrough in large language models (LLMs) trained on massive passive data,
is notable for its knowledge retrieval and reasoning abilities. This paper
delves into the applicability of GPT-4's learned knowledge for imperfect
information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an
innovative agent that leverages GPT-4's capabilities for performing in
imperfect information games. With proper prompt engineering to achieve
different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable
adaptability across a range of imperfect information card games. Importantly,
GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it
can understand others and intentionally impact others' behavior. Leveraging
this, we design a planning strategy that enables GPT-4 to competently play
against different opponents, adapting its gameplay style as needed, while
requiring only the game rules and descriptions of observations as input. In the
experiments, we qualitatively showcase the capabilities of Suspicion-Agent
across three different imperfect information games and then quantitatively
evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can
potentially outperform traditional algorithms designed for imperfect
information games, without any specialized training or examples. In order to
encourage and foster deeper insights within the community, we make our
game-related data publicly available.
|
http://arxiv.org/abs/2309.17277v3
|
Radioactive decays from $^{42}$Ar and its progeny $^{42}$K are potential
background sources in large-scale liquid-argon-based neutrino and dark matter
experiments. In the atmosphere, $^{42}$Ar is produced primarily by cosmogenic
activation of $^{40}$Ar. The use of low-radioactivity argon from cosmogenically
shielded underground sources can expand the reach and sensitivity of
liquid-argon-based rare event searches. We estimate $^{42}$Ar production
underground by nuclear reactions induced by natural radioactivity and
cosmic-ray muon-induced interactions. At 3,000 mwe, the $^{42}$Ar production
rate is $1.8\times10^{-3}$ atoms per ton of crust per year, 7 orders of
magnitude smaller than the $^{39}$Ar production rate at a similar depth in the
crust. By comparing the calculated production rate of $^{42}$Ar to that of
$^{39}$Ar, for which the concentration has been measured in an underground gas
sample, we estimate the activity of $^{42}$Ar in gas extracted from 3,000 mwe
depth to be less than 2 decays per ton of argon per year.
|
http://arxiv.org/abs/2309.16169v1
|
A set of vertices of a graph $G$ is said to be decycling if its removal
leaves an acyclic subgraph. The size of a smallest decycling set is the
decycling number of $G$. Generally, at least $\lceil(n+2)/4\rceil$ vertices
have to be removed in order to decycle a cubic graph on $n$ vertices. In 1979,
Payan and Sakarovitch proved that the decycling number of a cyclically
$4$-edge-connected cubic graph of order $n$ equals $\lceil (n+2)/4\rceil$. In
addition, they characterised the structure of minimum decycling sets and their
complements. If $n\equiv 2\pmod4$, then $G$ has a decycling set which is
independent and its complement induces a tree. If $n\equiv 0\pmod4$, then one
of two possibilities occurs: either $G$ has an independent decycling set whose
complement induces a forest of two trees, or the decycling set is
near-independent (which means that it induces a single edge) and its complement
induces a tree. In this paper we strengthen the result of Payan and Sakarovitch
by proving that the latter possibility (a near-independent set and a tree) can
always be guaranteed. Moreover, we relax the assumption of cyclic
$4$-edge-connectivity to a significantly weaker condition expressed through the
canonical decomposition of 3-connected cubic graphs into cyclically
$4$-edge-connected ones. Our methods substantially use a surprising and
seemingly distant relationship between the decycling number and the maximum
genus of a cubic graph.
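A brute-force illustration (not part of the paper) of the Payan-Sakarovitch value on a classical example: the Petersen graph is a cyclically 4-edge-connected cubic graph with $n=10$, so its decycling number should equal $\lceil(10+2)/4\rceil = 3$.

```python
from itertools import combinations
import networkx as nx

def decycling_number(G):
    n = G.number_of_nodes()
    for k in range(n + 1):
        for S in combinations(G.nodes, k):
            H = G.copy()
            H.remove_nodes_from(S)       # delete the candidate decycling set
            if H.number_of_nodes() == 0 or nx.is_forest(H):
                return k

G = nx.petersen_graph()                  # cubic, cyclically 4-edge-connected
print(decycling_number(G))               # 3 == ceil((10 + 2) / 4)
```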
|
http://arxiv.org/abs/2309.11606v1
|
The experiment involving the entanglement of two massive particles through
gravitational fields has been devised to discern the quantum attributes of
gravity. In this paper, we present a scheme to extend this experiment's
applicability to more generalized curved spacetimes, with the objective of
validating universal quantum gravity within broader contexts. Specifically, we
direct our attention towards the quantum gravity induced entanglement of mass
(QGEM) in astrophysical phenomena, such as particles traversing the
interstellar medium. Notably, we ascertain that the gravitational field within
curved spacetime can induce observable entanglement between particle pairs in
both scenarios, even for particles with masses well below the mesoscopic
scale. Furthermore, we obtain the characteristic spectra of QGEM
across diverse scenarios, shedding light on potential future experimental
examinations. This approach not only establishes a more pronounced and
extensive manifestation of the quantum influences of gravity compared to the
original scheme but also opens avenues for prospective astronomical
experiments. These experiments, aligned with our postulates, hold immense
advantages and implications for the detection of quantum gravity and can be
envisioned for future design.
|
http://arxiv.org/abs/2308.16526v2
|
Nowadays, billions of phones, IoT and edge devices around the world generate
data continuously, enabling many Machine Learning (ML)-based products and
applications. However, due to increasing privacy concerns and regulations,
these data tend to reside on devices (clients) instead of being centralized for
performing traditional ML model training. Federated Learning (FL) is a
distributed approach in which a single server and multiple clients
collaboratively build an ML model without moving data away from clients.
Whereas existing studies on FL have their own experimental evaluations, most
experiments were conducted using a simulation setting or a small-scale testbed.
This might limit the understanding of FL implementation in realistic
environments. In this empirical study, we systematically conduct extensive
experiments on a large network of IoT and edge devices (called IoT-Edge
devices) to present FL real-world characteristics, including learning
performance and operation (computation and communication) costs. Moreover, we
mainly concentrate on heterogeneous scenarios, which pose the most challenging
issue in FL. By investigating the feasibility of on-device implementation, our
study provides valuable insights for researchers and practitioners, promoting
the practicality of FL and assisting in improving the current design of real FL
systems.
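For readers unfamiliar with the FL setting described above, a minimal FedAvg-style round (a generic sketch, not the study's code): clients update a model locally and the server averages the weights by local data size, so raw data never leaves the clients.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):              # local least-squares gradient steps
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    sizes = [len(y) for _, y in clients]
    updates = [local_update(w_global.copy(), X, y) for X, y in clients]
    return sum(n * w for n, w in zip(sizes, updates)) / sum(sizes)

rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):                      # raw (X, y) never leaves a client
    w = fedavg_round(w, clients)
```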
|
http://arxiv.org/abs/2305.19831v1
|
Constant product markets with concentrated liquidity (CL) are the most
popular type of automated market makers. In this paper, we characterise the
continuous-time wealth dynamics of strategic liquidity providers (LPs) who
dynamically adjust their range of liquidity provision in CL pools. Their
wealth results from fee income,
the value of their holdings in the pool, and rebalancing costs. Next, we derive
a self-financing and closed-form optimal liquidity provision strategy where the
width of the LP's liquidity range is determined by the profitability of the
pool (provision fees minus gas fees), the predictable losses (PL) of the LP's
position, and concentration risk. Concentration risk refers to the decrease in
fee revenue if the marginal exchange rate (akin to the midprice in a limit
order book) in the pool exits the LP's range of liquidity. When the drift in
the marginal rate is stochastic, we show how to optimally skew the range of
liquidity to increase fee revenue and profit from the expected changes in the
marginal rate. Finally, we use Uniswap v3 data to show that, on average, LPs
have traded at a significant loss, and to show that the out-of-sample
performance of our strategy is superior to the historical performance of LPs in
the pool we consider.
|
http://arxiv.org/abs/2309.08431v3
|
The Mathisson-Papapetrou-Dixon (MPD) equations describe the motion of
spinning test particles. It is well known that these equations, which couple
the Riemann curvature tensor to the antisymmetric spin tensor $S^{\mu\nu}$,
together with the normalization condition for the four-velocity, form a system
of eleven equations relating fourteen unknowns. To ``close'' the system, it is
necessary to introduce a constraint of the form $V_\mu S^{\mu \nu} = 0$,
usually known as the spin supplementary condition (SSC), where $V_\mu$ is a
future-oriented reference vector satisfying the normalization condition
$V_\alpha V^\alpha = -1$.
There are several SSCs in the literature. In particular, the Tulczyjew-Dixon,
Mathisson-Pirani, and Ohashi-Kyrian-Semer\'ak SSCs are the most used by the
community. From the physical point of view, choosing a different SSC (a
different reference vector $V^\mu$) is equivalent to fixing the centroid of the
test particle. In this manuscript, we compare different SSCs for spinning test
particles moving around a Morris-Thorne traversable wormhole. To do so, we
first obtain the orbital frequency and expand it up to third order in the
particle's spin; as expected, the zeroth-order term coincides with the
Keplerian frequency, the same in all SSCs; nevertheless, we found that
differences appear in the second order of the expansion, as for the
Schwarzschild and Kerr
black holes. We also compare the behavior of the innermost stable circular
orbit (ISCO). Since each SSC is associated with a different centroid of the
test particle, we analyze (separately) the radial and spin corrections for each
SSC. We found that the radial corrections improve the convergence, especially
between the Tulczyjew-Dixon and Mathisson-Pirani SSCs. In the case of the
Ohashi-Kyrian-Semer\'ak SSC, we found that the spin corrections remove the
divergence for the ISCO and extend its existence for higher values of the
particle's spin.
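For reference, the standard form of the MPD equations behind the counting above (textbook transcription, up to sign conventions; not quoted from the abstract):

```latex
% Standard form of the MPD equations (D/d\tau is the covariant derivative
% along the worldline; sign conventions vary in the literature):
\[
  \frac{D p^{\mu}}{d\tau}
    = -\frac{1}{2}\, R^{\mu}{}_{\nu\alpha\beta}\, u^{\nu} S^{\alpha\beta},
  \qquad
  \frac{D S^{\mu\nu}}{d\tau}
    = p^{\mu} u^{\nu} - p^{\nu} u^{\mu}.
\]
% 4 + 6 = 10 equations; with u_\mu u^\mu = -1 this gives the 11 equations for
% the 14 unknowns p^\mu, u^\mu, S^{\mu\nu}, hence the need for an SSC.
```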
|
http://arxiv.org/abs/2306.17394v1
|
In breast surgical planning, accurate registration of MR images across
patient positions has the potential to improve the localisation of tumours
during breast cancer treatment. While learning-based registration methods have
recently become the state-of-the-art approach for most medical image
registration tasks, these methods have yet to make inroads into breast image
registration due to certain difficulties: the lack of rich texture information
in breast MR images and the need for the deformations to be diffeomorphic. In
this work, we propose learning strategies for breast MR image registration
that are amenable to diffeomorphic constraints, together with early results
from in-silico and in-vivo experiments. One key contribution of this
work is a registration network which produces superior registration outcomes
for breast images in addition to providing diffeomorphic guarantees.
|
http://arxiv.org/abs/2309.13777v2
|
On social media, users often express their personal feelings, which may
exhibit cognitive distortions or even suicidal tendencies on certain specific
topics. Early recognition of these signs is critical for effective
psychological intervention. In this paper, we introduce two novel datasets from
Chinese social media: SOS-HL-1K for suicidal risk classification and
SocialCD-3K for cognitive distortion detection. The SOS-HL-1K dataset
contains 1,249 posts, and the SocialCD-3K dataset is a multi-label
classification dataset containing 3,407 posts. We propose a comprehensive
evaluation
using two supervised learning methods and eight large language models (LLMs) on
the proposed datasets. From the prompt engineering perspective, we experimented
with two types of prompt strategies, including four zero-shot and five few-shot
strategies. We also evaluated the performance of the LLMs after fine-tuning on
the proposed tasks. The experimental results show that there is still a huge
gap between LLMs relying only on prompt engineering and supervised learning. In
the suicide classification task, this gap is 6.95 percentage points in
F1-score, while in the cognitive distortion task, the gap is even more
pronounced, reaching 31.53 percentage points in F1-score. However, after
fine-tuning, this difference is significantly reduced. In the suicide and
cognitive distortion classification tasks, the gap decreases to 4.31 and 3.14
percentage points, respectively. This research
highlights the potential of LLMs in psychological contexts, but supervised
learning remains necessary for more challenging tasks. All datasets and code
are made available.
|
http://arxiv.org/abs/2309.03564v3
|
Galaxy properties primarily depend on their host halo mass. Halo mass, in
turn, depends on the cosmic web environment. We explore if the effect of the
cosmic web on galaxy properties is entirely transitive via host halo mass, or
if the cosmic web has an effect independent of mass. The secondary galaxy bias,
sometimes referred to as ``galaxy assembly bias'', is the beyond-mass component
of the galaxy-halo connection. We investigate the link between the cosmic web
environment and the secondary galaxy bias in simulations. We measure the
secondary galaxy bias through the following summary statistics: the projected
two-point correlation function, $w_p(r_p)$, and counts-in-cylinders statistics,
$P(N_{\rm CIC})$. First, we examine the extent to which the secondary galaxy
bias can
be accounted for with a measure of the environment as a secondary halo
property. We find that the total secondary galaxy bias preferentially places
galaxies in more strongly clustered haloes. In particular, haloes at fixed mass
tend to host more galaxies when they are more strongly associated with nodes or
filaments. This tendency accounts for a significant portion, but not the
entirety, of the total secondary galaxy bias effect. Second, we quantify how
the secondary galaxy bias behaves differently depending on the host halo
proximity to nodes and filaments. We find that the total secondary galaxy bias
is relatively stronger in haloes more associated with nodes or filaments. We
emphasise the importance of removing halo mass effects when considering the
cosmic web environment as a factor in the galaxy-halo connection.
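A schematic numpy implementation (ours, with illustrative parameter values) of the counts-in-cylinders statistic: for each galaxy, count companions within a projected radius and line-of-sight depth in a periodic box, then histogram the counts.

```python
import numpy as np

def counts_in_cylinders(pos, rp=2.0, length=10.0, box=100.0):
    """pos: (N, 3) positions in a periodic box; z is the line of sight."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)               # minimum-image wrapping
    perp = np.hypot(d[..., 0], d[..., 1])      # projected separation
    para = np.abs(d[..., 2])                   # line-of-sight separation
    inside = (perp < rp) & (para < length)
    return inside.sum(axis=1) - 1              # exclude self-counts

counts = counts_in_cylinders(np.random.uniform(0, 100, size=(500, 3)))
pdf = np.bincount(counts) / counts.size        # empirical P(N_CIC)
```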
|
http://arxiv.org/abs/2309.15306v2
|
We investigate the effects of the hypothetical X17 boson on neutron stars and
quark stars (QSs) using various hadronic equations of state (EoSs) of
phenomenological or microscopic origin. Our aim is to set realistic constraints
on its coupling constant and mass scaling, with respect to causality, various
possible upper mass limits, and the dimensionless tidal deformability
$\Lambda_{1.4}$. In particular, we pay special attention to the two main
phenomenological parameters of the X17: the coupling constant $\mathrm{g}$ of
its interaction with hadrons or quarks, and the regulator $\mathrm{C}$ that
encodes in-medium effects. Both are crucial for the contribution to the total
energy density and pressure. When the X17 is considered as a carrier of the
nuclear force in Relativistic Mean Field (RMF) theory, its admixture into the
vector-boson sector was constrained to 20\% and 30\%. In our investigation, we
came to the general conclusion that the effect of the hypothetical X17 on both
neutron stars and QSs is constrained mainly by the causality limit, which is a
specific property of each EoS. Moreover, it depends on the interplay between
the two main parameters, namely the interaction coupling $\mathrm{g}$ and the
in-medium effects regulator $\mathrm{C}$. These effects are more pronounced in
the case of QSs, for all the bulk properties.
|
http://arxiv.org/abs/2309.12469v1
|
Decision trees remain one of the most popular machine learning models today,
largely due to their out-of-the-box performance and interpretability. In this
work, we present a Bayesian approach to decision tree induction via maximum a
posteriori inference of a posterior distribution over trees. We first
demonstrate a connection between maximum a posteriori inference of decision
trees and AND/OR search. Using this connection, we propose an AND/OR search
algorithm, dubbed MAPTree, which is able to recover the maximum a posteriori
tree. Lastly, we demonstrate the empirical performance of the maximum a
posteriori tree both on synthetic data and in real world settings. On 16 real
world datasets, MAPTree either outperforms baselines or demonstrates comparable
performance but with much smaller trees. On a synthetic dataset, MAPTree also
demonstrates greater robustness to noise and better generalization than
existing approaches. Finally, MAPTree recovers the maximum a posteriori tree
faster than existing sampling approaches and, in contrast with those
algorithms, is able to provide a certificate of optimality. The code for our
experiments is available at https://github.com/ThrunGroup/maptree.
|
http://arxiv.org/abs/2309.15312v3
|
We investigate the vacuum expectation value of the surface energy-momentum
tensor (SEMT) for a scalar field with general curvature coupling in the
geometry of two branes orthogonal to the boundary of anti-de Sitter (AdS)
spacetime. For Robin boundary conditions on the branes, the SEMT is decomposed
into the contributions corresponding to the self-energies of the branes and the
parts induced by the presence of the second brane. The renormalization is
required for the first parts only and for the corresponding regularization the
generalized zeta function method is employed. The induced SEMT is finite and is
free from renormalization ambiguities. For an observer living on the brane, the
corresponding equation of state is of the cosmological constant type. Depending
on the boundary conditions and on the separation between the branes, the
surface energy densities can be either positive or negative. The energy density
induced on the brane vanishes in special cases of Dirichlet and Neumann
boundary conditions on that brane. The effect of gravity on the induced SEMT is
essential at separations between the branes of the order of, or larger than,
the curvature radius of the AdS spacetime. In the large separation limit, the
decay of
the SEMT, as a function of the proper separation, follows a power law for both
massless and massive fields. For parallel plates in Minkowski bulk and for
massive fields the fall-off of the corresponding expectation value is
exponential.
|
http://arxiv.org/abs/2309.06408v2
|
In previous literature, backward error analysis was used to find ordinary
differential equations (ODEs) approximating the gradient descent trajectory. It
was found that finite step sizes implicitly regularize solutions because terms
appearing in the ODEs penalize the two-norm of the loss gradients. We prove
that the existence of similar implicit regularization in RMSProp and Adam
depends on their hyperparameters and the training stage, but with a different
"norm" involved: the corresponding ODE terms either penalize the (perturbed)
one-norm of the loss gradients or, conversely, impede its reduction (the latter
case being typical). We also conduct numerical experiments and discuss how the
proven facts can influence generalization.
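For orientation, the analogous classical statement for plain gradient descent with step size $h$ (a known backward-error-analysis result, recalled here for context rather than taken from this paper) is that the trajectory follows $\dot{\theta} = -\nabla \tilde{L}(\theta)$ with modified loss $\tilde{L}(\theta) = L(\theta) + \frac{h}{4}\|\nabla L(\theta)\|_2^2$, so the first-order ODE correction penalizes the two-norm of the loss gradients; the result above replaces this term with a (perturbed) one-norm whose effect can have either sign.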
|
http://arxiv.org/abs/2309.00079v4
|
We study certain categories associated to symmetric quivers with potential,
called quasi-BPS categories. We construct semiorthogonal decompositions of the
categories of matrix factorizations for moduli stacks of representations of
(framed or unframed) symmetric quivers with potential, where the summands are
categorical Hall products of quasi-BPS categories. These results generalize our
previous results about the three loop quiver.
We prove several properties of quasi-BPS categories: wall-crossing
equivalence, strong generation, and categorical support lemma in the case of
tripled quivers with potential. We also introduce reduced quasi-BPS categories
for preprojective algebras, which have trivial relative Serre functor and are
indecomposable when the weight is coprime with the total dimension. In this
case, we regard the reduced quasi-BPS categories as noncommutative local
hyperk\"ahler varieties, and as (twisted) categorical versions of crepant
resolutions of singularities of good moduli spaces of representations of
preprojective algebras.
The studied categories include the local models of quasi-BPS categories of K3
surfaces. In a follow-up paper, we establish analogous properties for quasi-BPS
categories of K3 surfaces.
|
http://arxiv.org/abs/2309.08425v1
|
We present computation of the next-to-leading power corrections for Higgs
plus one jet production in a hadron collider via gluon fusion channel. Shifting
of spinors in the helicity amplitudes without additional radiation captures the
leading next-to-soft radiative behaviour and makes the calculation tractable.
We establish the connection between the shifted dipole spinors and the colour
ordered radiative amplitudes. We find that next-to-maximal helicity violating
amplitudes do not play a role in this correction. Compact analytic expressions
of next-to-leading power leading logarithms coming from different helicity
configurations are shown.
|
http://arxiv.org/abs/2309.08343v2
|
Extension of point-to-point communication model to the realm of multi-node
configurations finds a plethora of applications in internet and
telecommunication networks. Here, we establish a novel advantage of quantum
communication in a commonly encountered network configuration known as the
Multiple Access Channel (MAC). A MAC consists of multiple distant senders
aiming to send their respective messages to a common receiver. Unlike the
quantum superdense coding protocol, the advantage reported here is realized
without invoking entanglement between the senders and the receiver. Notably,
such an advantage is unattainable in traditional point-to-point communication
involving one sender and one receiver, where the limitations imposed by the
Holevo and Frenkel-Weiner no-go theorems come into play. Within the MAC setup,
this distinctive advantage materializes through the receiver's unique ability
to simultaneously decode the quantum systems received from multiple senders.
Intriguingly, some of our MAC designs draw inspiration from various other
constructs in quantum foundations, such as the Pusey-Barrett-Rudolph theorem
and the concept of `nonlocality without entanglement', originally explored for
entirely different purposes. Beyond its immediate applications in network
communication, the presented quantum advantage hints at a profound connection
with the concept of `quantum nonlocality without inputs' and holds the
potential for semi-device-independent certification of entangled measurements.
|
http://arxiv.org/abs/2309.17263v2
|
Measuring the bioelectric signals is one of the key functions in wearable
healthcare devices and implantable medical devices. The use of wearable
healthcare devices has made continuous and immediate monitoring of personal
health status possible. Implantable medical devices have played an important
role throughout the fields of neuroscience, brain-machine (or brain-computer)
interface, and rehabilitation technology. Over the last five decades, the
bioelectric signals have been observed through a variety of biopotential
recording front-ends, along with advances in semiconductor technology scaling
and circuit techniques. Also, for reliable and continuous signal acquisition,
the front-end architectures have evolved while maintaining low power and low
noise performance. In this article, the architecture history of the
biopotential recording front-ends developed since the 1970s is surveyed, and
overall key circuit techniques are discussed. Depending on the bioelectric
signals being measured, appropriate front-end architecture needs to be chosen,
and the characteristics and challenges of each architecture are also covered in
this article.
|
http://arxiv.org/abs/2309.11612v1
|
In this talk, we review a loop-by-loop approach used to generate differential
equations for multi-scale (dual) Feynman integrals. We illustrate the method on
a well-established example: the unequal mass elliptic sunrise.
|
http://arxiv.org/abs/2309.04592v1
|
Creating a good contact between electrodes and graphene nanoribbons (GNRs)
has been a longstanding challenge in searching for the next GNR-based
nanoelectronics. This quest requires the controlled fabrication of sub-20 nm
metallic gaps, a clean GNR transfer minimizing damage and organic contamination
during the device fabrication, as well as work function matching to minimize
the contact resistance. Here, we transfer 9-atom-wide armchair-edged GNRs
(9-AGNRs) grown on Au(111)/mica substrates to pre-patterned platinum
electrodes, yielding polymer-free 9-AGNR field-effect transistor devices. Our
devices have a resistance in the range of $10^6$ to $10^8$ $\Omega$ in the
low-bias regime, which is 2 to 4 orders of magnitude lower than previous
reports. Density functional theory (DFT) calculations combined with the
non-equilibrium Green's function method (NEGF) explain the observed p-type
electrical characteristics and further demonstrate that platinum gives strong
coupling and higher transmission in comparison to other materials such as
graphene.
|
http://arxiv.org/abs/2301.13814v1
|
An irredundant base of a group $G$ acting faithfully on a finite set $\Gamma$
is a sequence of points in $\Gamma$ that produces a strictly descending chain
of pointwise stabiliser subgroups in $G$, terminating at the trivial subgroup.
Suppose that $G$ is $\operatorname{S}_n$ or $\operatorname{A}_n$ acting
primitively on $\Gamma$, and that the point stabiliser is primitive in its
natural action on $n$ points. We prove that the maximum size of an irredundant
base of $G$ is $O\left(\sqrt{n}\right)$, and in most cases $O\left((\log
n)^2\right)$. We also show that these bounds are best possible.
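For illustration, an irredundant base in the natural action of $\operatorname{S}_n$ on $n$ points (not the primitive actions studied above) can be built greedily, e.g. with SymPy:

    from sympy.combinatorics.named_groups import SymmetricGroup

    def greedy_irredundant_base(G, n):
        # Keep the points that strictly shrink the pointwise stabiliser,
        # stopping once the chain reaches the trivial subgroup.
        base, H = [], G
        for alpha in range(n):
            K = H.stabilizer(alpha)
            if K.order() < H.order():
                base.append(alpha)
                H = K
            if H.order() == 1:
                break
        return base

    print(greedy_irredundant_base(SymmetricGroup(5), 5))  # [0, 1, 2, 3]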
|
http://arxiv.org/abs/2309.00092v2
|
We study two adaptive importance sampling schemes for estimating the
probability of a rare event in the high-dimensional regime $d \to \infty$ with
$d$ the dimension. The first scheme, motivated by recent results, seeks to use
as auxiliary distribution a projection of the optimal auxiliary distribution
(optimal among Gaussian distributions, and in the sense of the
Kullback--Leibler divergence); the second scheme is the prominent cross-entropy
method. In these schemes, two samples are used: the first one to learn the
auxiliary distribution and the second one, drawn according to the learnt
distribution, to perform the final probability estimation. Contrary to the
common belief that the sample size needs to grow exponentially in the dimension
to make the estimator consistent and avoid the weight degeneracy phenomenon, we
find that a polynomial sample size in the first learning step is enough. We
prove this result assuming that the sought probability is bounded away from
$0$. For the first scheme, we show that the sample size only needs to grow like
$rd$ with $r$ the effective dimension of the projection, while for
cross-entropy, the polynomial growth rate remains implicit although insight on
its value is provided. In addition to proving consistency, we also prove that
in the regimes studied, the importance sampling weights do not degenerate.
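A minimal sketch of the second scheme on a Gaussian toy problem, assuming a mean-shifted Gaussian auxiliary family: one sample learns the auxiliary mean by the cross-entropy method, and a fresh sample forms the importance-weighted estimate:

    import numpy as np

    def ce_rare_event(score, d, level, n_learn=5000, n_est=5000,
                      rho=0.1, iters=10, seed=0):
        # Stage 1: learn the mean of a Gaussian auxiliary distribution with
        # the cross-entropy method; stage 2: importance-sampling estimate of
        # P(score(X) >= level) for X ~ N(0, I_d) using a fresh sample.
        rng = np.random.default_rng(seed)
        mu = np.zeros(d)
        for _ in range(iters):
            x = rng.normal(mu, 1.0, size=(n_learn, d))
            s = score(x)
            mu = x[s >= np.quantile(s, 1 - rho)].mean(axis=0)  # elite update
        x = rng.normal(mu, 1.0, size=(n_est, d))
        logw = -x @ mu + 0.5 * mu @ mu   # log N(x; 0, I) - log N(x; mu, I)
        return np.mean(np.exp(logw) * (score(x) >= level))

    # toy event: standardized coordinate mean exceeds 3 (prob. ~1.35e-3)
    d = 100
    print(ce_rare_event(lambda x: x.mean(axis=1) * np.sqrt(d), d, level=3.0))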
|
http://arxiv.org/abs/2309.16828v1
|
In this article, we prove a generalized Rodrigues formula for a wide class of
holonomic Laurent series, which yields a new linear independence criterion
concerning their values at algebraic points. This generalization yields a new
construction of Pad\'e approximations including those for Gauss hypergeometric
functions. In particular, we obtain a linear independence criterion over a
number field concerning values of Gauss hypergeometric functions, allowing the
parameters of Gauss hypergeometric functions to vary.
|
http://arxiv.org/abs/2305.19616v2
|
The gap between the randomly initialized item ID embedding and the
well-trained warm item ID embedding makes it hard for cold items to fit a
recommendation system trained on the data of historical warm items.
To alleviate the performance decline in new item recommendation, the
distribution of the new item ID embedding should be close to that of the
historical warm items. To achieve this goal, we propose an Adversarial
Variational Auto-encoder Warm-up model (AVAEW) to generate warm-up item ID
embedding for cold items. Specifically, we develop a conditional variational
auto-encoder model to leverage the side information of items for generating the
warm-up item ID embedding. Particularly, we introduce an adversarial module to
enforce the alignment between warm-up item ID embedding distribution and
historical item ID embedding distribution. We demonstrate the effectiveness and
compatibility of the proposed method by extensive offline experiments on public
datasets and online A/B tests on a real-world large-scale news recommendation
platform.
|
http://arxiv.org/abs/2302.14395v1
|
Optical photon propagation is an embarrassingly parallel operation, well
suited to acceleration on GPU devices. Rendering of images employs similar
techniques -- for this reason, a pipeline to offload optical photon propagation
from Geant4 to the industry-standard open-source renderer Mitsuba3 has been
devised. With the creation of a dedicated plugin for single point multi-source
emission, we find a photon propagation rate of $2\times10^{5}$ photons per
second per CPU thread using LLVM and $1.2\times10^{6}$ photons per second per
GPU using CUDA. This represents a speed-up of 70 on CPU and 400 on GPU over
Geant4 and is competitive with other similar applications. The potential for
further applications is discussed.
|
http://arxiv.org/abs/2309.12496v1
|
In this article, we study the powers of the generalized binomial edge ideal
$\mathcal{J}_{K_m,P_n}$ of a path graph $P_n$. We explicitly compute their
regularities and determine the limit of their depths. We also show that these
ordinary powers coincide with their symbolic powers. Additionally, we study the
Rees algebra and the special fiber ring of $\mathcal{J}_{K_m,P_n}$ via Sagbi
basis theory. In particular, we obtain exact formulas for the regularity of
these blowup algebras.
|
http://arxiv.org/abs/2310.20235v1
|
Invariant solutions of the Navier-Stokes equations play an important role in
the spatiotemporally chaotic dynamics of turbulent shear flows. Despite the
significance of these solutions, their identification remains a computational
challenge, rendering many solutions inaccessible and thus hindering progress
towards a dynamical description of turbulence in terms of invariant solutions.
We compute equilibria of three-dimensional wall-bounded shear flows using an
adjoint-based matrix-free variational approach. To address the challenge of
computing pressure in the presence of solid walls, we develop a formulation
that circumvents the explicit construction of pressure and instead employs the
influence matrix method. Together with a data-driven convergence acceleration
technique based on dynamic mode decomposition, this yields a practically
feasible alternative to state-of-the-art Newton methods for converging
equilibrium solutions. We compute multiple equilibria of plane Couette flow
starting from inaccurate guesses extracted from a turbulent time series. The
variational method outperforms Newton(-hookstep) iterations in successfully
converging from poor initial guesses, suggesting a larger convergence radius.
|
http://arxiv.org/abs/2306.00165v2
|
The realization of efficient quantum light sources relies on the integration
of self-assembled quantum dots (QDs) into photonic nanostructures with high
spatial positioning accuracy. In this work, we present a comprehensive
investigation of the QD position accuracy, obtained using two marker-based QD
positioning techniques, photoluminescence (PL) and cathodoluminescence (CL)
imaging, as well as using a marker-free in-situ electron beam lithography
(in-situ EBL) technique. We employ four PL imaging configurations with three
different image processing approaches and compare them with CL imaging. We
fabricate circular mesa structures based on the obtained QD coordinates from
both PL and CL image processing to evaluate the final positioning accuracy.
This yields final position offsets of the QD relative to the mesa center of
$\mu_x$ = (-40$\pm$58) nm and $\mu_y$ = (-39$\pm$85) nm with PL imaging and
$\mu_x$ = (-39$\pm$30) nm and $\mu_y$ = (25$\pm$77) nm with CL imaging, which
are comparable to the offset $\mu_x$ = (20$\pm$40) nm and $\mu_y$ =
(-14$\pm$39) nm obtained using the in-situ EBL method. We discuss the possible
causes of the observed offsets, which are significantly larger than the QD
localization uncertainty obtained from simply imaging the QD light emission
from an unstructured wafer. Our study highlights the influences of the image
processing technique and the subsequent fabrication process on the final
positioning accuracy for a QD placed inside a photonic nanostructure.
|
http://arxiv.org/abs/2309.14795v2
|
We propose a novel mechanism for cancelling the leading order contribution to
the potential in composite Higgs scenarios. The mechanism relies on the
splitting of a real representation of the global symmetry into a complex
representation and its conjugate of the unbroken group. We identify two cosets
one of which includes a custodial symmetry. A numerical analysis is performed
in a phenomenological three-site model and the resulting fine-tuning is
analysed. The cancelling of the leading order potential results in a drastic
reduction of the fine-tuning. For a symmetry breaking scale of the strong
sector as high as $f=1600$ GeV, fine-tuning can be as good as $10\%$ or even
better. We discuss a possible interpretation in the 5D holographic dual. Unique
signatures of the model include quarks with baryon number $B=2/3$ with highly
distinctive decays which can be looked for at the LHC.
|
http://arxiv.org/abs/2309.05698v1
|
Non-destructive subsurface imaging methods based on the absorption or
scattering of photons or neutrons are becoming increasingly popular in cultural
asset conservation. However, these techniques are limited by physical and
practical issues: their penetration depth may be insufficient for large items,
and they usually necessitate transferring the objects of interest to
specialised laboratories. The latter issue is recently being addressed by the
development of portable sources, but artificial radiation can be harmful and is
thus subjected to strict regulation. Muons are elementary particles that are
abundantly and freely created in the atmosphere by cosmic-ray interactions.
Their absorption and scattering in matter are respectively dependent on the
density and elemental composition of the substance they traverse, suggesting
that they could be used for subsurface remote imaging. This novel technique,
dubbed "muography", has been used in applications ranging from geophysics to
archaeology, but has remained largely unexplored for a wide range of cultural
heritage objects that are small by muography standards but whose size and
density are too large for conventional imaging methods. This document outlines
the general arguments and some early simulation studies that aim at exploring
the low-size limit of muography and its relevance for cultural heritage
preservation.
|
http://arxiv.org/abs/2309.08394v1
|
Detecting the salient objects in a remote sensing image has wide applications
for the interdisciplinary research. Many existing deep learning methods have
been proposed for Salient Object Detection (SOD) in remote sensing images and
get remarkable results. However, the recent adversarial attack examples,
generated by changing a few pixel values on the original remote sensing image,
could result in a collapse for the well-trained deep learning based SOD model.
Unlike existing methods that add perturbations to the original images, we
propose to jointly tune adversarial exposure and additive perturbation for the
attack, constraining the result to stay close to a cloudy image; we call this
an Adversarial Cloud. Clouds are natural and common in remote sensing images;
however, cloud-camouflaged adversarial attacks and defenses for remote sensing
images have not been well studied before. Furthermore, we design DefenseNet as
a learnable pre-processing step for the
adversarial cloudy images so as to preserve the performance of the deep
learning based remote sensing SOD model, without tuning the already deployed
deep SOD model. By considering both regular and generalized adversarial
examples, the proposed DefenseNet can defend the proposed Adversarial Cloud in
white-box setting and other attack methods in black-box setting. Experimental
results on a synthesized benchmark from the public remote sensing SOD dataset
(EORSSD) show the promising defense against adversarial cloud attacks.
|
http://arxiv.org/abs/2306.17431v2
|
Conformal prediction (CP) is a framework to quantify uncertainty of machine
learning classifiers including deep neural networks. Given a testing example
and a trained classifier, CP produces a prediction set of candidate labels with
a user-specified coverage (i.e., true class label is contained with high
probability). Almost all the existing work on CP assumes clean testing data and
there is not much known about the robustness of CP algorithms w.r.t
natural/adversarial perturbations to testing examples. This paper studies the
problem of probabilistically robust conformal prediction (PRCP) which ensures
robustness to most perturbations around clean input examples. PRCP generalizes
the standard CP (cannot handle perturbations) and adversarially robust CP
(ensures robustness w.r.t worst-case perturbations) to achieve better
trade-offs between nominal performance and robustness. We propose a novel
adaptive PRCP (aPRCP) algorithm to achieve probabilistically robust coverage.
The key idea behind aPRCP is to determine two parallel thresholds, one for data
samples and another one for the perturbations on data (aka
"quantile-of-quantile" design). We provide theoretical analysis to show that
aPRCP algorithm achieves robust coverage. Our experiments on CIFAR-10,
CIFAR-100, and ImageNet datasets using deep neural networks demonstrate that
aPRCP achieves better trade-offs than state-of-the-art CP and adversarially
robust CP algorithms.
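For orientation, a minimal sketch of the standard split CP baseline that PRCP generalizes, assuming softmax-based nonconformity scores; aPRCP's "quantile-of-quantile" design would add a second quantile computed over sampled perturbations of each test input:

    import numpy as np

    def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
        # Nonconformity score: 1 - softmax probability of the true class.
        n = len(cal_labels)
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(scores, level, method="higher")
        # Prediction set: every label whose score is below the threshold.
        return [np.where(1.0 - p <= q)[0] for p in test_probs]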
|
http://arxiv.org/abs/2307.16360v1
|
This study investigates the effect of thermal modification on the flexural
properties, transverse fracture energy, and hardness of western hemlock, a
material which is finding increasing applications in construction. Flexure
tests on specimens featuring longitudinal and transverse grains showed that
thermal modification at 167 °C slightly improves the flexural modulus and
strength and leads to less statistical variability compared to unmodified
samples. On the other hand, the fracture and Janka hardness tests revealed a
more pronounced brittleness of the thermally modified samples. In fact, the
total mode I fracture energy of modified Single Edge Notch Bending (SENB)
samples was about 47% lower for radial-longitudinal systems and 60% lower for
tangential-longitudinal systems. Similarly, the average Janka hardness in the
tangential, radial, and transverse planes was 8.5%, 3.9%, and 9.4% lower in the
modified specimens, respectively. The results presented in this work show that
thermal modification can have a significant effect on the fracturing behavior
of western hemlock and its energy dissipation capabilities. For design, this
must be taken into serious consideration as these properties significantly
influence the damage tolerance of this wood in the presence of stress
concentrations, such as those induced in bolted joints and cut-outs.
Fracture energy and hardness are also strongly correlated to ballistic
performance.
|
http://arxiv.org/abs/2304.00052v1
|
We study the Schr\"{o}dinger-Poisson type system: \begin{equation*} \left\{
\begin{array}{ll} -\Delta u+\lambda u+\left( \mu _{11}\phi _{u}-\mu _{12}\phi
_{v}\right) u=% \frac{1}{2\pi }\int_{0}^{2\pi }\left\vert u+e^{i\theta
}v\right\vert ^{p-1}\left( u+e^{i\theta }v\right) d\theta & \text{ in
}\mathbb{R}^{3}, \\ -\Delta v+\lambda v+\left( \mu _{22}\phi _{v}-\mu _{12}\phi
_{u}\right) v=% \frac{1}{2\pi }\int_{0}^{2\pi }\left\vert v+e^{i\theta
}u\right\vert ^{p-1}\left( v+e^{i\theta }u\right) d\theta & \text{ in
}\mathbb{R}^{3},% \end{array}% \right. \end{equation*}% where $1<p<3$ with
parameters $\lambda ,\mu_{ij}>0$. Novel approaches are employed to prove the
existence of a positive solution for $1<p<3$ including, particularly, the
finding of a ground state solution for $2\leq p<3$ using established linear
algebra techniques and demonstrating the existence of two distinct positive
solutions for $1<p<2.$ The analysis here, by employing alternative techniques,
yields additional and improved results to those obtained in the study of Jin
and Seok [Calc. Var. (2023) 62:72].
|
http://arxiv.org/abs/2306.17343v1
|
Sequential decision-making agents struggle with long horizon tasks, since
solving them requires multi-step reasoning. Most reinforcement learning (RL)
algorithms address this challenge by improved credit assignment, introducing
memory capability, altering the agent's intrinsic motivation (i.e. exploration)
or its worldview (i.e. knowledge representation). Many of these components
could be learned from offline data. In this work, we follow the hypothesis that
exploration and representation learning can be improved by separately learning
two different models from a single offline dataset. We show that learning a
state representation using noise-contrastive estimation and a model of
auxiliary reward separately from a single collection of human demonstrations
can significantly improve the sample efficiency on the challenging NetHack
benchmark. We also ablate various components of our experimental setting and
highlight crucial insights.
|
http://arxiv.org/abs/2304.00046v1
|
With continuous advances in deep learning, distributed training is becoming
common in GPU clusters. Specifically, for emerging workloads with diverse
amounts, ratios, and patterns of communication, we observe that network
contention can significantly degrade training throughput. However, widely used
scheduling policies often face limitations as they are agnostic to network
contention between jobs. In this paper, we present a new approach to mitigate
network contention in GPU clusters using reinforcement learning. We formulate
GPU cluster scheduling as a reinforcement learning problem and opt to learn a
network contention-aware scheduling policy that efficiently captures contention
sensitivities and dynamically adapts scheduling decisions through continuous
evaluation and improvement. We show that compared to widely used scheduling
policies, our approach reduces average job completion time by up to 18.2\% and
effectively cuts the tail job completion time by up to 20.7\% while allowing a
preferable trade-off between average job completion time and resource
utilization.
|
http://arxiv.org/abs/2310.20209v1
|
In 5G New Radio (NR), beam management entails periodic and continuous
transmission and reception of control signals in the form of synchronization
signal blocks (SSBs), used to perform initial access and/or channel estimation.
However, this procedure demands continuous energy consumption, which is
particularly challenging to handle for low-cost, low-complexity, and
battery-constrained devices, such as RedCap devices to support mid-market
Internet of Things (IoT) use cases. In this context, this work aims at reducing
the energy consumption during beam management for RedCap devices, while
ensuring that the desired Quality of Service (QoS) requirements are met. To do
so, we formalize an optimization problem in an Indoor Factory (InF) scenario to
select the best beam management parameters, including the beam update
periodicity and the beamwidth, to minimize energy consumption based on users'
distribution and their speed. The analysis yields the regions of feasibility,
i.e., the upper limit(s) on the beam management parameters for RedCap devices,
that we use to provide design guidelines accordingly.
|
http://arxiv.org/abs/2309.14971v1
|
Test-Time Adaptation aims to adapt a source-domain model to testing data at
inference stage with success demonstrated in adapting to unseen corruptions.
However, these attempts may fail under more challenging real-world scenarios.
Existing works mainly consider real-world test-time adaptation under non-i.i.d.
data stream and continual domain shift. In this work, we first complement the
existing real-world TTA protocol with a globally class imbalanced testing set.
We demonstrate that combining all settings together poses new challenges to
existing methods. We argue the failure of state-of-the-art methods is first
caused by indiscriminately adapting normalization layers to imbalanced testing
data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap
out the regular batchnorm at inference stage. The new batchnorm layer is
capable of adapting without biasing towards majority classes. We are further
inspired by the success of self-training~(ST) in learning from unlabeled data
and adapt ST for test-time adaptation. However, ST alone is prone to
over-adaptation, which is responsible for the poor performance under continual domain
shift. Hence, we propose to improve self-training under continual domain shift
by regularizing model updates with an anchored loss. The final TTA model,
termed as TRIBE, is built upon a tri-net architecture with balanced batchnorm
layers. We evaluate TRIBE on four datasets representing real-world TTA
settings. TRIBE consistently achieves the state-of-the-art performance across
multiple evaluation protocols. The code is available at
\url{https://github.com/Gorilla-Lab-SCUT/TRIBE}.
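One plausible reading of such a balanced batchnorm, sketched below under assumed design choices (per-pseudo-class running statistics aggregated with equal weight), not the TRIBE implementation itself:

    import torch

    class BalancedBatchNorm1d(torch.nn.Module):
        # Keep one running statistic per (pseudo-)class and normalize with
        # their unweighted average, so majority classes cannot dominate the
        # normalization statistics at test time.
        def __init__(self, num_features, num_classes, momentum=0.1, eps=1e-5):
            super().__init__()
            self.register_buffer("mu", torch.zeros(num_classes, num_features))
            self.register_buffer("var", torch.ones(num_classes, num_features))
            self.momentum, self.eps = momentum, eps
            self.weight = torch.nn.Parameter(torch.ones(num_features))
            self.bias = torch.nn.Parameter(torch.zeros(num_features))

        def forward(self, x, pseudo_labels):
            with torch.no_grad():
                for c in pseudo_labels.unique():
                    xc = x[pseudo_labels == c]
                    self.mu[c] = ((1 - self.momentum) * self.mu[c]
                                  + self.momentum * xc.mean(0))
                    self.var[c] = ((1 - self.momentum) * self.var[c]
                                   + self.momentum * xc.var(0, unbiased=False))
            mu, var = self.mu.mean(0), self.var.mean(0)  # balanced aggregate
            return self.weight * (x - mu) / torch.sqrt(var + self.eps) + self.bias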
|
http://arxiv.org/abs/2309.14949v1
|
The nuclear time-dependent density functional theory (TDDFT) is a tool of
choice for describing various dynamical phenomena in atomic nuclei. In a recent
study, we reported an extension of the framework - the multiconfigurational
TDDFT (MC-TDDFT) model - that takes into account quantum fluctuations in the
collective space by mixing several TDDFT trajectories. In this article, we
focus on technical and numerical aspects of the model. We outline the
properties of the time-dependent variational principle that is employed to
obtain the equation of motion for the mixing function. Furthermore, we discuss
evaluation of various ingredients of the equation of motion, including the
Hamiltonian kernel, norm kernel, and kernels with explicit time derivatives. We
detail the numerical methods for resolving the equation of motion and outline
the major assumptions underpinning the model. A technical discussion is
supplemented with numerical examples that consider collective quadrupole
vibrations in $^{40}$Ca, particularly focusing on the issues of convergence,
treatment of linearly dependent bases, energy conservation, and prescriptions
for the density-dependent part of an interaction.
|
http://arxiv.org/abs/2310.20557v2
|
The Advanced Wakefield Experiment (AWAKE) at CERN relies on the seeded
Self-Modulation (SM) of a long relativistic proton bunch in plasma to
accelerate an externally injected MeV witness electron bunch to GeV energies.
During AWAKE Run 1 (2016-2018) and Run 2a (2021-2022), two seeding methods were
investigated experimentally: relativistic ionization front seeding and electron
bunch seeding. In the first one, a short laser pulse copropagates within the
proton bunch and ionizes the rubidium vapor, generating the plasma. In the
second, a short electron bunch propagates in plasma ahead of the proton bunch
and drives the seed wakefields. Both seeding methods will be further employed
during AWAKE Run 2b (2023-2024) to study their effect on the SM evolution in
the presence of a plasma density step. In this contribution, we will show the
main experimental results and discuss their impact for the future design of the
experiment, in particular for Run 2c (starting in 2028), where the plasma will
be split in two sections: one dedicated to SM of the proton bunch, and the
other to the electron acceleration process.
|
http://arxiv.org/abs/2305.00431v1
|
Current methods of deploying robots that operate in dynamic, uncertain
environments, such as Uncrewed Aerial Systems in search \& rescue missions,
require nearly continuous human supervision for vehicle guidance and operation.
These methods do not consider high-level mission context resulting in
cumbersome manual operation or inefficient exhaustive search patterns. We
present a human-centered autonomous framework that infers geospatial mission
context through dynamic feature sets, which then guides a probabilistic target
search planner. Operators provide a set of diverse inputs, including priority
definition, spatial semantic information about ad-hoc geographical areas, and
reference waypoints, which are probabilistically fused with geographical
database information and condensed into a geospatial distribution representing
an operator's preferences over an area. An online, POMDP-based planner,
optimized for target searching, is augmented with this reward map to generate
an operator-constrained policy. Our results, simulated based on input from five
professional rescuers, display effective task mental model alignment, 18\% more
victim finds, and 15 times more efficient guidance plans than current
operational methods.
|
http://arxiv.org/abs/2309.06395v3
|
For prime $p$ and small $n$, Jones and Roberts have developed a database
recording invariants for $p$-adic extensions of degree $n$. We contributed to
this database by computing the Galois slope content, Galois mean slope, and
inertia subgroup for a variety of wildly ramified extensions of composite
degree using the idea of Galois splitting models. We will describe a number of
strategies to find Galois splitting models including an original technique
using generic polynomials and Panayi's root finding algorithm.
|
http://arxiv.org/abs/2305.00357v1
|
This paper examines the problems of severe image-text misalignment and high
redundancy in the widely-used large-scale Vision-Language Pre-Training (VLP)
datasets. To address these issues, we propose an efficient and straightforward
Vision-Language learning algorithm called TL;DR, which aims to compress the
existing large VLP data into a small, high-quality set. Our approach consists
of two major steps. First, a codebook-based encoder-decoder captioner is
developed to select representative samples. Second, a new caption is generated
to complement the original captions for selected samples, mitigating the
text-image misalignment problem while maintaining uniqueness. As a result,
TL;DR enables us to reduce the large dataset into a small set of high-quality
data, which can serve as an alternative pre-training dataset. This algorithm
significantly speeds up the time-consuming pretraining process. Specifically,
TL;DR can compress the mainstream VLP datasets at a high ratio, e.g., reducing the
well-cleaned CC3M dataset from 2.82M to 0.67M ($\sim$24\%) and the noisy YFCC15M
from 15M to 2.5M ($\sim$16.7\%). Extensive experiments with three popular VLP
models over seven downstream tasks show that a VLP model trained on the
compressed dataset provided by TL;DR can achieve similar or even better results
compared with training on the full-scale dataset. The code will be made
available at \url{https://github.com/showlab/datacentric.vlp}.
|
http://arxiv.org/abs/2305.20087v3
|
Visual model-based RL methods typically encode image observations into
low-dimensional representations in a manner that does not eliminate redundant
information. This leaves them susceptible to spurious variations -- changes in
task-irrelevant components such as background distractors or lighting
conditions. In this paper, we propose a visual model-based RL method that
learns a latent representation resilient to such spurious variations. Our
training objective encourages the representation to be maximally predictive of
dynamics and reward, while constraining the information flow from the
observation to the latent representation. We demonstrate that this objective
significantly bolsters the resilience of visual model-based RL methods to
visual distractors, allowing them to operate in dynamic environments. We then
show that while the learned encoder is resilient to spurious variations, it is
not invariant under significant distribution shift. To address this, we propose
a simple reward-free alignment procedure that enables test time adaptation of
the encoder. This allows for quick adaptation to widely differing environments
without having to relearn the dynamics and policy. Our effort is a step towards
making model-based RL a practical and useful tool for dynamic, diverse domains.
We show its effectiveness in simulation benchmarks with significant spurious
variations as well as a real-world egocentric navigation task with noisy TVs in
the background. Videos and code at https://zchuning.github.io/repo-website/.
|
http://arxiv.org/abs/2309.00082v2
|
Fill each box in a Young diagram with the number of paths from the bottom of
its column to the end of its row, using steps north and east. Then, any square
sub-matrix of this array starting on the south-east boundary has determinant
one. We provide a - to our knowledge - new bijective argument for this result.
Using the same ideas, we prove further identities involving these numbers which
correspond to an integral orthonormal basis of the inner product space with
Gram matrix given by the array in question. This provides an explicit answer to
a question (listed as unsolved) raised in Exercise 6.27 c) of Stanley's
Enumerative Combinatorics.
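A worked check in the rectangular case, where the count in the box at row $i$, column $j$ of an $m \times n$ diagram reduces to the binomial coefficient $\binom{(m-i)+(n-j)}{m-i}$; the sketch below verifies determinant one for one boundary sub-matrix:

    import math
    from sympy import Matrix

    # Rectangular m x n diagram: the box in row i, column j (1-indexed from
    # the top-left) holds the number of N/E paths from the bottom of its
    # column to the end of its row, i.e. C((m - i) + (n - j), m - i).
    m, n, k = 6, 7, 4
    A = [[math.comb((m - i) + (n - j), m - i) for j in range(1, n + 1)]
         for i in range(1, m + 1)]
    # k x k sub-matrix whose last row and column lie on the SE boundary:
    S = Matrix([row[n - k:] for row in A[m - k:]])
    print(S.det())  # 1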
|
http://arxiv.org/abs/2305.19606v1
|
Accurate long-term predictions are the foundations for many machine learning
applications and decision-making processes. However, building accurate
long-term prediction models remains challenging due to the limitations of
existing temporal models like recurrent neural networks (RNNs), as they capture
only the statistical connections in the training data and may fail to learn the
underlying dynamics of the target system. To tackle this challenge, we propose
a novel machine learning model based on Koopman operator theory, which we call
Koopman Invertible Autoencoders (KIA), that captures the inherent
characteristic of the system by modeling both forward and backward dynamics in
the infinite-dimensional Hilbert space. This enables us to efficiently learn
low-dimensional representations, resulting in more accurate predictions of
long-term system behavior. Moreover, our method's invertibility design
guarantees reversibility and consistency in both forward and inverse
operations. We illustrate the utility of KIA on pendulum and climate datasets,
demonstrating 300% improvements in long-term prediction capability for the pendulum
while maintaining robustness against noise. Additionally, our method excels in
long-term climate prediction, further validating our method's effectiveness.
|
http://arxiv.org/abs/2309.10291v1
|
In this paper, we investigate how the initial models and the final models for
the polynomial functors can be uniformly specified in matching logic.
|
http://arxiv.org/abs/2309.13798v1
|
People with blindness and low vision (pBLV) encounter substantial challenges
when it comes to comprehensive scene recognition and precise object
identification in unfamiliar environments. Additionally, due to the vision
loss, pBLV have difficulty in accessing and identifying potential tripping
hazards on their own. In this paper, we present a pioneering approach that
leverages a large vision-language model to enhance visual perception for pBLV,
offering detailed and comprehensive descriptions of the surrounding
environments and providing warnings about the potential risks. Our method
begins by leveraging a large image tagging model (i.e., Recognize Anything
(RAM)) to identify all common objects present in the captured images. The
recognition results and user query are then integrated into a prompt, tailored
specifically for pBLV using prompt engineering. By combining the prompt and
input image, a large vision-language model (i.e., InstructBLIP) generates
detailed and comprehensive descriptions of the environment and identifies
potential risks in the environment by analyzing the environmental objects and
scenes, relevant to the prompt. We evaluate our approach through experiments
conducted on both indoor and outdoor datasets. Our results demonstrate that our
method is able to recognize objects accurately and provide insightful
descriptions and analysis of the environment for pBLV.
|
http://arxiv.org/abs/2310.20225v2
|
Electromagnetic waves are an inherent part of all plasmas -- laboratory
fusion plasmas or astrophysical plasmas. The conventional methods for studying
properties of electromagnetic waves rely on discretization of Maxwell equations
suitable for implementing on classical, present day, computers. The traditional
methodology is not efficient for quantum computing implementation -- a future
computational source offering a tantalizing possibility of enormous speed up
and a significant reduction in computational cost. This paper addresses two
topics relevant to implementing Maxwell equations on a quantum computer. The
first is on formulating a quantum Schrodinger representation of Maxwell
equations for wave propagation in a cold, inhomogeneous, magnetized plasma.
This representation admits unitary, energy preserving, evolution and
conveniently lends itself to appropriate discretization for a quantum computer.
Riding on the coattails of these results, the second topic is on developing a
sequence of unitary operators which form the basis for a qubit lattice
algorithm (QLA). The QLA, suitable for quantum computers, can be implemented
and tested on existing classical computers for accuracy as well as scaling of
computational time with the number of available processors. In order to
illustrate the QLA for Maxwell equations, results are presented from a time
evolving, full wave simulation of propagation and scattering of an
electromagnetic wave packet by non-dispersive dielectric medium localized in
space.
|
http://arxiv.org/abs/2309.12492v2
|
We consider a general optimization problem of minimizing a composite
objective functional defined over a class of probability distributions. The
objective is composed of two functionals: one is assumed to possess the
variational representation and the other is expressed in terms of the
expectation operator of a possibly nonsmooth convex regularizer function. Such
a regularized distributional optimization problem widely appears in machine
learning and statistics, such as proximal Monte-Carlo sampling, Bayesian
inference and generative modeling, for regularized estimation and generation.
We propose a novel method, dubbed Moreau-Yoshida Variational Transport
(MYVT), for solving the regularized distributional optimization problem. First,
as the name suggests, our method employs the Moreau-Yoshida envelope for a
smooth approximation of the nonsmooth function in the objective. Second, we
reformulate the approximate problem as a concave-convex saddle point problem by
leveraging the variational representation, and then develop an efficient
primal-dual algorithm to approximate the saddle point. Furthermore, we provide
theoretical analyses and report experimental results to demonstrate the
effectiveness of the proposed method.
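Concretely, the Moreau-Yoshida envelope of a function $f$ is $M_{\lambda}f(x)=\min_{y}\,f(y)+\frac{1}{2\lambda}\Vert x-y\Vert^{2}$, with the proximal operator as minimizer. A minimal sketch for the $\ell_1$ regularizer, whose envelope is the familiar Huber function:

    import numpy as np

    def prox_l1(x, lam):
        # Proximal operator of lam * ||.||_1: soft-thresholding.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def moreau_envelope_l1(x, lam):
        # M_lam(x) = min_y ||y||_1 + ||x - y||^2 / (2 * lam); the minimizer
        # is the prox, and the envelope is the (smooth) Huber function.
        y = prox_l1(x, lam)
        return np.abs(y).sum() + np.sum((x - y) ** 2) / (2 * lam)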
|
http://arxiv.org/abs/2307.16358v2
|
We introduce OpenIllumination, a real-world dataset containing over 108K
images of 64 objects with diverse materials, captured under 72 camera views and
a large number of different illuminations. For each image in the dataset, we
provide accurate camera parameters, illumination ground truth, and foreground
segmentation masks. Our dataset enables the quantitative evaluation of most
inverse rendering and material decomposition methods for real objects. We
examine several state-of-the-art inverse rendering methods on our dataset and
compare their performances. The dataset and code can be found on the project
page: https://oppo-us-research.github.io/OpenIllumination.
|
http://arxiv.org/abs/2309.07921v2
|
Cold atom magnetometers exploit a dense ensemble of quanta with long
coherence times to realise leading sensitivity on the micrometer scale.
Configured as a Ramsey interferometer, a cold atom sensor can approach atom
shot-noise limited precision but suffers from fringe ambiguity, producing gross
errors when the field falls outside a narrow predefined range. We describe how
Hilbert-demodulated optical magnetometry can be realised on cold atom sensors
to provide field measurements both precise and unambiguous. Continuous
reconstruction of the Larmor phase allows us to determine the dc magnetic field
unambiguously in an unshielded environment, as well as measure ac variation of
the field, in a single shot. The ac measurement allows us to characterize, and
then neutralise, line-synchronous magnetic interference, extending
reconstruction times. Using $1.6 \times 10^6$ $^{87}$Rb atoms in a volume of
$(68 \,\mathrm{\mu m})^3$, we measure a test field to be $ 86.0121261(4) \;
\mathrm{\mu T}$ in a single shot, achieving dc sensitivity of 380 fT in a
duration of 1000 ms. Our results demonstrate that Hilbert-demodulated optical
readout yields metrologically-significant sensitivity without the fringe
ambiguity inherent to Ramsey interferometry.
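A minimal sketch of the demodulation step on a synthetic free-induction signal, with illustrative values (the $^{87}$Rb gyromagnetic ratio is roughly 7 Hz/nT):

    import numpy as np
    from scipy.signal import hilbert

    # Synthetic free-induction signal: Larmor precession in a dc field B.
    # All values are illustrative, not the experimental parameters.
    gamma = 7.0                      # Hz per nT
    B_true = 86_012.1                # nT
    fs = 2e6                         # sample rate, Hz
    t = np.arange(0, 0.05, 1 / fs)
    sig = np.exp(-t / 0.02) * np.cos(2 * np.pi * gamma * B_true * t)

    phase = np.unwrap(np.angle(hilbert(sig)))     # instantaneous Larmor phase
    f_inst = np.gradient(phase, t) / (2 * np.pi)  # instantaneous frequency
    B_est = np.median(f_inst) / gamma             # unambiguous dc estimate, nT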
|
http://arxiv.org/abs/2309.11825v2
|
Crystalline phase structure is essential for understanding the performance
and properties of a material. Therefore, this study identified and quantified
the crystalline phase structure of a sample based on the diffraction pattern
observed when the crystalline sample was irradiated with electromagnetic waves
such as X-rays. Conventional analysis requires experienced and
knowledgeable researchers to narrow down a long list of candidate crystalline
phase structures. However, conventional diffraction pattern analysis is
highly analyst-dependent and not objective. Additionally, there is no
established method for discussing the confidence intervals of the analysis
results. Thus, this study aimed to establish a method for automatically
inferring crystalline phase structures from diffraction patterns using Bayesian
inference. Our method successfully identified true crystalline phase structures
with a high probability from 50 candidate crystalline phase structures.
Further, the mixing ratios of selected crystalline phase structures were
estimated with a high degree of accuracy. This study provided reasonable
results for well-crystallized samples that clearly identified the crystalline
phase structures.
|
http://arxiv.org/abs/2309.14785v1
|
In real-world human-robot systems, it is essential for a robot to comprehend
human objectives and respond accordingly while performing an extended series of
motor actions. Although human objective alignment has recently emerged as a
promising paradigm in the realm of physical human-robot interaction, its
application is typically confined to generating simple motions due to inherent
theoretical limitations. In this work, our goal is to develop a general
formulation to learn manipulation functional modules and long-term task goals
simultaneously from physical human-robot interaction. We show the feasibility
of our framework in enabling robots to align their behaviors with the long-term
task objectives inferred from human interactions.
|
http://arxiv.org/abs/2309.04596v1
|
Perturbative availability poisons (PAPs) add small changes to images to
prevent their use for model training. Current research adopts the belief that
practical and effective approaches to countering PAPs do not exist. In this
paper, we argue that it is time to abandon this belief. We present extensive
experiments showing that 12 state-of-the-art PAP methods are vulnerable to
Image Shortcut Squeezing (ISS), which is based on simple compression. For
example, on average, ISS restores the CIFAR-10 model accuracy to $81.73\%$,
surpassing the previous best preprocessing-based countermeasures by $37.97\%$
absolute. ISS also (slightly) outperforms adversarial training and has higher
generalizability to unseen perturbation norms as well as higher efficiency. Our
investigation reveals that the property of PAP perturbations depends on the
type of surrogate model used for poison generation, and it explains why a
specific ISS compression yields the best performance for a specific type of PAP
perturbation. We further test stronger, adaptive poisoning, and show it falls
short of being an ideal defense against ISS. Overall, our results demonstrate
the importance of considering various (simple) countermeasures to ensure the
meaningfulness of analysis carried out during the development of PAP methods.
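A minimal sketch of the compression step, assuming JPEG re-encoding with an illustrative quality setting (grayscale conversion is a related squeezing operation):

    import io
    from PIL import Image

    def jpeg_squeeze(img, quality=10):
        # Re-encode at low JPEG quality so that small poisoning perturbations
        # are squeezed out before training; the quality value is illustrative.
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).copy()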
|
http://arxiv.org/abs/2301.13838v2
|
While humans can use parts of their arms other than the hands for
manipulations like gathering and supporting, whether robots can effectively
learn and perform the same type of operations remains relatively unexplored. As
these manipulations require joint-level control to regulate the complete poses
of the robots, we develop AirExo, a low-cost, adaptable, and portable dual-arm
exoskeleton, for teleoperation and demonstration collection. As collecting
teleoperated data is expensive and time-consuming, we further leverage AirExo
to collect cheap in-the-wild demonstrations at scale. Under our in-the-wild
learning framework, we show that with only 3 minutes of the teleoperated
demonstrations, augmented by diverse and extensive in-the-wild data collected
by AirExo, robots can learn a policy that is comparable to or even better than
one learned from teleoperated demonstrations lasting over 20 minutes.
Experiments demonstrate that our approach enables the model to learn a more
general and robust policy across the various stages of the task, enhancing the
success rates in task completion even with the presence of disturbances.
Project website: https://airexo.github.io/
|
http://arxiv.org/abs/2309.14975v2
|
In this work, we introduce a flow based machine learning approach, called
reaction coordinate (RC) flow, for discovery of low-dimensional kinetic models
of molecular systems. The RC flow utilizes a normalizing flow to design the
coordinate transformation and a Brownian dynamics model to approximate the
kinetics of RC, where all model parameters can be estimated in a data-driven
manner. In contrast to existing model reduction methods for molecular kinetics,
RC flow offers a trainable and tractable model of reduced kinetics in
continuous time and space due to the invertibility of the normalizing flow.
Furthermore, the Brownian dynamics-based reduced kinetic model investigated in
this work yields a readily discernible representation of metastable states
within the phase space of the molecular system. Numerical experiments
demonstrate how effectively the proposed method discovers interpretable and
accurate low-dimensional representations of given full-state kinetics from
simulations.
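A minimal sketch of the assumed reduced model class: Euler-Maruyama simulation of Brownian dynamics along a one-dimensional RC with a toy double-well landscape and position-independent diffusion:

    import numpy as np

    def euler_maruyama(grad_U, z0, dt=1e-3, steps=100_000, beta=1.0, seed=0):
        # Overdamped Langevin (Brownian) dynamics along the RC:
        # dz = -grad U(z) dt + sqrt(2 / beta) dW.
        rng = np.random.default_rng(seed)
        z = np.asarray(z0, dtype=float).copy()
        traj = np.empty((steps + 1,) + z.shape)
        traj[0] = z
        for k in range(steps):
            z = z - grad_U(z) * dt + np.sqrt(2 * dt / beta) * rng.normal(size=z.shape)
            traj[k + 1] = z
        return traj

    # toy metastable landscape: double well U(z) = (z^2 - 1)^2
    traj = euler_maruyama(lambda z: 4 * z * (z**2 - 1), z0=[-1.0])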
|
http://arxiv.org/abs/2309.05878v1
|
This paper studies the inferential theory for estimating low-rank matrices.
It also provides an inference method for the average treatment effect as an
application. We show that the least square estimation of eigenvectors following
the nuclear norm penalization attains the asymptotic normality. The key
contribution of our method is that it does not require sample splitting. In
addition, this paper allows dependent observation patterns and heterogeneous
observation probabilities. Empirically, we apply the proposed procedure to
estimating the impact of the presidential vote on allocating the U.S. federal
budget to the states.
|
http://arxiv.org/abs/2307.16370v1
|
Estimating the Shannon information associated with individual neurons is a
non-trivial problem. Three key methods used to estimate the mutual information
between neuron inputs and outputs are described, and a list of further readings
is provided.
|
http://arxiv.org/abs/2304.01348v1
|
The machine translation mechanism translates texts automatically between
different natural languages, and Neural Machine Translation (NMT) has gained
attention for its rational context analysis and fluent translation accuracy.
However, processing low-resource languages that lack relevant training
attributes like supervised data is a current challenge for Natural Language
Processing (NLP). We incorporated a technique known as Active Learning with the
NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions of
low-resource language translation. With active learning, a semi-supervised
machine learning strategy, the training algorithm determines which unlabeled
data would be the most beneficial for obtaining labels using selected query
techniques. We implemented two model-driven acquisition functions for selecting
the samples to be validated. This work uses transformer-based NMT systems:
baseline model (BM), fully trained model (FTM), active learning least
confidence based model (ALLCM), and active learning margin sampling based model
(ALMSM) when translating English to Hindi. The Bilingual Evaluation Understudy
(BLEU) metric has been used to evaluate system results. The BLEU scores of BM,
FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively.
The findings in this paper demonstrate that active learning techniques help
the model converge early and improve the overall quality of the translation
system.
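A minimal sketch of the two acquisition functions, assuming per-sentence posterior scores are available (for NMT these would typically be aggregated token-level probabilities):

    import numpy as np

    def least_confidence(probs):
        # probs: (n_pool, n_classes) posteriors; high score = least confident.
        return 1.0 - probs.max(axis=1)

    def margin_sampling(probs):
        # Small top-1 vs top-2 margin = informative; negate so high = select.
        part = np.sort(probs, axis=1)
        return -(part[:, -1] - part[:, -2])

    def select_batch(probs, k, acquire=least_confidence):
        # Indices of the k pool samples to send for labeling.
        return np.argsort(acquire(probs))[-k:]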
|
http://arxiv.org/abs/2301.00688v1
|
We provide elliptic extensions of elementary identities such as the sum of
the first $n$ odd or even numbers, the geometric sum and the sum of the first
$n$ cubes. Many such identities, and their $q$-analogues, are indefinite sums,
and can be obtained from telescoping. So we used telescoping in our study to
find elliptic extensions of these identities. In the course of our study, we
obtained an identity with many parameters, which appears to be new even in the
$q$-case. In addition, we recover some $q$-identities due to Warnaar.
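As a concrete instance of the telescoping mechanism in its classical (non-elliptic) form: since $k^{2}-(k-1)^{2}=2k-1$, summing over $k$ collapses the left-hand side and gives $\sum_{k=1}^{n}(2k-1)=\sum_{k=1}^{n}\left(k^{2}-(k-1)^{2}\right)=n^{2}$, the sum of the first $n$ odd numbers; the identities above are elliptic extensions of collapsing sums of this kind.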
|
http://arxiv.org/abs/2310.20219v1
|
We perform a global analysis of a vector-like extension of the Standard
Model, which also features additional doublet and singlet scalars. The usual
Yukawa interactions are forbidden in this setup by an extra U(1) global
symmetry and the masses of the second and third family quarks and leptons are
generated via the mixing with the vector-like sector. We identify three
best-fit benchmark scenarios which satisfy the constraints imposed by the
stability of the scalar potential, the perturbativity of the coupling
constants, the measurement of the muon anomalous magnetic moment and the
non-observation of the flavor violating tau decays. We show that dominant
contributions to the muon $(g-2)$ originate in this model from the charged
Higgs/neutral lepton one-loop diagrams, thus correcting an inaccurate statement
that can be found in the literature. We also perform a detailed LHC analysis of
the benchmark scenarios. We investigate the experimental constraints stemming
from direct searches for vector-like quarks, vector-like leptons and exotic
scalars. While we show that the model is not currently tested by any collider
experiment, we point out that decays of a heavy Higgs boson into two tau
leptons may offer a smoking gun signature for the model verification in
upcoming runs at the LHC.
|
http://arxiv.org/abs/2309.13968v1
|
In this paper, we discuss measurements of the stellar population and star
forming properties for 43 spectroscopically confirmed publicly available
high-redshift $z > 7$ JWST galaxies in the JADES and CEERS observational
programs. We carry out a thorough study investigating the relationship between
spectroscopic features and photometrically derived ones, including from
spectral energy distribution (SED) fitting of models, as well as morphological
and structural properties. We find that the star formation rates (SFRs)
measured from H$\beta$ line emission are higher than those estimated from
Bayesian SED fitting and UV luminosity, with ratios SFR$_{H\beta}$/ SFR$_{UV}$
ranging from 2 to 13. This is a sign that the star formation history is
consistently rising given the timescales of H$\beta$ vs UV star formation
probes. In addition, we investigate how well equivalent widths (EWs) of
H$\beta$ $\lambda$4861, [O III] $\lambda$4959, and [O III] $\lambda$5007 can be
measured from photometry, finding that on average the EW derived from
photometric excesses in filters is 30% smaller than the direct spectroscopic
measurement. We also discover that a stack of the line emitting galaxies shows
a distinct morphology after subtracting imaging that contains only the
continuum. This gives us a first view of the line or ionized gas emission from
$z > 7$ galaxies, demonstrating that this material has a similar distribution,
statistically, as the continuum. We also compare the derived SFRs and stellar
masses for both parametric and non-parametric star formation histories, where
we find that 35% of our sample formed at least 30% of their stellar mass in
recent (< 10 Myr) starburst events.
|
http://arxiv.org/abs/2309.14961v1
|
In the last few decades, gravastars have been proposed as an alternative to
black holes. The stability of the gravastar has been studied in many modified
theories of gravity along with Einstein's GR. The $f(Q,T)$ gravity, a
successfully modified theory of gravity for describing the current accelerated
expansion of the Universe, has been used in this article to study gravastar in
different aspects. According to Mazur and Mottola (Proc. Natl. Acad. Sci 101,
9545 (2004)), it has three regions with three different equations of state.
Here in this work, we have studied the interior of the gravastar by considering
the $p=-\rho$ EoS to describe the dark sector for the interior region. The next
region is a thin shell of ultrarelativistic stiff fluid, in which we have
investigated several physical properties, viz., the proper length, energy,
entropy, surface energy density, etc. In addition, we have studied the surface
redshift and speed of sound to check the potential stability of our proposed
thin-shell gravastar model. Apart from that, we have used the entropy
maximization technique to verify the stability of the gravastar model. The
gravastar's outer region is a complete vacuum described by exterior
Schwarzschild geometry. Finally, we have presented a stable gravastar model
which is singularity-free and devoid of any incompleteness in classical black
hole theory.
|
http://arxiv.org/abs/2306.17435v1
|
Planetary mass loss is governed by several physical mechanisms, including
photoionisation that may impact the evolution of the atmosphere. Stellar
radiation energy deposited as heat depends strongly on the energy of the
primary electrons following photoionisation and on the local fractional
ionisation. All these factors affect the model-estimated atmospheric mass loss
rates and other characteristics of the outflow in ways that have not been
clearly elucidated. The shape of the XUV stellar spectra influences strongly
the photoionisation and heating deposition on the atmosphere. We elaborate on
the local and planet-wise effects, to clearly demonstrate the significance of
such interactions. Using the PLUTO code, we performed 1D hydrodynamics
simulations for Neptune- to Jupiter-sized planets and stars ranging from M dwarfs to
Sun-like. Our results indicate a significant decrease of the planetary mass
loss rate for all planetary systems when secondary ionisation is taken into
account. The mass loss rate is found to decrease by 43$\%$ for the more massive
exoplanet to 54$\%$ for the less massive exoplanet orbiting solar-like stars,
and up to 52$\%$ for a Jupiter-like planet orbiting an M-type star. Our results
also indicate much faster ionisation of the atmosphere due to photoelectrons.
We built a self-consistent model including secondary ionisation by
photoelectron to evaluate its impact on mass loss rates. We find that
photoelectrons affect the mass loss rates by factors that are potentially
important for planetary evolution theories. We also find that enhanced
ionisation occurs at altitudes that are often probed with specific atomic lines
in transmission spectroscopy. Future modelling of these processes should
include the role of photoelectrons. Finally, we make available a simple yet
accurate parameterisation for atomic hydrogen atmospheres.
|
http://arxiv.org/abs/2309.08390v1
|
Large Language Models (LLMs) have acquired ubiquitous attention for their
performance across diverse domains. Our study examines LLMs'
cognitive abilities and confidence dynamics. We dive deep into understanding
the alignment between their self-assessed confidence and actual performance. We
probe these models with diverse sets of questionnaires and real-world
scenarios and extract how LLMs exhibit confidence in their responses. Our
findings reveal intriguing instances where models demonstrate high confidence
even when they answer incorrectly. This is reminiscent of the Dunning-Kruger
effect observed in human psychology. In contrast, there are cases where models
exhibit low confidence with correct answers revealing potential underestimation
biases. Our results underscore the need for a deeper understanding of their
cognitive processes. By examining the nuances of LLMs' self-assessment
mechanism, this investigation provides noteworthy revelations that serve to
advance the functionalities and broaden the potential applications of these
formidable language models.
|
http://arxiv.org/abs/2309.16145v1
|
The majority of research on estimation-of-distribution algorithms (EDAs)
concentrates on pseudo-Boolean optimization and permutation problems, leaving
the domain of EDAs for problems in which the decision variables can take more
than two values, but which are not permutation problems, mostly unexplored. To
render this domain more accessible, we propose a natural way to extend the
known univariate EDAs to this setting. Different from a naive reduction to the
binary case, our approach avoids additional constraints.
Since understanding genetic drift is crucial for an optimal parameter choice,
we extend the known quantitative analysis of genetic drift to EDAs for
multi-valued variables. Roughly speaking, when the variables take $r$ different
values, the time for genetic drift to become significant is $r$ times shorter
than in the binary case. Consequently, the update strength of the probabilistic
model has to be chosen $r$ times lower now.
To investigate how desired model updates take place in this framework, we
undertake a mathematical runtime analysis of the $r$-valued LeadingOnes
problem. We prove that with the right parameters, the multi-valued UMDA solves
this problem efficiently in $O(r\ln(r)^2 n^2 \ln(n))$ function evaluations.
This bound is nearly tight as our lower bound $\Omega(r\ln(r) n^2 \ln(n))$
shows.
Overall, our work shows that our good understanding of binary EDAs naturally
extends to the multi-valued setting, and it gives advice on how to set the main
parameters of multi-valued EDAs.
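A minimal sketch of such a multi-valued univariate EDA; the border values and the $r$-valued LeadingOnes variant below are assumed forms for illustration, not the paper's exact definitions:

    import numpy as np

    def umda_multivalued(f, n, r, lam=200, mu=50, iters=300, seed=0):
        # Univariate EDA over {0,...,r-1}^n: one categorical frequency vector
        # per position, re-estimated from the mu best of lam sampled solutions.
        rng = np.random.default_rng(seed)
        p = np.full((n, r), 1.0 / r)
        lo, hi = 1.0 / ((r - 1) * n), 1.0 - 1.0 / n  # assumed border values
        for _ in range(iters):
            pop = np.array([[rng.choice(r, p=p[i]) for i in range(n)]
                            for _ in range(lam)])
            best = pop[np.argsort([f(x) for x in pop])[-mu:]]
            for i in range(n):
                freq = np.bincount(best[:, i], minlength=r) / mu
                p[i] = np.clip(freq, lo, hi)
                p[i] /= p[i].sum()  # renormalize after clipping to borders
        return p

    # one natural r-valued LeadingOnes variant (assumed): leading entries == r-1
    r = 3
    f = lambda x: int(np.argmax(np.concatenate([x != r - 1, [True]])))
    p = umda_multivalued(f, n=20, r=r)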
|
http://arxiv.org/abs/2302.14420v2
|