| text (string) | source (string) |
|---|---|
In computed tomographic imaging, model-based iterative reconstruction (MBIR) methods have generally shown better image quality than the more traditional, faster filtered backprojection (FBP) technique. The cost we pay is that MBIR is computationally expensive. In this work we train a 2.5D deep learning (DL) network to mimic MBIR-quality images. The network is realized by a modified U-net and trained using clinical FBP and MBIR image pairs. We achieve the quality of MBIR images faster and at a much smaller computational cost. Visually and in terms of the noise power spectrum (NPS), DL-MBIR images have texture similar to that of MBIR, with reduced noise power. Image profile plots, NPS plots, standard deviation, etc. suggest that the DL-MBIR images result from a successful emulation of an MBIR operator.
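The NPS comparison described above can be sketched with the standard ROI-based estimator: average the squared DFT magnitude of mean-subtracted, noise-only patches. A minimal NumPy sketch (the function name and normalization convention are illustrative, not taken from the paper):

```python
import numpy as np

def noise_power_spectrum(rois, pixel_size=1.0):
    """2D NPS estimate: average |DFT|^2 of mean-subtracted noise-only
    ROIs, normalised so the NPS integrates to the pixel noise variance."""
    rois = np.asarray(rois, dtype=float)
    n_roi, ny, nx = rois.shape
    nps = np.zeros((ny, nx))
    for roi in rois:
        roi = roi - roi.mean()                 # remove the DC component
        nps += np.abs(np.fft.fft2(roi)) ** 2
    nps *= pixel_size ** 2 / (n_roi * nx * ny)
    return np.fft.fftshift(nps)               # zero frequency at centre

rng = np.random.default_rng(0)
rois = rng.normal(0.0, 2.0, size=(50, 64, 64))  # white noise, variance 4
nps = noise_power_spectrum(rois)
```

By Parseval's theorem, an NPS normalised this way integrates to the pixel noise variance, which gives a quick sanity check on the estimator.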
|
http://arxiv.org/abs/2309.13399v1
|
Self-supervised learning models extract general-purpose representations from
data. Quantifying the reliability of these representations is crucial, as many
downstream models rely on them as input for their own tasks. To this end, we
introduce a formal definition of representation reliability: the representation
for a given test point is considered to be reliable if the downstream models
built on top of that representation can consistently generate accurate
predictions for that test point. However, accessing downstream data to quantify
the representation reliability is often infeasible or restricted due to privacy
concerns. We propose an ensemble-based method for estimating the representation
reliability without knowing the downstream tasks a priori. Our method is based
on the concept of neighborhood consistency across distinct pre-trained
representation spaces. The key insight is to find shared neighboring points as
anchors to align these representation spaces before comparing them. We
demonstrate through comprehensive numerical experiments that our method
effectively captures the representation reliability with a high degree of
correlation, achieving robust and favorable performance compared with baseline
methods.
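The neighborhood-consistency idea can be sketched in a few lines: score a point by how much its k-nearest-neighbour sets agree across an ensemble of pre-trained embedding spaces. This simplified version skips the paper's anchor-based alignment step, and all names are illustrative:

```python
import numpy as np

def knn_indices(X, k):
    """Indices of each row's k nearest neighbours (self excluded)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def neighborhood_consistency(embeddings, k=5):
    """Per-point reliability proxy: mean k-NN overlap over all pairs of
    embedding spaces in the ensemble."""
    nbrs = [knn_indices(E, k) for E in embeddings]
    n = embeddings[0].shape[0]
    scores = np.zeros(n)
    n_pairs = 0
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            n_pairs += 1
            for p in range(n):
                scores[p] += len(set(nbrs[i][p]) & set(nbrs[j][p])) / k
    return scores / n_pairs

rng = np.random.default_rng(0)
E1 = rng.normal(size=(40, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
E2 = E1 @ Q        # same geometry in rotated coordinates
scores = neighborhood_consistency([E1, E2], k=5)
```

Because a rotation preserves distances, the two toy spaces here agree almost perfectly; genuinely unreliable points would show low overlap.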
|
http://arxiv.org/abs/2306.00206v2
|
We estimated regression discontinuity design models to evaluate changes in access to healthcare services and financial protection, using as a natural experiment the retirement age in Argentina, at which people become eligible to enroll in the free social health insurance called PAMI. The dependent variables were indicators of the population with health insurance, out-of-pocket health expenditure, and use of health services. The results show that PAMI causes a large increase in the population with health insurance and marginal reductions in health expenditure. No effects on healthcare use were found.
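For readers unfamiliar with the method, a sharp regression discontinuity estimate can be sketched as two local-linear fits meeting at the cutoff. This is a generic toy with simulated data; the numbers and variable names are invented, not the paper's:

```python
import numpy as np

def rdd_effect(running, y, cutoff, bandwidth):
    """Sharp RDD: fit a line on each side of the cutoff within the
    bandwidth; the treatment effect is the jump of the fit at the cutoff."""
    x = running - cutoff
    left = (x < 0) & (x >= -bandwidth)
    right = (x >= 0) & (x <= bandwidth)
    b_left = np.polyfit(x[left], y[left], 1)
    b_right = np.polyfit(x[right], y[right], 1)
    return np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)

# Simulated data: insurance coverage jumps by 0.25 at the eligibility age.
rng = np.random.default_rng(42)
age = rng.uniform(55.0, 75.0, 20000)
insured = (0.6 + 0.005 * (age - 65.0) + 0.25 * (age >= 65.0)
           + rng.normal(0.0, 0.05, age.size))
effect = rdd_effect(age, insured, cutoff=65.0, bandwidth=5.0)
```

The identifying assumption is that, absent eligibility, the outcome would vary smoothly through the cutoff, so any jump in the fits is attributed to the policy.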
|
http://arxiv.org/abs/2302.14784v1
|
We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic-based processing technique, called DSLOT-NN, with the aim of accelerating inference of the convolution operation in deep neural networks (DNNs). The proposed work has the ability to assess and terminate ineffective convolutions, which results in massive power and energy savings. The processing engine is composed of low-latency most-significant-digit-first (MSDF) (also called online) multipliers and adders that process data from left to right, allowing the execution of subsequent operations in a digit-pipelined manner. The use of online operators eliminates the need to develop a complex mechanism for identifying negative activations, as the output digit with the highest weight is generated first, and the sign of the result can be identified as soon as the first non-zero digit is generated. The precision of the online operators can be tuned at run time, making them extremely useful in situations where accuracy can be traded for power and energy savings. The proposed design has been implemented on a Xilinx Virtex-7 FPGA and compared with the state-of-the-art Stripes design on various performance metrics. The results show that the proposed design achieves power savings, a shorter cycle time, and approximately 50% higher OPS per watt.
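The early sign detection that DSLOT-NN exploits can be illustrated with a toy radix-2 signed-digit stream, where the first non-zero digit fixes the sign of the remaining fraction. This is a simplification of what real online operators guarantee through their on-line delay:

```python
def msdf_sign(digits):
    """Sign of a most-significant-digit-first signed-digit fraction:
    for finite radix-2 streams with digits in {-1, 0, 1}, the first
    non-zero digit outweighs everything that can follow it."""
    for d in digits:
        if d != 0:
            return 1 if d > 0 else -1
    return 0

def msdf_value(digits, radix=2):
    """Value of the signed-digit fraction 0.d1 d2 d3 ..."""
    return sum(d * radix ** -(i + 1) for i, d in enumerate(digits))
```

As soon as `msdf_sign` returns -1 for an accumulating activation, the remaining digit cycles of that convolution can be skipped, which is the source of the reported energy savings.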
|
http://arxiv.org/abs/2309.06019v2
|
This note gives a technical overview of UXsim, an open-source macroscopic/mesoscopic traffic simulator written in pure Python. UXsim is based on the kinematic wave model (more specifically, a mesoscopic version of Newell's simplified car-following model) and a dynamic-user-optimum-like route choice principle, both well-established methodologies in the transportation research field. It can compute dynamic network traffic flow and has basic visualization and analysis capabilities. Furthermore, users can implement their own models and control methods in the simulator using Python, thanks to the flexibility of the language. The simulator and its code are freely available at https://github.com/toruseo/UXsim under the MIT license.
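The car-following rule at UXsim's core is compact enough to sketch directly. Below is a toy single-lane version of Newell's simplified model with the time step set to the reaction lag tau (an illustration of the model, not UXsim's actual code):

```python
import numpy as np

def newell_trajectories(n_veh, n_steps, v_free, tau, delta):
    """Newell's simplified car-following model with time step tau:
    x_i(t + tau) = min(x_i(t) + v_free * tau, x_{i-1}(t) - delta),
    i.e. free flow capped by the leader's position minus the jam spacing."""
    x = np.zeros((n_steps + 1, n_veh))
    x[0] = -delta * np.arange(n_veh)              # jam-spaced platoon
    for t in range(n_steps):
        x[t + 1, 0] = x[t, 0] + v_free * tau      # unobstructed leader
        x[t + 1, 1:] = np.minimum(x[t, 1:] + v_free * tau,
                                  x[t, :-1] - delta)
    return x

traj = newell_trajectories(n_veh=3, n_steps=5, v_free=20.0, tau=1.0, delta=10.0)
```

The `min()` encodes the kinematic wave: each vehicle travels at free-flow speed unless capped by its leader's trajectory shifted by the jam spacing, so the start-up wave propagates backward one vehicle per step.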
|
http://arxiv.org/abs/2309.17114v2
|
Kohn-Sham density functional theory (KS-DFT) is a powerful method for obtaining key materials properties, but the iterative solution of the KS equations is a numerically intensive task, which limits its application to complex systems. To address this issue, machine learning (ML) models can be used as surrogates to find the ground-state charge density and reduce the computational overhead. We develop a grid-centred structural representation, based on Jacobi and Legendre polynomials combined with a linear regression, to accurately learn the converged DFT charge density. This integrates into an ML pipeline that can return any density-dependent observable, including energy and forces, at the quality of a converged DFT calculation, but at a fraction of the computational cost. Fast scanning of energy landscapes and producing starting densities for the DFT self-consistent cycle are among the applications of our scheme.
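The linear-regression step can be miniaturised to show the idea: build polynomial features of atom-to-gridpoint distances and fit the density with least squares. This 1D toy uses Legendre features only; the actual descriptor also involves Jacobi polynomials and richer structural information:

```python
import numpy as np

def legendre_features(dists, n_max, cutoff):
    """Grid-point descriptor: Legendre polynomials of the scaled
    atom-gridpoint distances, summed over atoms inside the cutoff."""
    dists = np.asarray(dists, dtype=float)
    s = 2.0 * dists / cutoff - 1.0            # map [0, cutoff] -> [-1, 1]
    s = s[dists < cutoff]
    return np.array([np.polynomial.legendre.Legendre.basis(n)(s).sum()
                     for n in range(n_max)])

# Synthetic "charge density" that is exactly linear in the descriptor,
# recovered by plain least squares.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.2, 0.1, 0.05])
X = np.array([legendre_features(rng.uniform(0.0, 6.0, 8), 4, 5.0)
              for _ in range(200)])
rho = X @ true_w
w, *_ = np.linalg.lstsq(X, rho, rcond=None)
```

A linear model of this form is what makes the pipeline cheap: prediction is a single dot product per grid point, with no self-consistent iteration.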
|
http://arxiv.org/abs/2301.13550v2
|
Locks are a classic data structure for concurrent programming. We introduce a
type system to ensure that names of the asynchronous pi-calculus are used as
locks. Our calculus also features a construct to deallocate a lock once we know
that it will never be acquired again. Typability guarantees two properties:
deadlock-freedom, that is, no acquire operation on a lock waits forever; and
leak-freedom, that is, all locks are eventually deallocated.
We leverage the simplicity of our typing discipline to study the induced
typed behavioural equivalence. After defining barbed equivalence, we introduce
a sound labelled bisimulation, which makes it possible to establish equivalence
between programs that manipulate and deallocate locks.
|
http://arxiv.org/abs/2309.07307v1
|
We introduce ACEpotentials.jl, a Julia-language software package that
constructs interatomic potentials from quantum mechanical reference data using
the Atomic Cluster Expansion (Drautz, 2019). As the latter provides a complete
description of atomic environments, including invariance to overall translation
and rotation as well as permutation of like atoms, the resulting potentials are
systematically improvable and data efficient. Furthermore, the descriptor's
expressiveness enables use of a linear model, facilitating rapid evaluation and
straightforward application of Bayesian techniques for active learning. We
summarize the capabilities of ACEpotentials.jl and demonstrate its strengths
(simplicity, interpretability, robustness, performance) on a selection of
prototypical atomistic modelling workflows.
|
http://arxiv.org/abs/2309.03161v2
|
Versatile and adaptive semantic understanding would enable autonomous systems
to comprehend and interact with their surroundings. Existing fixed-class models
limit the adaptability of indoor mobile and assistive autonomous systems. In
this work, we introduce LEXIS, a real-time indoor Simultaneous Localization and
Mapping (SLAM) system that harnesses the open-vocabulary nature of Large
Language Models (LLMs) to create a unified approach to scene understanding and
place recognition. The approach first builds a topological SLAM graph of the
environment (using visual-inertial odometry) and embeds Contrastive
Language-Image Pretraining (CLIP) features in the graph nodes. We use this
representation for flexible room classification and segmentation, serving as a
basis for room-centric place recognition. This allows loop closure searches to
be directed towards semantically relevant places. Our proposed system is
evaluated using both public, simulated data and real-world data, covering
office and home environments. It successfully categorizes rooms with varying
layouts and dimensions and outperforms the state-of-the-art (SOTA). For place recognition and trajectory estimation tasks we achieve performance equivalent to the SOTA, while also utilizing the same pre-trained model. Lastly, we demonstrate the system's potential for planning.
|
http://arxiv.org/abs/2309.15065v2
|
[Abridged] Photoevaporation and dust-trapping are individually considered to
be important mechanisms in the evolution and morphology of protoplanetary
disks. We studied how the presence of early substructures affects the evolution
of the dust distribution and flux in the millimeter continuum of disks that are
undergoing photoevaporative dispersal. We also tested if the predicted
properties resemble those observed in the population of transition disks. We
used the numerical code DustPy to simulate disk evolution, considering gas
accretion, dust growth, dust-trapping at substructures, and mass loss due to
X-ray and EUV (XEUV) photoevaporation and dust entrainment. Then, we compared
how the dust mass and millimeter flux evolve for different disk models. We find
that, during photoevaporative dispersal, disks with primordial substructures
retain more dust and are brighter in the millimeter continuum than disks
without early substructures, regardless of the photoevaporative cavity size.
Once the photoevaporative cavity opens, the estimated fluxes for the disk
models that are initially structured are comparable to those found in the
bright transition disk population ($F_\textrm{mm} > 30\, \textrm{mJy}$), while
the disk models that are initially smooth have fluxes comparable to the
transition disks from the faint population ($F_\textrm{mm} < 30\,
\textrm{mJy}$), suggesting a link between each model and population. Our models
indicate that the efficiency of the dust trapping determines the millimeter
flux of the disk, while the gas loss due to photoevaporation controls the
formation and expansion of a cavity, decoupling the mechanisms responsible for
each feature. In consequence, even a planet with a mass comparable to Saturn
could trap enough dust to reproduce the millimeter emission of a bright
transition disk, while its cavity size is independently driven by
photoevaporative dispersal.
|
http://arxiv.org/abs/2309.08752v1
|
The widespread integration of Internet of Things (IoT) devices across all
facets of life has ushered in an era of interconnectedness, creating new
avenues for cybersecurity challenges and underscoring the need for robust
intrusion detection systems. However, traditional security systems are designed
with a closed-world perspective and often face challenges in dealing with the
ever-evolving threat landscape, where new and unfamiliar attacks are constantly
emerging. In this paper, we introduce a framework aimed at mitigating the open
set recognition (OSR) problem in the realm of Network Intrusion Detection
Systems (NIDS) tailored for IoT environments. Our framework capitalizes on
image-based representations of packet-level data, extracting spatial and
temporal patterns from network traffic. Additionally, we integrate stacking and
sub-clustering techniques, enabling the identification of unknown attacks by
effectively modeling the complex and diverse nature of benign behavior. The
empirical results prominently underscore the framework's efficacy, boasting an
impressive 88% detection rate for previously unseen attacks when compared
against existing approaches and recent advancements. Future work will perform
extensive experimentation across various openness levels and attack scenarios,
further strengthening the adaptability and performance of our proposed solution
in safeguarding IoT environments.
|
http://arxiv.org/abs/2309.07461v2
|
Optimal kinematic observables are often defined in specific frames and then
approximated at the reconstruction level. We show how multi-dimensional
unfolding methods allow us to reconstruct these observables in their proper
rest frame and in a probabilistically faithful way. We illustrate our approach
with a measurement of a CP-phase in the top Yukawa coupling. Our method makes
use of key advantages of generative unfolding, but as a constructed observable
it fits into standard LHC analysis frameworks.
|
http://arxiv.org/abs/2308.00027v1
|
In this paper we examine the effect of delamination on wave scattering, with
the aim of creating a control measure for layered waveguides of various bonding
types. Previous works have considered specific widths of solitary waves for the
simulations, without analysing the effect of changing the soliton parameters.
We consider two multi-layered structures: one containing delamination
"sandwiched" by perfect bonding and one containing delamination but
"sandwiched" by soft bonding. These structures are modelled by coupled
Boussinesq-type equations. Matched asymptotic multiple-scale expansions lead to
coupled Ostrovsky equations in the soft bonded regions and Korteweg-de Vries
equations in the perfectly bonded and delaminated region. We use the Inverse
Scattering Transform to predict the behaviour in the delaminated regions. In
both cases, numerical analysis shows that we can predict the delamination
length by changes in the wave structure, and that these changes depend upon the
Full Width at Half Magnitude (FWHM) of the incident soliton. In the case of
perfect bonding, we derive a theoretical prediction for the change and confirm
this numerically. For the soft bonding case, we numerically identify a similar
relationship using the change in amplitude. Therefore we only need to compute
one curve to determine the behaviour for any incident solitary wave, creating a
framework for designing measurement campaigns for rigorously testing the
integrity of layered structures.
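The FWHM dependence can be made concrete with a small numerical helper applied to a sech^2 solitary-wave profile (a generic sketch; the width parameter is illustrative, not the paper's parametrisation):

```python
import numpy as np

def fwhm(x, y):
    """Numerical full width at half magnitude of a single-peaked profile."""
    half = y.max() / 2.0
    above = x[y >= half]
    return above[-1] - above[0]

width = 2.0
x = np.linspace(-20.0, 20.0, 40001)
y = 1.0 / np.cosh(x / width) ** 2   # sech^2 solitary-wave profile
# Analytic value: FWHM = 2 * width * arccosh(sqrt(2)).
```

Working with the FWHM rather than a specific amplitude-width pairing is what lets a single computed curve cover any incident solitary wave.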
|
http://arxiv.org/abs/2308.16645v1
|
Test-time adaptation is a promising research direction that allows the source
model to adapt itself to changes in data distribution without any supervision.
Yet, current methods are usually evaluated on benchmarks that are only a
simplification of real-world scenarios. Hence, we propose to validate test-time
adaptation methods using the recently introduced datasets for autonomous
driving, namely CLAD-C and SHIFT. We observe that current test-time adaptation
methods struggle to effectively handle varying degrees of domain shift, often
resulting in degraded performance that falls below that of the source model. We
noticed that the root of the problem lies in the inability to preserve the
knowledge of the source model and adapt to dynamically changing, temporally
correlated data streams. Therefore, we enhance a well-established self-training framework by incorporating a small memory buffer to increase model stability
and at the same time perform dynamic adaptation based on the intensity of
domain shift. The proposed method, named AR-TTA, outperforms existing
approaches on both synthetic and more real-world benchmarks and shows
robustness across a variety of TTA scenarios.
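The small memory buffer can be as simple as reservoir sampling over the incoming test stream, which keeps a uniform random subset regardless of stream length (an illustrative stand-in; AR-TTA's actual buffer policy may differ):

```python
import random

class MemoryBuffer:
    """Reservoir-sampling buffer: keeps a uniform random subset of the
    stream seen so far, independent of the stream's length."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = self.rng.randrange(self.n_seen)   # uniform over all seen
            if j < self.capacity:
                self.data[j] = item

buf = MemoryBuffer(capacity=50)
for i in range(1000):
    buf.add(i)
```

Replaying such a buffer alongside the current batch counteracts drift on temporally correlated streams, where consecutive samples are far from i.i.d.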
|
http://arxiv.org/abs/2309.10109v1
|
The SoccerNet 2023 tracking challenge requires the detection and tracking of
soccer players and the ball. In this work, we present our approach to tackle
these tasks separately. We employ a state-of-the-art online multi-object
tracker and a contemporary object detector for player tracking. To overcome the
limitations of our online approach, we incorporate a post-processing stage
using interpolation and appearance-free track merging. Additionally, an
appearance-based track merging technique is used to handle the termination and
creation of tracks far from the image boundaries. Ball tracking is formulated
as single object detection, and a fine-tuned YOLOv8l detector with proprietary
filtering improves the detection precision. Our method achieves 3rd place on
the SoccerNet 2023 tracking challenge with a HOTA score of 66.27.
|
http://arxiv.org/abs/2308.16651v1
|
Orbital-free density functional theory (OFDFT) is a quantum chemistry
formulation that has a lower cost scaling than the prevailing Kohn-Sham DFT,
which is increasingly desired for contemporary molecular research. However, its
accuracy is limited by the kinetic energy density functional, which is
notoriously hard to approximate for non-periodic molecular systems. Here we
propose M-OFDFT, an OFDFT approach capable of solving molecular systems using a
deep learning functional model. We build the essential non-locality into the
model, which is made affordable by the concise density representation as
expansion coefficients under an atomic basis. With techniques to address
unconventional learning challenges therein, M-OFDFT achieves accuracy comparable to that of Kohn-Sham DFT on a wide range of molecules untouched by OFDFT
before. More attractively, M-OFDFT extrapolates well to molecules much larger
than those seen in training, which unleashes the appealing scaling of OFDFT for
studying large molecules including proteins, representing an advancement of the
accuracy-efficiency trade-off frontier in quantum chemistry.
|
http://arxiv.org/abs/2309.16578v2
|
Simulation-based inference (SBI) is a promising approach to leverage high
fidelity cosmological simulations and extract information from the
non-Gaussian, non-linear scales that cannot be modeled analytically. However,
scaling SBI to the next generation of cosmological surveys faces the
computational challenge of requiring a large number of accurate simulations
over a wide range of cosmologies, while simultaneously encompassing large
cosmological volumes at high resolution. This challenge can potentially be
mitigated by balancing the accuracy and computational cost for different
components of the forward model while ensuring robust inference. To guide
our steps in this, we perform a sensitivity analysis of SBI for galaxy
clustering on various components of the cosmological simulations: gravity
model, halo-finder and the galaxy-halo distribution models (halo-occupation
distribution, HOD). We infer $\sigma_8$ and $\Omega_m$ using the galaxy power spectrum multipoles and the bispectrum monopole, assuming a galaxy number density expected for the luminous red galaxies observed by the Dark Energy Spectroscopic Instrument (DESI). We find that SBI is insensitive to changing the gravity model between $N$-body simulations and particle-mesh (PM) simulations. However, changing the halo-finder from friends-of-friends (FoF) to Rockstar can lead to a biased estimate of $\sigma_8$ based on the bispectrum. For galaxy models, training SBI on a more complex HOD leads to consistent inference for less complex HOD models, but SBI trained on simpler HOD models fails when applied to
analyze data from a more complex HOD model. Based on our results, we discuss
the outlook on cosmological simulations with a focus on applying SBI approaches
to future galaxy surveys.
|
http://arxiv.org/abs/2309.15071v1
|
Large Language Models (LLMs) have recently shown impressive abilities in
handling various natural language-related tasks. Among different LLMs, current
studies have assessed ChatGPT's superior performance across manifold tasks,
especially under the zero/few-shot prompting conditions. Given such successes,
the Recommender Systems (RSs) research community has started investigating its
potential applications within the recommendation scenario. However, although
various methods have been proposed to integrate ChatGPT's capabilities into
RSs, current research struggles to comprehensively evaluate such models while
considering the peculiarities of generative models. Often, evaluations do not
consider hallucinations, duplications, and out-of-the-closed-domain recommendations, and solely focus on accuracy metrics, neglecting the impact on
beyond-accuracy facets. To bridge this gap, we propose a robust evaluation
pipeline to assess ChatGPT's ability as an RS and post-process ChatGPT
recommendations to account for these aspects. Through this pipeline, we
investigate ChatGPT-3.5 and ChatGPT-4 performance in the recommendation task
under the zero-shot condition employing the role-playing prompt. We analyze the
model's functionality in three settings: the Top-N Recommendation, the
cold-start recommendation, and the re-ranking of a list of recommendations, and
in three domains: movies, music, and books. The experiments reveal that ChatGPT exhibits higher accuracy than the baselines in the books domain. It also excels in
re-ranking and cold-start scenarios while maintaining reasonable
beyond-accuracy metrics. Furthermore, we measure the similarity between the
ChatGPT recommendations and the other recommenders, providing insights about
how ChatGPT could be categorized in the realm of recommender systems. The
evaluation pipeline is publicly released for future research.
|
http://arxiv.org/abs/2309.03613v2
|
In this paper we show some explicit results regarding non-linear diffusive equations on the Poincar\'e half-plane. We obtain exact solutions by using generalized separation of variables, and we also show the meaning of these results in the context of the general theory of the invariant subspace method.
|
http://arxiv.org/abs/2309.13400v1
|
While machine translation (MT) systems have seen significant improvements, it
is still common for translations to reflect societal biases, such as gender
bias. Decoder-only Large Language Models (LLMs) have demonstrated potential in
MT, albeit with performance slightly lagging behind traditional encoder-decoder
Neural Machine Translation (NMT) systems. However, LLMs offer a unique
advantage: the ability to control the properties of the output through prompts.
In this study, we leverage this flexibility to explore LLaMa's capability to
produce gender-specific translations. Our results indicate that LLaMa can
generate gender-specific translations with translation accuracy and gender bias
comparable to NLLB, a state-of-the-art multilingual NMT system. Furthermore,
our experiments reveal that LLaMa's gender-specific translations rely on
coreference resolution to determine gender, showing higher gender variance in
gender-ambiguous datasets but maintaining consistency in less ambiguous
contexts. This research investigates the potential and challenges of using LLMs
for gender-specific translations as an instance of the controllability of
outputs offered by LLMs.
|
http://arxiv.org/abs/2309.03175v2
|
We consider the problem of learning multiple tasks in a continual learning
setting in which data from different tasks is presented to the learner in a
streaming fashion. A key challenge in this setting is the so-called
"catastrophic forgetting problem", in which the performance of the learner in
an "old task" decreases when subsequently trained on a "new task". Existing
continual learning methods, such as Averaged Gradient Episodic Memory (A-GEM)
and Orthogonal Gradient Descent (OGD), address catastrophic forgetting by
minimizing the loss for the current task without increasing the loss for
previous tasks. However, these methods assume the learner knows when the task
changes, which is unrealistic in practice. In this paper, we alleviate the need
to provide the algorithm with information about task changes by using an online
clustering-based approach on a dynamically updated finite pool of samples or
gradients. We thereby successfully counteract catastrophic forgetting in one of
the hardest settings, namely: domain-incremental learning, a setting for which
the problem was previously unsolved. We showcase the benefits of our approach
by applying these ideas to projection-based methods, such as A-GEM and OGD, yielding task-agnostic versions of them. Experiments on real datasets
demonstrate the effectiveness of the proposed strategy and its promising
performance compared to state-of-the-art methods.
|
http://arxiv.org/abs/2309.12078v1
|
We study quasar proximity zones in a simulation that includes a
self-consistent quasar formation model and realistic IGM environments. The
quasar host halo is $10^{13}\ M_{\mathrm{\odot}}$ at $z=6$, more massive than
typical halos studied in previous work. Between $6<z<7.5$, the quasar
luminosity varies rapidly, with a mean magnitude of $M_{UV,mean}=-24.8$ and the
fluctuation reaching up to two orders of magnitude. Using this light curve to
post-process the dense environment around the quasar, we find that the
proximity zone size ($R_{p}$) ranges between $0.5-5$ pMpc. We show that the
light curve variability causes a similar degree of scatter in $R_{p}$ as does the density fluctuation, both of which result in a standard deviation of $\sim 0.3$ pMpc. $R_{p}$ traces the light curve fluctuations closely but with a
time delay of $\sim 10^4\ \mathrm{yr}$, breaking the correspondence between the
$R_{p}$ and the contemporaneous $M_{UV}$. This also indicates that we can only
infer quasar activity within the past $\sim 10^4$ years instead of the
integrated lifetime from $R_{p}$ in the later part of cosmic reionization.
Compared with the variable light curve, a constant light curve underestimates
the $R_{p}$ by 13% at the dim end ($M_{UV}\sim -23.5$), and overestimates the
$R_{p}$ by 30% at the bright end ($M_{UV}\sim -26$). By calculating the $R_{p}$
generated by a number of quasars, we show that variable light curves predict a
wider $R_{p}$ distribution than lightbulb models, and readily explain the
extremely small $R_{p}$ values that have been observed.
|
http://arxiv.org/abs/2309.11571v1
|
Event logs are invaluable for conducting process mining projects, offering
insights into process improvement and data-driven decision-making. However,
data quality issues affect the correctness and trustworthiness of these
insights, making preprocessing tasks a necessity. Despite the recognized
importance, the execution of preprocessing tasks remains ad-hoc, lacking
support. This paper presents a systematic literature review that establishes a
comprehensive repository of preprocessing tasks and their usage in case
studies. We identify six high-level and 20 low-level preprocessing tasks in
case studies. Log filtering, transformation, and abstraction are commonly used,
while log enriching, integration, and reduction are less frequent. These
results can be considered a first step in contributing to more structured,
transparent event log preprocessing, enhancing process mining reliability.
|
http://arxiv.org/abs/2309.17100v2
|
As cosmic rays (CRs) propagate in the Galaxy, they can be affected by
magnetic structures that temporarily trap them and cause their trajectories to
display chaotic behavior, therefore modifying the simple diffusion scenario.
When CRs arrive at the Earth, they do so anisotropically. These chaotic effects can be a fundamental contributor to this anisotropy. Accordingly, a comprehensive description of chaos under trapping conditions is required, since its repercussions on the CR arrival directions must be assessed. This study
utilizes a new method described in L\'opez-Barquero and Desiati (2021) to
characterize chaotic trajectories in bound systems. This method is based on the
Finite-Time Lyapunov Exponent (FTLE), a quantity that determines the levels of
chaos based on the trajectories' divergence rate. The FTLE is useful since it
adapts to trapping conditions in magnetic structures or even propagating media
changes. Here, we explore the effects that chaos and trapping can have on the
TeV CR anisotropy. Concretely, we apply this method to study the behavior of
CRs entering the heliosphere. Specifically, how the distinct heliospheric
structures and CR impinging directions from the ISM can affect chaos levels.
The heliosphere has an intrinsic directionality that affects CRs differently
depending on where they enter it. This feature causes preferential directions
from which particles tend to be more chaotic than others. This eventually
translates into changes in the arrival maps which are not uniformly
distributed. Instead, we expect sectors in the map to change separately from
others, creating a time variation that could be detected. Consequently, this
result points to the idea that time-variability in the maps is essential to
understanding the CR anisotropy's overall processes.
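The FTLE underlying the method has a compact definition: for two trajectories started a distance eps apart, lambda = ln(d_T / eps) / T, where d_T is their separation after time T. A 1D sketch (the study applies the idea to full 3D particle trajectories in the heliospheric field):

```python
import numpy as np

def ftle(flow, x0, t_final, eps=1e-6):
    """Finite-time Lyapunov exponent at x0: divergence rate of two
    initially close trajectories, lambda = ln(|dx(T)| / |dx(0)|) / T."""
    x_a = flow(x0, t_final)
    x_b = flow(x0 + eps, t_final)
    return np.log(abs(x_b - x_a) / eps) / t_final

# For the linear flow dx/dt = a*x, i.e. x(t) = x0*exp(a*t), the FTLE is a.
lam = ftle(lambda x, t: x * np.exp(0.7 * t), x0=1.0, t_final=5.0)
```

Because the exponent is computed over a finite window, it adapts naturally to temporary trapping in magnetic structures, which is the property the method exploits.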
|
http://arxiv.org/abs/2301.10065v1
|
We present the largest and most comprehensive empirical study of pre-trained
visual representations (PVRs) or visual 'foundation models' for Embodied AI.
First, we curate CortexBench, consisting of 17 different tasks spanning
locomotion, navigation, dexterous, and mobile manipulation. Next, we
systematically evaluate existing PVRs and find that none are universally
dominant. To study the effect of pre-training data size and diversity, we
combine over 4,000 hours of egocentric videos from 7 different sources (over
4.3M images) and ImageNet to train different-sized vision transformers using
Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from
prior work, we find that scaling dataset size and diversity does not improve
performance universally (but does so on average). Our largest model, named
VC-1, outperforms all prior PVRs on average but does not universally dominate
either. Next, we show that task- or domain-specific adaptation of VC-1 leads to
substantial gains, with VC-1 (adapted) achieving competitive or superior
performance than the best known results on all of the benchmarks in
CortexBench. Finally, we present real-world hardware experiments, in which VC-1
and VC-1 (adapted) outperform the strongest pre-existing PVR. Overall, this
paper presents no new techniques but a rigorous systematic evaluation, a broad
set of findings about PVRs (that in some cases, refute those made in narrow
domains in prior work), and open-sourced code and models (that required over
10,000 GPU-hours to train) for the benefit of the research community.
|
http://arxiv.org/abs/2303.18240v2
|
The partial decay widths and production mechanism of the three pentaquark
states, $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$,
discovered by the LHCb Collaboration in 2019, are still under debate. In this
work, we employ the contact-range effective field theory approach to construct
the $\bar{D}^{(*)}\Sigma_{c}^{(*)}$, $\bar{D}^{*}\Lambda_c$,
$\bar{D}\Lambda_c$, $J/\psi p$, and $\eta_c p$ coupled-channel interactions to
dynamically generate the multiplet of hidden-charm pentaquark molecules by
reproducing the masses and widths of $P_{\psi}^{N}(4312)$,
$P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$. Assuming that the pentaquark
molecules are produced in the $\Lambda_b$ decay via the triangle diagrams,
where $\Lambda_{b}$ first decays into $D_{s}^{(\ast)}\Lambda_{c}$, then
$D_{s}^{(\ast)}$ scatters into $\bar{D}^{(\ast)}K$, and finally the molecules
are dynamically generated by the $\bar{D}^{(\ast)}\Lambda_{c}$ interactions, we
calculate the branching fractions of the decays $\Lambda_b \to {P_{\psi}^{N}}K$
using the effective Lagrangian approach. With the partial decay widths of these
pentaquark molecules, we further estimate the branching fraction of the decays
$ \Lambda_b \to ( P_{\psi}^{N} \to J/\psi p )K $ and $ \Lambda_b \to (
P_{\psi}^{N}\to \bar{D}^* \Lambda_c )K $. Our results show that the pentaquark
states $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$ as
hadronic molecules can be produced in the $\Lambda_b$ decay, and on the other
hand their heavy quark spin symmetry partners are invisible in the $J/\psi p$
invariant mass distribution because of the small production rates. Our studies
show that it is possible to observe some of the pentaquark states in the
$\Lambda_b\to \bar{D}^*\Lambda_c K$ decays.
|
http://arxiv.org/abs/2309.12050v2
|
In this paper, we give a full classification of the separable hypersurfaces
of constant sectional curvature in the Euclidean $n$-space $\mathbb{R}^n$. In
dimension $n=3$, this classification was solved by Hasanis and L\'opez
[Manuscripta Math. 166, 403-417 (2021)]. When $n>3$, we prove that the
separable hypersurfaces of null sectional curvature are three particular
families of such hypersurfaces. Finally, we prove that hyperspheres are the
only separable hypersurfaces with nonzero constant sectional curvature.
|
http://arxiv.org/abs/2309.06025v1
|
Off-axis parabolic mirrors (OAPMs) are widely used in the THz and mm-wave
communities for spectroscopy and imaging applications, as a result of their
broadband, low-loss operation and high numerical apertures. However, the
aspherical shape of an OAPM creates significant geometric aberrations that make
achieving diffraction-limited performance a challenge, and which lower the
peak electric field strength in the focal plane. Here we quantify the impact of
geometric aberrations on the performance of the most widely-used spectrometer
designs, by using ray tracing and physical optics calculations to investigate
whether diffraction-limited performance can be achieved in both the sample and
the detector plane. We identify simple rules, based on marginal ray
propagation, that allow spectrometers to be designed that are more robust to
misalignment errors, and which have minimal aberrations for THz beams. For a
given source this allows the design of optical paths that give the smallest THz
beam focal spot, with the highest THz electric field strength possible. This is
desirable for improved THz imaging, for better signal-to-noise ratios in linear
THz spectroscopy and optical-pump THz-probe spectroscopy, and to achieve higher
electric field strengths in non-linear THz spectroscopy.
|
http://arxiv.org/abs/2309.10647v2
|
The performance of a wavelet-based optical flow velocimetry (wOFV) algorithm
to extract high accuracy and high resolution velocity fields from particle
images in wall-bounded turbulent flows is assessed. wOFV is first evaluated
using synthetic particle images generated from a channel flow DNS of a
turbulent boundary layer. The sensitivity of wOFV to the regularization
parameter (lambda) is quantified and results are compared to PIV. Results on
synthetic particle images indicated different sensitivity to
under-regularization or over-regularization depending on which region of the
boundary layer is analyzed. Synthetic data revealed that wOFV can modestly
outperform PIV in vector accuracy across a broad lambda range. wOFV showed
clear advantages over PIV in resolving the viscous sublayer and obtaining
highly accurate estimates of the wall shear stress. wOFV was also applied to
experimental data of a developing turbulent boundary layer. Overall, wOFV
revealed good agreement with both PIV and PIV + PTV. However, wOFV was able to
successfully resolve the wall shear stress and correctly normalize the boundary
layer streamwise velocity to wall units where PIV and PIV + PTV showed larger
deviations. Analysis of the turbulent velocity fluctuations revealed spurious
results for PIV in close proximity to the wall, leading to significantly
exaggerated and non-physical turbulence intensity. PIV + PTV showed a minor
improvement in this aspect. wOFV did not exhibit this same effect, revealing
that it is more accurate in capturing small-scale turbulent motion in the
vicinity of boundaries. The enhanced vector resolution of wOFV enabled improved
estimation of instantaneous derivative quantities and intricate flow structures
closer to the wall. These aspects show that, within a reasonable lambda
range, wOFV can better resolve the turbulent motion occurring in the
vicinity of physical boundaries.
|
http://arxiv.org/abs/2310.03980v1
|
In generative compressed sensing (GCS), we want to recover a signal
$\mathbf{x}^* \in \mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a
generative prior $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$, where $G$ is typically
an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ represents
the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements,
most prior results are non-uniform, i.e., they hold with high probability for a
fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously. In this
paper, we build a unified framework to derive uniform recovery guarantees for
nonlinear GCS where the observation model is nonlinear and possibly
discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly
quantized observations and single index models as canonical examples.
Specifically, using a single realization of the sensing ensemble and
generalized Lasso, {\em all} $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$ can be
recovered up to an $\ell_2$-error of at most $\epsilon$ using roughly
$\tilde{O}({k}/{\epsilon^2})$ samples, with omitted logarithmic factors
typically being dominated by $\log L$. Notably, this almost coincides with
existing non-uniform guarantees up to logarithmic factors, hence the uniformity
costs very little. As part of our technical contributions, we introduce the
Lipschitz approximation to handle discontinuous observation models. We also
develop a concentration inequality that produces tighter bounds for product
processes whose index sets have low metric entropy. Experimental results are
presented to corroborate our theory.
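A minimal numerical illustration of the 1-bit observation model discussed above (this is a simplified correlation-based estimator, not the generalized Lasso over $G(\mathbb{B}_2^k(r))$ analyzed in the paper; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 2000
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # 1-bit data only carry direction

A = rng.standard_normal((m, n))       # Gaussian sensing ensemble
y = np.sign(A @ x_star)               # 1-bit (sign) observations

# Correlation-based estimator: E[A^T sign(A x*)] is proportional to x*,
# so the normalized back-projection recovers the direction of x*.
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)

cosine = float(x_hat @ x_star)
assert cosine > 0.9
```

With $m \gg n$ the cosine similarity approaches 1, illustrating that recovery is possible even though the sign map is discontinuous and discards the norm of the signal.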
|
http://arxiv.org/abs/2310.03758v2
|
This paper presents a novel task, zero-shot voice conversion based on face
images (zero-shot FaceVC), which aims at converting the voice characteristics
of an utterance from any source speaker to an unseen target speaker,
solely relying on a single face image of the target speaker. To address this
task, we propose a face-voice memory-based zero-shot FaceVC method. This method
leverages a memory-based face-voice alignment module, in which slots act as the
bridge to align these two modalities, allowing for the capture of voice
characteristics from face images. A mixed supervision strategy is also
introduced to mitigate the long-standing issue of the inconsistency between
training and inference phases for voice conversion tasks. To obtain
speaker-independent content-related representations, we transfer the knowledge
from a pretrained zero-shot voice conversion model to our zero-shot FaceVC
model. Considering the differences between FaceVC and traditional voice
conversion tasks, systematic subjective and objective metrics are designed to
thoroughly evaluate the homogeneity, diversity and consistency of voice
characteristics controlled by face images. Through extensive experiments, we
demonstrate the superiority of our proposed method on the zero-shot FaceVC
task. Samples are presented on our demo website.
|
http://arxiv.org/abs/2309.09470v1
|
In this work, we propose a novel approach for the continuous-time control
synthesis of nonlinear systems under nested signal temporal logic (STL)
specifications. While the majority of existing literature focuses on control
synthesis for STL specifications without nested temporal operators, addressing
nested temporal operators poses a notably more challenging scenario and
requires new theoretical advancements. Our approach hinges on the concepts of
signal temporal logic tree (sTLT) and control barrier function (CBF).
Specifically, we detail the construction of an sTLT from a given STL formula
and a continuous-time dynamical system, the sTLT semantics (i.e., satisfaction
condition), and the equivalence or under-approximation relation between sTLT
and STL. Leveraging the fact that the satisfaction condition of an sTLT is
essentially keeping the state within certain sets during certain time
intervals, it provides explicit guidelines for the CBF design. The resulting
controller is obtained through the utilization of an online CBF-based program
coupled with an event-triggered scheme for online updating the activation time
interval of each CBF, with which the correctness of the system behavior can be
established by construction. We demonstrate the efficacy of the proposed method
for single-integrator and unicycle models under nested STL formulas.
|
http://arxiv.org/abs/2309.14347v2
|
Continuum robots with variable stiffness have gained wide popularity in the
last decade. Layer jamming (LJ) has emerged as a simple and efficient technique
to achieve tunable stiffness for continuum robots. Despite its merits, the
development of a control-oriented dynamical model tailored for this specific
class of robots remains an open problem in the literature. This paper aims to
present the first solution, to the best of our knowledge, to close the gap. We
propose an energy-based model that is integrated with the LuGre frictional
model for LJ-based continuum robots. Then, we conduct a comprehensive theoretical
analysis for this model, focusing on two fundamental characteristics of
LJ-based continuum robots: shape locking and adjustable stiffness. To validate
the modeling approach and theoretical results, a series of experiments using
our \textit{OctRobot-I} continuum robotic platform was conducted. The results
show that the proposed model is capable of interpreting and predicting the
dynamical behaviors in LJ-based continuum robots.
|
http://arxiv.org/abs/2309.04154v2
|
The ability to process idiomatic or literal multiword expressions is a
crucial aspect of understanding and generating any language. The task of
generating contextually relevant continuations for narratives containing
idiomatic (or literal) expressions can allow us to test the ability of
generative language models (LMs) in understanding nuanced language containing
non-compositional figurative text. We conduct a series of experiments using
datasets in two distinct languages (English and Portuguese) under three
different training settings (zero-shot, few-shot, and fine-tuned). Our results
suggest that the models are only slightly better at generating continuations
for literal contexts than idiomatic contexts, with exceedingly small margins.
Furthermore, the models studied in this work perform equally well across both
languages, indicating the robustness of generative models in performing this
task.
|
http://arxiv.org/abs/2310.20195v2
|
The aim of this note is to describe a geometric relation between simple plane
curve singularities classified by simply laced Cartan matrices and cluster
varieties of finite type also classified by the simply laced Cartan matrices.
We construct certain varieties of configurations of flags out of Dynkin
diagrams and out of singularities and show that they coincide if the Dynkin
diagram corresponds to the singularity.
|
http://arxiv.org/abs/2310.00245v2
|
Distributed optimization is a fundamental framework for collaborative
inference and decision making in decentralized multi-agent systems. The
operation is modeled as the joint minimization of a shared objective which
typically depends on observations gathered locally by each agent. Distributed
optimization algorithms, such as the common D-ADMM, tackle this task by
iteratively combining local computations and message exchanges. One of the main
challenges associated with distributed optimization, and particularly with
D-ADMM, is that it requires a large number of communications, i.e., messages
exchanged between the agents, to reach consensus. This can make D-ADMM costly
in power, latency, and channel resources. In this work we propose unfolded
D-ADMM, which follows the emerging deep unfolding methodology to enable D-ADMM
to operate reliably with a predefined and small number of messages exchanged by
each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while
leveraging data to tune the hyperparameters of each iteration of the algorithm.
These hyperparameters can either be agent-specific, aiming at achieving the
best performance within a fixed number of iterations over a given network, or
shared among the agents, allowing to learn to distributedly optimize over
different networks. For both settings, our unfolded D-ADMM operates with
limited communications, while preserving the interpretability and flexibility
of the original D-ADMM algorithm. We specialize unfolded D-ADMM for two
representative settings: a distributed estimation task, considering a sparse
recovery setup, and a distributed learning scenario, where multiple agents
collaborate in learning a machine learning model. Our numerical results
demonstrate that the proposed approach dramatically reduces the number of
communications utilized by D-ADMM, without compromising on its performance.
|
http://arxiv.org/abs/2309.14353v2
|
The DEVStone benchmark allows us to evaluate the performance of
discrete-event simulators based on the DEVS formalism. It provides model sets
with different characteristics, enabling the analysis of specific issues of
simulation engines. However, this heterogeneity hinders the comparison of the
results across studies, as the results obtained in each research work depend
the chosen subset of DEVStone models. We define the DEVStone metric based on
the DEVStone synthetic benchmark and provide a mechanism for specifying
objective ratings for DEVS-based simulators. This metric corresponds to the
average number of times that a simulator can execute a selection of 12 DEVStone
models in one minute. The variety of the chosen models ensures we measure
different particularities provided by DEVStone. The proposed metric allows us
to compare various simulators and to assess the impact of new features on their
performance. We use the DEVStone metric to compare some popular DEVS-based
simulators.
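The metric described above can be sketched as follows (a hypothetical implementation under the interpretation that the score is the per-model average of executions per minute; the model timings below are illustrative placeholders, not measured values):

```python
def devstone_metric(execution_times_s):
    """Average number of executions per minute across the benchmark models."""
    rates = [60.0 / t for t in execution_times_s]
    return sum(rates) / len(rates)

# Illustrative wall-clock times (seconds) for the 12 DEVStone models,
# as might be measured for some simulator under test.
times = [0.5, 1.2, 3.0, 0.8, 2.5, 6.0, 0.4, 1.0, 4.0, 2.0, 0.6, 1.5]
score = devstone_metric(times)
assert score > 0
```

A higher score indicates a faster simulator, and comparing scores before and after adding a feature quantifies that feature's performance impact.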
|
http://arxiv.org/abs/2309.16544v1
|
There are three types of fragmentation functions (FFs) which are used to
describe the twist-3 cross sections of the hard semi-inclusive processes under
QCD collinear factorization, and they are called intrinsic, kinematical, and
dynamical FFs. In this work, we investigate the theoretical relations among
these FFs for a tensor-polarized spin-1 hadron. Three Lorentz-invariance
relations are derived by using the identities between the nonlocal quark-quark
and quark-gluon-quark operators, which guarantee the frame independence of the
twist-3 spin observables. The QCD equation of motion relations are also
presented for the tensor-polarized FFs. In addition, we show that the intrinsic
and kinematical twist-3 FFs can be decomposed into the contributions of twist-2
FFs and twist-3 three-parton FFs, and the latter are also called dynamical FFs.
If one neglects the dynamical FFs, we can obtain relations which are analogous
to the Wandzura-Wilczek relation. Then, the intrinsic and kinematical twist-3
FFs are expressed in terms of the leading-twist ones. Since the FFs of a spin-1
hadron can be measured at various experimental facilities in the near future,
these theoretical relations will play an important role in the analysis of the
collinear tensor-polarized FFs.
|
http://arxiv.org/abs/2309.06757v2
|
A shortcut to an adiabatic scheme is proposed for preparing a massive object
in a macroscopic spatial superposition state. In this scheme we propose to
employ counterdiabatic driving to maintain the system in the ground state of
its instantaneous Hamiltonian while the trap potential is tuned from a parabola
to a double well. This, in turn, is performed by properly ramping a control
parameter. We show that a few counterdiabatic drives are enough for most
practical cases. A hybrid electromechanical setup in superconducting circuits
is proposed for the implementation. The efficiency of our scheme is benchmarked
by numerically solving the system dynamics in the presence of noise and
imperfections. The results show that spatially distinguishable cat states of a
mechanical resonator can be prepared with very high fidelity using our
protocol. Furthermore, the protocol is robust against noise and
imperfections. We also discuss a method for verifying the final state via
spectroscopy of a coupled circuit electrodynamical cavity mode. Our work can
serve as the groundwork to feasibly realize and verify macroscopic
superposition states in future experiments.
|
http://arxiv.org/abs/2309.06031v2
|
We report the experimental observation of intermittency in a regime dominated
by random shock waves on the surface of a fluid. We achieved such a
nondispersive surface-wave field using a magnetic fluid subjected to a high
external magnetic field. We found that the small-scale intermittency of the
wave-amplitude fluctuations is due to shock waves, leading to much more intense
intermittency than previously reported in three-dimensional hydrodynamic
turbulence or in wave turbulence. The statistical properties of intermittency
are found to be in good agreement with the predictions of a Burgers-like
intermittency model. Such experimental evidence of random shock-wave
intermittency could lead to applications in various fields.
|
http://arxiv.org/abs/2309.16222v1
|
This article presents an interactive system for stage acoustics
experimentation including considerations for hearing one's own and others'
instruments. The quality of real-time auralization systems for psychophysical
experiments on music performance depends on the system's calibration and
latency, among other factors (e.g. visuals, simulation methods, haptics, etc).
The presented system focuses on the acoustic considerations for laboratory
implementations. The calibration is implemented as a set of filters accounting
for the microphone-instrument distances and the directivity factors, as well as
the transducers' frequency responses. Moreover, sources of errors are
characterized using both state-of-the-art information and derivations from the
mathematical definition of the calibration filter. In order to compensate for
hardware latency without cropping parts of the simulated impulse responses, the
virtual direct sound of musicians hearing themselves is skipped from the
simulation and addressed by letting the actual direct sound reach the listener
through open headphones. The required latency compensation of the interactive
part (i.e. hearing others) meets the minimum distance requirement between
musicians, which is 2 m for the implemented system. Finally, a proof of concept
is provided that includes objective and subjective experiments, which give
support to the feasibility of the proposed setup.
|
http://arxiv.org/abs/2309.03149v1
|
The transformer architecture is widely used in machine learning models and
consists of two alternating sublayers: attention heads and MLPs. We prove that
an MLP neuron can be implemented by a masked attention head with internal
dimension 1 so long as the MLP's activation function comes from a restricted
class including SiLU and close approximations of ReLU and GeLU. This allows one
to convert an MLP-and-attention transformer into an attention-only transformer
at the cost of greatly increasing the number of attention heads. We also prove
that attention heads can perform the components of an MLP (linear
transformations and activation functions) separately. Finally, we prove that
attention heads can encode arbitrary masking patterns in their weight matrices
to within arbitrarily small error.
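The core identity behind this construction can be illustrated numerically (a toy sketch, not the paper's exact construction: a softmax over a data score $z$ and a fixed zero score yields $\sigma(z)$, so weighting the value $z$ reproduces $\mathrm{SiLU}(z) = z\,\sigma(z)$):

```python
import numpy as np

def silu(z):
    """SiLU activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

def attention_neuron(x, w):
    """Toy attention head with internal dimension 1 emulating a SiLU neuron.

    The data position contributes score and value z = w.x; a fixed bias
    position contributes score 0 and value 0.  Softmax over the two scores
    weights the value z by sigmoid(z), reproducing SiLU(w.x).
    """
    z = float(np.dot(w, x))
    scores = np.array([z, 0.0])                        # data vs. bias position
    weights = np.exp(scores) / np.exp(scores).sum()    # = [sigmoid(z), 1 - sigmoid(z)]
    values = np.array([z, 0.0])
    return float(weights @ values)                     # = sigmoid(z) * z

x = np.array([0.3, -1.2, 2.0])
w = np.array([1.0, 0.5, -0.25])
assert np.isclose(attention_neuron(x, w), silu(np.dot(w, x)))
```

Stacking one such head per neuron is what drives the growth in the number of attention heads mentioned above.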
|
http://arxiv.org/abs/2309.08593v1
|
Affordable 3D scanners often produce sparse and non-uniform point clouds that
negatively impact downstream applications in robotic systems. While existing
point cloud upsampling architectures have demonstrated promising results on
standard benchmarks, they tend to experience significant performance drops when
the test data have different distributions from the training data. To address
this issue, this paper proposes a test-time adaption approach to enhance model
generality of point cloud upsampling. The proposed approach leverages
meta-learning to explicitly learn network parameters for test-time adaption.
Our method does not require any prior information about the test data. During
meta-training, the model parameters are learned from a collection of
instance-level tasks, each of which consists of a sparse-dense pair of point
clouds from the training data. During meta-testing, the trained model is
fine-tuned with a few gradient updates to produce a unique set of network
parameters for each test instance. The updated model is then used for the final
prediction. Our framework is generic and can be applied in a plug-and-play
manner with existing backbone networks in point cloud upsampling. Extensive
experiments demonstrate that our approach improves the performance of
state-of-the-art models.
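The per-instance adaptation loop described above might be sketched as follows (a toy stand-in with a hypothetical self-supervised least-squares loss; the actual method fine-tunes an upsampling network on each sparse test cloud):

```python
import numpy as np

def finetune(w_meta, x, lr=0.1, steps=3):
    """Per-instance test-time adaptation sketch (all names are illustrative).

    Starting from meta-trained weights w_meta, take a few gradient steps on
    a self-supervised loss computed from this test instance alone -- here a
    toy consistency loss (w.x - 1)^2 -- and return instance-specific weights
    used only for this instance's prediction.
    """
    w = w_meta.copy()
    for _ in range(steps):
        grad = 2.0 * (w @ x - 1.0) * x   # gradient of (w.x - 1)^2
        w = w - lr * grad
    return w

w0 = np.zeros(2)                  # stand-in for meta-trained parameters
x = np.array([1.0, 2.0])          # stand-in for one sparse test instance
w_adapted = finetune(w0, x)

assert abs(w_adapted @ x - 1.0) < 1e-6   # adapted weights fit this instance
assert np.all(w0 == 0.0)                 # meta-trained weights stay untouched
```

The key point mirrored here is that adaptation starts from the shared meta-trained parameters for every instance, so each test point gets its own briefly fine-tuned copy of the model.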
|
http://arxiv.org/abs/2308.16484v2
|
Short-range correlations (SRCs) between nucleons in nuclei are regarded as a
complex system. We investigate the relationship between the orbital
entanglement entropy of SRCs $S_{ij}$ in nuclear structures and Tan contact
$c_{ij}$, and find that the orbital entanglement entropies and Tan contacts
corresponding to proton-proton SRC pairs and neutron-proton SRC pairs in nuclei
demonstrate a scaling relation. More specifically, the proportionality of
entanglement entropy between proton-proton pairs and neutron-proton pairs is
directly related to the ratio of nuclear contacts within the atomic nucleus,
demonstrating an approximate ratio of 2.0. Our research suggests that this
scaling relationship should hold true for all symmetric nuclei; furthermore, we
offer a possible explanation for this phenomenon.
|
http://arxiv.org/abs/2309.05909v2
|
Many computational problems involve optimization over discrete variables with
quadratic interactions. Known as discrete quadratic models (DQMs), these
problems in general are NP-hard. Accordingly, there is increasing interest in
encoding DQMs as quadratic unconstrained binary optimization (QUBO) models to
allow their solution by quantum and quantum-inspired hardware with
architectures and solution methods designed specifically for such problem
types. However, converting DQMs to QUBO models often introduces invalid
solutions to the solution space of the QUBO models. These solutions must be
penalized by introducing appropriate constraints to the QUBO objective function
that are weighted by a tunable penalty parameter to ensure that the global
optimum is valid. However, selecting the strength of this parameter is
non-trivial, given its influence on solution landscape structure. Here, we
investigate the effects of choice of encoding and penalty strength on the
structure of QUBO DQM solution landscapes and their optimization, focusing
specifically on one-hot and domain-wall encodings.
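A minimal example of the one-hot encoding and the penalty-strength effect described above (the energies and penalty values are illustrative; a too-weak penalty lets an invalid, non-one-hot bitstring become the global optimum):

```python
from itertools import product

def qubo_energy(b, h, P):
    """One-hot QUBO energy: linear DQM term plus penalty P * (sum(b) - 1)^2."""
    return sum(hi * bi for hi, bi in zip(h, b)) + P * (sum(b) - 1) ** 2

def brute_force_min(h, P):
    """Exhaustively find the lowest-energy bitstring."""
    K = len(h)
    return min(product((0, 1), repeat=K), key=lambda b: qubo_energy(b, h, P))

h = [0.0, -1.0, -0.5]               # energies of the three discrete values
weak = brute_force_min(h, P=0.2)    # penalty too weak
strong = brute_force_min(h, P=2.0)  # penalty strong enough

assert sum(weak) != 1       # an invalid (non-one-hot) state wins
assert strong == (0, 1, 0)  # the valid one-hot optimum is recovered
```

Raising the penalty restores validity, but, as the abstract notes, it also reshapes the solution landscape, which is why the choice of penalty strength is non-trivial in practice.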
|
http://arxiv.org/abs/2305.00568v3
|
Latent diffusers have revolutionized generative AI and inspired creative art.
When denoising the latent, the predicted original image at each step
collectively animates the formation. However, the animation is limited by the
denoising nature of the diffuser, and only renders a sharpening process. This
work presents Latent Painter, which uses the latent as the canvas, and the
diffuser predictions as the plan, to generate painting animation. Latent
Painter also transitions one generated image into another, which can happen between
images from two different sets of checkpoints.
|
http://arxiv.org/abs/2308.16490v2
|
We present a new publicly available dataset that contains simulated data of a
novel calorimeter to be installed at the CERN Large Hadron Collider. This
detector will have more than six million channels, with each channel capable of
position, ionisation and precision time measurement. Reconstructing these
events in an efficient way poses an immense challenge which is being addressed
with the latest machine learning techniques. As part of this development a
large prototype with 12,000 channels was built and exposed to a beam of
high-energy electrons. Using machine learning methods we have reconstructed
the energy of the incident electrons, which is known to some precision, from
the energies of three-dimensional hits. By releasing this data publicly we hope to
encourage experts in the application of machine learning to develop efficient
and accurate image reconstruction of these electrons.
|
http://arxiv.org/abs/2309.06582v1
|
The elastic response of mechanical, chemical, and biological systems is often
modeled using a discrete arrangement of Hookean springs, either representing
finite material elements or even the molecular bonds of a system. However, to
date, there is no direct derivation of the relation between a general discrete
spring network and its corresponding elastic continuum. Furthermore,
understanding the network's mechanical response requires simulations that may
be expensive computationally. Here we report a method to derive the exact
elastic continuum model of any discrete network of springs, requiring network
geometry and topology only. We identify and calculate the so-called
"non-affine" displacements. Explicit comparison of our calculations to
simulations of different crystalline and disordered configurations shows that we
successfully capture the mechanics even of auxetic materials. Our method is
valid for residually stressed systems with non-trivial geometries, is easily
generalizable to other discrete models, and opens the possibility of a rational
design of elastic systems.
|
http://arxiv.org/abs/2309.07844v4
|
For a group acting on a hyperbolic space, we set up an algorithm in the group
algebra showing that ideals generated by few elements are free, where few is a
function of the minimal displacement of the action, and derive algebraic,
geometric, and topological consequences.
|
http://arxiv.org/abs/2309.16791v1
|
The optimal branch number of MDS matrices has established their importance in
designing diffusion layers for various block ciphers and hash functions. As a
result, numerous matrix structures, including Hadamard and circulant matrices,
have been proposed for constructing MDS matrices. Also, in the literature,
significant attention is typically given to identifying MDS candidates with
optimal implementations or proposing new constructions across different orders.
However, this paper takes a different approach by not emphasizing efficiency
issues or introducing new constructions. Instead, its primary objective is to
enumerate Hadamard MDS and involutory Hadamard MDS matrices of order $4$ within
the field $\mathbb{F}_{2^r}$. Specifically, it provides an explicit formula for
the count of both Hadamard MDS and involutory Hadamard MDS matrices of order
$4$ over $\mathbb{F}_{2^r}$. Additionally, it derives the count of Hadamard
Near-MDS (NMDS) and involutory Hadamard NMDS matrices, each with exactly one
zero in each row, of order $4$ over $\mathbb{F}_{2^r}$. Furthermore, the paper
discusses some circulant-like matrices for constructing NMDS matrices and
proves that when $n$ is even, any $2n \times 2n$ Type-II circulant-like matrix
can never be an NMDS matrix. While it is known that NMDS matrices may be
singular, this paper establishes that singular Hadamard matrices can never be
NMDS matrices. Moreover, it proves that there exist exactly two orthogonal
Type-I circulant-like matrices of order $4$ over $\mathbb{F}_{2^r}$.
|
http://arxiv.org/abs/2310.00090v3
|
Coverage path planning is a fundamental challenge in robotics, with diverse
applications in aerial surveillance, manufacturing, cleaning, inspection,
agriculture, and more. The main objective is to devise a trajectory for an
agent that efficiently covers a given area, while minimizing time or energy
consumption. Existing practical approaches often lack a solid theoretical
foundation, relying on purely heuristic methods, or overly abstracting the
problem to a simple Traveling Salesman Problem in Grid Graphs. Moreover, the
considered cost functions only rarely consider turn cost, prize-collecting
variants for uneven cover demand, or arbitrary geometric regions.
In this paper, we describe an array of systematic methods for handling
arbitrary meshes derived from intricate, polygonal environments. This
adaptation paves the way to compute efficient coverage paths with a robust
theoretical foundation for real-world robotic applications. Through
comprehensive evaluations, we demonstrate that the algorithm also exhibits low
optimality gaps, while efficiently handling complex environments. Furthermore,
we showcase its versatility in handling partial coverage and accommodating
heterogeneous passage costs, offering the flexibility to trade off coverage
quality and time efficiency.
|
http://arxiv.org/abs/2310.20340v1
|
Transformative changes in our production and consumption habits are needed to
enable the sustainability transition towards carbon neutrality, no net loss of
biodiversity, and planetary well-being. Organizations are the way we humans
have organized our everyday life, and much of our negative environmental
impacts, also called carbon and biodiversity footprints, are caused by
organizations. Here we show how the financial accounts of any organization can
be exploited to develop an integrated carbon and biodiversity footprint
account. As a metric we utilize spatially explicit potential global loss of
species which, we argue, can be understood as the biodiversity equivalent, the
utility of which for biodiversity is similar to what carbon dioxide equivalent
is for climate. We provide a global Biodiversity Footprint Database that
organizations, experts and researchers can use to assess consumption-based
biodiversity footprints. We also argue that the current integration of
financial and environmental accounting is superficial, and provide a framework
for a more robust financial value-transforming accounting model. To test the
methodologies, we utilized a Finnish university as a living lab. Assigning an
offsetting cost to the footprints significantly altered the financial value of
the organization. We believe such value-transforming accounting is needed in
order to draw the attention of senior executives and investors to the negative
environmental impacts of their organizations.
|
http://arxiv.org/abs/2309.14186v1
|
The emergence of cryptoassets has sparked a paradigm shift in the world of
finance and investment, ushering in a new era of digital assets with profound
implications for the future of currency and asset management. A recent study
showed that during the bubble period around the year 2018, the price of the
cryptoasset XRP had a strong anti-correlation with the largest singular values
of the correlation tensors obtained from the weekly XRP transaction networks.
In this study, we provide a detailed analysis of the method of correlation
tensor spectra for XRP transaction networks. We calculate and compare the
distribution of the largest singular values of the correlation tensor using the
random matrix theory with the largest singular values of the empirical
correlation tensor. We investigate the correlation between the XRP price and
the largest singular values for a period spanning two years. We also uncover
the distinct dependence between XRP price and the singular values for bubble
and non-bubble periods. The significance of time evolution of singular values
is shown by comparison with the evolution of singular values of the reshuffled
correlation tensor. Furthermore, we identify a set of driver nodes in the
transaction networks that drives the market during the bubble period using the
singular vectors.
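The comparison with a reshuffled baseline can be sketched in a simplified matrix (rather than tensor) setting; all data below are synthetic stand-ins for the weekly transaction networks:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 30
factor = rng.standard_normal(T)
# Node time series share a common "market" factor plus idiosyncratic noise,
# a toy stand-in for correlated transaction-network link weights.
X = 0.7 * factor[:, None] + rng.standard_normal((T, N))
C = np.corrcoef(X, rowvar=False)
s_emp = np.linalg.svd(C, compute_uv=False)[0]   # largest singular value

# Independently reshuffling each series in time destroys cross-correlations,
# giving a random-matrix baseline for the largest singular value.
Xs = np.column_stack([rng.permutation(X[:, i]) for i in range(N)])
s_shuf = np.linalg.svd(np.corrcoef(Xs, rowvar=False), compute_uv=False)[0]

assert s_emp > 2 * s_shuf
```

The gap between the empirical and reshuffled largest singular values is what signals genuine collective structure, mirroring the comparison with the reshuffled correlation tensor made in the abstract.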
|
http://arxiv.org/abs/2309.05935v1
|
Quantum systems are inherently open and susceptible to environmental noise,
which can have both detrimental and beneficial effects on their dynamics. This
phenomenon has been observed in bio-molecular systems, where noise enables
novel functionalities, making the simulation of their dynamics a crucial target
for digital and analog quantum simulation. Nevertheless, the computational
capabilities of current quantum devices are often limited due to their inherent
noise. In this work, we present a novel approach that capitalizes on the
intrinsic noise of quantum devices to reduce the computational resources
required for simulating open quantum systems. Our approach combines quantum
noise characterization methods with quantum error mitigation techniques,
enabling us to manipulate and control the intrinsic noise in a quantum circuit.
Specifically, we selectively enhance or reduce decoherence rates in the quantum
circuit to achieve the desired simulation of open system dynamics. We provide a
detailed description of our methods and report on the results of noise
characterization and quantum error mitigation experiments conducted on both
real and emulated IBM Quantum computers. Additionally, we estimate the
experimental resource requirements for our techniques. Our approach holds the
potential to unlock new simulation techniques in Noisy Intermediate-Scale
Quantum (NISQ) devices, harnessing their intrinsic noise to enhance quantum
computations.
|
http://arxiv.org/abs/2302.14592v3
|
Contagion processes, representing the spread of infectious diseases,
information, or social behaviors, are often schematized as taking place on
networks, which encode for instance the interactions between individuals. The
impact of the network structure on the spreading process has been widely
investigated, but not the reverse question: do different processes unfolding on
a given network lead to different infection patterns? How do the infection
patterns depend on a model's parameters or on the nature of the contagion
processes? Here we address this issue by investigating the infection patterns
for a variety of models. In simple contagion processes, where contagion events
involve one connection at a time, we find that the infection patterns are
extremely robust across models and parameters. In complex contagion models
instead, in which multiple interactions are needed for a contagion event,
non-trivial dependencies on model parameters emerge, as the infection pattern
depends on the interplay between pairwise and group contagions. In models
involving threshold mechanisms moreover, slight parameter changes can
significantly impact the spreading paths. Our results show that it is possible
to study crucial features of a spread from schematized models, and they inform
us about how spreading patterns vary across processes of different nature.
|
http://arxiv.org/abs/2309.10486v2
|
This paper presents an overview of methods for mitigating radio frequency
interference (RFI) in radio science data. The primary purpose of mitigation is
to assist observatories in taking useful data outside frequency bands
allocated to the Science Services (RAS and EESS): mitigation should not be
needed within
Passive bands. Mitigation methods may be introduced at a variety of points
within the data acquisition system in order to lessen the RFI intensity and to
limit the damage it does. These methods range from proactive approaches that
change the local RFI environment through regulatory means, to pre- and
post-detection methods, to various pre-processing methods, and to methods
applied during or after processing.
|
http://arxiv.org/abs/2302.14586v1
|
Blockchain systems often rely on rationality assumptions for their security,
expecting that nodes are motivated to maximize their profits. These systems
thus design their protocols to incentivize nodes to execute the honest protocol
but fail to consider out-of-band collusion. Existing works analyzing
rationality assumptions are limited in their scope, either by focusing on a
specific protocol or relying on non-existing financial instruments. We propose
a general rational attack on rationality by leveraging an external channel that
incentivizes nodes to collude against the honest protocol. Our approach
involves an attacker creating an out-of-band bribery smart contract to motivate
nodes to double-spend their transactions in exchange for shares in the
attacker's profits. We provide a game theory model to prove that any rational
node is incentivized to follow the malicious protocol. We discuss our approach
to attacking the Bitcoin and Ethereum blockchains, demonstrating that
irrational behavior can be rational in real-world blockchain systems when
analyzing rationality in a larger ecosystem. We conclude that rationality
assumptions only appear to make the system more secure and offer a false sense
of security under the flawed analysis.
|
http://arxiv.org/abs/2305.00554v1
|
The study of dynamics of single active particles plays an important role in
the development of artificial or hybrid micro-systems for bio-medical and other
applications at micro-scale. Here, we utilize the results of these studies to
better understand their implications for the specific application of drug
delivery. We analyze the variations in the capture efficiency for different
types of motion dynamics without inter-particle interactions and compare the
results. We also discuss the reasons for these variations and describe the specific
parameters that affect the capture efficiency, which in turn helps in both
hardware and control design of a micro-bot swarm system for drug delivery.
|
http://arxiv.org/abs/2306.17578v1
|
The algebraic Joker module was originally described in the 1970s by Adams and
Priddy and is a $5$-dimensional module over the subHopf algebra
$\mathcal{A}(1)$ of the mod $2$ Steenrod algebra. It is a self-dual endotrivial
module, i.e., an invertible object in the stable module category of
$\mathcal{A}(1)$. Recently it has been shown that no analogues exist for
$\mathcal{A}(n)$ with $n>1$. Iterated doubling nevertheless yields an iterated
double which is an $\mathcal{A}(n)$-module but not stably invertible.
In previous work the author showed that for $n=1,2,3$ these iterated doubles
were realisable as cohomology of CW spectra, but no such realisation existed
for $n>3$.
The main point of the paper is to show that in the height $2$ chromatic
context, the Morava $K$-theory of double Jokers realise an exceptional
endotrivial module over the quaternion group of order $8$ that only exists over
a field of characteristic $2$ containing a primitive cube root of unity. This
has connections with certain Massey products in the cohomology of the
quaternion group.
|
http://arxiv.org/abs/2309.05921v4
|
Quantum computers have a potential for solving quantum chemistry problems
with higher accuracy than classical computers. Quantum computing quantum Monte
Carlo (QC-QMC) is a QMC method with a trial state prepared in a quantum
circuit, which is employed to obtain the ground state with higher accuracy than
QMC alone. We propose an algorithm combining QC-QMC with a hybrid tensor
network to extend the applicability of QC-QMC beyond the size of a single
quantum device. In a two-layer quantum-quantum tree tensor network, our
algorithm can execute a larger trial wave function than can be prepared on a
single device. Our algorithm is
evaluated on the Heisenberg chain model, graphite-based Hubbard model, hydrogen
plane model, and MonoArylBiImidazole using full configuration interaction QMC.
Our algorithm can achieve energy accuracy (specifically, variance) several
orders of magnitude higher than QMC, and the hybrid tensor version of QMC gives
the same energy accuracy as QC-QMC when the system is appropriately decomposed.
Moreover, we develop a pseudo-Hadamard test technique that enables efficient
overlap calculations between a trial wave function and an orthonormal basis
state. In a real device experiment by using the technique, we obtained almost
the same accuracy as the statevector simulator, indicating the noise robustness
of our algorithm. These results suggests that the present approach will pave
the way to electronic structure calculation for large systems with high
accuracy on current quantum devices.
|
http://arxiv.org/abs/2303.18095v3
|
We perform three-dimensional numerical simulations to understand the role of
viscous fingering in sweeping a high-viscous fluid (HVF). These fingers form
due to the injection of a low-viscous fluid (LVF) into a porous media
containing the high-viscous fluid. We find that the sweeping of HVF depends on
different parameters such as the Reynolds number ($Re$) based on the inflow
rate of the LVF, the P\'eclet number ($Pe$), and the logarithmic viscosity
ratio of HVF and LVF, $\mathfrak{R}$. At high values of $Re$, $Pe$, and
$\mathfrak{R}$, the fingers grow non-linearly, resulting in earlier tip
splitting of the fingers and breakthrough, further leading to poor sweeping of
the HVF. In contrast, the fingers evolve uniformly at low values of $Re$, $Pe$,
and $\mathfrak{R}$, resulting in an efficient sweeping of the HVF. We also
estimate the sweep efficiency and conclude that the parameters $Re$, $Pe$ and
$\mathfrak{R}$ must be chosen optimally to minimize the non-linear growth of the
fingers to achieve an efficient sweeping of the HVF.
|
http://arxiv.org/abs/2305.19763v1
|
Solid-state atomic defects with optical transitions in the telecommunication
bands, potentially in a nuclear spin free environment, are important for
applications in fiber-based quantum networks. Erbium ions doped in CeO$_2$
offer such a desired combination. Here we report on the optical homogeneous
linewidth and electron spin coherence of Er$^{3+}$ ions doped in CeO$_2$
epitaxial film grown on a Si(111) substrate. The long-lived optical transition
near 1530 nm in the environmentally-protected 4f shell of Er$^{3+}$ shows a
narrow homogeneous linewidth of 440 kHz with an optical coherence time of 0.72
$\mu$s at 3.6 K. The reduced nuclear spin noise in the host allows for
Er$^{3+}$ electron spin polarization at 3.6 K, yielding an electron spin
coherence of 0.66 $\mu$s (in the isolated ion limit) and a spin relaxation of
2.5 ms. These findings indicate the potential of Er$^{3+}$:CeO$_2$ film as a
valuable platform for quantum networks and communication applications.
|
http://arxiv.org/abs/2309.16785v1
|
Fully Homomorphic Encryption (FHE) enables the processing of encrypted data
without decrypting it. FHE has garnered significant attention over the past
decade as it supports secure outsourcing of data processing to remote cloud
services. Despite its promise of strong data privacy and security guarantees,
FHE introduces a slowdown of up to five orders of magnitude as compared to the
same computation using plaintext data. This overhead is presently a major
barrier to the commercial adoption of FHE.
In this work, we leverage GPUs to accelerate FHE, capitalizing on a
well-established GPU ecosystem available in the cloud. We propose GME, which
combines three key microarchitectural extensions along with a compile-time
optimization to the current AMD CDNA GPU architecture. First, GME integrates a
lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain
ciphertext in cache across FHE kernels, thus eliminating redundant memory
transactions. Second, to tackle compute bottlenecks, GME introduces special
MOD-units that provide native custom hardware support for modular reduction
operations, one of the most commonly executed sets of operations in FHE. Third,
by integrating the MOD-unit with our novel pipelined $64$-bit integer
arithmetic cores (WMAC-units), GME further accelerates FHE workloads by $19\%$.
Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the
temporal locality available in FHE primitive blocks. Incorporating these
microarchitectural features and compiler optimizations, we create a synergistic
approach achieving average speedups of $796\times$, $14.2\times$, and
$2.3\times$ over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA
implementations, respectively.
|
http://arxiv.org/abs/2309.11001v1
|
In this paper, we investigate the asymptotic behavior of the non-simple
systole, which is the length of a shortest non-simple closed geodesic, on a
random closed hyperbolic surface on the moduli space $\mathcal{M}_g$ of Riemann
surfaces of genus $g$ endowed with the Weil-Petersson measure. We show that as
the genus $g$ goes to infinity, the non-simple systole of a generic hyperbolic
surface in $\mathcal{M}_g$ behaves exactly like $\log g$.
|
http://arxiv.org/abs/2308.16447v1
|
We prove that a free boundary curve shortening flow on closed surfaces with a
strictly convex boundary remains noncollapsed for a finite time in the sense of
the reflected chord-arc profile introduced by Langford-Zhu. This shows that
such a flow either converges to a free boundary embedded geodesic in infinite
time or shrinks to a round half-point on the boundary. As a consequence, we
prove the existence of two free boundary embedded geodesics on a Riemannian
$2$-disk with a strictly convex boundary. Moreover, we prove that there exist
simple closed geodesics with Morse index $1$ and $2$. This settles the free
boundary
analog of Grayson's theorem.
|
http://arxiv.org/abs/2309.09896v2
|
In this paper, we propose a language-universal adapter learning framework
based on a pre-trained model for end-to-end multilingual automatic speech
recognition (ASR). For acoustic modeling, the wav2vec 2.0 pre-trained model is
fine-tuned by inserting language-specific and language-universal adapters. An
online knowledge distillation is then used to enable the language-universal
adapters to learn both language-specific and universal features. The linguistic
information confusion is also reduced by leveraging language identifiers
(LIDs). With LIDs we perform a position-wise modification on the multi-head
attention outputs. In the inference procedure, the language-specific adapters
are removed while the language-universal adapters are kept activated. The
proposed method improves the recognition accuracy and addresses the linear
increase in the number of adapter parameters with the number of languages in
common multilingual ASR systems. Experiments on the BABEL dataset confirm the
effectiveness of the proposed framework. Compared to the conventional
multilingual model, a 3.3% absolute error rate reduction is achieved. The code
is available at: https://github.com/shen9712/UniversalAdapterLearning.
|
http://arxiv.org/abs/2303.01249v1
|
Simulation of autonomous vehicle systems requires that simulated traffic
participants exhibit diverse and realistic behaviors. The use of prerecorded
real-world traffic scenarios in simulation ensures realism but the rarity of
safety critical events makes large scale collection of driving scenarios
expensive. In this paper, we present DJINN - a diffusion based method of
generating traffic scenarios. Our approach jointly diffuses the trajectories of
all agents, conditioned on a flexible set of state observations from the past,
present, or future. On popular trajectory forecasting datasets, we report state
of the art performance on joint trajectory metrics. In addition, we demonstrate
how DJINN flexibly enables direct test-time sampling from a variety of valuable
conditional distributions including goal-based sampling, behavior-class
sampling, and scenario editing.
|
http://arxiv.org/abs/2309.12508v2
|
Variational quantum algorithms are tailored to perform within the constraints
of current quantum devices, yet they are limited by performance-degrading
errors. In this study, we consider a noise model that reflects realistic gate
errors inherent to variational quantum algorithms. We investigate the
decoherence of a variationally prepared quantum state due to this noise model,
which causes a deviation from the energy estimation in the variational
approach. By performing a perturbative analysis of optimized circuits, we
determine the noise threshold at which the criteria set by the stability lemma
are met. We assess our findings against the variational quantum eigensolver and
quantum approximate optimization algorithm for various problems with up to 14
qubits. Moreover, we show that certain gate errors have a significantly smaller
impact on the coherence of the state, allowing us to reduce the execution time
without compromising performance.
|
http://arxiv.org/abs/2301.00048v3
|
This paper evaluates the sustainability of Advanced Air Mobility (AAM) in
urban and regional mobility, using Paris as a case study. Paris is committed to
eco-friendly transportation and has introduced AAM, including electric Vertical
Take-Off and Landing (eVTOL) air taxis for the 2024 Olympic Games. We assess
eVTOL energy consumption and CO$_2$ emissions on urban and regional routes,
comparing them with cars, public transport, and helicopters. Urban eVTOLs save
around 23 minutes over cars and 22 minutes over public transport on 50 km
routes. For regional routes (300 km), eVTOLs save 76 minutes over cars and 69
minutes over trains. However, eVTOLs' eco-friendliness depends on context. In
urban areas, they consume more energy than electric cars, but beat traditional
helicopters by 47%. For regional travel, eVTOLs outperform helicopters and some
cars but lag behind electric vehicles and trains. To maximize AAM's
sustainability in Paris, stakeholders must consider real-world operations and
integrate eVTOLs into the broader transportation system. This approach can lead
to greener urban and regional transportation.
|
http://arxiv.org/abs/2310.01417v1
|
Over the past few years, deep learning has been getting progressively more
popular for the exploitation of side-channel vulnerabilities in embedded
cryptographic applications, as it offers advantages in terms of the amount of
attack traces required for effective key recovery. A number of effective
attacks using neural networks have already been published, but reducing their
cost in terms of the amount of computing resources and data required is an
ever-present goal, which we pursue in this work. We focus on the ANSSI
Side-Channel Attack Database (ASCAD), and produce a JAX-based framework for
deep-learning-based SCA, with which we reproduce a selection of previous
results and build upon them in an attempt to improve their performance. We also
investigate the effectiveness of various Transformer-based models.
|
http://arxiv.org/abs/2309.13170v1
|
This work makes progress on the issue of global- vs. local- master equations.
Global master equations like the Redfield master equation (following from
standard Born- and Markov- approximation) require a full diagonalization of the
system Hamiltonian. This is especially challenging for interacting quantum
many-body systems. We discuss a short-bath-correlation-time expansion in
reciprocal (energy) space, leading to a series expansion of the jump operator,
which avoids a diagonalization of the Hamiltonian. For a bath that is coupled
locally to one site, this typically leads to an expansion of the global
Redfield jump operator in terms of local operators. We additionally map the
local Redfield master equation to a novel local Lindblad form, giving an
equation which has the same conceptual advantages as traditional local Lindblad
approaches, while being applicable in a much broader class of systems. Our
ideas give rise to a non-heuristic foundation of local master equations, which
can be combined with established many-body methods.
|
http://arxiv.org/abs/2309.07105v3
|
Catastrophic forgetting of previous knowledge is a critical issue in
continual learning typically handled through various regularization strategies.
However, existing methods struggle especially when several incremental steps
are performed. In this paper, we extend our previous approach (RECALL) and
tackle forgetting by exploiting unsupervised web-crawled data to retrieve
examples of old classes from online databases. In contrast to the original
methodology, which did not incorporate an assessment of web-based data, the
present work proposes two advanced techniques: an adversarial approach and an
adaptive threshold strategy. These methods are utilized to meticulously choose
samples from web data that exhibit strong statistical congruence with the no
longer available training data. Furthermore, we improved the pseudo-labeling
scheme to achieve a more accurate labeling of web data that also considers
classes being learned in the current step. Experimental results show that this
enhanced approach achieves remarkable results, particularly when the
incremental scenario spans multiple steps.
|
http://arxiv.org/abs/2309.10479v2
|
The interest in studying quantum mechanics is steadily increasing in our
society and schools. In schools especially, this leads researchers to implement
suitable actions to meet the social demand for knowledge of quantum physics. We
present an online laboratory on wave-particle duality for high school students
(17-19 years old). The activity was carried out between December 2021 and May
2022 at the Physics Department of the University of Cagliari, and more than 100
students from different high schools in Sardinia were involved. We show the
design of the activity and the experiments performed, and we qualitatively
discuss the results of a satisfaction questionnaire, together with a brief
discussion of motivational issues.
|
http://arxiv.org/abs/2301.13752v1
|
Non-contact Tonometry (NCT) is a non-invasive ophthalmologic technique to
measure intraocular pressure (IOP) using an air puff for routine glaucoma
testing. Although IOP measurement using NCT has been perfected over many years,
various phenomenological aspects of interfacial physics, fluid structure
interaction, waves on corneal surface, and pathogen transmission routes to name
a few are inherently unexplored. Research investigating the interdisciplinary
physics of the ocular biointerface and of the NCT procedure is sparse and hence
remains to be explored in sufficient depth. In this perspective piece, we
introduce NCT and propose future research prospects that can be undertaken for
a better understanding of the various hydrodynamic processes that occur during
NCT from a pathogen transmission viewpoint. In particular, the research
directions include the characterization and measurement of the incoming air
puff, understanding the complex fluid-solid interactions occurring between the
air puff and the human eye for measuring IOP, investigating the various waves
that form and travel; tear film breakup and subsequent droplet formation
mechanisms at various spatiotemporal length scales. Further, from an ocular
disease transmission perspective, the disintegration of the tear film into
droplets and aerosols poses a potential pathogen transmission route during NCT
for pathogens residing in the nasolacrimal and nasopharynx pathways. Adequate
precautions by ophthalmologists and medical practitioners are therefore necessary
to conduct the IOP measurements in a clinically safer way to prevent the risk
associated with pathogen transmission from ocular diseases like conjunctivitis,
keratitis and COVID-19 during the NCT procedure.
|
http://arxiv.org/abs/2309.08236v1
|
We introduce a clipping strategy for Stochastic Gradient Descent (SGD) which
uses quantiles of the gradient norm as clipping thresholds. We prove that this
new strategy provides a robust and efficient optimization algorithm for smooth
objectives (convex or non-convex), that tolerates heavy-tailed samples
(including infinite variance) and a fraction of outliers in the data stream
akin to Huber contamination. Our mathematical analysis leverages the connection
between constant step size SGD and Markov chains and handles the bias
introduced by clipping in an original way. For strongly convex objectives, we
prove that the iterates converge to a concentrated distribution and derive
high probability bounds on the final estimation error. In the non-convex case,
we prove that the limit distribution is localized on a neighborhood with low
gradient. We propose an implementation of this algorithm using rolling
quantiles which leads to a highly efficient optimization procedure with strong
robustness properties, as confirmed by our numerical experiments.
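The rolling-quantile clipping idea can be sketched in a few lines. The following is an illustrative toy implementation, not the paper's exact algorithm: the function name, step size, quantile level, window length, and the heavy-tailed test objective are all our own assumptions.

```python
import numpy as np

def quantile_clip_sgd(grad_fn, x0, steps=500, lr=0.05, q=0.9, window=50, seed=0):
    """SGD where the clipping threshold is a rolling quantile of recently
    observed gradient norms (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    norms = []  # history of observed gradient norms
    for _ in range(steps):
        g = grad_fn(x, rng)
        n = np.linalg.norm(g)
        norms.append(n)
        # clipping threshold: q-quantile of the last `window` norms
        tau = np.quantile(norms[-window:], q)
        if n > tau and n > 0:
            g = g * (tau / n)  # rescale heavy-tailed gradients down to tau
        x = x - lr * g
    return x

# Toy strongly convex objective f(x) = ||x||^2 / 2 with infinite-variance
# (Student-t, df=1.5) gradient noise, which plain SGD handles poorly.
def noisy_grad(x, rng):
    return x + rng.standard_t(df=1.5, size=x.shape)

x_hat = quantile_clip_sgd(noisy_grad, x0=np.ones(5) * 10.0)
```

Because the threshold adapts to the recent norm distribution rather than being a fixed constant, occasional enormous gradient samples are shrunk to the scale of typical ones, which is the mechanism behind the robustness to heavy tails and outliers described above.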
|
http://arxiv.org/abs/2309.17316v1
|
We formulate measures of spin ordering in the $q$-state ferromagnetic Potts
model in a generalized external magnetic field that favors or disfavors spin
values in a subset $I_s = \{1,...,s\}$ of the total set of $q$ values. The
results are contrasted with the corresponding measures of spin ordering in the
case of a conventional external magnetic field that favors or disfavors a
single spin value out of the total set of $q$ values. Some illustrative
calculations are included.
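For concreteness, here is a minimal sketch of one plausible such measure. The normalization below is our own illustrative generalization of the usual Potts order parameter $(q\rho_{\max}-1)/(q-1)$ to a favored subset $I_s$; the paper's precise measures may differ.

```python
import numpy as np

def potts_order_parameter(spins, q, s):
    """Order parameter for a q-state Potts model in a generalized field
    favoring spin values in I_s = {1, ..., s}: the excess fraction of
    spins in the favored subset, normalized so a fully disordered state
    gives 0 and full ordering into I_s gives 1 (illustrative choice)."""
    spins = np.asarray(spins)
    rho_s = np.mean((spins >= 1) & (spins <= s))  # fraction of spins in I_s
    return (q * rho_s - s) / (q - s)

ordered = potts_order_parameter(np.ones(100, dtype=int), q=3, s=1)       # fully ordered
disordered = potts_order_parameter(np.array([1, 2, 3] * 100), q=3, s=1)  # uniform spins
```

Setting $s = 1$ recovers a measure for a conventional field favoring a single spin value, matching the contrast drawn in the abstract.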
|
http://arxiv.org/abs/2301.13746v1
|
The Lyman-$\alpha$ (Ly$\alpha$) three-dimensional correlation functions have
been widely used to perform cosmological inference using the baryon acoustic
oscillation (BAO) scale. While the traditional inference approach employs a
data vector with several thousand data points, we apply near-maximal score
compression down to tens of compressed data elements. We show that carefully
constructed additional data beyond those linked to each inferred model
parameter are required to preserve meaningful goodness-of-fit tests that guard
against unknown systematics, and to avoid information loss due to non-linear
parameter dependencies. We demonstrate, on suites of realistic mocks and DR16
data from the Extended Baryon Oscillation Spectroscopic Survey, that our
compression approach is lossless and unbiased, yielding a posterior that is
indistinguishable from that of the traditional analysis. As an early
application, we investigate the impact of a covariance matrix estimated from a
limited number of mocks, which is only well-conditioned in compressed space.
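The core of linear score compression can be illustrated on a toy Gaussian linear model. Everything below (the model `d = A @ theta + noise`, the identity covariance, the dimensions) is a simplified assumption for illustration, not the paper's Ly$\alpha$ pipeline: the compressed vector `t` has one element per parameter, yet retains enough information to recover the parameters.

```python
import numpy as np

# Toy model: data d = A @ theta + Gaussian noise. Score compression maps
# the n-dimensional data vector to p numbers via t = A.T @ Cinv @ d,
# which is lossless for Gaussian data with parameter-independent covariance.
rng = np.random.default_rng(0)
n, p = 1000, 2                       # data points, parameters
A = rng.normal(size=(n, p))          # linear response of the model
theta_true = np.array([1.5, -0.7])
C = np.eye(n)                        # known data covariance
d = A @ theta_true + rng.normal(size=n)

Cinv = np.linalg.inv(C)
t = A.T @ Cinv @ d                   # compressed data: p numbers
F = A.T @ Cinv @ A                   # Fisher matrix
theta_hat = np.linalg.solve(F, t)    # estimate from compressed data only
```

The abstract's point is that in realistic, non-linear settings a few extra compressed elements beyond these `p` are needed to keep goodness-of-fit tests meaningful; this sketch shows only the baseline lossless case.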
|
http://arxiv.org/abs/2309.13164v2
|
6G promises a paradigm shift in which positioning and sensing are inherently
integrated, enhancing not only the communication performance but also enabling
location- and context-aware services. Historically, positioning and sensing
have been viewed through the lens of cost and performance trade-offs, implying
an escalated demand for resources, such as radio, physical, and computational
resources, for improved performance. However, 6G goes beyond this traditional
perspective to encompass a set of broader values, namely sustainability,
inclusiveness, and trustworthiness. From a joint industrial/academic
perspective, this paper aims to shed light on these important value indicators
and their relationship with the conventional key performance indicators in the
context of positioning and sensing.
|
http://arxiv.org/abs/2309.13602v2
|
Deep Kernel Learning (DKL) combines the representational power of neural
networks with the uncertainty quantification of Gaussian Processes. Hence, it
is potentially a promising tool to learn and control complex dynamical systems.
In this work, we develop a scalable abstraction-based framework that enables
the use of DKL for control synthesis of stochastic dynamical systems against
complex specifications. Specifically, we consider temporal logic specifications
and create an end-to-end framework that uses DKL to learn an unknown system
from data and formally abstracts the DKL model into an Interval Markov Decision
Process (IMDP) to perform control synthesis with correctness guarantees.
Furthermore, we identify a deep architecture that enables accurate learning and
efficient abstraction computation. The effectiveness of our approach is
illustrated on various benchmarks, including a 5-D nonlinear stochastic system,
showing how control synthesis with DKL can substantially outperform
state-of-the-art competitive methods.
|
http://arxiv.org/abs/2309.06569v2
|
Consider the moduli space, $\mathcal{M}_{3},$ of cubic polynomials over
$\mathbb{C}$, with a marked critical point. Let $\mathscr{S}_{k,n}$ be the set
of all points in $\mathcal{M}_{3}$ for which the marked critical point is
strictly $(k,n)$-preperiodic. Milnor conjectured that the affine algebraic
curves $\mathscr{S}_{k,n}$ are irreducible, for all $k \geq 0, n>0$. In this
article, we show the irreducibility of eventually $2$-periodic curves, i.e.
$\mathscr{S}_{k,2},\; k\geq 0$ curves. We also note that the curves,
$\mathscr{S}_{k,2},\; k\geq 0$, exhibit a possible splitting-merging phenomenon
that has not been observed in earlier studies of $\mathscr{S}_{k,n}$ curves.
Finally, using the irreducibility of $\mathscr{S}_{k,2}$ curves, we give a new
and short proof of Galois conjugacy of unicritical points lying on
$\mathscr{S}_{k,2}$, for even natural numbers $k$.
|
http://arxiv.org/abs/2305.19944v2
|
We examine properties of the mean-field wave function of the one-dimensional
Kitaev model supporting Majorana Zero Modes (MZMs) \emph{when restricted} to a
fixed number of particles. Such wave functions can in fact be realized as exact
ground states of interacting number-conserving Hamiltonians and amount to a
more realistic description of the finite isolated superconductors. Akin to
their mean-field parent, the fixed-number wave functions encode a single
electron spectral function at zero energy that decays exponentially away from
the edges, with a localization length that agrees with the mean-field value.
Based purely on the structure of the number-projected ground states, we
construct the fixed particle number generalization of the MZM operators. They
can be used to compute the edge tunneling conductance; however, notably the
value of the zero-bias conductance remains the same as in the mean-field case,
quantized to $2e^2/h$. We also compute the topological entanglement entropy for
the number-projected wave functions and find that it contains a `robust'
$\log(2)$ component as well as a logarithmic correction to the mean field
result, which depends on the precise partitioning used to compute it. The
presence of the logarithmic term in the entanglement entropy indicates the
absence of a spectral gap above the ground state; as one introduces
fluctuations in the number of particles, the correction vanishes smoothly.
|
http://arxiv.org/abs/2309.00118v1
|
This paper presents a novel evaluation approach to text-based speaker
diarization (SD), tackling the limitations of traditional metrics that do not
account for any contextual information in text. Two new metrics are proposed,
Text-based Diarization Error Rate and Diarization F1, which perform utterance-
and word-level evaluations by aligning tokens in reference and hypothesis
transcripts. Our metrics encompass more types of errors compared to existing
ones, allowing us to make a more comprehensive analysis in SD. To align tokens,
a multiple sequence alignment algorithm is introduced that supports multiple
sequences in the reference while handling high-dimensional alignment to the
hypothesis using dynamic programming. Our work is packaged into two tools,
align4d providing an API for our alignment algorithm and TranscribeView for
visualizing and evaluating SD errors, which can greatly aid in the creation of
high-quality data, fostering the advancement of dialogue systems.
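A toy version of word-level, alignment-based diarization scoring can clarify the idea: align reference and hypothesis tokens, then count aligned words whose speaker labels disagree plus unmatched reference words. This is a simplified pairwise stand-in for the paper's multiple sequence alignment (which handles several reference sequences with dynamic programming); function and variable names are illustrative.

```python
from difflib import SequenceMatcher

def text_der(ref_words, ref_spk, hyp_words, hyp_spk):
    """Toy word-level diarization error rate: align tokens pairwise, then
    count aligned words with mismatched speaker labels, plus reference
    words left unmatched by the alignment."""
    sm = SequenceMatcher(a=ref_words, b=hyp_words, autojunk=False)
    errors = 0
    matched_ref = set()
    for block in sm.get_matching_blocks():
        for k in range(block.size):
            i, j = block.a + k, block.b + k
            matched_ref.add(i)
            if ref_spk[i] != hyp_spk[j]:
                errors += 1  # right word, wrong speaker attribution
    errors += len(ref_words) - len(matched_ref)  # deleted/unmatched words
    return errors / len(ref_words)

ref_w = "hi there how are you".split()
ref_s = ["A", "A", "B", "B", "B"]
hyp_w = "hi there how are you".split()
hyp_s = ["A", "A", "A", "B", "B"]  # one word attributed to the wrong speaker
der = text_der(ref_w, ref_s, hyp_w, hyp_s)  # 1 of 5 words misattributed -> 0.2
```

Unlike time-based DER, this kind of metric charges errors at the token level, which is what lets it capture word-level speaker confusions that purely temporal scoring overlooks.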
|
http://arxiv.org/abs/2309.07677v1
|
We consider a mathematical model which describes the quasistatic frictionless
contact of a viscoelastic body with a rigid-plastic foundation. We describe the
mechanical assumptions, list the hypotheses on the data and provide three
different variational formulations of the model in which the unknowns are the
displacement field, the stress field and the strain field, respectively. These
formulations have a different structure. Nevertheless, we prove that they are
pairwise dual of each other. Then, we deduce the unique weak solvability of the
contact problem as well as the Lipschitz continuity of its weak solution with
respect to the data. The proofs are based on recent results on
history-dependent variational inequalities and inclusions. Finally, we present
numerical simulations in the study of the contact problem, together with the
corresponding mechanical interpretations.
|
http://arxiv.org/abs/2309.04356v1
|
We propose personalized Tucker decomposition (perTucker) to address the
limitations of traditional tensor decomposition methods in capturing
heterogeneity across different datasets. perTucker decomposes tensor data into
shared global components and personalized local components. We introduce a mode
orthogonality assumption and develop a proximal gradient regularized block
coordinate descent algorithm that is guaranteed to converge to a stationary
point. By learning unique and common representations across datasets, we
demonstrate perTucker's effectiveness in anomaly detection, client
classification, and clustering through a simulation study and two case studies
on solar flare detection and tonnage signal classification.
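The shared-global / personalized-local split that perTucker formalizes for tensors can be sketched with a deliberately crude stand-in: take the cross-client elementwise mean as the "global" component and each client's residual as its "local" component. The actual method instead learns mode-orthogonal Tucker factors with a proximal-gradient block coordinate descent; the data below is invented for illustration.

```python
# Toy global/local decomposition: global = cross-client mean,
# local = per-client residual. A client with a large local norm
# stands out, illustrating the anomaly-detection use case.
clients = {
    "A": [1.0, 2.0, 3.0],
    "B": [1.2, 2.1, 2.9],
    "C": [5.0, 2.0, 3.1],   # anomalous client (first entry is far off)
}

n = len(clients)
dim = len(next(iter(clients.values())))
global_part = [sum(v[i] for v in clients.values()) / n for i in range(dim)]
local_part = {k: [v[i] - global_part[i] for i in range(dim)]
              for k, v in clients.items()}

def norm(x):
    return sum(t * t for t in x) ** 0.5

scores = {k: norm(v) for k, v in local_part.items()}
flagged = max(scores, key=scores.get)   # client with the largest residual
```

By construction each client's data is exactly the sum of the global and its local component, mirroring the decomposition structure (though none of the orthogonality or convergence guarantees of perTucker).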
|
http://arxiv.org/abs/2309.03439v1
|
Amidst the sharp rise in the evaluation of large language models (LLMs) on
various tasks, we find that semantic textual similarity (STS) has been
under-explored. In this study, we show that STS can be cast as a text
generation problem while maintaining strong performance on multiple STS
benchmarks. Additionally, we show generative LLMs significantly outperform
existing encoder-based STS models when characterizing the semantic similarity
between two texts with complex semantic relationships dependent on world
knowledge. We validate this claim by evaluating both generative LLMs and
existing encoder-based STS models on three newly collected STS challenge sets
which require world knowledge in the domains of Health, Politics, and Sports.
All newly collected data is sourced from social media content posted after May
2023 to ensure the performance of closed-source models like ChatGPT cannot be
credited to memorization. Our results show that generative LLMs outperform the
best encoder-only baselines by an average of 22.3% on STS tasks
requiring world knowledge. Our results suggest generative language models with
STS-specific prompting strategies achieve state-of-the-art performance in
complex, domain-specific STS tasks.
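Casting STS as text generation amounts to prompting a model for a similarity rating and parsing the number out of its free-text reply. The prompt wording and parsing rule below are illustrative assumptions, not the paper's actual setup.

```python
import re

def sts_prompt(text_a, text_b):
    # Hypothetical prompt using the standard 0-5 STS rating scale.
    return (
        "On a scale from 0 (unrelated) to 5 (equivalent), how similar "
        f"in meaning are these two texts?\nText 1: {text_a}\n"
        f"Text 2: {text_b}\nAnswer with a single number."
    )

def parse_score(reply):
    """Extract the first number in [0, 5] from a model's reply, else None."""
    for tok in re.findall(r"\d+(?:\.\d+)?", reply):
        val = float(tok)
        if 0.0 <= val <= 5.0:
            return val
    return None

prompt = sts_prompt("The team won the match.", "The squad was victorious.")
score = parse_score("I would rate this a 4.5 out of 5.")
```

The generative framing lets the model bring world knowledge to bear before emitting a score, which is what the encoder-only baselines lack on the Health, Politics, and Sports challenge sets.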
|
http://arxiv.org/abs/2309.06541v1
|
Cosmic rays (CRs) may drive outflows and alter the phase structure of the
circumgalactic medium, with potentially important implications on galaxy
formation. However, these effects ultimately depend on the dominant mode of
transport of CRs within and around galaxies, which remains highly uncertain. To
explore potential observable constraints on CR transport, we investigate a set
of cosmological FIRE-2 CR-MHD simulations of L$_{\ast}$ galaxies which evolve
CRs with transport models motivated by self-confinement (SC) and extrinsic
turbulence (ET) paradigms. To first order, the synchrotron properties diverge
between SC and ET models due to a CR-physics-driven hysteresis. SC models show
a higher tendency to undergo `ejective' feedback events, owing to a runaway
buildup of CR pressure in dense gas caused by the behavior of SC transport
scalings at extremal CR energy densities. The corresponding CR wind-driven
hysteresis results in brighter, smoother, and more extended synchrotron
emission in SC runs relative to ET and constant diffusion runs. The differences
in synchrotron arise from different morphology, ISM gas and \textbf{B}
properties, potentially ruling out SC as the dominant mode of CR transport in
typical star-forming L$_{\ast}$ galaxies, and indicating the potential for
non-thermal radio continuum observations to constrain CR transport physics.
|
http://arxiv.org/abs/2309.16752v2
|
Manual labeling of gestures in robot-assisted surgery is labor intensive,
prone to errors, and requires expertise or training. We propose a method for
automated and explainable generation of gesture transcripts that leverages the
abundance of data for image segmentation. Surgical context is detected using
segmentation masks by examining the distances and intersections between the
tools and objects. Next, context labels are translated into gesture transcripts
using knowledge-based Finite State Machine (FSM) and data-driven Long Short
Term Memory (LSTM) models. We evaluate the performance of each stage of our
method by comparing the results with the ground truth segmentation masks, the
consensus context labels, and the gesture labels in the JIGSAWS dataset. Our
results show that our segmentation models achieve state-of-the-art performance
in recognizing needle and thread in Suturing and we can automatically detect
important surgical states with high agreement with crowd-sourced labels (e.g.,
contact between graspers and objects in Suturing). We also find that the FSM
models are more robust to poor segmentation and labeling performance than
LSTMs. Our proposed method can significantly shorten the gesture labeling
process (~2.8 times).
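The knowledge-based FSM stage can be sketched as a transition table mapping a stream of detected context labels to gesture labels. The states, context labels, and gesture names below are illustrative, not the paper's actual JIGSAWS rules.

```python
# Hypothetical transition rules: (current_gesture, context_label) -> next_gesture.
TRANSITIONS = {
    ("idle",            "needle_held"):       "position_needle",
    ("position_needle", "needle_in_tissue"):  "push_needle",
    ("push_needle",     "needle_out_tissue"): "pull_suture",
    ("pull_suture",     "needle_held"):       "position_needle",
}

def contexts_to_gestures(context_stream, start="idle"):
    """Translate context labels into a gesture transcript via the FSM."""
    gestures, state = [], start
    for ctx in context_stream:
        # Stay in the current gesture if no rule fires -- this is what
        # makes an FSM robust to spurious or missing context labels.
        state = TRANSITIONS.get((state, ctx), state)
        gestures.append(state)
    return gestures

stream = ["needle_held", "needle_in_tissue", "needle_out_tissue"]
out = contexts_to_gestures(stream)
```

The fallback to the current state on unmatched inputs hints at why the abstract finds FSMs more robust than LSTMs to poor segmentation: a bad context label delays a transition rather than derailing the whole transcript.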
|
http://arxiv.org/abs/2302.14237v2
|
We consider a problem concerning the distribution of points with missing
digits coordinates that are close to non-degenerate analytic submanifolds. We
show that large enough (to be specified in the paper) sets of points with
missing digits coordinates distribute 'equally' around non-degenerate
submanifolds. As a consequence, we show that intersecting those missing digits
sets with non-degenerate submanifolds always achieve the optimal dimension
reduction. On the other hand, we also prove that there is no lack of points
with missing digits that are contained in non-degenerate submanifolds. Among
the other results,
1. we prove that the pinned distance sets of those missing digits sets
contain non-trivial intervals regardless of where the pin is.
2. we prove that for each $\epsilon>0,$ for missing digits sets $K$ with
large bases, simple digit sets (to be specified in the paper), and $\dim_{H}
K>3/4+\epsilon,$ the arithmetic product sets $K\cdot K$ contains non-trivial
intervals.
|
http://arxiv.org/abs/2309.00130v1
|
Honeywords are decoy passwords that can be added to a credential database; if
a login attempt uses a honeyword, this indicates that the site's credential
database has been leaked. In this paper we explore the basic requirements for
honeywords to be effective, in a threat model where the attacker knows
passwords for the same users at other sites. First, we show that for
user-chosen (vs. algorithmically generated, i.e., by a password manager)
passwords, existing honeyword-generation algorithms do not simultaneously
achieve false-positive and false-negative rates near their ideals of $\approx
0$ and $\approx \frac{1}{1+n}$, respectively, in this threat model, where $n$
is the number of honeywords per account. Second, we show that for users
leveraging algorithmically generated passwords, state-of-the-art methods for
honeyword generation will produce honeywords that are not sufficiently
deceptive, yielding many false negatives. Instead, we find that only a
honeyword-generation algorithm that uses the \textit{same} password generator
as the user can provide deceptive honeywords in this case. However, when the
defender's ability to infer the generator from the (one) account password is
less accurate than the attacker's ability to infer the generator from
potentially many, this deception can again wane. Taken together, our results
provide a cautionary note for the state of honeyword research and pose new
challenges to the field.
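The ideal false-negative rate of about $\frac{1}{1+n}$ quoted above has a simple interpretation: an attacker who cannot distinguish the real password among $n$ honeywords picks one of the $n+1$ "sweetwords" uniformly and goes undetected only when the pick is the real password. A small Monte Carlo simulation reproduces this:

```python
import random

def simulate_fn_rate(n_honeywords, trials=200_000, seed=0):
    """Fraction of uniform attacker guesses that trigger no honeyword alarm."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(trials):
        # Index 0 is the real password, 1..n are honeywords; an attacker
        # with no distinguishing information picks uniformly.
        pick = rng.randrange(n_honeywords + 1)
        if pick == 0:
            undetected += 1
    return undetected / trials

rate = simulate_fn_rate(n_honeywords=19)  # ideal: 1/20 = 0.05
```

The paper's point is that real honeyword generators fall short of this ideal once the attacker knows the same user's passwords at other sites, which skews the pick away from uniform.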
|
http://arxiv.org/abs/2309.10323v3
|
In this article, we construct a family of integrals which represent the
product of Rankin-Selberg $L$-functions of $\mathrm{GL}_{l}\times
\mathrm{GL}_m$ and of $\mathrm{GL}_{l}\times \mathrm{GL}_n $ when $m+n<l$. When
$n=0$, these integrals are those defined by Jacquet--Piatetski-Shapiro--Shalika
up to a shift. In this sense, these new integrals generalize
Jacquet--Piatetski-Shapiro--Shalika's Rankin-Selberg convolution integrals. We
study basic properties of these integrals. In particular, we define local gamma
factors using this new family of integrals. As an application, we obtain a new
proof of Jacquet's local converse conjecture using these new integrals.
|
http://arxiv.org/abs/2309.10445v2
|
Advancements in nanotechnology and material science are paving the way toward
nanoscale devices that combine sensing, computing, data and energy storage, and
wireless communication. In precision medicine, these nanodevices show promise
for disease diagnostics, treatment, and monitoring from within the patients'
bloodstreams. Associating the location of a sensed biological event with the
event itself, which is the main proposition of flow-guided in-body nanoscale

localization, would be immensely beneficial from the perspective of precision
medicine. The nanoscale nature of the nanodevices and the challenging
environment that the bloodstream represents, result in current flow-guided
localization approaches being constrained in their communication and
energy-related capabilities. The communication and energy constraints of the
nanodevices result in different features of raw data for flow-guided
localization, in turn affecting its performance. An analytical modeling of the
effects of imperfect communication and constrained energy causing intermittent
operation of the nanodevices on the raw data produced by the nanodevices would
be beneficial. Hence, we propose an analytical model of raw data for
flow-guided localization, where the raw data is modeled as a function of
communication and energy-related capabilities of the nanodevice. We evaluate
the model by comparing its output with the one obtained through the utilization
of a simulator for objective evaluation of flow-guided localization, featuring
comparably higher level of realism. Our results across a number of scenarios
and heterogeneous performance metrics indicate high similarity between the
model and simulator-generated raw datasets.
|
http://arxiv.org/abs/2309.16034v2
|
We study measurement-induced symmetry-protected topological (SPT) order in a
wide class of quantum random circuit models by combining calculations within
the stabilizer formalism with tensor network simulations. We construct a family
of quantum random circuits, generating the out-of-equilibrium version of all
generalized cluster models, and derive a set of non-local string order
parameters to distinguish different SPT phases. We apply this framework to
investigate a random circuit realization of the XZX cluster model, and use the
string order parameter to demonstrate that the phase diagram is stable against
extending the class of unitary gates in the circuit, from Clifford gates to
Haar unitaries. We then turn to the XZZX generalized cluster model, and
demonstrate the coexistence of SPT order and spontaneous symmetry breaking, by
relying on string order parameters and a connected correlation function.
|
http://arxiv.org/abs/2302.14551v2
|
Most existing algorithms for replicated lists, which are widely used in
collaborative text editors, suffer from a problem: when two users concurrently
insert text at the same position in the document, the merged outcome may
interleave the inserted text passages, resulting in corrupted and potentially
unreadable text. The problem has gone unnoticed for decades, and it affects
both CRDTs and Operational Transformation. This paper defines maximal
non-interleaving, our new correctness property for replicated lists. We
introduce two related CRDT algorithms, Fugue and FugueMax, and prove that
FugueMax satisfies maximal non-interleaving. We also implement our algorithms
and demonstrate that Fugue offers performance comparable to state-of-the-art
CRDT libraries for text editing.
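The interleaving anomaly is easy to reproduce with a toy merge rule (not any real CRDT): tag each inserted character with its origin position, offset, and site, then sort. Two users concurrently typing at position 0 get their words shuffled together; Fugue instead orders inserts by a tree structure so each user's run stays contiguous.

```python
def naive_merge(*edits):
    """Toy merge: each edit is (site, origin_index, text); characters are
    keyed by (origin, offset, site) and sorted into a final document."""
    chars = []
    for site, origin, text in edits:
        for offset, ch in enumerate(text):
            chars.append((origin, offset, site, ch))
    chars.sort()
    return "".join(ch for _, _, _, ch in chars)

# Users A and B concurrently insert at position 0 of an empty document.
merged = naive_merge(("A", 0, "Hi"), ("B", 0, "Yo"))
# The sort key alternates between sites per offset, interleaving the words.
```

The merged result here is "HYio" rather than "HiYo" or "YoHi", which is exactly the corrupted-text failure mode the maximal non-interleaving property rules out.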
|
http://arxiv.org/abs/2305.00583v2
|
Sequential models, such as Recurrent Neural Networks and Neural Ordinary
Differential Equations, have long suffered from slow training due to their
inherent sequential nature. For many years this bottleneck has persisted, as
many thought sequential models could not be parallelized. We challenge this
long-held belief with our parallel algorithm that accelerates GPU evaluation of
sequential models by up to 3 orders of magnitude without compromising
output accuracy. The algorithm does not need any special structure in the
sequential models' architecture, making it applicable to a wide range of
architectures. Using our method, training sequential models can be more than 10
times faster than the common sequential method without any meaningful
difference in the training results. Leveraging this accelerated training, we
discovered the efficacy of the Gated Recurrent Unit in a long time series
classification problem with 17k time samples. By overcoming the training
bottleneck, our work serves as the first step to unlock the potential of
non-linear sequential models for long sequence problems.
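The core idea behind parallelizing a sequential model can be sketched with a fixed-point iteration: instead of stepping a nonlinear recurrence one element at a time, update all time steps simultaneously and iterate until the trajectory stops changing. This is only a minimal illustration of why strict sequential order can be traded for parallel sweeps, not the paper's actual (much faster-converging) algorithm.

```python
import math

def sequential(x, a=0.5, h0=0.0):
    """Reference: step the recurrence h_t = tanh(a*h_{t-1} + x_t) in order."""
    h, out = h0, []
    for xt in x:
        h = math.tanh(a * h + xt)
        out.append(h)
    return out

def parallel_fixed_point(x, a=0.5, h0=0.0, iters=None):
    """Update every time step at once per sweep; each sweep is parallelizable.
    After len(x) sweeps the result is exact, because sweep k fixes step k."""
    T = len(x)
    h = [0.0] * T                        # initial guess for the whole trajectory
    for _ in range(iters or T):
        prev = [h0] + h[:-1]             # shifted trajectory from last sweep
        h = [math.tanh(a * p + xt) for p, xt in zip(prev, x)]
    return h

x = [0.3, -0.1, 0.7, 0.2]
seq = sequential(x)
par = parallel_fixed_point(x)
```

On a GPU each sweep is a single vectorized operation over all T steps, and for contractive dynamics far fewer than T sweeps are needed in practice, which is where the order-of-magnitude speedups come from.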
|
http://arxiv.org/abs/2309.12252v3
|
Cohn and Umans proposed a framework for developing fast matrix multiplication
algorithms based on embedding computation in certain group algebras. In
subsequent work with Kleinberg and Szegedy, they connected this to the search
for combinatorial objects called strong uniquely solvable puzzles (strong
USPs). We begin a systematic computer-aided search for these objects. We
develop and implement constraint-based algorithms built on reductions to
$\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to
search for large strong USPs. We produce tight bounds on the maximum size of a
strong USP for width $k \le 5$, construct puzzles of small width that are
larger than previous work, and improve the upper bounds on strong USP size for
$k \le 12$. Although our work only deals with puzzles of small-constant width,
the strong USPs we find imply matrix multiplication algorithms that run in
$O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not
beat the fastest algorithms, our work provides evidence and, perhaps, a path to
finding families of strong USPs that imply matrix multiplication algorithms
that are more efficient than those currently known.
|
http://arxiv.org/abs/2301.00074v1
|
We consider generic families of gradient-like dynamical systems with a
parameter space $P$ which is a 2-dimensional simply connected domain. We prove
that if over the boundary of $P$ there is an S- or Z-shaped bifurcation graph
containing two opposing fold bifurcation points, while over the rest of the
boundary there are no other bifurcation points, then there is an odd number of
cusps in the interior of $P$.
|
http://arxiv.org/abs/2309.12246v1
|
Fusing multi-modal data can improve the performance of deep learning models.
However, missing modalities are common for medical data due to patients'
specificity, which is detrimental to the performance of multi-modal models in
applications. Therefore, it is critical to adapt the models to missing
modalities. This study aimed to develop an efficient multi-modal fusion
architecture for medical data that was robust to missing modalities and further
improved the performance on disease diagnosis. X-ray chest radiographs for the
image modality, radiology reports for the text modality, and structured value
data for the tabular data modality were fused in this study. Each modality pair
was fused with a Transformer-based bi-modal fusion module, and the three
bi-modal fusion modules were then combined into a tri-modal fusion framework.
Additionally, multivariate loss functions were introduced into the training
process to improve the model's robustness to missing modalities in the inference
process. Finally, we designed comparison and ablation experiments for
validating the effectiveness of the fusion, the robustness to missing
modalities and the enhancements from each key component. Experiments were
conducted on MIMIC-IV, MIMIC-CXR with the 14-label disease diagnosis task.
The area under the receiver operating characteristic curve (AUROC) and the
area under the precision-recall curve (AUPRC) were used to evaluate model
performance.
The experimental results demonstrated that our proposed multi-modal fusion
architecture effectively fused three modalities and showed strong robustness to
missing modalities. This method can hopefully be scaled to more modalities to
enhance the model's clinical practicality.
|
http://arxiv.org/abs/2309.15529v1
|
Probing small main-belt asteroids provides insight into their formation and
evolution through multiple dynamical and collisional processes. These asteroids
also overlap in size with the potentially hazardous near-Earth object
population and supply the majority of these objects. The Lucy mission will
provide an opportunity for study of a small main-belt asteroid, (152830)
Dinkinesh. The spacecraft will perform a flyby of this object on November 1,
2023, in preparation for its mission to the Jupiter Trojan asteroids. We
employed aperture photometry on stacked frames of Dinkinesh obtained by the
Wide-field-Infrared Survey Explorer and performed thermal modeling on a
detection at 12 $\mu$m to compute diameter and albedo values. Through this
method, we determined Dinkinesh has an effective spherical diameter of
$0.76^{+0.11}_{-0.21}$ km and a visual geometric albedo of
$0.27^{+0.25}_{-0.06}$ at the 16th and 84th percentiles. This albedo is
consistent with typical stony (S-type) asteroids.
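The reported diameter and albedo can be related through the standard asteroid size relation $D = (1329\ \mathrm{km}/\sqrt{p_V})\,10^{-H/5}$, where $H$ is the absolute magnitude. The $H$ value below is an assumed illustrative input, not a quantity taken from the paper.

```python
import math

def diameter_km(p_v, h_mag):
    """Standard relation between diameter, geometric albedo, and absolute magnitude."""
    return 1329.0 / math.sqrt(p_v) * 10 ** (-h_mag / 5.0)

# With the abstract's albedo of 0.27 and an assumed H of about 17.6 mag,
# the relation gives a diameter close to the reported ~0.76 km.
d = diameter_km(p_v=0.27, h_mag=17.6)
```

This also shows why albedo and diameter uncertainties are coupled: for fixed $H$, a higher albedo implies a smaller body.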
|
http://arxiv.org/abs/2309.13158v1
|
Coupled-cluster and Green's function theories are highly successful in
treating many-body electron correlation and there has been significant interest
in identifying and leveraging connections between them. Here we present a
diagrammatic definition of the irreducible coupled-cluster self-energy that
directly embeds coupled-cluster theory within the framework of many-body field
theory. The EOM-CC treatment emerges naturally from our definition via the
Dyson and Bethe-Salpeter equations, providing a unified description of RPA,
$GW$-BSE and CC theory for ground state and excitation energies. This clarifies
the origin of previously established connections between RPA, $GW$-BSE and
coupled-cluster theory, and exposes the relationship between vertex corrections
and the coupled-cluster amplitude equations.
|
http://arxiv.org/abs/2309.10451v2
|
The ability to automatically learn movements and behaviors of increasing
complexity is a long-term goal in autonomous systems. Indeed, this is a very
complex problem that involves understanding how knowledge is acquired and
reused by humans as well as proposing mechanisms that allow artificial agents
to reuse previous knowledge. Inspired by the first three sensorimotor substages
of Jean Piaget's theory, this work presents a cognitive agent based on CONAIM
(Conscious Attention-Based Integrated Model) that can learn procedures
incrementally. Throughout the paper, we show the cognitive functions required
in each substage and how adding new functions helps address tasks previously
unsolved by the agent. Experiments were conducted with a humanoid robot in a
simulated environment modeled with the Cognitive Systems Toolkit (CST)
performing an object tracking task. The system is modeled using a single
procedural learning mechanism based on Reinforcement Learning. The agent's
increasing cognitive complexity is managed by adding new terms to the reward
function for each learning phase. Results show that this approach is capable of
solving complex tasks incrementally.
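Growing the reward function by learning phase can be sketched as a set of composable terms, each activated from the substage in which it is introduced. The term names and weights below are hypothetical, not the paper's actual reward design.

```python
def make_reward(phase):
    """Build a reward function containing every term introduced up to `phase`."""
    # Each entry: (phase in which the term is introduced, term function).
    terms = [
        (1, lambda s: 1.0 if s["target_visible"] else 0.0),   # substage 1
        (2, lambda s: -0.1 * s["camera_motion"]),             # substage 2
        (3, lambda s: 0.5 if s["target_centered"] else 0.0),  # substage 3
    ]
    active = [f for p, f in terms if p <= phase]
    return lambda state: sum(f(state) for f in active)

state = {"target_visible": True, "camera_motion": 2.0, "target_centered": True}
r1 = make_reward(1)(state)   # only the visibility term
r3 = make_reward(3)(state)   # all three terms
```

Later phases thus optimize a strictly richer objective while earlier terms keep shaping behavior, which matches the incremental-learning setup described above.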
|
http://arxiv.org/abs/2305.00597v1
|