Formats for representing and manipulating verification problems are extremely
important for supporting the ecosystem of tools, developers, and practitioners.
A good format allows representing many different types of problems, has a
strong toolchain for manipulating and translating problems, and can grow with
the community. In the world of hardware verification, and, specifically, the
Hardware Model Checking Competition (HWMCC), the Btor2 format has emerged as
the dominant format. It is supported by Btor2Tools, verification tools, and
Verilog design tools like Yosys. In this paper, we present an alternative
format and toolchain, called Btor2MLIR, based on the recent MLIR framework. The
advantage of Btor2MLIR is in reusing existing components from a mature compiler
infrastructure, including parsers, text and binary formats, converters to a
variety of intermediate representations, and executable semantics of LLVM. We
hope that the format and our tooling will lead to rapid prototyping of
verification and related tools for hardware verification.
|
http://arxiv.org/abs/2309.09100v1
|
Galaxies have been observed to exhibit a level of simplicity unexpected in
the complex galaxy formation scenario posited by standard cosmology. This is
particularly apparent in their dynamics, where scaling relations display much
regularity and little intrinsic scatter. However, the parameters responsible
for this simplicity have not been identified. Using the Spitzer Photometry &
Accurate Rotation Curves galaxy catalogue, we argue that the radial
acceleration relation (RAR) between galaxies' baryonic and total dynamical
accelerations is the fundamental $1$-dimensional correlation governing the
radial (in-disk) dynamics of late-type galaxies. In particular, we show that
the RAR cannot be tightened by the inclusion of any other available galaxy
property, that it is the strongest projection of galaxies' radial dynamical
parameter space, and that all other statistical radial dynamical correlations
stem from the RAR plus the non-dynamical correlations present in our sample. We
further provide evidence that the RAR's fundamentality is unique in that the
second most significant dynamical relation does not possess any of these
features. Our analysis reveals the root cause of the correlations present in
galaxies' radial dynamics: they are nothing but facets of the RAR. These
results have important ramifications for galaxy formation theory because they
imply that to explain statistically late-type galaxy dynamics within the disk
it is necessary and sufficient to explain the RAR and the lack of any significant,
partially independent correlation. While simple in some modified dynamics
models, this poses a challenge to standard cosmology.
|
http://arxiv.org/abs/2305.19978v2
|
In this work we propose a Bayesian version of the Nagaoka-Hayashi bound when
estimating a parametric family of quantum states. This lower bound is a
generalization of a recently proposed bound for point estimation to Bayesian
estimation. We then show that the proposed lower bound can be efficiently
computed as a semidefinite programming problem. From the Bayesian
Nagaoka-Hayashi bound, we also derive a Bayesian version of the Holevo-type
bound as a further lower bound. Lastly, we prove that the new lower bound is
tighter than the Bayesian quantum Cramér-Rao bounds.
|
http://arxiv.org/abs/2302.14223v2
|
Gaussian processes (GPs) are popular nonparametric statistical models for
learning unknown functions and quantifying the spatiotemporal uncertainty in
data. Recent works have extended GPs to model scalar and vector quantities
distributed over non-Euclidean domains, including smooth manifolds appearing in
numerous fields such as computer vision, dynamical systems, and neuroscience.
However, these approaches assume that the manifold underlying the data is
known, limiting their practical utility. We introduce RVGP, a generalisation of
GPs for learning vector signals over latent Riemannian manifolds. Our method
uses positional encoding with eigenfunctions of the connection Laplacian,
associated with the tangent bundle, readily derived from common graph-based
approximations of the data. We demonstrate that RVGP possesses global regularity
over the manifold, which allows it to super-resolve and inpaint vector fields
while preserving singularities. Furthermore, we use RVGP to reconstruct
high-density neural dynamics derived from low-density EEG recordings in healthy
individuals and Alzheimer's patients. We show that vector field singularities
are important disease markers and that their reconstruction leads to a
comparable classification accuracy of disease states to high-density
recordings. Thus, our method overcomes a significant practical limitation in
experimental and clinical applications.
|
http://arxiv.org/abs/2309.16746v2
|
In many randomized experiments, the treatment effect of the long-term metric
(i.e. the primary outcome of interest) is often difficult or infeasible to
measure. Such long-term metrics are often slow to react to changes and
sufficiently noisy that they are challenging to estimate faithfully in
short-horizon experiments. A common alternative is to measure several
short-term proxy metrics in the hope that they closely track the long-term
metric -- so they can be
used to effectively guide decision-making in the near-term. We introduce a new
statistical framework to both define and construct an optimal proxy metric for
use in a homogeneous population of randomized experiments. Our procedure first
reduces the construction of an optimal proxy metric in a given experiment to a
portfolio optimization problem which depends on the true latent treatment
effects and the noise level of the experiment under consideration. We then denoise the
observed treatment effects of the long-term metric and a set of proxies in a
historical corpus of randomized experiments to extract estimates of the latent
treatment effects for use in the optimization problem. One key insight derived
from our approach is that the optimal proxy metric for a given experiment is
not a priori fixed; rather, it should depend on the sample size (or effective
noise level) of the randomized experiment for which it is deployed. To
instantiate and evaluate our framework, we employ our methodology in a large
corpus of randomized experiments from an industrial recommendation system and
construct proxy metrics that perform favorably relative to several baselines.
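A minimal numerical sketch of the portfolio-style construction described above, under strong simplifying assumptions (Gaussian proxy noise with known diagonal covariance and a linear composite proxy; the function name and closed form are illustrative, not the paper's exact formulation):

```python
import numpy as np

def optimal_proxy_weights(B, y, sigma2):
    """Weights w for a composite proxy w @ proxies (illustrative assumption).

    B      : (N, K) latent treatment effects of K proxy metrics in N experiments
    y      : (N,)   latent treatment effects of the long-term metric
    sigma2 : (K,)   measurement-noise variances of the proxies in the target
                    experiment (these shrink as its sample size grows)
    Minimizes E[(w @ (b + eps) - y)^2] over the corpus, eps ~ N(0, diag(sigma2)),
    giving a ridge-like closed form in which noisier proxies are down-weighted.
    """
    N = B.shape[0]
    A = B.T @ B / N + np.diag(sigma2)
    return np.linalg.solve(A, B.T @ y / N)

# Larger experiments (smaller sigma2) shift weight toward high-signal proxies,
# mirroring the sample-size dependence highlighted in the abstract.
rng = np.random.default_rng(0)
B = rng.normal(size=(500, 3))
y = B @ np.array([0.8, 0.1, 0.0]) + 0.05 * rng.normal(size=500)
print(optimal_proxy_weights(B, y, sigma2=np.array([0.01, 0.01, 0.01])))
print(optimal_proxy_weights(B, y, sigma2=np.array([1.0, 0.01, 0.01])))
```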
|
http://arxiv.org/abs/2309.07893v2
|
The work of Mann and Rafi gives a classification of surfaces $\Sigma$ for which
$\textrm{Map}(\Sigma)$ is globally CB, locally CB, and CB generated under the
technical assumption of tameness. In this article, we restrict our study to the
pure mapping class group and give a complete classification without additional
assumptions. In stark contrast with the rich class of examples of Mann--Rafi,
we prove that $\textrm{PMap}(\Sigma)$ is globally CB if and only if $\Sigma$ is
the Loch Ness monster surface, and locally CB or CB generated if and only if
$\Sigma$ has finitely many ends and is not a Loch Ness monster surface with
(nonzero) punctures.
|
http://arxiv.org/abs/2309.00124v1
|
In this work we study the notions of structural and universal completeness
both from the algebraic and logical point of view. In particular, we provide
new algebraic characterizations of quasivarieties that are actively and
passively universally complete, and passively structurally complete. We apply
these general results to varieties of bounded lattices and to quasivarieties
related to substructural logics. In particular, we show that a substructural
logic satisfying weakening is passively structurally complete if and only if
every classical contradiction is explosive in it. Moreover, we fully
characterize the passively structurally complete varieties of MTL-algebras,
i.e., bounded commutative integral residuated lattices generated by chains.
|
http://arxiv.org/abs/2309.14151v1
|
We present a Markov-chain analysis of blockwise-stochastic algorithms for
solving partially block-separable optimization problems. Our main contributions
to the extensive literature on these methods are statements about the Markov
operators and distributions behind the iterates of stochastic algorithms, and
in particular the regularity of Markov operators and rates of convergence of
the distributions of the corresponding Markov chains. This provides a detailed
characterization of the moments of the sequences beyond just the expected
behavior. This also serves as a case study of how randomization restores the
favorable properties that iterating with only partial information destroys. We
demonstrate this on stochastic blockwise implementations of the
forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a
special case, convex), nonsmooth optimization.
|
http://arxiv.org/abs/2310.20397v1
|
Tactile representation learning (TRL) equips robots with the ability to
leverage touch information, boosting performance in tasks such as environment
perception and object manipulation. However, the heterogeneity of tactile
sensors results in many sensor- and task-specific learning approaches. This
limits the efficacy of existing tactile datasets, and the subsequent
generalisability of any learning outcome. In this work, we investigate the
applicability of vision foundation models to sensor-agnostic TRL, via a
simple yet effective transformation technique to feed the heterogeneous sensor
readouts into the model. Our approach recasts TRL as a computer vision (CV)
problem, which permits the application of various CV techniques for tackling
TRL-specific challenges. We evaluate our approach on multiple benchmark tasks,
using datasets collected from four different tactile sensors. Empirically, we
demonstrate significant improvements in task performance, model robustness, as
well as cross-sensor and cross-task knowledge transferability with limited data
requirements.
|
http://arxiv.org/abs/2305.00596v1
|
Machine learning (ML) is crucial in network anomaly detection for proactive
threat hunting, reducing detection and response times significantly. However,
challenges in model training, maintenance, and frequent false positives impact
its acceptance and reliability. Explainable AI (XAI) attempts to mitigate these
issues, allowing cybersecurity teams to assess AI-generated alerts with
confidence, but has seen limited acceptance from incident responders. Large
Language Models (LLMs) present a solution through discerning patterns in
extensive information and adapting to different functional requirements. We
present HuntGPT, a specialized intrusion detection dashboard that applies a
Random Forest classifier trained on the KDD99 dataset and integrates XAI
frameworks such as SHAP and LIME for user-friendly, intuitive model
interaction; combined with GPT-3.5 Turbo, it presents detected threats in an
understandable format. The paper
delves into the system's architecture, components, and technical accuracy,
assessed through Certified Information Security Manager (CISM) Practice Exams,
evaluating response quality across six metrics. The results demonstrate that
conversational agents, supported by LLM and integrated with XAI, provide
robust, explainable, and actionable AI solutions in intrusion detection,
enhancing user understanding and interactive experience.
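A hedged sketch of the detection-plus-explanation core suggested by the abstract, using scikit-learn and SHAP on stand-in data (the KDD99 pipeline, feature set, and dashboard/LLM wiring are not reproduced here):

```python
# A Random Forest flags anomalies; SHAP attributes each alert to features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for KDD99-style tabular features (label 1 = attack).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# TreeExplainer yields per-feature attributions for each alert, which a
# dashboard could render before handing the alert to an LLM for a narrative.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te[:5])
print(clf.score(X_te, y_te), np.asarray(shap_values).shape)
```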
|
http://arxiv.org/abs/2309.16021v1
|
We propose a group-level agent-based mixed (GLAM) logit model that is
estimated using market-level choice share data. The model non-parametrically
represents taste heterogeneity through market-specific parameters by solving a
multiagent inverse utility maximization problem, addressing the limitations of
existing market-level choice models with parametric taste heterogeneity. A case
study of mode choice in New York State is conducted using synthetic population
data of 53.55 million trips made by 19.53 million residents in 2019. These
trips are aggregated based on population segments and census block group-level
origin-destination (OD) pairs, resulting in 120,740 markets/agents. We
benchmark in-sample and out-of-sample predictive performance of the GLAM logit
model against multinomial logit, nested logit, inverse product differentiation
logit, and random coefficient logit (RCL) models. The results show that GLAM
logit outperforms benchmark models, improving the overall in-sample predictive
accuracy from 78.7% to 96.71% and out-of-sample accuracy from 65.30% to 81.78%.
The price elasticities and diversion ratios retrieved from GLAM logit and
benchmark models exhibit similar substitution patterns among the six travel
modes. GLAM logit is scalable and computationally efficient, taking less than
one-tenth of the time taken to estimate the RCL model. The agent-specific
parameters in GLAM logit provide additional insights such as value-of-time
(VOT) across segments and regions, which we further utilize to analyze NYS
travelers' mode choice response to congestion pricing. The agent-specific
parameters in GLAM logit also facilitate
their seamless integration into supply-side optimization models for revenue
management and system design.
|
http://arxiv.org/abs/2309.13159v2
|
In a previous paper two of us (D.M. and A.Z.) proposed that a vast class of
gravitational extremization problems in holography can be formulated in terms
of the equivariant volume of the internal geometry, or of the cone over it. We
substantiate this claim by analysing supergravity solutions corresponding to
branes partially or totally wrapped on a four-dimensional orbifold, both in
M-theory as well as in type II supergravities. We show that our approach
recovers the relevant gravitational central charges/free energies of several
known supergravity solutions and can be used to compute these also for
solutions that are not known explicitly. Moreover, we demonstrate the validity
of previously conjectured gravitational block formulas for M5 and D4 branes. In
the case of M5 branes we make contact with a recent approach based on
localization of equivariant forms, constructed with Killing spinor bilinears.
|
http://arxiv.org/abs/2309.04425v3
|
We investigate critical equilibrium and out-of-equilibrium properties of a
ferromagnetic Ising model in one and two dimensions in the presence of
long-range interactions, $J_{ij}\propto r^{-(d+\sigma)}$. We implement a novel
local dynamics on a dynamical L\'evy lattice that correctly reproduces the
static critical exponents known in the literature as a function of the
interaction parameter $\sigma$. Due to its locality, the algorithm can be
applied to investigate the dynamical properties of both discrete and
continuous long-range models. We consider the relaxation time at the critical
temperature and measure the dynamical exponent $z$ as a function of the decay
parameter $\sigma$, highlighting that the onset of the short-range regime for
the dynamical critical properties appears to occur at a value of $\sigma$ that
differs from the equilibrium one.
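A toy 1D illustration of how such a local dynamics on a dynamical Lévy lattice can be realized, as we read the abstract (the partner-sampling rule and parameters below are assumptions, not the authors' algorithm):

```python
import numpy as np

def levy_partner(i, L, sigma, rng):
    """Draw an interaction partner at distance r with P(r) ~ r^(-(1+sigma)), d=1."""
    r = np.arange(1, L // 2 + 1)
    p = r.astype(float) ** (-(1.0 + sigma))
    p /= p.sum()
    step = rng.choice(r, p=p) * rng.choice([-1, 1])
    return (i + step) % L

def sweep(spins, beta, sigma, rng):
    """One Metropolis sweep: each move couples a spin to a single Lévy-drawn
    neighbour, so the update stays local while sampling long-range couplings."""
    L = spins.size
    for _ in range(L):
        i = rng.integers(L)
        j = levy_partner(i, L, sigma, rng)
        dE = 2.0 * spins[i] * spins[j]  # cost of flipping spin i against partner j
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=256)
for _ in range(100):
    sweep(spins, beta=1.0, sigma=0.6, rng=rng)
print("magnetization:", spins.mean())
```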
|
http://arxiv.org/abs/2303.18057v2
|
Detecting fake news requires both a delicate sense of diverse clues and a
profound understanding of the real-world background, which remains challenging
for detectors based on small language models (SLMs) due to their knowledge and
capability limitations. Recent advances in large language models (LLMs) have
shown remarkable performance in various tasks, but whether and how LLMs could
help with fake news detection remains underexplored. In this paper, we
investigate the potential of LLMs in fake news detection. First, we conduct an
empirical study and find that a sophisticated LLM such as GPT-3.5 could
generally expose fake news and provide desirable multi-perspective rationales
but still underperforms the basic SLM, fine-tuned BERT. Our subsequent analysis
attributes such a gap to the LLM's inability to select and integrate rationales
properly to reach a conclusion. Based on these findings, we propose that current LLMs may
not substitute fine-tuned SLMs in fake news detection but can be a good advisor
for SLMs by providing multi-perspective instructive rationales. To instantiate
this proposal, we design an adaptive rationale guidance network for fake news
detection (ARG), in which SLMs selectively acquire insights on news analysis
from the LLMs' rationales. We further derive a rationale-free version of ARG by
distillation, namely ARG-D, which serves cost-sensitive scenarios without
querying LLMs. Experiments on two real-world datasets demonstrate that ARG and
ARG-D outperform three types of baseline methods, including SLM-based,
LLM-based, and combinations of small and large language models.
|
http://arxiv.org/abs/2309.12247v2
|
Reducing cost and delay while improving quality are major issues for product
and software development, especially in the automotive domain. Product line
engineering is a well-known approach to engineering systems with the aim of
reducing costs and development time as well as improving product quality.
Feature models enable a logical selection of features to obtain a filtered set
of assets that compose the product. We propose to use a color code in feature
models to make the possible decisions visible in the feature tree. The color
code is explained and its use is illustrated. The completeness of the approach
is discussed.
|
http://arxiv.org/abs/2310.20396v1
|
This paper details a system for fast visual exploration and search without
prior map information. We leverage frontier-based planning with both LiDAR and
visual sensing and augment it with a perception module that contextually labels
points in the surroundings from wide Field of View 2D LiDAR scans. The goal of
the perception module is to recognize surrounding points more likely to be the
search target in order to provide an informed prior on which to plan next best
viewpoints. The robust map-free scan classifier used to label pixels in the
robot's surroundings is trained from expert data collected using a simple cart
platform equipped with a map-based classifier. We propose a novel utility
function that accounts for the contextual data found from the classifier. The
resulting viewpoints encourage the robot to explore points unlikely to be
permanent in the environment, leading the robot to locate objects of interest
faster than several existing baseline algorithms. Our proposed system is
further validated in real-world search experiments for single and multiple
search objects with a Spot robot in two unseen environments. Videos of
experiments, implementation details and open source code can be found at
https://sites.google.com/view/lives-2024/home.
|
http://arxiv.org/abs/2309.14150v11
|
We study both numerically and experimentally the use of two third-order
nonlinear temporal filtering techniques, namely nonlinear ellipse rotation
(NER) and cross-polarized wave (XPW) generation, for spatio-temporal cleaning
of mJ energy 30 fs Titanium:Sapphire laser pulses in a multi-pass cell. In both
cases, a contrast enhancement greater than 3 orders of magnitude is observed,
together with excellent output pulse quality and record high conversion
efficiencies. Careful balancing of nonlinearity and dispersion inside the
multi-pass cell helps tune the spectral broadening process and control the
post-compressed pulse duration for specific applications.
|
http://arxiv.org/abs/2302.14222v1
|
Uniswap is a Constant Product Market Maker built around liquidity pools,
where pairs of tokens are exchanged subject to a fee that is proportional to
the size of transactions. At the time of writing, there exist more than 6,000
pools associated with Uniswap v3, implying that empirical investigations on the
full ecosystem can easily become computationally expensive. Thus, we propose a
systematic workflow to extract and analyse a meaningful but computationally
tractable sub-universe of liquidity pools. Leveraging the 34 pools found
relevant for the six-month time window January-June 2022, we then investigate
the related liquidity consumption behaviour of market participants. We propose
to represent each liquidity taker by a suitably constructed transaction graph,
which is a fully connected network where nodes are the liquidity taker's
executed transactions, and edges contain weights encoding the time elapsed
between any two transactions. We extend the NLP-inspired graph2vec algorithm to
the weighted undirected setting, and employ it to obtain an embedding of the
set of graphs. This embedding allows us to extract seven clusters of liquidity
takers, with equivalent behavioural patterns and interpretable trading
preferences. We conclude our work by testing for relationships between the
characteristic mechanisms of each pool, i.e. liquidity provision, consumption,
and price variation. We introduce a related ideal crypto law, inspired by the
ideal gas law of thermodynamics, and demonstrate that pools adhering to this
law are healthier trading venues in terms of sensitivity of liquidity and
agents' activity. Regulators and practitioners could benefit from our model by
developing related pool health monitoring tools.
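A small sketch of the per-taker transaction graph described above: nodes are a taker's executed transactions, the graph is fully connected, and each edge weight encodes the time elapsed between two transactions (timestamps are illustrative):

```python
import itertools
import networkx as nx

def transaction_graph(timestamps):
    g = nx.Graph()
    g.add_nodes_from(range(len(timestamps)))
    for i, j in itertools.combinations(range(len(timestamps)), 2):
        g.add_edge(i, j, weight=abs(timestamps[j] - timestamps[i]))
    return g

# Unix timestamps of one liquidity taker's swaps (toy values).
g = transaction_graph([1641000000, 1641003600, 1641090000])
print(g.edges(data=True))
# Such graphs would then be embedded (e.g. with a weighted graph2vec variant)
# and clustered to group takers with similar trading rhythms.
```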
|
http://arxiv.org/abs/2301.13009v2
|
To create effective data visualizations, it helps to represent data using
visual features in intuitive ways. When visualization designs match observer
expectations, visualizations are easier to interpret. Prior work suggests that
several factors influence such expectations. For example, the dark-is-more bias
leads observers to infer that darker colors map to larger quantities, and the
opaque-is-more bias leads them to infer that regions appearing more opaque
(given the background color) map to larger quantities. Previous work suggested
that the background color only plays a role if visualizations appear to vary in
opacity. The present study challenges this claim. We hypothesized that the
background color would modulate inferred mappings for colormaps that should not
appear to vary in opacity (by previous measures) if the visualization appeared
to have a "hole" that revealed the background behind the map (hole hypothesis).
We found that spatial aspects of the map contributed to inferred mappings,
though the effects were inconsistent with the hole hypothesis. Our work raises
new questions about how spatial distributions of data influence color semantics
in colormap data visualizations.
|
http://arxiv.org/abs/2309.00131v1
|
We numerically model a two-dimensional active nematic confined by a periodic
array of fixed obstacles. Even in the passive nematic, the appearance of
topological defects is unavoidable due to planar anchoring by the obstacle
surfaces. We show that a vortex lattice state emerges as activity is increased,
and that this lattice may be tuned from ``ferromagnetic'' to
``antiferromagnetic'' by varying the gap size between obstacles. We map the
rich variety of states exhibited by the system as a function of distance
between obstacles and activity, including a pinned defect state, motile
defects, the vortex lattice, and active turbulence. We demonstrate that the
flows in the active turbulent phase can be tuned by the presence of obstacles,
and explore the effects of a frustrated lattice geometry on the vortex lattice
phase.
|
http://arxiv.org/abs/2309.07886v1
|
Very thin free-flowing liquid sheets are promising targets for
high-repetition-rate laser-ion acceleration. In this work, we report the
generation of micrometer-thin free-flowing liquid sheets from the collision of
two liquid jets, and study the vibration and jitter in their surface normal
direction. The dependence of their motion amplitudes on the generation
parameters is studied in detail. The origins of the vibration and jitter are
discussed. Our results indicate that when the generation parameters are
optimized, the motion amplitudes in the stable region can be stabilized below
3.7 {\mu}m to meet the stringent requirement of sheet position stability for a
tight-focusing setup in laser-ion acceleration experiments.
|
http://arxiv.org/abs/2302.14236v1
|
Pharmacodynamic (PD) models are mathematical models of cellular reaction
networks that include drug mechanisms of action. These models are useful for
studying predictive therapeutic outcomes of novel drug therapies in silico.
However, PD models are known to possess significant uncertainty with respect to
constituent parameter data, leading to uncertainty in the model predictions.
Furthermore, experimental data to calibrate these models is often limited or
unavailable for novel pathways. In this study, we present a Bayesian optimal
experimental design approach for improving PD model prediction accuracy. We
then apply our method using simulated experimental data to account for
uncertainty in hypothetical laboratory measurements. This leads to a
probabilistic prediction of drug performance and a quantitative measure of
which prospective laboratory experiment will optimally reduce prediction
uncertainty in the PD model. The methods proposed here provide a way forward
for uncertainty quantification and guided experimental design for models of
novel biological pathways.
|
http://arxiv.org/abs/2309.06540v2
|
Monolayers of transition metal dichalcogenides (TMDC) are direct-gap
semiconductors with strong light-matter interactions featuring tightly bound
excitons, while plasmonic crystals (PCs), consisting of metal nanoparticles
that act as meta-atoms, exhibit collective plasmon modes and allow one to
tailor electric fields on the nanoscale. Recent experiments show that TMDC-PC
hybrids can reach the strong-coupling limit between excitons and plasmons
forming new quasiparticles, so-called plexcitons. To describe this coupling
theoretically, we develop a self-consistent Maxwell-Bloch theory for TMDC-PC
hybrid structures, which allows us to compute the scattered light in the near-
and far-field explicitly and provide guidance for experimental studies. Our
calculations reveal a spectral splitting signature of strong coupling of more
than $100\,$meV in gold-MoSe$_2$ structures with $30\,$nm nanoparticles,
manifesting in a hybridization of exciton and plasmon into two effective
plexcitonic bands. In addition to the hybridized states, we find a remaining
excitonic mode with significantly smaller coupling to the plasmonic near-field,
emitting directly into the far-field. Thus, hybrid spectra in the strong
coupling regime can contain three emission peaks.
|
http://arxiv.org/abs/2309.09673v1
|
We study the structure of the finite-dimensional representations of
$\mathfrak{sl}_2[t]$, the current Lie algebra of type $A_1$, which are obtained
by taking tensor products of special Demazure modules. We show that these
representations admit a Demazure flag and obtain a closed formula for the
graded multiplicities of the level 2 Demazure modules in the filtration of the
tensor product of two local Weyl modules for $\mathfrak{sl}_2[t]$. Furthermore,
we derive an explicit expression for the graded character of the tensor product
of a local Weyl module with an irreducible $\mathfrak{sl}_2[t]$ module. In
conjunction with the results of \cite{MR3210603}, our findings provide evidence
for the conjecture in \cite{9} that the tensor product of Demazure modules of
levels $m$ and $n$, respectively, has a filtration by Demazure modules of level
$m + n$.
http://arxiv.org/abs/2309.14144v1
|
We propose a simple yet effective metric that measures structural similarity
between visual instances of architectural floor plans, without the need for
learning. Qualitatively, our experiments show that the retrieval results are
similar to those of deeply learned methods. Effectively comparing instances of floor
plan data is paramount to the success of machine understanding of floor plan
data, including the assessment of floor plan generative models and floor plan
recommendation systems. Comparing visual floor plan images goes beyond a sole
pixel-wise visual examination and is crucially about similarities and
differences in the shapes and relations between subdivisions that compose the
layout. Currently, deep metric learning approaches are used to learn a
pair-wise vector representation space that closely mimics the structural
similarity, in which the models are trained on similarity labels that are
obtained by Intersection-over-Union (IoU). To compensate for the lack of
structural awareness in IoU, graph-based approaches such as Graph Matching
Networks (GMNs) are used, which require pairwise inference for comparing data
instances, making GMNs less practical for retrieval applications. In this
paper, an effective evaluation metric for judging the structural similarity of
floor plans, coined SSIG (Structural Similarity by IoU and GED), is proposed
based on both image and graph distances. In addition, an efficient algorithm is
developed that uses SSIG to rank a large-scale floor plan database. Code will
be openly available.
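An illustrative blend of an image-overlap term (IoU) and a graph-distance term (GED) in the spirit of SSIG; the exact normalization and weighting used in the paper may differ:

```python
import networkx as nx
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def ssig_like(mask_a, mask_b, graph_a, graph_b, alpha=0.5):
    """Blend pixel overlap with structural (graph edit) distance."""
    ged = nx.graph_edit_distance(graph_a, graph_b, timeout=5.0)
    ged_sim = 1.0 / (1.0 + ged)  # map edit distance to a (0, 1] similarity
    return alpha * iou(mask_a, mask_b) + (1 - alpha) * ged_sim

# Two toy floor plans: occupancy masks plus room-adjacency graphs.
a = np.zeros((8, 8), bool); a[:4, :] = True
b = np.zeros((8, 8), bool); b[:5, :] = True
ga = nx.path_graph(3)  # three rooms in a row
gb = nx.path_graph(4)
print(ssig_like(a, b, ga, gb))
```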
|
http://arxiv.org/abs/2309.04357v1
|
We introduce a formal framework to study the multiple unicast problem for a
coded network in which the network code is linear over a finite field and
fixed. We show that the problem corresponds to an interference alignment
problem over a finite field. In this context, we establish an outer bound for
the achievable rate region and provide examples of networks where the bound is
sharp. We finally give evidence of the crucial role played by the field
characteristic in the problem.
|
http://arxiv.org/abs/2309.04431v1
|
The importance of humanoid robots in today's world is undeniable; one of
their most important features is the ability to maneuver in environments, such
as stairs, that other robots cannot easily cross. A suitable algorithm to
generate the path for a bipedal robot to climb stairs is therefore very
important. In this paper, an optimization-based method to generate an optimal
stair-climbing path for under-actuated bipedal robots without an ankle
actuator is presented. The generated paths are based on the zero and non-zero
dynamics of the problem, and because the zero dynamics constraint is
satisfied, tracking the path is possible; in other words, the problem is
dynamically feasible. The optimization method used is a gradient-based method
with a suitable number of function evaluations for computational processing.
This method can also be utilized to go down the stairs.
|
http://arxiv.org/abs/2301.00075v1
|
The X-ray microscopy technique at the European X-ray free-electron laser
(EuXFEL), operating at a MHz repetition rate, provides superior contrast and
spatial-temporal resolution compared to typical microscopy techniques at other
X-ray sources. In both online visualization and offline data analysis for
microscopy experiments, baseline normalization is essential for further
processing steps such as phase retrieval and modal decomposition. In addition,
access to normalized projections during data acquisition can play an important
role in decision-making and improve the quality of the data. However, the
stochastic nature of XFEL sources hinders the use of existing flat-field
normalization methods during MHz X-ray microscopy experiments. Here, we present
an online dynamic flat-field correction method based on principal component
analysis of dynamically evolving flat-field images. The method is used for the
normalization of individual X-ray projections and has been implemented as an
online analysis tool at the Single Particles, Clusters, and Biomolecules and
Serial Femtosecond Crystallography (SPB/SFX) instrument of EuXFEL.
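A rough sketch of PCA-based dynamic flat-field correction on synthetic data (the EuXFEL tool additionally handles dark fields, masking of object regions when fitting, and online updates of the components):

```python
import numpy as np

def dynamic_flat_field(projection, flats, n_components=5):
    """Normalize one projection by a flat field synthesized from the principal
    components of a stack of flat-field images (sketch of the PCA idea)."""
    F = flats.reshape(len(flats), -1).astype(float)
    mean_flat = F.mean(axis=0)
    # Principal components of the flat-field fluctuations.
    _, _, Vt = np.linalg.svd(F - mean_flat, full_matrices=False)
    comps = Vt[:n_components]
    # Least-squares fit of the component weights to this projection; real
    # implementations fit on object-free regions to avoid absorbing the signal.
    p = projection.ravel().astype(float) - mean_flat
    w, *_ = np.linalg.lstsq(comps.T, p, rcond=None)
    flat = mean_flat + comps.T @ w
    return (projection.ravel() / flat).reshape(projection.shape)

rng = np.random.default_rng(0)
flats = 1000 + 50 * rng.normal(size=(20, 64, 64))
proj = flats[0] * 0.8  # toy object with transmission 0.8
print(dynamic_flat_field(proj, flats).mean())  # close to 0.8
```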
|
http://arxiv.org/abs/2303.18043v1
|
We introduce an extension to the CLRS algorithmic learning benchmark,
prioritizing scalability and the utilization of sparse representations. Many
algorithms in CLRS require global memory or information exchange, mirrored in
its execution model, which constructs fully connected (not sparse) graphs based
on the underlying problem. Despite CLRS's aim of assessing how effectively
learned algorithms can generalize to larger instances, the existing execution
model becomes a significant constraint due to its demanding memory requirements
and runtime (hard to scale). However, many important algorithms do not demand a
fully connected graph; these algorithms, primarily distributed in nature, align
closely with the message-passing paradigm employed by Graph Neural Networks.
Hence, we propose SALSA-CLRS, an extension of the current CLRS benchmark
specifically with scalability and sparseness in mind. Our approach includes
adapted algorithms from the original CLRS benchmark and introduces new problems
from distributed and randomized algorithms. Moreover, we perform a thorough
empirical evaluation of our benchmark. Code is publicly available at
https://github.com/jkminder/SALSA-CLRS.
|
http://arxiv.org/abs/2309.12253v2
|
We determine a connection between the weight of a Boolean function and the
total weight of its first-order derivatives. The relationship established is
used to study some cryptographic properties of Boolean functions. We establish
a characterization of APN permutations in terms of the weight of the
first-order derivatives of their components. We also characterize APN functions
by the total weight of the second-order derivatives of their components. The
total weight of the first-order and second-order derivatives for functions such
as permutations, bent, partially-bent, quadratic, plateaued and balanced
functions is determined.
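The quantities involved are easy to compute exhaustively for small n; a brute-force sketch, with the bent function f = x1x2 + x3x4 as a toy check (every nonzero derivative of a bent function is balanced):

```python
from itertools import product

def weight(f, n):
    """Hamming weight of a Boolean function f: F_2^n -> F_2."""
    return sum(f(x) for x in product((0, 1), repeat=n))

def derivative_weight(f, a, n):
    """Weight of the first-order derivative D_a f(x) = f(x + a) + f(x)."""
    xor = lambda x, y: tuple(xi ^ yi for xi, yi in zip(x, y))
    return sum(f(x) ^ f(xor(x, a)) for x in product((0, 1), repeat=n))

f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(weight(f, 4))  # 6, consistent with 2^{n-1} - 2^{n/2-1} for a bent function
print({derivative_weight(f, a, 4)
       for a in product((0, 1), repeat=4) if any(a)})  # {8}: all balanced
```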
|
http://arxiv.org/abs/2305.00582v1
|
Large Language Models (LLMs), although powerful in general domains, often
perform poorly on domain-specific tasks such as medical question answering
(QA). In addition, LLMs tend to function as "black-boxes", making it
challenging to modify their behavior. To address the problem, our work employs
a transparent process of retrieval augmented generation (RAG), aiming to
improve LLM responses without the need for fine-tuning or retraining.
Specifically, we propose a comprehensive retrieval strategy to extract medical
facts from an external knowledge base, and then inject them into the LLM's
query prompt. Focusing on medical QA, we evaluate the impact of different
retrieval models and the number of facts on LLM performance using the
MedQA-SMILE dataset. Notably, our retrieval-augmented Vicuna-7B model exhibited
an accuracy improvement from 44.46% to 48.54%. This work underscores the
potential of RAG to enhance LLM performance, offering a practical approach to
mitigate the challenges posed by black-box LLMs.
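A hedged sketch of the transparent retrieve-then-prompt loop described above (the retrieval scorer, prompt template, and knowledge-base entries are illustrative stand-ins, not the authors' implementation):

```python
def retrieve_facts(question, knowledge_base, k=3):
    """Rank facts by naive lexical overlap with the question."""
    def score(fact):
        return len(set(question.lower().split()) & set(fact.lower().split()))
    return sorted(knowledge_base, key=score, reverse=True)[:k]

def build_prompt(question, facts):
    context = "\n".join(f"- {f}" for f in facts)
    return (f"Answer the medical question using the facts below.\n"
            f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:")

kb = ["Metformin is a first-line therapy for type 2 diabetes.",
      "Beta blockers reduce heart rate.",
      "Insulin lowers blood glucose."]
question = "What is a first-line drug for type 2 diabetes?"
prompt = build_prompt(question, retrieve_facts(question, kb))
print(prompt)  # this prompt would then be sent to, e.g., Vicuna-7B
```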
|
http://arxiv.org/abs/2309.16035v3
|
The task of blind source separation (BSS) involves separating sources from a
mixture without prior knowledge of the sources or the mixing system.
Single-channel mixtures and non-linear mixtures are a particularly challenging
problem in BSS. In this paper, we propose a novel method for addressing BSS
with single-channel non-linear mixtures by leveraging the natural feature
subspace specialization ability of multi-encoder autoencoders. During the
training phase, our method unmixes the input into the separate encoding spaces
of the multi-encoder network and then remixes these representations within the
decoder for a reconstruction of the input. Then to perform source inference, we
introduce a novel encoding masking technique whereby masking out all but one of
the encodings enables the decoder to estimate a source signal. To this end, we
also introduce a sparse mixing loss that encourages sparse remixing of source
encodings throughout the decoder and a so-called zero reconstruction loss on
the decoder for coherent source estimations. To analyze and evaluate our
method, we conduct experiments on a toy dataset, designed to demonstrate this
property of feature subspace specialization, and with real-world biosignal
recordings from a polysomnography sleep study for extracting respiration from
electrocardiogram and photoplethysmography signals.
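A minimal PyTorch sketch of a multi-encoder autoencoder with encoding masking (toy fully connected layers; the paper's architecture and training objectives, including the sparse mixing and zero reconstruction losses, are omitted):

```python
import torch
import torch.nn as nn

class MultiEncoderAE(nn.Module):
    """Several encoders share one decoder; masking all but one encoding at
    inference lets the decoder render a single estimated source."""
    def __init__(self, d_in=128, d_code=16, n_src=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_code))
             for _ in range(n_src)])
        self.decoder = nn.Sequential(
            nn.Linear(d_code * n_src, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x, keep=None):
        codes = [enc(x) for enc in self.encoders]
        if keep is not None:  # encoding masking: zero all but one code
            codes = [c if i == keep else torch.zeros_like(c)
                     for i, c in enumerate(codes)]
        return self.decoder(torch.cat(codes, dim=-1))

model = MultiEncoderAE()
x = torch.randn(8, 128)     # windows of the single-channel mixture
recon = model(x)            # training target: reconstruct the mixture
source0 = model(x, keep=0)  # inference: estimate of source 0
print(recon.shape, source0.shape)
```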
|
http://arxiv.org/abs/2309.07138v3
|
Commonsense reasoning is a pivotal skill for large language models, yet it
presents persistent challenges in specific tasks requiring this competence.
Traditional fine-tuning approaches can be resource-intensive and potentially
compromise a model's generalization capacity. Furthermore, state-of-the-art
language models like GPT-3.5 and Claude are primarily accessible through API
calls, which makes fine-tuning models challenging. To address these challenges,
we draw inspiration from the outputs of large models for tailored tasks and
semi-automatically develop a set of novel prompts from several perspectives,
including task relevance, supportive evidence generation (e.g.,
chain-of-thought and knowledge), and diverse path decoding, to aid the model.
Experimental results on the ProtoQA dataset demonstrate that with better
designed prompts we can achieve a new state-of-the-art (SOTA) on the ProtoQA
leaderboard, improving the Max Answer@1 score by 8% and the Max Incorrect@1
score by 4% (breaking 50% for the first time) compared to the previous SOTA
model, and we achieve improvements on StrategyQA and CommonsenseQA2.0 (3% and
1%, respectively). Furthermore, with
the generated Chain-of-Thought and knowledge, we can improve the
interpretability of the model while also surpassing the previous SOTA models.
We hope that our work can provide insight for the NLP community to develop
better prompts and explore the potential of large language models for more
complex reasoning tasks.
|
http://arxiv.org/abs/2309.13165v1
|
The anomalous scaling of Newton's constant around the Reuter fixed point is
dynamically computed using the functional flow equation approach. Specifically,
we thoroughly analyze the flow of the most general conformally reduced
Einstein-Hilbert action. Our findings reveal that, due to the distinctive
nature of gravity, the anomalous dimension $\eta$ of Newton's constant
cannot be constrained to a single value: the ultraviolet critical
manifold is characterized by a line of fixed points $(g_\ast(\eta),
\lambda_\ast (\eta))$, with a discrete (infinite) set of eigenoperators
associated to each fixed point. More specifically, we find three ranges of
$\eta$ corresponding to different properties of both fixed points and
eigenoperators; in particular, in the range $\eta < \eta_c \approx 0.96$ the
ultraviolet critical manifold has finite dimensionality.
|
http://arxiv.org/abs/2309.15514v1
|
The aim of this paper is to present a general algebraic identity. Applying
this identity, we provide several formulas involving the q-binomial
coefficients and the q-harmonic numbers. We also recover some known identities
including an algebraic identity of D. Y. Zheng on q-Ap\'{e}ry numbers and we
establish the q-analog of Euler's formula. The proposed results may have
important applications in the theory of q-supercongruences.
|
http://arxiv.org/abs/2301.13747v1
|
V838 Mon is a stellar merger remnant that erupted in 2002 in a luminous red
nova event. Although it is well studied in the optical, near-infrared and
submillimeter regimes, its structure in the mid-infrared wavelengths remains
elusive. We observed V838 Mon with the MATISSE (LMN bands) and GRAVITY (K band)
instruments at the VLTI and also the MIRCX/MYSTIC (HK bands) instruments at the
CHARA array. We geometrically modelled the squared visibilities and the closure
phases in each of the bands to obtain constraints on physical parameters.
Furthermore, we constructed high resolution images of V838 Mon in the HK bands,
using the MIRA and SQUEEZE algorithms to study the immediate surroundings of
the star. Lastly, we also modelled the spectral features seen in the K and M
bands at various temperatures. The image reconstructions show a bipolar
structure that surrounds the central star in the post merger remnant. In the K
band, the super resolved images show an extended structure (uniform disk
diameter $\sim 1.94$ mas) with a clumpy morphology that is aligned along a
north-west position angle (PA) of $-40^\circ$, while in the H band the
extended structure (uniform disk diameter $\sim 1.18$ mas) lies roughly along
the same PA. However, the northern lobe is slightly misaligned with respect to
the southern lobe, which results in the closure phase deviations. The VLTI and
CHARA imaging results show that V838 Mon is surrounded by features that
resemble jets that are intrinsically asymmetric. This is also confirmed by the
closure phase modelling. Further observations with VLTI can help to determine
whether this structure shows any variation over time, and also if such bipolar
structures are commonly formed in other stellar merger remnants.
|
http://arxiv.org/abs/2306.17586v1
|
Speech emotion recognition has evolved from research to practical
applications. Previous studies of emotion recognition from speech have focused
on developing models on certain datasets like IEMOCAP. The lack of data in the
domain of emotion modeling makes it challenging to evaluate models on other
datasets, as well as to evaluate speech emotion recognition models that work
in a multilingual setting. This paper proposes ensemble learning to fuse the
results of pre-trained models for emotion share recognition from speech. The
models were chosen to accommodate multilingual data from English and Spanish.
The results show that ensemble learning can improve on the performance of both
the single-model baseline and the previous best model based on late fusion.
The performance is measured using the Spearman rank correlation
Spearman rank correlation coefficient of 0.537 is reported for the test set,
while for the development set, the score is 0.524. These scores are higher than
the previous study of a fusion method from monolingual data, which achieved
scores of 0.476 for the test and 0.470 for the development.
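For reference, the reported metric can be computed directly with SciPy from predicted and reference emotion-share scores (toy values below):

```python
from scipy.stats import spearmanr

predicted = [0.2, 0.5, 0.1, 0.9, 0.4]
reference = [0.1, 0.6, 0.2, 0.8, 0.5]
rho, pvalue = spearmanr(predicted, reference)
print(f"Spearman rho = {rho:.3f} (p = {pvalue:.3f})")
```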
|
http://arxiv.org/abs/2309.11014v1
|
The LIGO-Virgo analyses of signals from compact binary mergers observed so
far have assumed isolated binary systems in a vacuum, neglecting the potential
presence of astrophysical environments. We present here the first investigation
of environmental effects on each of the events of GWTC-1 and two low-mass
events from GWTC-2. We find no evidence for the presence of environmental
effects. Most of the events decisively exclude the scenario of dynamical
fragmentation of massive stars as their formation channel. GW170817 results in
the most stringent upper bound on the medium density ($\lesssim
21\,\mathrm{g/cm^3}$). We find that environmental effects can substantially
bias the recovered parameters in the vacuum model, even when these effects are
not detectable. We forecast that the Einstein Telescope and B-DECIGO will be
able to probe the environmental effects of accretion disks and superradiant
boson clouds on compact binaries.
|
http://arxiv.org/abs/2309.05061v3
|
The ``Eshelby problem'' refers to the response of a 2-dimensional elastic
sheet to cutting away a circle, deforming it into an ellipse, and pushing it
back. The resulting response is dominated by the so-called ``Eshelby
kernel'', which was derived for a purely elastic (infinite) material but has
been employed
amorphous solids with finite boundaries. Here we discuss and solve the Eshelby
problem directly for amorphous solids, taking into account possible screening
effects and realistic boundary conditions. We find major modifications compared
to the classical Eshelby solution. These modifications are needed to model
correctly the spatial responses to plastic events in amorphous solids.
|
http://arxiv.org/abs/2309.13603v1
|
Set-based state estimation plays a vital role in the safety verification of
dynamical systems, which becomes significantly challenging when the system's
sensors are susceptible to cyber-attacks. Existing methods often impose
limitations on the attacker's capabilities, restricting the number of attacked
sensors to be strictly less than half of the total number of sensors. This
paper proposes a Secure Set-Based State Estimation (S3E) algorithm that
addresses this limitation. The S3E algorithm guarantees that the true system
state is contained within the estimated set, provided the initialization set
encompasses the true initial state and the system is redundantly observable
from the set of uncompromised sensors. The algorithm gives the estimated set as
a collection of constrained zonotopes, which can be employed as robust
certificates for verifying whether the system adheres to safety constraints.
Furthermore, we demonstrate that the estimated set remains unaffected by
attack signals of sufficiently large magnitude, and we establish sufficient
conditions for attack detection, identification, and filtering. This compels
the attacker to inject only stealthy signals of small magnitude to evade
detection, thus preserving the accuracy of the estimated set. When only a few
sensors (fewer than half) are compromised, we prove that the estimated set
remains
bounded by a contracting set that converges to a ball whose radius is solely
determined by the noise magnitude and is independent of the attack signals. To
address the computational complexity of the algorithm, we offer several
strategies for complexity-performance trade-offs. The efficacy of the proposed
algorithm is illustrated through its application to a three-story building
model.
|
http://arxiv.org/abs/2309.05075v2
|
The article surveys recent results on integrable systems arising from a
quadratic pencil of the Lax operator L, with values in a Hermitian symmetric
space.
The counterpart operator M in the Lax pair defines positive, negative and
rational flows. The results are illustrated with examples from the A.III
symmetric space. The modeling aspect of the arising higher order nonlinear
Schr\"odinger equations is briefly discussed.
|
http://arxiv.org/abs/2309.12509v1
|
A new spectral conjugate subgradient method is presented to solve nonsmooth
unconstrained optimization problems. The method combines the spectral conjugate
gradient method for smooth problems with the spectral subgradient method for
nonsmooth problems. We study the effect of two different choices of line
search, as well as three formulas for determining the conjugate directions. In
addition to numerical experiments with standard nonsmooth test problems, we
also apply the method to several image reconstruction problems in computed
tomography, using total variation regularization. Performance profiles are used
to compare the performance of the algorithm using different line search
strategies and conjugate directions to that of the original spectral
subgradient method. Our results show that the spectral conjugate subgradient
algorithm outperforms the original spectral subgradient method, and that the
use of the Polak-Ribière formula for conjugate directions provides the best and
most robust performance.
|
http://arxiv.org/abs/2309.15266v2
|
We consider approximating solutions to parameterized linear systems of the
form $A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b$, where $(\mu_1, \mu_2) \in
\mathbb{R}^2$. Here the matrix $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is
nonsingular, large, and sparse and depends nonlinearly on the parameters
$\mu_1$ and $\mu_2$. Specifically, the system arises from a discretization of a
partial differential equation and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in
\mathbb{R}^n$. This work combines companion linearization with the
preconditioned bi-conjugate gradient (BiCG) Krylov subspace method and a
decomposition
of a tensor matrix of precomputed solutions, called snapshots. As a result, a
reduced order model of $x(\mu_1,\mu_2)$ is constructed, and this model can be
evaluated in a cheap way for many values of the parameters. Tensor
decompositions performed on a set of snapshots can fail to reach a certain
level of accuracy, and it is not known a priori if a decomposition will be
successful. Moreover, the selection of snapshots can affect both the quality of
the produced model and the computation time required for its construction. This
new method offers a way to generate a new set of solutions on the same
parameter space at little additional cost. An interpolation of the model is
used to produce approximations on the entire parameter space, and this method
can be used to solve a parameter estimation problem. Numerical examples of a
parameterized Helmholtz equation show the competitiveness of our approach. The
simulations are reproducible, and the software is available online.
|
http://arxiv.org/abs/2309.14178v2
|
Effective music mixing requires technical and creative finesse, but clear
communication with the client is crucial. The mixing engineer must grasp the
client's expectations and preferences, and collaborate to achieve the desired
sound. The tacit agreement for the desired sound of the mix is often
established using guides like reference songs and demo mixes exchanged between
the artist and the engineer and sometimes verbalised using semantic terms. This
paper presents the findings of a two-phased exploratory study aimed at
understanding how professional mixing engineers interact with clients and use
their feedback to guide the mixing process. For phase one, semi-structured
interviews were conducted with five mixing engineers with the aim of gathering
insights about their communication strategies, creative processes, and
decision-making criteria. Based on the inferences from these interviews, an
online questionnaire was designed and administered to a larger group of 22
mixing engineers during the second phase. The results of this study shed light
on the importance of collaboration, empathy, and intention in the mixing
process, and can inform the development of smart multi-track mixing systems
that better support these practices. By highlighting the significance of these
findings, this paper contributes to the growing body of research on the
collaborative nature of music production and provides actionable
recommendations for the design and implementation of innovative mixing tools.
|
http://arxiv.org/abs/2309.03404v3
|
In this paper we revisit the classical Cauchy problem for Laplace's equation
as well as two further related problems in the light of regularisation of this
highly ill-conditioned problem by replacing integer derivatives with fractional
ones. We do so in the spirit of quasi-reversibility, replacing a classically
severely ill-posed PDE problem by a nearby well-posed or only mildly ill-posed
one. In order to be able to make use of the known stabilising effect of
one-dimensional fractional derivatives of Abel type we work in a particular
rectangular (in higher space dimensions cylindrical) geometry. We start with
the plain Cauchy problem of reconstructing the values of a harmonic function
inside this domain from its Dirichlet and Neumann trace on part of the boundary
(the cylinder base) and explore three options for doing this with fractional
operators. The two other related problems are the recovery of a free boundary
and then this together with simultaneous recovery of the impedance function in
the boundary condition. Our main technique here will be Newton's method. The
paper contains numerical reconstructions and convergence results for the
devised methods.
|
http://arxiv.org/abs/2309.13617v1
|
Online speech recognition, where the model only accesses context to the left,
is an important and challenging use case for ASR systems. In this work, we
investigate augmenting neural encoders for online ASR by incorporating
structured state-space sequence models (S4), a family of models that provide a
parameter-efficient way of accessing arbitrarily long left context. We
performed systematic ablation studies to compare variants of S4 models and
propose two novel approaches that combine them with convolutions. We found that
the most effective design is to stack a small S4 using real-valued recurrent
weights with a local convolution, allowing them to work complementarily. Our
best model achieves WERs of 4.01%/8.53% on test sets from Librispeech,
outperforming Conformers with extensively tuned convolution.
|
http://arxiv.org/abs/2309.08551v2
|
We propose an approach to compute inner and outer-approximations of the sets
of values satisfying constraints expressed as arbitrarily quantified formulas.
Such formulas arise for instance when specifying important problems in control
such as robustness, motion planning or controllers comparison. We propose an
interval-based method which allows for tractable but tight approximations. We
demonstrate its applicability through a series of examples and benchmarks using
a prototype implementation.
|
http://arxiv.org/abs/2309.07662v1
|
We present a novel adversarial model for authentication systems that use gait
patterns recorded by the inertial measurement unit (IMU) built into
smartphones. The attack idea is inspired by and named after the concept of a
dictionary attack on knowledge (PIN or password) based authentication systems.
In particular, this work investigates whether it is possible to build a
dictionary of IMUGait patterns and use it to launch an attack or find an
imitator who can actively reproduce IMUGait patterns that match the target's
IMUGait pattern. Nine physically and demographically diverse individuals walked
at various levels of four predefined controllable and adaptable gait factors
(speed, step length, step width, and thigh-lift), producing 178 unique IMUGait
patterns. Each pattern attacked a wide variety of user authentication models.
A deeper analysis of error rates (before and after the attack) challenges the
belief that authentication systems based on IMUGait patterns are the most
difficult to spoof; further research is needed on adversarial models and
associated countermeasures.
|
http://arxiv.org/abs/2309.11766v2
|
We use cluster algebras to interpret Floer potentials of monotone Lagrangian
tori in toric del Pezzo surfaces as cluster characters of quiver
representations.
|
http://arxiv.org/abs/2309.16009v1
|
State-of-the-art neural text generation models are typically trained to
maximize the likelihood of each token in the ground-truth sequence conditioned
on the previous target tokens. However, during inference, the model needs to
make a prediction conditioned on the tokens generated by itself. This
train-test discrepancy is referred to as exposure bias. Scheduled sampling is a
curriculum learning strategy that gradually exposes the model to its own
predictions during training to mitigate this bias. Most of the proposed
approaches design a scheduler based on training steps, which generally requires
careful tuning depending on the training setup. In this work, we introduce
Dynamic Scheduled Sampling with Imitation Loss (DySI), which maintains the
schedule based solely on training-time accuracy, while enhancing the
curriculum learning by introducing an imitation loss, which attempts to make
the behavior of the decoder indistinguishable from the behavior of a
teacher-forced decoder. DySI is universally applicable across training setups
with minimal tuning. Extensive experiments and analysis show that DySI not only
achieves notable improvements on standard machine translation benchmarks, but
also significantly improves the robustness of other text generation models.
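A toy sketch of the core idea of accuracy-driven scheduled sampling (the linear coupling between accuracy and mixing probability is an illustrative assumption, not the paper's exact rule, and the imitation loss is omitted):

```python
import random

def scheduled_mix(gold_tokens, model_tokens, train_accuracy):
    """Feed the model's own prediction instead of the gold token with a
    probability tied to training-time accuracy, not to a step schedule."""
    p_model = train_accuracy  # trust the model more as it gets more accurate
    return [m if random.random() < p_model else g
            for g, m in zip(gold_tokens, model_tokens)]

gold = ["the", "cat", "sat", "down"]
pred = ["the", "dog", "sat", "up"]
print(scheduled_mix(gold, pred, train_accuracy=0.3))  # mostly teacher-forced
print(scheduled_mix(gold, pred, train_accuracy=0.9))  # mostly model-fed
```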
|
http://arxiv.org/abs/2301.13753v1
|
We study the Landau-Ginzburg mirror of toric/non-toric blowups of (possibly
non-Fano) toric surfaces arising from SYZ mirror symmetry. Through the
framework of tropical geometry, we provide an effective method for identifying
the precise locations of critical points of the superpotential, and further
show their non-degeneracy for generic parameters. Moreover, we prove that the
number of geometric critical points equals the rank of cohomology of the
surface, which leads to its closed-string mirror symmetry due to Bayer's
earlier result.
|
http://arxiv.org/abs/2309.08237v1
|
Monocular 3D object detection is a challenging task because depth information
is difficult to obtain from 2D images. A subset of viewpoint-agnostic
monocular 3D detection methods also does not explicitly leverage scene
homography or geometry during training, meaning that a model trained in this
way can detect objects in images from arbitrary viewpoints. Such works predict
the projections
of the 3D bounding boxes on the image plane to estimate the location of the 3D
boxes, but these projections are not rectangular so the calculation of IoU
between these projected polygons is not straightforward. This work proposes an
efficient, fully differentiable algorithm for the calculation of IoU between
two convex polygons, which can be utilized to compute the IoU between two 3D
bounding box footprints viewed from an arbitrary angle. We test the performance
of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art
viewpoint-agnostic 3D detection models. Experiments demonstrate that the
proposed PIoU loss converges faster than L1 loss and that in 3D detection
models, a combination of PIoU loss and L1 loss gives better results than L1
loss alone (+1.64% AP70 for MonoCon on cars, +0.18% AP70 for RTM3D on cars, and
+0.83%/+2.46% AP50/AP25 for MonoRCNN on cyclists).
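For illustration, a non-differentiable NumPy sketch of IoU between two convex polygons via Sutherland-Hodgman clipping and the shoelace formula (vertices are assumed counter-clockwise; the paper's contribution is a differentiable version of this computation, which this sketch does not reproduce):

```python
# Hedged sketch: exact IoU of two convex polygons with CCW vertices.
import numpy as np

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def shoelace_area(poly):
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def clip(subject, a, b):
    # Keep the part of `subject` on the left of the directed edge a -> b.
    out = []
    for i in range(len(subject)):
        p, q = subject[i], subject[(i + 1) % len(subject)]
        sp, sq = cross2(b - a, p - a), cross2(b - a, q - a)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:  # the edge p -> q crosses the clipping line
            out.append(p + (sp / (sp - sq)) * (q - p))
    return np.array(out)

def convex_polygon_iou(poly1, poly2):
    inter = poly1
    for i in range(len(poly2)):
        if len(inter) < 3:
            return 0.0
        inter = clip(inter, poly2[i], poly2[(i + 1) % len(poly2)])
    inter_area = shoelace_area(inter) if len(inter) >= 3 else 0.0
    union = shoelace_area(poly1) + shoelace_area(poly2) - inter_area
    return float(inter_area / union) if union > 0 else 0.0
```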
|
http://arxiv.org/abs/2309.07104v1
|
Dust-obscured galaxies are thought to represent an early evolutionary phase
of massive galaxies in which the active galactic nucleus (AGN) is still deeply
buried in significant amounts of dusty material and its emission is strongly
suppressed. The unprecedented sensitivity of the James Webb Space Telescope
enables us for the first time to detect the rest-frame optical emission of
heavily obscured AGN and unveil the properties of the hidden accreting
super-massive black holes (BHs). In this work, we present the JWST/NIRSpec IFS
data of ALESS073.1, a massive, dusty, star-forming galaxy at $z = 4.76$ hosting
an AGN at its center. The detection of a very broad H$\alpha$ emission
associated with the Broad Line Region (BLR) confirms the presence of a BH
($\log(M_{BH}/M_\odot)>8.7$) accreting at less than 15\% of its Eddington limit
and classifies the target as a Type 1 AGN. The rest-frame optical emission
lines also reveal a fast ionized gas outflow marginally resolved in the galaxy
center. The high sensitivity of NIRSpec allows us to perform the kinematic
analysis of the narrow H$\alpha$ component which indicates that the warm
ionized gas velocity field is consistent with disk rotation. We also find that,
in the innermost nuclear regions ($< 1.5$ kpc), the intrinsic velocity
dispersion of the disk reaches $\sim 150$ km/s, $\sim 2-3$ times higher than
the velocity dispersion inferred from the [CII] 158$\mu$m line tracing mostly
cold gas. Since, at large radii, the velocity dispersion of the warm and cold
gas are comparable, we conclude that the outflows are injecting turbulence in
the warm ionized gas in the central region, but they are not sufficiently
powerful to disrupt the dense gas and quench star formation. These findings
support the scenario that dust-obscured galaxies represent the evolutionary
stage preceding the unobscured quasar when all gas and dust are removed from
the host.
|
http://arxiv.org/abs/2309.05713v2
|
We give a simple proof that assuming the Exponential Time Hypothesis (ETH),
determining the winner of a Rabin game cannot be done in time $2^{o(k \log k)}
\cdot n^{O(1)}$, where $k$ is the number of pairs of vertex subsets involved in
the winning condition and $n$ is the vertex count of the game graph. While this
result follows from the lower bounds provided by Calude et al [SIAM J. Comp.
2022], our reduction is simpler and arguably provides more insight into the
complexity of the problem. In fact, the analogous lower bounds discussed by
Calude et al, for solving Muller games and multidimensional parity games,
follow as simple corollaries of our approach. Our reduction also highlights the
usefulness of a certain pivot problem -- Permutation SAT -- which may be of
independent interest.
|
http://arxiv.org/abs/2310.20433v1
|
The widespread adoption of commercial autonomous vehicles (AVs) and advanced
driver assistance systems (ADAS) may largely depend on their acceptance by
society, for which their perceived trustworthiness and interpretability to
riders are crucial. In general, this task is challenging because modern
autonomous systems software relies heavily on black-box artificial intelligence
models. Towards this goal, this paper introduces a novel dataset, Rank2Tell, a
multi-modal ego-centric dataset for Ranking the importance level and Telling
the reason for the importance. Using various closed- and open-ended visual
question answering, the dataset provides dense annotations of various semantic,
spatial, temporal, and relational attributes of various important objects in
complex traffic scenarios. The dense annotations and unique attributes of the
dataset make it a valuable resource for researchers working on visual scene
understanding and related fields. Furthermore, we introduce a joint model for
joint importance level ranking and natural language captions generation to
benchmark our dataset and demonstrate performance with quantitative
evaluations.
|
http://arxiv.org/abs/2309.06597v2
|
While recent advancements in the capabilities and widespread accessibility of
generative language models, such as ChatGPT (OpenAI, 2022), have brought about
various benefits by generating fluent human-like text, the task of
distinguishing between human- and large language model (LLM) generated text has
emerged as a crucial problem. These models can potentially deceive by
generating artificial text that appears to be human-generated. This issue is
particularly significant in domains such as law, education, and science, where
ensuring the integrity of text is of the utmost importance. This survey
provides an overview of the current approaches employed to differentiate
between texts generated by humans and ChatGPT. We present an account of the
different datasets constructed for detecting ChatGPT-generated text, the
various methods utilized, what qualitative analyses into the characteristics of
human versus ChatGPT-generated text have been performed, and finally, summarize
our findings into general insights.
|
http://arxiv.org/abs/2309.07689v1
|
Galaxy clusters are the largest objects in the universe held
together by gravity. Most of their baryonic content is made of a magnetized
diffuse plasma. We investigate the impact of such magnetized environment on
ultra-high-energy-cosmic-ray (UHECR) propagation. The intracluster medium is
described according to the self-similar assumption, in which the gas density
and pressure profiles are fully determined by the cluster mass and redshift.
The magnetic field is scaled to the thermal components of the intracluster
medium under different assumptions. We model the propagation of UHECRs in the
intracluster medium using a modified version of the Monte Carlo code {\it
SimProp}, where hadronic processes and diffusion in the turbulent magnetic
field are implemented. We provide a universal parametrization that approximates
the UHECR fluxes escaping from the environment as a function of the most
relevant quantities, such as the mass of the cluster, the position of the
source with respect to the center of the cluster and the nature of the
accelerated particles. We show that galaxy clusters are an opaque environment
especially for UHECR nuclei. The role of the most massive nearby clusters in
the context of the emerging UHECR astronomy is finally discussed.
|
http://arxiv.org/abs/2309.04380v1
|
Safe landing is an essential aspect of flight operations in fields ranging
from industrial to space robotics. With the growing interest in artificial
intelligence, we focus on learning-based methods for safe landing. Our previous
work, Dynamic Open-Vocabulary Enhanced SafE-Landing with Intelligence
(DOVESEI), demonstrated the feasibility of using prompt-based segmentation for
identifying safe landing zones with open vocabulary models. However, relying on
a heuristic selection of words for prompts is not reliable, as it cannot adapt
to changing environments, potentially leading to harmful outcomes if the
observed environment is not accurately represented by the chosen prompt. To
address this issue, we introduce PEACE (Prompt Engineering Automation for
CLIPSeg Enhancement), an enhancement to DOVESEI that automates prompt
engineering to adapt to shifts in data distribution. PEACE can perform safe
landings using only monocular cameras and image segmentation. PEACE shows
significant improvements in prompt generation and engineering for aerial images
compared to standard prompts used for CLIP and CLIPSeg. By combining DOVESEI
and PEACE, our system improved the success rate of safe landing zone selection
by at least 30\% in both simulations and indoor experiments.
|
http://arxiv.org/abs/2310.00085v4
|
The conventional general syntax of indexed families in dependent type
theories follows the style of "constructors returning a special case", as in
Agda, Lean, Idris, Coq, and probably many other systems. Fording is a method to
encode indexed families of this style with index-free inductive types and an
identity type. There is another trick that merges interleaved higher
inductive-inductive types into a single big family of types. It makes use of a
small universe as the index to distinguish the original types. In this paper,
we show that these two methods can trivialize some very fancy-looking indexed
families with higher inductive indices (which we refer to as higher indexed
families).
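As a toy illustration (length-indexed vectors rather than the higher indexed families treated in the paper), a hedged Lean 4 sketch of fording, in which constructors stop returning special cases of the index and instead carry identity proofs:

```lean
-- Indexed style: constructors return special cases of the index.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : (n : Nat) → α → Vec α n → Vec α (n + 1)

-- Forded style: each constructor targets an arbitrary index n and records
-- via an identity proof which special case n must be.
inductive VecF (α : Type) : Nat → Type where
  | nil  : (n : Nat) → n = 0 → VecF α n
  | cons : (n m : Nat) → α → VecF α m → n = m + 1 → VecF α n
```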
|
http://arxiv.org/abs/2309.14187v2
|
We study the phenomenological implications of two minor zeros in the neutrino
mass matrix using the trimaximal mixing matrix. In this context, we analyse
fifteen possible cases of two minor zeros in the neutrino mass matrix and find
only two
cases, namely Class $A_1$ and Class $A_2$, that are compatible with the present
neutrino oscillation data. We present correlations of several neutrino
oscillation parameters and give prediction of the total neutrino mass, the
values of effective Majorana mass, the effective electron anti-neutrino mass
and CP violating Majorana phases for these two classes. We also explore the
degree of fine-tuning in the elements of the neutrino mass matrix for the allowed
classes. Moreover, we propose a flavor model within the seesaw model along with
$Z_{8}$ symmetry group that can generate such classes.
|
http://arxiv.org/abs/2309.04394v2
|
Little Higgs models address the hierarchy problem by identifying the SM Higgs
doublet as pseudo-Nambu--Goldstone bosons (pNGB) arising from global symmetries
with collective breakings. These models are designed to address the little
hierarchy problem up to a scale of $\Lambda\!\sim\! {\cal O}(10)$ TeV.
Consequently, these models necessitate an ultraviolet (UV) completion above
this scale. On the other hand, conformal extensions of the Standard Model are
intriguing because scales emerge as a consequence of dimensional transmutation.
In this study, we present a unified framework in which the electroweak
hierarchy problem is tackled through a conformal symmetry collectively broken
around the TeV scale, offering an appealing UV completion for little Higgs
models. Notably, this framework automatically ensures the presence of the
required UV fixed points, eliminating the need for careful adjustments to the
particle content of the theory. Moreover, this framework naturally addresses
the flavor puzzles associated with composite or little Higgs models.
Furthermore, we suggest that in this framework all known little Higgs models
can be UV-completed through conformal dynamics above the scale $\Lambda$ up to
arbitrary high scales.
|
http://arxiv.org/abs/2309.07845v2
|
Determining the symmetry breaking order of correlated quantum phases is
essential for understanding the microscopic interactions in their host systems.
The flat bands in magic angle twisted bilayer graphene (MATBG) provide an
especially rich arena to investigate such interaction-driven ground states, and
while progress has been made in identifying the correlated insulators and their
excitations at commensurate moire filling factors, the spin-valley
polarizations of the topological states that emerge at high magnetic field
remain unknown. Here we introduce a new technique based on twist-decoupled van
der Waals layers that enables measurements of their electronic band structure
and, by studying the backscattering between counter-propagating edge states,
determination of the relative spin polarization of their edge modes. Applying
this method to twist-decoupled MATBG and monolayer graphene, we find that the
broken-symmetry quantum Hall states that extend from the charge neutrality
point in MATBG are spin-unpolarized at even integer filling factors. The
measurements also indicate that the correlated Chern insulator emerging from
half filling of the flat valence band is spin-unpolarized, but suggest that its
conduction band counterpart may be spin-polarized. Our results constrain models
of spin-valley ordering in MATBG and establish a versatile approach to study
the electronic properties of van der Waals systems.
|
http://arxiv.org/abs/2309.06583v2
|
Diffusion models are powerful generative models that map noise to data using
stochastic processes. However, for many applications such as image editing, the
model input comes from a distribution that is not random noise. As such,
diffusion models must rely on cumbersome methods like guidance or projected
sampling to incorporate this information in the generative process. In our
work, we propose Denoising Diffusion Bridge Models (DDBMs), a natural
alternative to this paradigm based on diffusion bridges, a family of processes
that interpolate between two paired distributions given as endpoints. Our
method learns the score of the diffusion bridge from data and maps from one
endpoint distribution to the other by solving a (stochastic) differential
equation based on the learned score. Our method naturally unifies several
classes of generative models, such as score-based diffusion models and
OT-Flow-Matching, allowing us to adapt existing design and architectural
choices to our more general problem. Empirically, we apply DDBMs to challenging
image datasets in both pixel and latent space. On standard image translation
problems, DDBMs achieve significant improvement over baseline methods, and,
when we reduce the problem to image generation by setting the source
distribution to random noise, DDBMs achieve comparable FID scores to
state-of-the-art methods despite being built for a more general task.
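A heavily simplified sketch of the sampling step, assuming a learned bridge score network and a constant-diffusion Euler-Maruyama integrator (the paper's schedules and parametrization differ):

```python
# Hedged sketch: integrate a reverse bridge SDE with a learned score.
# `score_net`, the constant sigma, and the time grid are assumptions.
import torch

@torch.no_grad()
def sample_bridge(score_net, x_src, n_steps=100, sigma=1.0):
    # Map samples from one endpoint distribution toward the other.
    x = x_src.clone()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        drift = sigma ** 2 * score_net(x, t)   # learned bridge score term
        x = x + drift * dt + sigma * dt ** 0.5 * torch.randn_like(x)
    return x
```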
|
http://arxiv.org/abs/2309.16948v3
|
J-UNIWARD is a popular steganography method for hiding secret messages in
JPEG cover images. As a content-adaptive method, J-UNIWARD aims to embed into
textured image regions where changes are difficult to detect. To this end,
J-UNIWARD first assigns to each DCT coefficient an embedding cost calculated
based on the image's Wavelet residual, and then uses a coding method that
minimizes the cost while embedding the desired payload.
Changing one DCT coefficient affects a 23x23 window of Wavelet coefficients.
To speed up the costmap computation, the original implementation pre-computes
the Wavelet residual and then, for each changed DCT coefficient, considers a
23x23 window of the Wavelet residual. However, the implementation accesses a window
accidentally shifted by one pixel to the bottom right.
In this report, we evaluate the effect of this off-by-one error on the
resulting costmaps. Some image blocks are over-priced while other image blocks
are under-priced, but the difference is relatively small. The off-by-one error
seems to make little difference for learning-based steganalysis.
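A small NumPy sketch of the described issue, with the shift made explicit (the window size follows the report; the helper and indices are illustrative):

```python
# Hedged sketch: the 23x23 Wavelet-residual window around a changed DCT
# coefficient, read either correctly or shifted one pixel to the bottom right.
import numpy as np

def window(residual, r, c, size=23, shift=0):
    half = size // 2
    r0, c0 = r - half + shift, c - half + shift
    return residual[r0:r0 + size, c0:c0 + size]

residual = np.random.randn(512, 512)            # stand-in Wavelet residual
intended = window(residual, 100, 100)           # correct window
shifted = window(residual, 100, 100, shift=1)   # off-by-one window
# The over- and under-pricing of blocks stems from this difference.
print(np.abs(intended - shifted).mean())
```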
|
http://arxiv.org/abs/2305.19776v2
|
With the ever increasing importance of web services and the Cloud as a
reliable commodity to provide business value as well as consolidate IT
infrastructure, electronic contracts have become very important. WS-Agreement
has itself established as a well accepted container format for describing such
contracts. However, the semantic interpretation of the terms contained in these
contracts, as well as the process of agreeing to contracts when multiple
options have to be considered (negotiation), are still pretty much dealt with
on a case-by-case basis. In this paper we address the issues of diverging
contracts and varying contract negotiation protocols by introducing the concept
of a contract aware marketplace, which abstracts from the heterogeneous offers
of different services providers. This allows for the automated consumption of
services solely based on preferences, instead of additional restrictions such
as understanding of contract terms and/or negotiation protocols. We also
contribute an evaluation of several existing negotiation concepts/protocols. We
think that reducing the complexity for automated contract negotiation and thus
service consumption is a key for the success of future service and Cloud
infrastructures.
|
http://arxiv.org/abs/2309.11941v1
|
Adapting a segmentation model from a labeled source domain to a target
domain, where a single unlabeled datum is available, is one of the most
challenging problems in domain adaptation and is otherwise known as one-shot
unsupervised domain adaptation (OSUDA). Most of the prior works have addressed
the problem by relying on style transfer techniques, where the source images
are stylized to have the appearance of the target domain. Departing from the
common notion of transferring only the target ``texture'' information, we
leverage text-to-image diffusion models (e.g., Stable Diffusion) to generate a
synthetic target dataset with photo-realistic images that not only faithfully
depict the style of the target domain, but are also characterized by novel
scenes in diverse contexts. The text interface in our method Data AugmenTation
with diffUsion Models (DATUM) endows us with the possibility of guiding the
generation of images towards desired semantic concepts while respecting the
original spatial context of a single training image, which is not possible in
existing OSUDA methods. Extensive experiments on standard benchmarks show that
our DATUM surpasses the state-of-the-art OSUDA methods by up to +7.1%. The
implementation is available at https://github.com/yasserben/DATUM
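A hedged sketch of the generation step using the Hugging Face diffusers API; the model id and prompts are placeholders, and the personalization on the single target image that DATUM relies on is omitted:

```python
# Hedged sketch: prompt-guided synthesis of target-domain training images.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The text interface steers generation toward desired semantic concepts.
prompts = ["a photo of a city street at night, foggy, driving viewpoint"] * 4
images = pipe(prompts, num_inference_steps=30).images
for i, im in enumerate(images):
    im.save(f"synthetic_target_{i}.png")
```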
|
http://arxiv.org/abs/2303.18080v2
|
We propose MatSci ML, a novel benchmark for modeling MATerials SCIence using
Machine Learning (MatSci ML) methods focused on solid-state materials with
periodic crystal structures. Applying machine learning methods to solid-state
materials is a nascent field with substantial fragmentation largely driven by
the great variety of datasets used to develop machine learning models. This
fragmentation makes comparing the performance and generalizability of different
methods difficult, thereby hindering overall research progress in the field.
Building on top of open-source datasets, including large-scale datasets like
the OpenCatalyst, OQMD, NOMAD, the Carolina Materials Database, and Materials
Project, the MatSci ML benchmark provides a diverse set of materials systems
and properties data for model training and evaluation, including simulated
energies, atomic forces, material bandgaps, as well as classification data for
crystal symmetries via space groups. The diversity of properties in MatSci ML
makes the implementation and evaluation of multi-task learning algorithms for
solid-state materials possible, while the diversity of datasets facilitates the
development of new, more generalized algorithms and methods across multiple
datasets. In the multi-dataset learning setting, MatSci ML enables researchers
to combine observations from multiple datasets to perform joint prediction of
common properties, such as energy and forces. Using MatSci ML, we evaluate the
performance of different graph neural networks and equivariant point cloud
networks on several benchmark tasks spanning single task, multitask, and
multi-data learning scenarios. Our open-source code is available at
https://github.com/IntelLabs/matsciml.
|
http://arxiv.org/abs/2309.05934v1
|
This Letter presents a study of the geometry and motion of the Galactic disk
using open clusters in the Gaia era. The findings suggest that the inclination
of the Galactic disk increases gradually from the inner to the outer disk, with
a shift in orientation at the Galactocentric radius of approximately 5 to 7
kpc. Furthermore, this study reveals that the mid-plane of
the Milky Way may not occupy a stationary or fixed position. A plausible
explanation is that the inclined orbits of celestial bodies within our Galaxy
exhibit a consistent pattern of elliptical shapes, deviating from perfect
circularity; however, more observations are needed to confirm this. An analysis
of the vertical motion along the Galactocentric radius reveals that the disk
has warped with precession, and that the line-of-nodes shifts at different
radii, aligning with the results from the classical Cepheids. Although the
precession/peculiar motion in the Solar orbit is uncertain, after accounting
for this uncertainty the study derives a median precession rate of 6.8
km/s/kpc in the Galaxy. This value for the derived precession in the outer disk
is lower than those in the literature due to the systematic motion in Solar
orbit (inclination angle = 0.6 deg). The study also finds that the
inclinational variation of the disk is significant and can cause systematic
motion, with the inclinational variation rate decreasing along the Galactic
radius with a slope of $-8.9\,\mu$as/yr/kpc. Moreover, the derived inclinational
variation rate in the Solar orbit is $59.1\pm11.2\,({\rm sample})\pm7.7\,({\rm VZ_{sun}})\,\mu$as/yr, which
makes it observable for high precision astrometry. The all-sky open cluster
catalog based on Gaia DR3 and Galactic precession/inclinational variation fits
as well as Python code related to these fits are available at
https://nadc.china-vo.org/res/r101288/
|
http://arxiv.org/abs/2306.17545v2
|
Let $\gamma^d_m(K)$ be the smallest positive number $\lambda$ such that the
convex body $K$ can be covered by $m$ translates of $\lambda K$. Let $K^d$ be
the $d$-dimensional crosspolytope. It will be proved that $\gamma^d_m(K^d)=1$
for $1\le m< 2d$, $d\ge4$; $\gamma^d_m(K^d)=\frac{d-1}{d}$ for
$m=2d,2d+1,2d+2$, $d\ge4$; $\gamma^d_m(K^d)=\frac{d-1}{d}$ for $m=2d+3$,
$d=4,5$; $\gamma^d_m(K^d)=\frac{2d-3}{2d-1}$ for $m=2d+4$, $d=4$; and
$\gamma^d_m(K^d)\le\frac{2d-3}{2d-1}$ for $m=2d+4$, $d\ge5$. Moreover,
Hadwiger's covering conjecture is verified for the $d$-dimensional
crosspolytope.
|
http://arxiv.org/abs/2305.00569v2
|
I present a new class of nonrelativistic, modified-gravity MOND theories. The
three gravitational degrees of freedom of these ``TRIMOND'' theories are the
MOND potential and two auxiliary potentials, one of which emerges as the
Newtonian potential. Their Lagrangians involve a function of three acceleration
variables -- the gradients of the potentials. So, the transition from the
Newtonian to the MOND regime is rather richer than in the aquadratic-Lagrangian
theory (AQUAL) and the quasilinear MOND theory (QUMOND), which are special
cases of TRIMOND, each defined by a Lagrangian function of a single variable.
In particular, unlike AQUAL and QUMOND whose deep-MOND limit (DML) is fully
dictated by the required scale invariance, here, the scale-invariant DML still
requires specifying a function of two variables. For one-dimensional (e.g.,
spherical) mass distributions, in all TRIMOND theories the MOND acceleration is
a (theory specific, but system independent) function of the Newtonian
acceleration; their variety appears in nonsymmetric situations. Also, they all
make the salient, primary MOND predictions. For example, they predict the same
DML virial relation as AQUAL and QUMOND, and thus the same DML $M-\sigma$
relation, and the same DML two-body force. Yet they can differ materially on
secondary predictions. Such TRIMOND theories may be the nonrelativistic limits
of scalar-bimetric relativistic formulations of MOND, such as BIMOND with an
added scalar.
|
http://arxiv.org/abs/2305.19986v3
|
Science is facing a reproducibility crisis. Previous work has proposed
incorporating data analysis replications into classrooms as a potential
solution. However, despite the potential benefits, it is unclear whether this
approach is feasible, and if so, what the involved stakeholders -- students,
educators, and scientists -- should expect from it. Can students perform a data
analysis replication over the course of a class? What are the costs and
benefits for educators? And how can this solution help benchmark and improve
the state of science?
In the present study, we incorporated data analysis replications in the
project component of the Applied Data Analysis course (CS-401) taught at EPFL
(N=354 students). Here we report pre-registered findings based on surveys
administered throughout the course. First, we demonstrate that students can
replicate previously published scientific papers, most of them qualitatively
and some exactly. We find discrepancies between what students expect of data
analysis replications and what they experience by doing them along with changes
in expectations about reproducibility, which together serve as evidence of
attitude shifts to foster students' critical thinking. Second, we provide
information for educators about how much overhead is needed to incorporate
replications into the classroom and identify concerns that replications bring
as compared to more traditional assignments. Third, we identify tangible
benefits of the in-class data analysis replications for scientific communities,
such as a collection of replication reports and insights about replication
barriers in scientific work that should be avoided going forward.
Overall, we demonstrate that incorporating replication tasks into a large
data science class can increase the reproducibility of scientific work as a
by-product of data science instruction, thus benefiting both science and
students.
|
http://arxiv.org/abs/2308.16491v2
|
In 5G cellular networks, frequency range 2 (FR2) introduces higher
frequencies that cause rapid signal degradation and challenge user mobility. In
recent studies, a conditional handover procedure has been adopted as an
enhancement to baseline handover to enhance user mobility robustness. In this
article, the mobility performance of conditional handover is analyzed for a 5G
mm-wave network in FR2 that employs beamforming. In addition, a
resource-efficient random access procedure is proposed that increases the
probability of contention-free random access during a handover. Moreover, a
simple yet effective decision tree-based supervised learning method is proposed
to minimize the handover failures that are caused by the beam preparation phase
of the random access procedure. Results have shown that a tradeoff exists
between contention-free random access and handover failures. It is also seen
that the optimum operation point of random access is achievable with the
proposed learning algorithm for conditional handover. Moreover, a mobility
performance comparison of conditional handover with baseline handover is also
carried out. Results have shown that while baseline handover causes fewer
handover failures than conditional handover, the total number of mobility
failures in the latter is less due to the decoupling of the handover
preparation and execution phases.
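For concreteness, a hedged scikit-learn sketch of a decision tree trained to flag handover failures; the features and synthetic labels are hypothetical stand-ins for measurements available during the beam preparation phase:

```python
# Hedged sketch: decision-tree classification of handover outcomes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g. beam RSRP, SINR, UE speed (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] < -0.8).astype(int)  # 1 = handover failure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```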
|
http://arxiv.org/abs/2309.09840v1
|
In this paper, we propose a nested matrix-tensor model which extends the
spiked rank-one tensor model of order three. This model is particularly
motivated by a multi-view clustering problem in which multiple noisy
observations of each data point are acquired, with potentially non-uniform
variances along the views. In this case, data can be naturally represented by
an order-three tensor where the views are stacked. Given such a tensor, we
consider the estimation of the hidden clusters via performing a best rank-one
tensor approximation. In order to study the theoretical performance of this
approach, we characterize the behavior of this best rank-one approximation in
terms of the alignments of the obtained component vectors with the hidden model
parameter vectors, in the large-dimensional regime. In particular, we show that
our theoretical results allow us to anticipate the exact accuracy of the
proposed clustering approach. Furthermore, numerical experiments indicate that
leveraging our tensor-based approach yields better accuracy compared to a naive
unfolding-based algorithm which ignores the underlying low-rank tensor
structure. Our analysis unveils unexpected and non-trivial phase transition
phenomena depending on the model parameters, ``interpolating'' between the
typical behavior observed for the spiked matrix and tensor models.
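A hedged NumPy sketch of the best rank-one approximation via alternating (higher-order) power iteration; initialization and stopping are simplified relative to a careful implementation:

```python
# Hedged sketch: alternating power iteration for the best rank-one
# approximation lam * (u outer v outer w) of an order-3 tensor T.
import numpy as np

def rank_one_approx(T, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    u, v, w = (rng.normal(size=d) for d in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(n_iter):
        u = np.einsum("ijk,j,k->i", T, v, w); u /= np.linalg.norm(u)
        v = np.einsum("ijk,i,k->j", T, u, w); v /= np.linalg.norm(v)
        w = np.einsum("ijk,i,j->k", T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum("ijk,i,j,k->", T, u, v, w)  # rank-one "singular value"
    return lam, u, v, w
```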
|
http://arxiv.org/abs/2305.19992v1
|
NGC1052-DF4 was found to be the second "galaxy lacking dark matter" in the
NGC1052 group, based on its velocity dispersion of $\sigma_{\rm
gc}=4.2^{+4.4}_{-2.2}$ km/s as measured from the radial velocities of seven of
its globular clusters. Here we verify this result by measuring the stellar
velocity dispersion of the galaxy. We observed the diffuse stellar light in
NGC1052-DF4 with the Keck Cosmic Web Imager (KCWI) in its highest resolution
mode, with $\sigma_{\mathrm{instr}}\approx 7$ km/s. With a total science + sky
exposure time of 34 hrs, the resulting spectrum is exceptional both in its
spectral resolution and its S/N ratio of 23\r{A}$^{-1}$. We find a stellar
velocity dispersion of $\sigma_{\rm stars} = 8.0^{+2.3}_{-1.9}$ km/s,
consistent with the previous measurement from the globular clusters. Combining
both measurements gives a fiducial dispersion of $\sigma_{\rm f} =
6.3_{-1.6}^{+2.5}$ km/s. The implied dynamical mass within the half-light
radius is $8_{-4}^{+6} \times 10^7 M_{\odot}$. The expected velocity dispersion
of NGC1052-DF4 from the stellar mass alone is $7 \pm 1$ km/s, and for an NFW
halo that follows the stellar mass -- halo mass relation and the halo mass --
concentration relation, the expectation is $\sim 30$ km/s. The low velocity
dispersion rules out a normal NFW dark matter halo, and we confirm that
NGC1052-DF4 is one of at least two galaxies in the NGC1052 group that have an
anomalously low dark matter content. While any viable model for their formation
should explain the properties of both galaxies, we note that NGC1052-DF4 now
poses the largest challenge as it has the most stringent constraints on its
dynamical mass.
|
http://arxiv.org/abs/2309.08592v2
|
We propose that the cascade decay $\Lambda_b \to D(\to K^+\pi^-) N(\to
p\pi^-)$ may serve as the discovery channel for baryonic CP violation. This
decay chain is contributed by, dominantly, the amplitudes with the intermediate
$D$ state as $D^0$ or $\bar{D}^0$. The large weak phase between the two kinds
of amplitudes suggests the possibility of significant CP violation. While the
presence of undetermined strong phases may complicate the dependence of CP
asymmetry, our phenomenological analysis demonstrates that CP violation remains
prominent across a broad range of strong phases. The mechanism also applies to
similar decay modes such as $\Lambda_b \rightarrow D(\rightarrow K^+ K^-)
\Lambda$. Considering the anticipated luminosity of LHCb, we conclude that
these decay channels offer a promising opportunity to uncover CP violation in
the baryon sector.
|
http://arxiv.org/abs/2309.09854v2
|
Object detection in 3D is a crucial aspect in the context of autonomous
vehicles and drones. However, prototyping detection algorithms is
time-consuming and costly in terms of energy and environmental impact. To
address these challenges, one can check the effectiveness of different models
by training on a subset of the original training set. In this paper, we present
a comparison of three algorithms for selecting such a subset - random sampling,
random per class sampling, and our proposed MONSPeC (Maximum Object Number
Sampling per Class). We provide empirical evidence for the superior
effectiveness of random per class sampling and MONSPeC over basic random
sampling. By replacing random sampling with one of the more efficient
algorithms, the results obtained on the subset are more likely to transfer to
the results on the entire dataset. The code is available at:
https://github.com/vision-agh/monspec.
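A hedged sketch of the random per-class baseline (MONSPeC itself ranks frames by per-class object counts rather than sampling uniformly; this sketch covers only the simpler variant, and `labels_of` is an assumed callback):

```python
# Hedged sketch: random per-class frame sampling for 3D detection.
# Frames are assumed to be hashable ids.
import random
from collections import defaultdict

def random_per_class(frames, labels_of, budget_per_class, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for f in frames:
        for cls in set(labels_of(f)):   # classes present in this frame
            by_class[cls].append(f)
    subset = set()
    for fs in by_class.values():
        rng.shuffle(fs)
        subset.update(fs[:budget_per_class])
    return list(subset)
```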
|
http://arxiv.org/abs/2306.17551v1
|
The accretion flow / jet correlation in neutron star (NS) low-mass X-ray
binaries (LMXBs) is far less understood when compared to black hole (BH) LMXBs.
In this paper we will present the results of a dense multi-wavelength
observational campaign on the NS LMXB 4U 1820-30, including X-ray (NICER,
NuSTAR and AstroSAT) and quasi-simultaneous radio (ATCA) observations in 2022.
4U 1820-30 shows a peculiar 170-day super-orbital accretion modulation, during
which the system evolves between "modes" of high and low X-ray flux. During our
monitoring, the source did not show any transition to a full hard state. X-ray
spectra were well described using a disc blackbody, a Comptonisation spectrum
along with a Fe K emission line at 6.6 keV. Our results show that the observed
X-ray flux modulation is almost entirely produced by changes in the size of the
region providing seed photons for the Comptonisation spectrum. This region is
large (about 15 km) in the high mode and likely coincides with the whole
boundary layer, while it shrinks significantly (<10 km) in low mode. The
electron temperature of the corona and the observed RMS variability in the hard
X-rays also exhibit a slight increase in low mode. As the source moves from
high to low mode, the radio emission due to the jet becomes about a factor of 5 fainter.
These radio changes appear not to be strongly connected to the hard-to-soft
transitions as in BH systems, while they seem to be connected mostly to
variations observed in the boundary layer.
|
http://arxiv.org/abs/2307.16566v1
|
For a $k$-tree $T$, we prove that the maximum local mean order is attained in
a $k$-clique of degree $1$ and that it is not more than twice the global mean
order. We also bound the global mean order if $T$ has no $k$-cliques of degree
$2$ and prove that for large order, the $k$-star attains the minimum global
mean order. These results solve the remaining problems of Stephens and
Oellermann [J. Graph Theory 88 (2018), 61-79] concerning the mean order of
sub-$k$-trees of $k$-trees.
|
http://arxiv.org/abs/2309.16545v1
|
A risk analysis is conducted considering several release sources located
around the NEOM shoreline. The sources are selected close to the coast and in
neighboring regions of high marine traffic. The evolution of oil spills
released by these sources is simulated using the MOHID model, driven by
validated, high-resolution met-ocean fields of the Red Sea. For each source,
simulations are conducted over a 4-week period, starting from the first, tenth,
and twentieth days of each month, covering five consecutive years. A total of 48
simulations are thus conducted for each source location, adequately reflecting
the variability of met-ocean conditions in the region. The risk associated with
each source is described in terms of amount of oil beached, and by the elapsed
time required for the spilled oil to reach the NEOM coast, extending from the
Gulf of Aqaba in the North to Duba in the South. A finer analysis is performed
by segmenting the NEOM shoreline, based on important coastal development and
installation sites. For each subregion, source and release event considered, a
histogram of the amount of volume beached is generated, also classifying
individual events in terms of the corresponding arrival times. In addition, for
each subregion considered, an inverse analysis is conducted to identify regions
of dependence of the cumulative risk, estimated using the collection of all
sources and events considered. The transport of oil around the NEOM shorelines
is promoted by chaotic circulations and northwest winds in summer, and a
dominant cyclonic eddy in winter. Hence, spills originating from release
sources located close to the NEOM shorelines are characterized by large monthly
variations in arrival times, ranging from less than a week to more than two
weeks. Large variations in the volume fraction of beached oil, ranging from
less than 50\% to more than 80\%, are reported.
|
http://arxiv.org/abs/2309.14352v1
|
Built upon the state-of-the-art A Multi-Phase Transport (AMPT) model, we
develop a new module of chiral anomaly transport (CAT), which can trace the
evolution of the initial topological charge of gauge field created through
sphaleron transition at finite temperature and external magnetic field in heavy
ion collisions. The eventual experimental signals of chiral magnetic
effect (CME) can be measured. The CAT explicitly shows the generation and
evolution of the charge separation, and the signals of CME through the CAT are
quantitatively in agreement with the experimental measurements in Au+Au
collisions at $\sqrt{s}=200~{\rm GeV}$, and the centrality dependence of the CME
fraction follows that of the fireball temperature.
|
http://arxiv.org/abs/2310.20194v1
|
We give an extension of Bochner's criterion for the almost periodic
functions. By using our main result, we extend two results of A. Haraux. The
first is a generalization of Bochner's criterion which is useful for periodic
dynamical systems. The second is a characterization of periodic functions in
terms of Bochner's criterion.
|
http://arxiv.org/abs/2301.00263v1
|
In this work, a novel data-driven methodology for designing polar codes for
channels with and without memory is proposed. The methodology is suitable for
the case where the channel is given as a "black-box" and the designer has
access to the channel for generating observations of its inputs and outputs,
but does not have access to the explicit channel model. The proposed method
leverages the structure of the successive cancellation (SC) decoder to devise a
neural SC (NSC) decoder. The NSC decoder uses neural networks (NNs) to replace
the core elements of the original SC decoder, the check-node, the bit-node and
the soft decision. Along with the NSC, we devise an additional NN that embeds the
channel outputs into the input space of the SC decoder. The proposed method is
supported by theoretical guarantees that include the consistency of the NSC.
Also, the NSC has computational complexity that does not grow with the channel
memory size. This sets its main advantage over the successive cancellation
trellis (SCT) decoder for finite state channels (FSCs), which has complexity
$O(|\mathcal{S}|^3 N\log N)$, where $|\mathcal{S}|$ denotes the number of
channel states. We demonstrate the performance of the proposed algorithms on
memoryless channels and on channels with memory. The empirical results are
compared with the optimal polar decoder, given by the SC and SCT decoders. We
further show that our algorithms remain applicable in cases where the SC and
SCT decoders are not.
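A hedged PyTorch sketch of the replacement idea: small MLPs standing in for the SC decoder's check-node, bit-node, and soft-decision operations on embedded messages (dimensions and architecture are illustrative assumptions, not the paper's):

```python
# Hedged sketch: neural stand-ins for the SC decoder's core elements,
# operating on d-dimensional message embeddings.
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=32):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

class NeuralSCNodes(nn.Module):
    def __init__(self, d=16):
        super().__init__()
        self.check_node = mlp(2 * d, d)      # replaces f(a, b)
        self.bit_node = mlp(2 * d + 1, d)    # replaces g(a, b, u_hat)
        self.soft_decision = mlp(d, 1)       # replaces the LLR decision

    def f(self, a, b):
        return self.check_node(torch.cat([a, b], dim=-1))

    def g(self, a, b, u_hat):
        return self.bit_node(torch.cat([a, b, u_hat], dim=-1))

    def decide(self, e):
        return torch.sigmoid(self.soft_decision(e))  # P(bit = 1)
```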
|
http://arxiv.org/abs/2309.03148v1
|
Supervised classification recognizes patterns in the data to separate classes
of behaviours. Canonical solutions contain misclassification errors that are
intrinsic to the numerical approximating nature of machine learning. The data
analyst may minimize the classification error on a class at the expense of
increasing the error of the other classes. The error control of such a design
phase is often done in a heuristic manner. In this context, it is key to
develop theoretical foundations capable of providing probabilistic
certifications to the obtained classifiers. In this perspective, we introduce
the concept of probabilistic safety region to describe a subset of the input
space in which the number of misclassified instances is probabilistically
controlled. The notion of scalable classifiers is then exploited to link the
tuning of machine learning with error control. Several tests corroborate the
approach. They are provided through synthetic data in order to highlight all
the steps involved, as well as through a smart mobility application.
|
http://arxiv.org/abs/2309.04627v1
|
Diverse explainability methods of graph neural networks (GNN) have recently
been developed to highlight the edges and nodes in the graph that contribute
the most to the model predictions. However, it is not clear yet how to evaluate
the correctness of those explanations, whether it is from a human or a model
perspective. One unaddressed bottleneck in the current evaluation procedure is
the problem of out-of-distribution explanations, whose distribution differs
from that of the training data. This important issue affects existing
evaluation metrics such as the popular faithfulness or fidelity score. In this
paper, we show the limitations of faithfulness metrics. We propose GInX-Eval
(Graph In-distribution eXplanation Evaluation), an evaluation procedure of
graph explanations that overcomes the pitfalls of faithfulness and offers new
insights on explainability methods. Using a fine-tuning strategy, the GInX
score measures how informative removed edges are for the model and the EdgeRank
score evaluates if explanatory edges are correctly ordered by their importance.
GInX-Eval verifies if ground-truth explanations are instructive to the GNN
model. In addition, it shows that many popular methods, including
gradient-based methods, produce explanations that are not better than a random
designation of edges as important subgraphs, challenging the findings of
current works in the area. Results with GInX-Eval are consistent across
multiple datasets and align with human evaluation.
|
http://arxiv.org/abs/2309.16223v2
|
In the analysis of spatial point patterns on linear networks, a critical
statistical objective is estimating the first-order intensity function,
representing the expected number of points within specific subsets of the
network. Typically, non-parametric approaches employing heating kernels are
used for this estimation. However, a significant challenge arises in selecting
appropriate bandwidths before conducting the estimation. We study an intensity
estimation mechanism that overcomes this limitation using adaptive estimators,
where bandwidths adapt to the data points in the pattern. While adaptive
estimators have been explored in other contexts, their application in linear
networks remains underexplored. We investigate the adaptive intensity estimator
within the linear network context and extend a partitioning technique based on
bandwidth quantiles to expedite the estimation process significantly. Through
simulations, we demonstrate the efficacy of this technique, showing that the
partition estimator closely approximates the direct estimator while drastically
reducing computation time. As a practical application, we employ our method to
estimate the intensity of traffic accidents in a neighbourhood in Medellin,
Colombia, showcasing its real-world relevance and efficiency.
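A hedged one-dimensional Euclidean sketch of an adaptive (Abramson-style) kernel intensity estimator; on a linear network the kernel must respect shortest-path distances, and the paper's partitioning by bandwidth quantiles replaces the per-point loop below with a few grouped passes:

```python
# Hedged sketch: adaptive kernel intensity on a line. Per-point bandwidths
# shrink where a fixed-bandwidth pilot estimate is high (Abramson's rule).
import numpy as np

def gaussian(d, h):
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def adaptive_intensity(points, eval_x, h0=1.0):
    # Pilot intensity at the data points (fixed bandwidth h0).
    pilot = np.array([gaussian(points - p, h0).sum() for p in points])
    # Abramson bandwidths, normalized by the geometric mean of the pilot.
    h = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
    # Sum per-point kernels; a partitioned estimator would group the points
    # by bandwidth quantile and reuse one kernel evaluation per group.
    return sum(gaussian(eval_x - p, hp) for p, hp in zip(points, h))
```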
|
http://arxiv.org/abs/2309.09303v1
|
We envision a system to continuously build and maintain a map based on
earth-scale neural radiance fields (NeRF) using data collected from vehicles
and drones in a lifelong learning manner. However, existing large-scale
modeling by NeRF has problems in terms of scalability and maintainability when
modeling earth-scale environments. Therefore, to address these problems, we
propose a federated learning pipeline for large-scale modeling with NeRF. We
tailor the model aggregation pipeline in federated learning for NeRF, thereby
allowing local updates of NeRF. In the aggregation step, the accuracy of the
clients' global pose is critical. Thus, we also propose global pose alignment
to align the noisy global pose of clients before the aggregation step. In
experiments, we show the effectiveness of the proposed pose alignment and the
federated learning pipeline on the large-scale scene dataset, Mill19.
|
http://arxiv.org/abs/2309.06030v4
|
Evaluating the quality of videos generated from text-to-video (T2V) models is
important if they are to produce plausible outputs that convince a viewer of
their authenticity. We examine some of the metrics used in this area and
highlight their limitations. The paper presents a dataset of more than 1,000
generated videos from 5 very recent T2V models on which some of those commonly
used quality metrics are applied. We also include extensive human quality
evaluations on those videos, allowing the relative strengths and weaknesses of
metrics, including human assessment, to be compared. The contribution is an
assessment of commonly used quality metrics, and a comparison of their
performances and the performance of human evaluations on an open dataset of T2V
videos. Our conclusion is that naturalness and semantic matching with the text
prompt used to generate the T2V output are important but there is no single
measure to capture these subtleties in assessing T2V model output.
|
http://arxiv.org/abs/2309.08009v1
|
Three-dimensional electron microscopy (3DEM) is an essential technique to
investigate volumetric tissue ultra-structure. Due to technical limitations and
high imaging costs, samples are often imaged anisotropically, where resolution
in the axial direction ($z$) is lower than in the lateral directions $(x,y)$.
This anisotropy in 3DEM can hamper subsequent analysis and visualization tasks. To
overcome this limitation, we propose a novel deep-learning (DL)-based
self-supervised super-resolution approach that computationally reconstructs
isotropic 3DEM from the anisotropic acquisition. The proposed DL-based
framework is built upon the U-shape architecture incorporating
vision-transformer (ViT) blocks, enabling high-capability learning of local and
global multi-scale image dependencies. To train the tailored network, we employ
a self-supervised approach. Specifically, we generate pairs of anisotropic and
isotropic training datasets from the given anisotropic 3DEM data. By feeding
the given anisotropic 3DEM dataset in the trained network through our proposed
framework, the isotropic 3DEM is obtained. Importantly, this isotropic
reconstruction approach relies solely on the given anisotropic 3DEM dataset and
does not require pairs of co-registered anisotropic and isotropic 3DEM training
datasets. To evaluate the effectiveness of the proposed method, we conducted
experiments using three 3DEM datasets acquired from brain tissue. The experimental
results demonstrated that our proposed framework could successfully reconstruct
isotropic 3DEM from the anisotropic acquisition.
|
http://arxiv.org/abs/2309.10646v1
|
In the second part of this publication, we present simulation results for two
three-dimensional models of Heusler-type alloys obtained by the mesoscopic
micromagnetic approach. In the first model, we simulate the magnetization
reversal of a single ferromagnetic (FM) inclusion within a monocrystalline
antiferromagnetic (AFM) matrix, revealing the evolution of the complex
magnetization distribution within this inclusion when the external field is
changed. The main result of this ``monocrystalline'' model is the absence of
any hysteretic behavior by the magnetization reversal of the FM inclusion.
Hence, this model is unable to reproduce the basic experimental result for the
corresponding nanocomposite -- hysteresis in the magnetization reversal of FM
inclusions with a vertical shift of the corresponding loops. To explain this
latter feature, in the second model we introduce a polycrystalline AFM matrix,
with exchange interactions between AFM crystallites and between the FM
inclusion and these crystallites. We show that within this model we can not
only reproduce the hysteretic character of the remagnetization process, but
also achieve a semi-quantitative agreement with the experimentally observed
hysteresis loop assuming that the concentration of FM inclusions strongly
fluctuates. These findings demonstrate the reliability of our enhanced
micromagnetic model and set the basis for its applications in future studies of
Heusler alloys and FM/AFM nanocomposites.
|
http://arxiv.org/abs/2309.17129v1
|
We build a model to predict from first principles the properties of major
mergers. We predict these from the coalescence of peaks and saddle points in
the vicinity of a given larger peak, as one increases the smoothing scale in
the initial linear density field as a proxy for cosmic time. To refine our
results, we also ensure, using a suite of $\sim 400$ power-law Gaussian random
fields smoothed at $\sim 30$ different scales, that the relevant peaks and
saddles are topologically connected: they should belong to a persistent pair
before coalescence. Our model allows us to (a) compute the probability
distribution function of the satellite-merger separation in Lagrangian space:
they peak at three times the smoothing scale; (b) predict the distribution of
the number of mergers as a function of peak rarity: haloes typically undergo
two major mergers ($>$1:10) per decade of mass growth; (c) recover the
typical spin brought by mergers: it is of the order of a few tens of percent.
|
http://arxiv.org/abs/2309.11558v3
|
In the electromagnetic multipole expansion, magnetic octupoles are the next
order of magnetic multipoles allowed in centrosymmetric systems,
following the more commonly observed magnetic dipoles. As order parameters in
condensed matter systems, magnetic octupoles have been experimentally elusive.
In particular, the lack of simple external fields that directly couple to them
makes their experimental detection challenging. Here, we demonstrate a
methodology for probing the magnetic octupole susceptibility using a product of
magnetic field $H_i$ and shear strain $\epsilon_{jk}$ to couple to the
octupolar fluctuations, while using an adiabatic elastocaloric effect to probe
the response to this composite effective field. We observe a Curie-Weiss
behavior in the obtained octupolar susceptibility of \ce{PrV2Al20} up to
temperatures approximately forty times the putative octupole ordering
temperature. Our results demonstrate the presence of magnetic octupole
fluctuations in the particular material system, and more broadly highlight how
anisotropic strain can be combined with magnetic fields to formulate a
versatile probe to observe otherwise elusive emergent `hidden' electronic
orders.
|
http://arxiv.org/abs/2309.04633v1
|
This study investigates game-based learning in the context of the educational
game "Jo Wilder and the Capitol Case," focusing on predicting student
performance using various machine learning models, including K-Nearest
Neighbors (KNN), Multi-Layer Perceptron (MLP), and Random Forest. The research
aims to identify the features most predictive of student performance and
correct question answering. By leveraging gameplay data, we establish complete
benchmarks for these models and explore the importance of applying proper data
aggregation methods. By compressing all numeric data to min/max/mean/sum and
categorical data to first, last, count, and nunique, we reduced the size of the
original training data from 4.6 GB to 48 MB of preprocessed training data,
maintaining high F1 scores and accuracy.
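A hedged pandas sketch of the described aggregation (file and column names such as session_id are hypothetical):

```python
# Hedged sketch: collapse per-session event logs into fixed-size feature
# rows; numeric columns -> min/max/mean/sum, categorical columns ->
# first/last/count/nunique.
import pandas as pd

events = pd.read_csv("train.csv")  # assumed raw gameplay log
numeric = events.select_dtypes("number").columns.drop("session_id", errors="ignore")
categorical = events.select_dtypes("object").columns.drop("session_id", errors="ignore")

agg = {c: ["min", "max", "mean", "sum"] for c in numeric}
agg.update({c: ["first", "last", "count", "nunique"] for c in categorical})

features = events.groupby("session_id").agg(agg)
features.columns = ["_".join(col) for col in features.columns]
```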
Our findings suggest that proper preprocessing techniques can be vital in
enhancing the performance of non-deep-learning-based models. The MLP model
outperformed the current state-of-the-art French Touch model, achieving an F1
score of 0.83 and an accuracy of 0.74, suggesting its suitability for this
dataset. Future research should explore using larger datasets, other
preprocessing techniques, more advanced deep learning techniques, and
real-world applications to provide personalized learning recommendations to
students based on their predicted performance. This paper contributes to the
understanding of game-based learning and provides insights into optimizing
educational game experiences for improved student outcomes and skill
development.
|
http://arxiv.org/abs/2309.13429v1
|
The generalized outcome-adaptive lasso (GOAL) is a variable selection method for
high-dimensional causal inference proposed by Bald\'e et al. [2023, {\em
Biometrics} {\bfseries 79(1)}, 514--520]. When the dimension is high, it is now
well established that an ideal variable selection method should have the oracle
property to ensure the optimal large sample performance. However, the oracle
property of GOAL has not been proven. In this paper, we show that the GOAL
estimator enjoys the oracle property. Our simulation shows that the GOAL method
deals with the collinearity problem better than the oracle-like method, the
outcome-adaptive lasso (OAL).
|
http://arxiv.org/abs/2310.00250v2
|
Orthogonal Calculus, first developed by Weiss in 1991, provides a calculus of
functors for functors from real inner product spaces to spaces. Many of the
functors to which Orthogonal Calculus has been applied since carry an
additional lax symmetric monoidal structure which has so far been ignored. For
instance, the functor $V \mapsto \text{BO}(V)$ admits maps $$\text{BO}(V)
\times \text{BO}(W) \to \text{BO}(V \oplus W)$$ which determine a lax symmetric
monoidal structure.
Our first main result, Corollary 4.2.0.2, states that the Taylor
approximations of a lax symmetric monoidal functor are themselves lax symmetric
monoidal. We also study the derivative spectra of lax symmetric monoidal
functors, and prove in Corollary 5.4.0.1 that they admit
$O(n)$-equivariant structure maps of the form $$\Theta^nF \otimes \Theta^nF \to
D_{O(n)} \otimes \Theta^nF$$ where $D_{O(n)} \simeq S^{\text{Ad}_n}$ is the
Klein-Spivak dualising spectrum of the topological group $O(n)$.
As our proof methods are largely abstract and $\infty$-categorical, we also
formulate Orthogonal Calculus in that language before proving our results.
|
http://arxiv.org/abs/2309.15058v2
|
This article proposes a new method to increase the efficiency of stimulated
Raman adiabatic passage (STIRAP) in superconducting circuits using a shortcut
to the adiabaticity (STA) method. The STA speeds up the adiabatic process
before decoherence has a significant effect, thus leading to increased
efficiency. This method achieves fast, high-fidelity coherent population
transfer, known as super-adiabatic STIRAP (saSTIRAP), in a dressed
state-engineered $\Lambda$ system with polariton states in circuit QED.
|
http://arxiv.org/abs/2310.20180v1
|
Two-dimensional (2D) materials exhibit a wide range of remarkable phenomena,
many of which owe their existence to the relativistic spin-orbit coupling (SOC)
effects. To understand and predict properties of materials containing heavy
elements, such as the transition-metal dichalcogenides (TMDs), relativistic
effects must be taken into account in first-principles calculations. We present
an all-electron method based on the four-component Dirac Hamiltonian and
Gaussian-type orbitals (GTOs) that overcomes complications associated with
linear dependencies and ill-conditioned matrices that arise when diffuse
functions are included in the basis. Until now, there has been no systematic
study of the convergence of GTO basis sets for periodic solids either at the
nonrelativistic or the relativistic level. Here we provide such a study of
relativistic band structures of the 2D TMDs in the hexagonal (2H), tetragonal
(1T), and distorted tetragonal (1T') structures, along with a discussion of
their SOC-driven properties (Rashba splitting and $\mathbb{Z}_2$ topological
invariants). We demonstrate the viability of our approach even when large basis
sets with multiple basis functions involving various valence orbitals (denoted
triple- and quadruple-$\zeta$) are used in the relativistic regime. Our method
does not require the use of pseudopotentials and provides access to all
electronic states within the same framework. Our study paves the way for direct
studies of material properties, such as the parameters in spin Hamiltonians,
that depend heavily on the electron density near atomic nuclei where
relativistic and SOC effects are the strongest.
|
http://arxiv.org/abs/2302.00041v3
|
Although deep learning has made strides in the field of deep noise
suppression, leveraging deep architectures on resource-constrained devices
still proved challenging. Therefore, we present an early-exiting model based on
nsNet2 that provides several levels of accuracy and resource savings by halting
computations at different stages. Moreover, we adapt the original architecture
by splitting the information flow to take into account the injected dynamism.
We show the trade-offs between performance and computational complexity based
on established metrics.
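A hedged PyTorch sketch of the early-exiting mechanism, with per-stage heads and an output-stabilization exit test (the exit criterion and sizes are illustrative, not the adapted nsNet2 architecture):

```python
# Hedged sketch: a stack of stages, each with its own output head; inference
# halts once successive stage outputs stop changing appreciably.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim=64, n_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_stages))
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_stages))

    def forward(self, x, tol=1e-2):
        out, prev = None, None
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            out = head(x)   # each stage emits a usable estimate
            if prev is not None and (out - prev).abs().mean() < tol:
                break       # later stages would change the estimate little
            prev = out
        return out
```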
|
http://arxiv.org/abs/2308.16678v1
|
The 450th anniversary of the discovery of the SN 1572 supernova event was
celebrated in 2022. A closer look at the historical development of the field of
supernova astronomy reveals the scientific importance of Tycho Brahe's 1572
observations of this "new star". In their quest to learn more about the new
type of stellar explosion and subsequent evolution, the initial protagonists in
this field (Baader and Zwicky among others) gradually turned their attention to
the final remnant state of these supernova events. Since the remnant object
thought to be associated with the extragalactic supernova event was found to be
very dim, the focus quickly shifted toward nearby galactic events. It is at
this point where Tycho Brahe's observations played an important and often
overlooked role in the context of the development of stellar evolution as a
scientific field. Tycho Brahe's meticulous and detailed recordings of the
change in brightness of the new star not only allowed modern astronomers to
classify SN 1572 as a supernova event but also helped them pinpoint the exact
astrometric location of SN 1572. These findings helped to empirically link
extragalactic supernova events to nearby past supernova remnants in the Milky
Way. This enabled subsequent observations allowing further characterization.
Transforming the historical recordings to a standardized photometric system
also allowed the classification of SN 1572 as a type I supernova event.
|
http://arxiv.org/abs/2309.10120v1
|
We recently proposed a new approach for the real-time monitoring of particle
therapy treatments with the goal of achieving high sensitivities on the
particle range measurement already at limited counting statistics. This method
extends the Prompt Gamma (PG) timing technique to obtain the PG vertex
distribution from the exclusive measurement of particle Time-Of-Flight (TOF).
It was previously shown, through Monte Carlo simulation, that an original data
reconstruction algorithm, Prompt Gamma Time Imaging (PGTI), makes it possible
to combine the response of multiple detectors placed around the target. In this work we focus
on the experimental feasibility of PGTI in the Single Proton Regime (SPR) through
the development of a multi-channel, Cherenkov-based PG detector with a targeted
time resolution of 235 ps (FWHM): the TOF Imaging ARrAy (TIARA). The PG module
that we developed is composed of a small PbF$_{2}$ crystal coupled to a silicon
photomultiplier to provide the time stamp of the PG. This prototype was tested
with 63 MeV protons delivered from a cyclotron: a time resolution of 276 ps
(FWHM) was obtained, resulting in a proton range sensitivity of 4 mm at
2$\sigma$ with the acquisition of only 600 PGs. A second prototype was also
evaluated with 148 MeV protons delivered from a synchro-cyclotron obtaining a
time resolution below 167 ps (FWHM) for the gamma detector. Moreover, using two
identical PG modules, it was shown that a uniform sensitivity on the PG
profiles would be achievable by combining the response of gamma detectors
uniformly distributed around the target. This work provides the experimental
proof-of-concept for the development of a high sensitivity detector that can be
used to monitor particle therapy treatments and potentially act in real-time if
the irradiation does not comply with the treatment plan.
|
http://arxiv.org/abs/2309.03612v1
|
Quantum entanglement, a fundamental aspect of quantum mechanics, has captured
significant attention in the era of quantum information science. In
multipartite quantum systems, entanglement plays a crucial role in facilitating
various quantum information processing tasks, such as quantum teleportation and
dense coding. In this article, we review the theory of multipartite
entanglement measures, with a particular focus on the genuine as well as the
operational meaning of multipartite entanglement measures. By providing
thorough and valuable insight into this field, we hope that this review will
inspire and guide researchers in their endeavors to further develop novel
approaches for characterizing multipartite entanglement.
|
http://arxiv.org/abs/2309.09459v1
|