id | title | categories | abstract
|---|---|---|---|
2501.17635
|
In-Context Meta LoRA Generation
|
cs.CL cs.AI cs.CV
|
Low-rank Adaptation (LoRA) has demonstrated remarkable capabilities for task
specific fine-tuning. However, in scenarios that involve multiple tasks,
training a separate LoRA model for each one results in considerable
inefficiency in terms of storage and inference. Moreover, existing parameter
generation methods fail to capture the correlations among these tasks, making
multi-task LoRA parameter generation challenging. To address these limitations,
we propose In-Context Meta LoRA (ICM-LoRA), a novel approach that efficiently
achieves task-specific customization of large language models (LLMs).
Specifically, we use training data from all tasks to train a tailored
generator, a Conditional Variational Autoencoder (CVAE). The CVAE takes task
descriptions as inputs and produces task-aware LoRA weights as outputs. These
LoRA weights are then merged with LLMs to create task-specialized models
without the need for additional fine-tuning. Furthermore, we utilize in-context
meta-learning for knowledge enhancement and task mapping, to capture the
relationship between tasks and parameter distributions. As a result, our method
achieves more accurate LoRA parameter generation for diverse tasks using CVAE.
ICM-LoRA enables more accurate LoRA parameter reconstruction than current
parameter reconstruction methods and is useful for implementing task-specific
enhancements of LoRA parameters. At the same time, our method occupies only
283MB, about 1\% of the storage required by the original LoRA parameters.
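As background, the merging step described above (LoRA weights "merged with LLMs to create task-specialized models without the need for additional fine-tuning") can be sketched in a few lines of numpy. The shapes, names, and the storage-ratio helper are illustrative assumptions, not ICM-LoRA's actual implementation:

```python
import numpy as np

def merge_lora(W_base, A, B, alpha=1.0):
    """Merge low-rank LoRA factors into a frozen base weight matrix.

    W_base: (d_out, d_in) base weights
    A: (r, d_in), B: (d_out, r) generated LoRA factors, with r << d_in, d_out
    Returns the task-specialized weights W_base + alpha * B @ A.
    """
    return W_base + alpha * (B @ A)

def lora_storage_ratio(d_out, d_in, r):
    """Fraction of full-matrix storage needed by one rank-r adapter."""
    return r * (d_in + d_out) / (d_out * d_in)

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = rng.normal(size=(d_out, r))
W_task = merge_lora(W, A, B)
```

A rank-r adapter stores only r * (d_in + d_out) numbers instead of d_out * d_in, which is where the order-of-magnitude storage savings the abstract reports come from.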
|
2501.17636
|
Efficient Interactive 3D Multi-Object Removal
|
cs.CV
|
Object removal is of great significance to 3D scene understanding, essential
for applications in content filtering and scene editing. Current mainstream
methods primarily focus on removing individual objects, with a few methods
dedicated to eliminating an entire area or all objects of a certain category.
They however confront the challenge of insufficient granularity and flexibility
for real-world applications, where users demand tailored excision and
preservation of objects within defined zones. In addition, most current
methods require various priors when addressing multi-view inpainting, which is
time-consuming. To address these limitations, we propose an efficient and
user-friendly pipeline for 3D multi-object removal, enabling users to flexibly
select areas and define objects for removal or preservation. Concretely, to
ensure object consistency and correspondence across multiple views, we propose
a novel mask matching and refinement module, which integrates homography-based
warping with high-confidence anchor points for segmentation. By leveraging the
IoU joint shape context distance loss, we enhance the accuracy of warped masks
and improve subsequent inpainting processes. Considering the current immaturity
of 3D multi-object removal, we provide a new evaluation dataset to bridge the
developmental void. Experimental results demonstrate that our method
significantly reduces computational costs, achieving processing speeds more
than 80% faster than state-of-the-art methods while maintaining equivalent or
higher reconstruction quality.
|
2501.17642
|
Efficient Redundancy Reduction for Open-Vocabulary Semantic Segmentation
|
cs.CV
|
Open-vocabulary semantic segmentation (OVSS) is an open-world task that aims
to assign each pixel within an image to a specific class defined by arbitrary
text descriptions. Recent advancements in large-scale vision-language models
have demonstrated their open-vocabulary understanding capabilities,
significantly facilitating the development of OVSS. However, most existing
methods suffer from either suboptimal performance or long latency. This study
introduces ERR-Seg, a novel framework that effectively reduces redundancy to
balance accuracy and efficiency. ERR-Seg incorporates a training-free Channel
Reduction Module (CRM) that leverages prior knowledge from vision-language
models like CLIP to identify the most relevant classes while discarding others.
Moreover, it incorporates Efficient Semantic Context Fusion (ESCF) with
spatial-level and class-level sequence reduction strategies. CRM and ESCF
result in substantial memory and computational savings without compromising
accuracy. Additionally, recognizing the significance of hierarchical semantics
extracted from middle-layer features for closed-set semantic segmentation,
ERR-Seg introduces the Hierarchical Semantic Module (HSM) to exploit
hierarchical semantics in the context of OVSS. Compared to previous
state-of-the-art methods under the ADE20K-847 setting, ERR-Seg achieves
+$5.6\%$ mIoU improvement and reduces latency by $67.3\%$.
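The training-free Channel Reduction Module is described as using CLIP-style priors to keep only the most relevant classes. A minimal sketch of that idea, with hypothetical embedding shapes rather than ERR-Seg's actual module:

```python
import numpy as np

def channel_reduction(image_emb, class_embs, k):
    """Training-free class filtering: keep the k classes whose (CLIP-style)
    text embeddings are most similar to the image embedding.

    image_emb: (d,) image feature; class_embs: (C, d) per-class text features.
    Returns indices of the k retained classes, most similar first.
    """
    img = image_emb / np.linalg.norm(image_emb)
    cls = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    sims = cls @ img                       # cosine similarity per class
    return np.argsort(-sims)[:k]

# Toy check: class 0 is aligned with the image, class 2 points the other way
img = np.array([1.0, 0.0])
classes = np.array([[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
keep = channel_reduction(img, classes, k=2)
```

Discarding low-similarity classes up front shrinks the class dimension that every later layer must process, which is the source of the memory and latency savings the abstract claims.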
|
2501.17643
|
Tonguescape: Exploring Language Models Understanding of Vowel
Articulation
|
cs.CL cs.AI
|
Vowels are primarily characterized by tongue position. Humans have discovered
these features of vowel articulation through their own experience and explicit
objective observation such as using MRI. With this knowledge and our
experience, we can explain and understand the relationship between tongue
positions and vowels, and this knowledge is helpful for language learners to
learn pronunciation. Since language models (LMs) are trained on a large amount
of data that includes linguistic and medical fields, our preliminary studies
indicate that an LM is able to explain the pronunciation mechanisms of vowels.
However, it is unclear whether multi-modal LMs, such as vision LMs, align
textual information with visual information. One question arises: do LMs
associate real tongue positions with vowel articulation? In this study, we
created video and image datasets from the existing real-time MRI dataset and
investigated whether LMs can understand vowel articulation based on tongue
positions using vision-based information. Our findings suggest that LMs exhibit
potential for understanding vowels and tongue positions when reference examples
are provided, while they struggle without them. Our code for dataset
building is available on GitHub.
|
2501.17644
|
Efficient Stochastic Polar Decoder With Correlated Stochastic Computing
|
cs.IT eess.SP math.IT
|
Polar codes have gained significant attention in channel coding for their
ability to approach the capacity of binary input discrete memoryless channels
(B-DMCs), thanks to their reliability and efficiency in transmission. However,
existing decoders often struggle to balance hardware area and performance.
Stochastic computing offers a way to simplify circuits, and previous work has
implemented decoding using this approach. A common issue with these methods is
performance degradation caused by the introduction of correlation. This paper
presents an Efficient Correlated Stochastic Polar Decoder (ECS-PD) that
fundamentally addresses the issue of the `hold-state', preventing it from
increasing as correlation computation progresses. We propose two optimization
strategies aimed at reducing iteration latency, increasing throughput, and
simplifying the circuit to improve hardware efficiency. The optimization can
reduce the number of iterations by 25.2% at $E_b/N_0$ = 3 dB. Compared to other
efficient designs, the proposed ECS-PD achieves higher throughput and is 2.7
times more hardware-efficient than the min-sum decoder.
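For readers unfamiliar with stochastic computing, the abstract's point about correlation degrading performance can be illustrated with bitstream multiplication: AND-ing two independent Bernoulli streams estimates a product, while correlated streams bias the estimate. This is a background illustration, not the ECS-PD circuit:

```python
import numpy as np

def to_stream(p, n, rng):
    """Encode probability p as a length-n Bernoulli bitstream."""
    return rng.random(n) < p

rng = np.random.default_rng(0)
n = 100_000
x, y = 0.7, 0.5

# Independent streams: AND of the bitstreams estimates the product x * y.
sx = to_stream(x, n, rng)
sy = to_stream(y, n, rng)
est_indep = np.mean(sx & sy)

# Fully correlated streams (same random source): AND degenerates to min(x, y),
# illustrating the accuracy loss that correlation introduces.
u = rng.random(n)
est_corr = np.mean((u < x) & (u < y))
```

With independent streams the estimate converges to 0.35; with fully correlated streams it converges to min(0.7, 0.5) = 0.5, a large systematic error that no amount of stream length removes.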
|
2501.17648
|
Analysis and Control of Perturbed Density Systems
|
eess.SY cs.SY
|
The paper investigates dynamical systems for which the derivative of some
positive-definite function along the solutions of the system depends on a
so-called density function. Such dynamical systems are accordingly called
density systems. The density function sets the density of the space in which
the system evolves and affects the behaviour of the original system. For
example, this function can define (un)stable regions and forbidden regions
where there are no system solutions. The density function can be used in the
design of new adaptive control laws with the formulation of appropriate new
control goals, e.g., stabilization in given bounded or semi-bounded sets. To
design a novel adaptive control law that keeps the system outputs in a given
set, systems
with known and unknown parameters under disturbances are considered. All
theoretical results and conclusions are illustrated by numerical simulations.
|
2501.17653
|
Drivetrain simulation using variational autoencoders
|
cs.LG cs.CE eess.SP
|
This work proposes variational autoencoders (VAEs) to predict a vehicle's
jerk from a given torque demand, addressing the limitations of sparse
real-world datasets. Specifically, we implement unconditional and conditional
VAEs to generate jerk signals that integrate features from different drivetrain
scenarios. The VAEs are trained on experimental data collected from two
variants of a fully electric SUV, which differ in maximum torque delivery and
drivetrain configuration. New meaningful jerk signals are generated within an
engineering context through the interpretation of the VAE's latent space. A
performance comparison with baseline physics-based and hybrid models confirms
the effectiveness of the VAEs. We show that VAEs bypass the need for exhaustive
manual system parametrization while maintaining physical plausibility by
conditioning data generation on specific inputs.
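The conditional generation described above rests on the standard VAE reparameterization trick plus a decoder conditioned on the input (here, a torque demand). The toy linear decoder below is an illustrative sketch, not the paper's architecture:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    which keeps sampling differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode_conditional(z, condition, W, b):
    """Toy linear decoder of a conditional VAE: the latent code is
    concatenated with the conditioning input (e.g. a torque demand)."""
    h = np.concatenate([z, condition])
    return W @ h + b

rng = np.random.default_rng(0)
mu = np.array([0.5, -0.2])
log_var = np.full(2, -20.0)            # near-zero variance: z is close to mu
z = reparameterize(mu, log_var, rng)

rng2 = np.random.default_rng(1)
W = rng2.standard_normal((8, 3))       # hypothetical decoder weights
b = np.zeros(8)
jerk = decode_conditional(z, np.array([0.8]), W, b)  # 0.8: toy torque demand
```

Sampling different z for the same conditioning input yields a family of plausible jerk signals, which is how a conditional VAE can augment a sparse measured dataset.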
|
2501.17654
|
Exploring Vision Language Models for Multimodal and Multilingual Stance
Detection
|
cs.CL cs.AI
|
Social media's global reach amplifies the spread of information, highlighting
the need for robust Natural Language Processing tasks like stance detection
across languages and modalities. Prior research predominantly focuses on
text-only inputs, leaving multimodal scenarios, such as those involving both
images and text, relatively underexplored. Meanwhile, the prevalence of
multimodal posts has increased significantly in recent years. Although
state-of-the-art Vision-Language Models (VLMs) show promise, their performance
on multimodal and multilingual stance detection tasks remains largely
unexamined. This paper evaluates state-of-the-art VLMs on a newly extended
dataset covering seven languages and multimodal inputs, investigating their use
of visual cues, language-specific performance, and cross-modality interactions.
Our results show that VLMs generally rely more on text than images for stance
detection and this trend persists across languages. Additionally, VLMs rely
significantly more on text contained within the images than other visual
content. Regarding multilinguality, the models studied tend to generate
consistent predictions across languages whether they are explicitly
multilingual or not, although there are outliers that are incongruous with
macro F1, language support, and model size.
|
2501.17655
|
FeatureGS: Eigenvalue-Feature Optimization in 3D Gaussian Splatting for
Geometrically Accurate and Artifact-Reduced Reconstruction
|
cs.CV
|
3D Gaussian Splatting (3DGS) has emerged as a powerful approach for 3D scene
reconstruction using 3D Gaussians. However, neither the centers nor surfaces of
the Gaussians are accurately aligned to the object surface, complicating their
direct use in point cloud and mesh reconstruction. Additionally, 3DGS typically
produces floater artifacts, increasing the number of Gaussians and storage
requirements. To address these issues, we present FeatureGS, which incorporates
an additional geometric loss term based on an eigenvalue-derived 3D shape
feature into the optimization process of 3DGS. The goal is to improve geometric
accuracy and enhance properties of planar surfaces with reduced structural
entropy in local 3D neighborhoods. We present four alternative formulations for
the geometric loss term based on 'planarity' of Gaussians, as well as
'planarity', 'omnivariance', and 'eigenentropy' of Gaussian neighborhoods. We
provide quantitative and qualitative evaluations on 15 scenes of the DTU
benchmark dataset, focusing on the following key aspects: geometric accuracy and
artifact-reduction, measured by the Chamfer distance, and memory efficiency,
evaluated by the total number of Gaussians. Additionally, rendering quality is
monitored by Peak Signal-to-Noise Ratio. FeatureGS achieves a 30% improvement
in geometric accuracy, reduces the number of Gaussians by 90%, and suppresses
floater artifacts, while maintaining comparable photometric rendering quality.
The geometric loss with 'planarity' from Gaussians provides the highest
geometric accuracy, while 'omnivariance' in Gaussian neighborhoods reduces
floater artifacts and the number of Gaussians the most. This makes FeatureGS a
strong method for geometrically accurate, artifact-reduced and memory-efficient
3D scene reconstruction, enabling the direct use of Gaussian centers for
geometric representation.
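The eigenvalue-derived features named above ('planarity', 'omnivariance', 'eigenentropy') are commonly computed from the sorted eigenvalues of a local covariance matrix. One plausible formulation, which may differ in detail from the paper's definitions:

```python
import numpy as np

def eigen_features(points):
    """Eigenvalue-derived shape features of a local 3D neighborhood.

    points: (n, 3). Uses sorted covariance eigenvalues l1 >= l2 >= l3.
    Common definitions (one plausible choice):
      planarity    = (l2 - l3) / l1
      omnivariance = (l1 * l2 * l3) ** (1/3)
      eigenentropy = -sum(e_i * ln e_i), with e_i = l_i / sum(l)
    """
    cov = np.cov(points.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    lam = np.clip(lam, 1e-12, None)          # guard against log(0)
    e = lam / lam.sum()
    planarity = (lam[1] - lam[2]) / lam[0]
    omnivariance = float(np.prod(lam)) ** (1.0 / 3.0)
    eigenentropy = float(-(e * np.log(e)).sum())
    return planarity, omnivariance, eigenentropy

# A flat grid of points (z = 0) should be recognized as highly planar.
g = np.linspace(-1, 1, 11)
xx, yy = np.meshgrid(g, g)
plane = np.stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)], axis=1)
p, o, h = eigen_features(plane)
```

A loss term that rewards high planarity (or low eigenentropy) in each neighborhood pulls Gaussian centers toward locally flat, low-entropy configurations, which is the intuition behind the geometric loss above.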
|
2501.17658
|
An eco-driving approach for ride comfort improvement
|
cs.RO cs.CY
|
New challenges on transport systems are emerging due to the advances that the
current paradigm is experiencing. The breakthrough of the autonomous car brings
concerns about ride comfort, while pollution concerns have also arisen in recent
years. In the model of automated automobiles, drivers are expected to become
passengers, so they will be more prone to suffer from ride discomfort or
motion sickness. Conversely, the eco-driving implications should not be set
aside because of the influence of pollution on climate and people's health. For
that reason, a joint assessment of the aforementioned points would have a
positive impact. Thus, this work presents a self-organised map-based solution
to assess ride comfort features of individuals considering their driving style
from the viewpoint of eco-driving. For this purpose, a previously acquired
dataset from an instrumented car was used to classify drivers regarding the
causes of their lack of ride comfort and eco-friendliness. Once drivers are
classified regarding their driving style, natural-language-based
recommendations are proposed to increase the engagement with the system. Hence,
potential improvements of up to 57.7% in ride comfort evaluation parameters,
as well as up to 47.1% in greenhouse-gas emissions, are expected to be
reached.
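A minimal self-organised map of the kind this pipeline is built on can be sketched as follows; the two synthetic "driving style" clusters and all parameter choices are illustrative, not the paper's trained map:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr=0.5, sigma=1.0, seed=0):
    """Minimal self-organised map: each grid node holds a weight vector that
    is pulled toward the samples, weighted by a Gaussian neighbourhood kernel
    over grid distance (no learning-rate decay, for brevity)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for _ in range(epochs):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood weight
        w += lr * h[:, None] * (x - w)
    return w

def classify(w, x):
    """Assign a sample to the index of its best matching unit."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

# Two synthetic "driving style" clusters (e.g. smooth vs. aggressive pedal use)
rng = np.random.default_rng(1)
smooth = rng.normal([0.2, 0.2], 0.05, size=(50, 2))
aggressive = rng.normal([0.8, 0.8], 0.05, size=(50, 2))
w = train_som(np.vstack([smooth, aggressive]))
```

After training, each map node corresponds to a region of feature space, so looking up a driver's best matching unit amounts to classifying their driving style.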
|
2501.17661
|
Multi-Agent Path Finding Using Conflict-Based Search and
Structural-Semantic Topometric Maps
|
cs.RO
|
As industries increasingly adopt large robotic fleets, there is a pressing
need for computationally efficient, practical, and optimal conflict-free path
planning for multiple robots. Conflict-Based Search (CBS) is a popular method
for multi-agent path finding (MAPF) due to its completeness and optimality;
however, it is often impractical for real-world applications, as it is
computationally intensive to solve and relies on assumptions about agents and
operating environments that are difficult to realize. This article proposes a
solution to overcome computational challenges and practicality issues of CBS by
utilizing structural-semantic topometric maps. Instead of running CBS over
large grid-based maps, the proposed solution runs CBS over a sparse topometric
map containing structural-semantic cells representing intersections, pathways,
and dead ends. This approach significantly accelerates the MAPF process and
reduces the number of conflict resolutions handled by CBS while operating in
continuous time. In the proposed method, robots are assigned time ranges to
move between topometric regions, departing from the traditional CBS assumption
that a robot can move to any connected cell in a single time step. The approach
is validated through real-world multi-robot path-finding experiments and
benchmarking simulations. The results demonstrate that the proposed MAPF method
can be applied to real-world non-holonomic robots and yields significant
improvement in computational efficiency compared to traditional CBS methods
while improving conflict detection and resolution in cases of corridor
symmetries.
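The time-range idea above reduces conflict detection to interval-overlap checks per topometric cell. A sketch, with hypothetical cell names and plan format:

```python
from typing import Dict, List, Tuple

# A plan assigns each robot, per topometric cell it passes through, a time
# range [enter, leave) during which it occupies that cell.
Plan = List[Tuple[str, float, float]]   # (cell_id, enter, leave)

def find_conflicts(plans: Dict[str, Plan]):
    """Return (robot_a, robot_b, cell) triples whose occupancy intervals of
    the same cell overlap, i.e. the conflicts CBS would have to resolve."""
    conflicts = []
    robots = sorted(plans)
    for i, a in enumerate(robots):
        for b in robots[i + 1:]:
            for cell_a, s_a, e_a in plans[a]:
                for cell_b, s_b, e_b in plans[b]:
                    # Two half-open intervals overlap iff each starts
                    # before the other ends.
                    if cell_a == cell_b and s_a < e_b and s_b < e_a:
                        conflicts.append((a, b, cell_a))
    return conflicts

plans = {
    "r1": [("intersection_3", 0.0, 4.0), ("corridor_7", 4.0, 9.0)],
    "r2": [("corridor_7", 6.0, 11.0), ("intersection_3", 11.0, 13.0)],
}
```

Because the map is sparse (intersections, pathways, dead ends rather than grid cells), the number of interval pairs to check stays small, which is where the speedup over grid-based CBS comes from.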
|
2501.17663
|
Landscape Features in Single-Objective Continuous Optimization: Have We
Hit a Wall in Algorithm Selection Generalization?
|
cs.LG
|
The process of identifying the most suitable optimization
algorithm for a specific problem, referred to as algorithm selection (AS),
entails training models that leverage problem landscape features to forecast
algorithm performance. A significant challenge in this domain is ensuring that
AS models can generalize effectively to novel, unseen problems. This study
evaluates the generalizability of AS models based on different problem
representations in the context of single-objective continuous optimization. In
particular, it considers the most widely used Exploratory Landscape Analysis
features, as well as recently proposed Topological Landscape Analysis features,
and features based on deep learning, such as DeepELA, TransOptAS and Doe2Vec.
Our results indicate that when presented with out-of-distribution evaluation
data, none of the feature-based AS models outperform a simple baseline model,
i.e., a Single Best Solver.
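The Single Best Solver baseline mentioned above is simple to state: pick the algorithm with the best average training performance and apply it to every test problem. A sketch with a hypothetical performance matrix:

```python
import numpy as np

def single_best_solver(train_perf, algorithms):
    """The Single Best Solver baseline: select the one algorithm with the
    best mean performance on the training problems and use it everywhere.

    train_perf: (n_problems, n_algorithms) matrix, lower is better.
    """
    return algorithms[int(np.argmin(train_perf.mean(axis=0)))]

# Toy performance matrix (e.g. solution error after a fixed budget);
# the algorithm names are illustrative.
algos = ["CMA-ES", "DE", "PSO"]
perf = np.array([
    [0.10, 0.30, 0.20],
    [0.05, 0.25, 0.40],
    [0.20, 0.10, 0.30],
])
sbs = single_best_solver(perf, algos)
```

An AS model is only useful if its per-problem algorithm choices beat this constant choice, which is exactly the comparison the study reports.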
|
2501.17664
|
Analysis of the Motion Sickness and the Lack of Comfort in Car
Passengers
|
cs.RO
|
Advanced driving assistance systems (ADAS) are primarily designed to increase
driving safety and reduce traffic congestion without paying too much attention
to passenger comfort or motion sickness. However, in view of autonomous cars,
and taking into account that the lack of comfort and motion sickness increase
in passengers, analysis from a comfort perspective is essential for future
car research. The aim of this work is to study in detail how passengers'
comfort evaluation parameters vary depending on the driving style, car or road.
The database used has been developed by compiling the accelerations suffered by
passengers when three drivers cruise two different vehicles on different types
of routes. In order to evaluate both comfort and motion sickness, first, the
numerical values of the main comfort evaluation variables reported in the
literature have been analyzed. Moreover, a complementary statistical analysis
of probability density and a power spectral analysis are performed. Finally,
quantitative results are compared with passenger qualitative feedback. The
results show the high dependence of the comfort evaluation variables' values on
the road type. In addition, it has been demonstrated that the driving style and
vehicle dynamics amplify or attenuate those values. Additionally, it has been
demonstrated that contributions from longitudinal and lateral accelerations
have a much greater effect on the lack of comfort than vertical ones. Finally,
based on the concrete results obtained, a new experimental campaign is
proposed.
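The power spectral analysis mentioned above can be illustrated with a bare FFT periodogram that recovers the dominant sway frequency of a synthetic acceleration trace; this is an illustration of the technique, not the paper's exact pipeline:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency of an acceleration signal via an FFT periodogram,
    a bare-bones stand-in for a full power spectral analysis."""
    n = len(signal)
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(psd)]

# Synthetic lateral acceleration: a 0.25 Hz sway (low frequencies in roughly
# the 0.1-0.5 Hz band are associated with motion sickness) plus noise.
fs = 50.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
acc = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)
f_peak = dominant_frequency(acc, fs)
```

Locating where the spectral energy concentrates is what lets such an analysis separate motion-sickness-relevant low-frequency content from higher-frequency discomfort.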
|
2501.17665
|
Planning with Vision-Language Models and a Use Case in Robot-Assisted
Teaching
|
cs.RO cs.AI
|
Automating the generation of Planning Domain Definition Language (PDDL) with
Large Language Models (LLMs) opens a new research topic in AI planning,
particularly for complex real-world tasks. This paper introduces Image2PDDL, a
novel framework that leverages Vision-Language Models (VLMs) to automatically
convert images of initial states and descriptions of goal states into PDDL
problems. By providing a PDDL domain alongside visual inputs, Image2PDDL
addresses key challenges in bridging perceptual understanding with symbolic
planning, reducing the expertise required to create structured problem
instances, and improving scalability across tasks of varying complexity. We
evaluate the framework on various domains, including standard planning domains
like blocksworld and sliding tile puzzles, using datasets with multiple
difficulty levels. Performance is assessed on syntax correctness, ensuring
grammar and executability, and content correctness, verifying accurate state
representation in generated PDDL problems. The proposed approach demonstrates
promising results across diverse task complexities, suggesting its potential
for broader applications in AI planning. We will discuss a potential use case
in robot-assisted teaching of students with Autism Spectrum Disorder.
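The target representation here is a PDDL problem file. A sketch of assembling one from extracted init and goal facts follows; the blocksworld predicates and the helper are illustrative, not Image2PDDL's actual output:

```python
def make_pddl_problem(name, domain, objects, init, goal):
    """Assemble a PDDL problem string from facts that a VLM might extract
    from an image (init) and from a textual goal description (goal)."""
    objs = " ".join(objects)
    init_s = "\n    ".join(f"({f})" for f in init)
    goal_s = "\n    ".join(f"({f})" for f in goal)
    return (
        f"(define (problem {name}) (:domain {domain})\n"
        f"  (:objects {objs})\n"
        f"  (:init\n    {init_s})\n"
        f"  (:goal (and\n    {goal_s})))\n"
    )

# Hypothetical blocksworld instance: block c sits on a; goal is the stack a/b/c.
problem = make_pddl_problem(
    "stack-3", "blocksworld",
    ["a", "b", "c"],
    ["ontable a", "ontable b", "on c a", "clear b", "clear c", "handempty"],
    ["on a b", "on b c"],
)
```

Checking that the output parses (balanced parentheses, known predicates) corresponds to the syntax-correctness criterion the paper evaluates, while comparing the facts against ground truth corresponds to content correctness.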
|
2501.17666
|
An Intelligent System-on-a-Chip for a Real-Time Assessment of Fuel
Consumption to Promote Eco-Driving
|
cs.RO
|
Pollution that originates from automobiles is a concern in the current world,
not only because of global warming, but also due to the harmful effects on
people's health and lives. Despite regulations on exhaust gas emissions being
applied, minimizing unsuitable driving habits that cause elevated fuel
consumption and emissions would achieve further reductions. For that reason,
this work proposes a self-organized map (SOM)-based intelligent system in order
to provide drivers with eco-driving-intended driving style (DS)
recommendations. The development of the DS advisor uses driving data from the
Uyanik instrumented car. The system classifies drivers regarding the underlying
causes of non-optimal DSs from the eco-driving viewpoint. When compared with
other solutions, the main advantage of this approach is the personalization of
the recommendations that are provided to motorists, comprising the handling of
the pedals and the gearbox, with potential improvements in both fuel
consumption and emissions ranging from 9.5\% to 31.5\%, or even higher
for drivers that are strongly engaged with the system. It was successfully
implemented using a field-programmable gate array (FPGA) device of the Xilinx
ZynQ programmable system-on-a-chip (PSoC) family. This SOM-based system allows
for real-time implementation, state-of-the-art timing performances, and low
power consumption, which are suitable for developing advanced driving
assistance systems (ADASs).
|
2501.17667
|
CAMP in the Odyssey: Provably Robust Reinforcement Learning with
Certified Radius Maximization
|
cs.LG cs.CR
|
Deep reinforcement learning (DRL) has gained widespread adoption in control
and decision-making tasks due to its strong performance in dynamic
environments. However, DRL agents are vulnerable to noisy observations and
adversarial attacks, and concerns about the adversarial robustness of DRL
systems have emerged. Recent efforts have focused on addressing these
robustness issues by establishing rigorous theoretical guarantees for the
returns achieved by DRL agents in adversarial settings. Among these approaches,
policy smoothing has proven to be an effective and scalable method for
certifying the robustness of DRL agents. Nevertheless, existing certifiably
robust DRL methods rely on policies trained with simple Gaussian augmentations,
resulting in a suboptimal trade-off between certified robustness and certified
return. To address this issue, we introduce a novel paradigm dubbed
\texttt{C}ertified-r\texttt{A}dius-\texttt{M}aximizing \texttt{P}olicy
(\texttt{CAMP}) training. \texttt{CAMP} is designed to enhance DRL policies,
achieving better utility without compromising provable robustness. By
leveraging the insight that the global certified radius can be derived from
local certified radii based on training-time statistics, \texttt{CAMP}
formulates a surrogate loss related to the local certified radius and optimizes
the policy guided by this surrogate loss. We also introduce \textit{policy
imitation} as a novel technique to stabilize \texttt{CAMP} training.
Experimental results demonstrate that \texttt{CAMP} significantly improves the
robustness-return trade-off across various tasks. Based on the results,
\texttt{CAMP} can achieve up to twice the certified expected return compared to
that of baselines. Our code is available at
https://github.com/NeuralSec/camp-robust-rl.
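For context, policy smoothing builds on Gaussian randomized smoothing, whose classic certified-radius formula can be computed with the standard library alone. This is the background formula; CAMP's exact certificates are defined in the paper:

```python
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """Gaussian-smoothing certified radius in the style of randomized
    smoothing: R = sigma/2 * (Phi^{-1}(p_a) - Phi^{-1}(p_b)), where p_a and
    p_b bound the top-two action probabilities under Gaussian noise."""
    ppf = NormalDist().inv_cdf   # inverse standard normal CDF
    return 0.5 * sigma * (ppf(p_a) - ppf(p_b))

r = certified_radius(p_a=0.9, p_b=0.1, sigma=1.0)
```

The radius grows as the smoothed policy becomes more confident (p_a up, p_b down), so a training loss that pushes these local radii up, as CAMP's surrogate loss does, directly enlarges the certified region.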
|
2501.17670
|
Distinguished Quantized Guidance for Diffusion-based Sequence
Recommendation
|
cs.IR
|
Diffusion models (DMs) have emerged as promising approaches for sequential
recommendation due to their strong ability to model data distributions and
generate high-quality items. Existing work typically adds noise to the next
item and progressively denoises it guided by the user's interaction sequence,
generating items that closely align with user interests. However, we identify
two key issues in this paradigm. First, the sequences are often heterogeneous
in length and content, exhibiting noise due to stochastic user behaviors. Using
such sequences as guidance may hinder DMs from accurately understanding user
interests. Second, DMs are prone to data bias and tend to generate only the
popular items that dominate the training dataset, thus failing to meet the
personalized needs of different users. To address these issues, we propose
Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation
(DiQDiff), which aims to extract robust guidance to understand user interests
and generate distinguished items for personalized user interests within DMs. To
extract robust guidance, DiQDiff introduces Semantic Vector Quantization (SVQ)
to quantize sequences into semantic vectors (e.g., collaborative signals and
category interests) using a codebook, which can enrich the guidance to better
understand user interests. To generate distinguished items, DiQDiff
personalizes the generation through Contrastive Discrepancy Maximization (CDM),
which maximizes the distance between denoising trajectories using contrastive
loss to prevent biased generation for different users. Extensive experiments
are conducted to compare DiQDiff with multiple baseline models across four
widely-used datasets. The superior recommendation performance of DiQDiff
against leading approaches demonstrates its effectiveness in sequential
recommendation tasks.
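The Semantic Vector Quantization step rests on standard nearest-codebook quantization. A minimal sketch, with a toy codebook rather than DiQDiff's learned one:

```python
import numpy as np

def semantic_quantize(seq_emb, codebook):
    """Quantize a batch of sequence embeddings to their nearest codebook
    vectors, the basic vector-quantization step behind SVQ.

    seq_emb: (b, d); codebook: (K, d).
    Returns (indices, quantized vectors).
    """
    d2 = ((seq_emb[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = np.argmin(d2, axis=1)
    return idx, codebook[idx]

# Toy codebook of "semantic" directions (illustrative, not a learned one)
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
seqs = np.array([[0.9, 0.1], [0.1, 0.8]])
idx, quantized = semantic_quantize(seqs, codebook)
```

Replacing a noisy, variable-length interaction sequence with its nearest codebook vector yields a denoised, fixed-size guidance signal, which is the robustness argument the abstract makes.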
|
2501.17676
|
Explainable Artificial Intelligence for identifying profitability
predictors in Financial Statements
|
cs.LG
|
The interconnected nature of the economic variables influencing a firm's
performance makes the prediction of a company's earning trend a challenging
task. Existing methodologies often rely on simplistic models and financial
ratios failing to capture the complexity of interacting influences. In this
paper, we apply Machine Learning techniques to raw financial statements data
taken from AIDA, a Database comprising Italian listed companies' data from 2013
to 2022.
We present a comparative study of different models and following the European
AI regulations, we complement our analysis by applying explainability
techniques to the proposed models. In particular, we propose adopting an
eXplainable Artificial Intelligence method based on Game Theory to identify the
most sensitive features and make the result more interpretable.
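The Game-Theory-based XAI method alluded to is in the family of Shapley-value explanations. Exact Shapley values for a tiny additive "profitability" game (illustrative feature names, not the paper's model) can be computed by coalition enumeration:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by coalition enumeration: each feature's average
    marginal contribution to the value function, the game-theoretic idea
    behind SHAP-style explanations."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(frozenset(s) | {f}) - value(frozenset(s)))
        phi[f] = total
    return phi

# Toy additive game: each feature contributes independently, so Shapley
# values must recover the individual contributions exactly.
contrib = {"margin": 0.5, "leverage": -0.2, "turnover": 0.3}
def v(coalition):
    return sum(contrib[f] for f in coalition)

phi = shapley_values(list(contrib), v)
```

Enumeration is exponential in the number of features; practical SHAP implementations approximate these values, but the attribution they estimate is exactly this quantity.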
|
2501.17683
|
Temperature-Free Loss Function for Contrastive Learning
|
cs.LG
|
As one of the most promising methods in self-supervised learning, contrastive
learning has achieved a series of breakthroughs across numerous fields. A
predominant approach to implementing contrastive learning is applying InfoNCE
loss: By capturing the similarities between pairs, InfoNCE loss enables
learning the representation of data. Albeit its success, adopting InfoNCE loss
requires tuning a temperature, which is a core hyperparameter for calibrating
similarity scores. Although several studies have emphasized its significance
and its impact on performance, searching for a valid temperature requires
extensive trial-and-error experiments, which increases the difficulty of
adopting InfoNCE loss. To address this difficulty, we propose a novel method to
deploy InfoNCE loss without temperature. Specifically, we replace temperature
scaling with the inverse hyperbolic tangent function, resulting in a modified
InfoNCE loss. In addition to hyperparameter-free deployment, we observed that
the proposed method even yielded a performance gain in contrastive learning.
Our detailed theoretical analysis discovers that the current practice of
temperature scaling in InfoNCE loss causes serious problems in gradient
descent, whereas our method provides desirable gradient properties. The
proposed method was validated on five benchmarks on contrastive learning,
yielding satisfactory results without temperature tuning.
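One plausible reading of the construction above: map cosine similarities through artanh instead of dividing by a temperature, then apply the usual softmax cross-entropy. This is a sketch; the paper's exact loss may differ:

```python
import numpy as np

def infonce_artanh(sims, pos_index):
    """Temperature-free InfoNCE sketch: cosine similarities in (-1, 1) are
    passed through the inverse hyperbolic tangent instead of being divided
    by a temperature, then fed to softmax cross-entropy."""
    logits = np.arctanh(np.clip(sims, -1 + 1e-7, 1 - 1e-7))
    logits -= logits.max()                         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[pos_index]

sims = np.array([0.8, 0.1, -0.3])     # positive pair first, two negatives
loss = infonce_artanh(sims, pos_index=0)
better = infonce_artanh(np.array([0.95, 0.1, -0.3]), 0)
```

Because artanh is steep near plus/minus 1, it naturally amplifies confident similarities the way a small temperature would, without any hyperparameter to tune.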
|
2501.17688
|
ContourFormer: Real-Time Contour-Based End-to-End Instance Segmentation
Transformer
|
cs.CV cs.AI
|
This paper presents Contourformer, a real-time contour-based instance
segmentation algorithm. The method is fully based on the DETR paradigm and
achieves end-to-end inference through iterative and progressive mechanisms to
optimize contours. To improve efficiency and accuracy, we develop two novel
techniques: sub-contour decoupling mechanisms and contour fine-grained
distribution refinement. In the sub-contour decoupling mechanism, we propose a
deformable attention-based module that adaptively selects sampling regions
based on the current predicted contour, enabling more effective capturing of
object boundary information. Additionally, we design a multi-stage optimization
process to enhance segmentation precision by progressively refining
sub-contours. The contour fine-grained distribution refinement technique aims
to further improve the ability to express fine details of contours. These
innovations enable Contourformer to achieve stable and precise segmentation for
each instance while maintaining real-time performance. Extensive experiments
demonstrate the superior performance of Contourformer on multiple benchmark
datasets, including SBD, COCO, and KINS. We conduct comprehensive evaluations
and comparisons with existing state-of-the-art methods, showing significant
improvements in both accuracy and inference speed. This work provides a new
solution for contour-based instance segmentation tasks and lays a foundation
for future research, with the potential to become a strong baseline method in
this field.
|
2501.17689
|
Machine-Learning-Enhanced Optimization of Noise-Resilient Variational
Quantum Eigensolvers
|
quant-ph cs.LG hep-lat
|
Variational Quantum Eigensolvers (VQEs) are a powerful class of hybrid
quantum-classical algorithms designed to approximate the ground state of a
quantum system described by its Hamiltonian. VQEs hold promise for various
applications, including lattice field theory. However, the inherent noise of
Noisy Intermediate-Scale Quantum (NISQ) devices poses a significant challenge
for running VQEs as these algorithms are particularly susceptible to noise,
e.g., measurement shot noise and hardware noise.
In a recent work, it was proposed to enhance the classical optimization of
VQEs with Gaussian Processes (GPs) and Bayesian Optimization, as these
machine-learning techniques are well-suited for handling noisy data. In these
proceedings, we provide additional insights into this new algorithm and present
further numerical experiments. In particular, we examine the impact of hardware
noise and error mitigation on the algorithm's performance. We validate the
algorithm using classical simulations of quantum hardware, including hardware
noise benchmarks, which have not been considered in previous works. Our
numerical experiments demonstrate that GP-enhanced algorithms can outperform
state-of-the-art baselines, laying the foundation for future research on
deploying these techniques to real quantum hardware and lattice field theory
setups.
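The GP ingredient can be illustrated with plain numpy: fit a GP posterior mean to shot-noisy samples of a toy one-parameter energy landscape and read off the minimizer. This illustrates the technique, not the paper's algorithm:

```python
import numpy as np

def rbf(x1, x2, length=0.5):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=0.05):
    """GP regression posterior mean: a denoised surrogate of a noisy
    objective (here standing in for shot-noisy VQE energy estimates)."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_query, x_train) @ alpha

# Noisy samples of a toy one-parameter "energy landscape" E(theta) = -cos(theta)
rng = np.random.default_rng(0)
thetas = np.linspace(-np.pi, np.pi, 25)
energies = -np.cos(thetas) + 0.05 * rng.standard_normal(thetas.size)
grid = np.linspace(-np.pi, np.pi, 201)
mu = gp_posterior_mean(thetas, energies, grid)
theta_best = grid[np.argmin(mu)]
```

The explicit noise term in the kernel matrix is what makes the GP average out shot noise rather than interpolate it, which is exactly the property that suits it to optimizing VQE cost functions.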
|
2501.17690
|
Segmentation-Aware Generative Reinforcement Network (GRN) for Tissue
Layer Segmentation in 3-D Ultrasound Images for Chronic Low-back Pain (cLBP)
Assessment
|
cs.CV cs.AI cs.LG
|
We introduce a novel segmentation-aware joint training framework called
generative reinforcement network (GRN) that integrates segmentation loss
feedback to optimize both image generation and segmentation performance in a
single stage. An image enhancement technique called segmentation-guided
enhancement (SGE) is also developed, where the generator produces images
tailored specifically for the segmentation model. Two variants of GRN were also
developed, including GRN for sample-efficient learning (GRN-SEL) and GRN for
semi-supervised learning (GRN-SSL). GRN's performance was evaluated using a
dataset of 69 fully annotated 3D ultrasound scans from 29 subjects. The
annotations included six anatomical structures: dermis, superficial fat,
superficial fascial membrane (SFM), deep fat, deep fascial membrane (DFM), and
muscle. Our results show that GRN-SEL with SGE reduces labeling efforts by up
to 70% while achieving a 1.98% improvement in the Dice Similarity Coefficient
(DSC) compared to models trained on fully labeled datasets. GRN-SEL alone
reduces labeling efforts by 60%, GRN-SSL with SGE decreases labeling
requirements by 70%, and GRN-SSL alone by 60%, all while maintaining
performance comparable to fully supervised models. These findings suggest the
effectiveness of the GRN framework in optimizing segmentation performance with
significantly less labeled data, offering a scalable and efficient solution for
ultrasound image analysis and reducing the burdens associated with data
annotation.
|
2501.17699
|
PulmoFusion: Advancing Pulmonary Health with Efficient Multi-Modal
Fusion
|
eess.IV cs.AI cs.CV
|
Traditional remote spirometry lacks the precision required for effective
pulmonary monitoring. We present a novel, non-invasive approach using
multimodal predictive models that integrate RGB or thermal video data with
patient metadata. Our method leverages energy-efficient Spiking Neural Networks
(SNNs) for the regression of Peak Expiratory Flow (PEF) and classification of
Forced Expiratory Volume (FEV1) and Forced Vital Capacity (FVC), using
lightweight CNNs to overcome SNN limitations in regression tasks. Multimodal
data integration is improved with a Multi-Head Attention Layer, and we employ
K-Fold validation and ensemble learning to boost robustness. Using thermal
data, our SNN models achieve 92% accuracy on a breathing-cycle basis and 99.5%
patient-wise. PEF regression models attain Relative RMSEs of 0.11 (thermal) and
0.26 (RGB), with an MAE of 4.52% for FEV1/FVC predictions, establishing
state-of-the-art performance. Code and dataset are available at
https://github.com/ahmed-sharshar/RespiroDynamics.git
|
2501.17701
|
Decision-Theoretic Approaches in Learning-Augmented Algorithms
|
cs.DS cs.LG
|
In this work, we initiate the systematic study of decision-theoretic metrics in
the design and analysis of algorithms with machine-learned predictions. We
introduce approaches based on both deterministic measures such as
distance-based evaluation, that help us quantify how close the algorithm is to
an ideal solution, as well as stochastic measures that allow us to balance the
trade-off between the algorithm's performance and the risk associated with the
imperfect oracle. These approaches help us quantify the algorithmic performance
across the entire spectrum of prediction error, unlike several previous works
that focus on a few, often extreme, values of the error. We apply these
techniques to two well-known problems from resource allocation and online
decision making, namely contract scheduling and 1-max search.
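The 1-max search setting mentioned above can be made concrete with a small sketch: a hedged, illustrative threshold rule in which the algorithm trusts a predicted maximum to a tunable degree. The function name and the interpolation scheme are our own illustrative assumptions, not necessarily the paper's algorithm.

```python
import math

def one_max_search(prices, prediction, trust, m, M):
    """Threshold rule for 1-max search: accept the first price at or above the
    threshold; if none qualifies, we are forced to take the final price.

    The threshold interpolates between the prediction (fully trusted when
    trust=1) and the classical prediction-free threshold sqrt(m*M), which is
    competitive when prices are only known to lie in [m, M]."""
    threshold = trust * prediction + (1 - trust) * math.sqrt(m * M)
    for p in prices[:-1]:
        if p >= threshold:
            return p
    return prices[-1]  # forced acceptance at the end of the sequence
```

With full trust, the rule waits for the predicted maximum; with no trust, it falls back to the classical square-root threshold, illustrating the consistency-robustness trade-off such decision-theoretic analyses quantify.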
|
2501.17703
|
Critique Fine-Tuning: Learning to Critique is More Effective than
Learning to Imitate
|
cs.CL
|
Supervised Fine-Tuning (SFT) is commonly used to train language models to
imitate annotated responses for given instructions. In this paper, we challenge
this paradigm and propose Critique Fine-Tuning (CFT), a strategy where models
learn to critique noisy responses rather than simply imitate correct ones.
Inspired by human learning processes that emphasize critical thinking, CFT
encourages deeper analysis and nuanced understanding, traits often overlooked by
standard SFT. To validate the effectiveness of CFT, we construct a 50K-sample
dataset from WebInstruct, using GPT-4o as the teacher to generate critiques in
the form of ([query; noisy response], critique). CFT on this dataset yields a
consistent 4-10% improvement over SFT on six math benchmarks with different
base models like Qwen2.5, Qwen2.5-Math and DeepSeek-Math. We further expand to
MetaMath and NuminaMath datasets and observe similar gains over SFT. Notably,
our model Qwen2.5-Math-CFT requires only 1 hour of training on 8xH100 GPUs over the 50K
examples. It can match or outperform strong competitors like
Qwen2.5-Math-Instruct on most benchmarks, which use over 2M samples. Moreover,
it can match the performance of SimpleRL, which is a deepseek-r1 replication
trained with 140x more compute. Ablation studies show that CFT is robust to the
source of noisy response and teacher critique model. Through these findings, we
argue that CFT offers a more effective alternative to advance the reasoning of
language models.
|
2501.17704
|
Inferring Implicit Goals Across Differing Task Models
|
cs.AI cs.RO cs.SY eess.SY
|
One of the significant challenges to generating value-aligned behavior is to
not only account for the specified user objectives but also any implicit or
unspecified user requirements. The existence of such implicit requirements
could be particularly common in settings where the user's understanding of the
task model may differ from the agent's estimate of the model. Under this
scenario, the user may incorrectly expect some agent behavior to be inevitable
or guaranteed. This paper addresses such expectation mismatch in the presence
of differing models by capturing the possibility of unspecified user subgoals
in a task modeled as a Markov Decision Process (MDP) and querying for them as
required. Our method identifies bottleneck states and uses them as
candidates for potential implicit subgoals. We then introduce a querying
strategy that will generate the minimal number of queries required to identify
a policy guaranteed to achieve the underlying goal. Our empirical evaluations
demonstrate the effectiveness of our approach in inferring and achieving
unstated goals across various tasks.
|
2501.17706
|
Source-Channel Separation Theorems for Distortion Perception Coding
|
cs.IT math.IT
|
It is well known that separation between lossy source coding and channel
coding is asymptotically optimal under classical additive distortion measures.
Recently, coding under a new class of quality considerations, often referred to
as perception or realism, has attracted significant attention due to its close
connection to neural generative models and semantic communications. In this
work, we revisit source-channel separation under the consideration of
distortion-perception. We show that when the perception quality is measured on
the block level, i.e., in the strong-sense, the optimality of separation still
holds when common randomness is shared between the encoder and the decoder;
however, separation is no longer optimal when such common randomness is not
available. In contrast, when the perception quality is the average per-symbol
measure, i.e., in the weak-sense, the optimality of separation holds regardless
of the availability of common randomness.
|
2501.17711
|
STGCN-LSTM for Olympic Medal Prediction: Dynamic Power Modeling and
Causal Policy Optimization
|
cs.LG
|
This paper proposes a novel hybrid model, STGCN-LSTM, to forecast Olympic
medal distributions by integrating the spatio-temporal relationships among
countries and the long-term dependencies of national performance. The
Spatial-Temporal Graph Convolution Network (STGCN) captures geographic and
interactive factors, such as coaching exchanges and socio-economic links, while
the Long Short-Term Memory (LSTM) module models historical trends in medal
counts, economic data, and demographics. To address zero-inflated outputs
(i.e., the disparity between countries that consistently win medals and those
that have never won), a Zero-Inflated Compound Poisson (ZICP) framework is
incorporated to separate random zeros from structural zeros, providing a
clearer view of potential breakthrough performances. Validation includes
historical backtracking, policy shock simulations, and causal inference checks,
confirming the robustness of the proposed method. Results shed light on the
influence of coaching mobility, event specialization, and strategic investment
on medal forecasts, offering a data-driven foundation for optimizing sports
policies and resource allocation in diverse Olympic contexts.
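The zero-separation idea behind the ZICP component can be illustrated with a plain zero-inflated Poisson (the compound variant is omitted for brevity); the function names and parameterization below are our own illustrative assumptions, not the paper's implementation.

```python
import math

def zip_logpmf(y, lam, pi):
    """Log-probability under a zero-inflated Poisson: with probability pi the
    count is a structural zero; otherwise it is Poisson(lam)."""
    if y == 0:
        return math.log(pi + (1 - pi) * math.exp(-lam))
    return math.log(1 - pi) + y * math.log(lam) - lam - math.lgamma(y + 1)

def prob_structural_zero(lam, pi):
    """Posterior probability that an observed zero is structural (a country
    that 'never wins') rather than a random Poisson zero."""
    random_zero = (1 - pi) * math.exp(-lam)
    return pi / (pi + random_zero)
```

Separating the two zero types in this way is what lets the model flag countries whose zero medal count looks random rather than structural, i.e. potential breakthrough candidates.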
|
2501.17715
|
RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts
|
cs.CL
|
User interactions with conversational agents (CAs) evolve in the era of
heavily guardrailed large language models (LLMs). As users push beyond
programmed boundaries to explore and build relationships with these systems,
there is a growing concern regarding the potential for unauthorized access or
manipulation, commonly referred to as "jailbreaking." Moreover, with CAs that
possess highly human-like qualities, users show a tendency toward initiating
intimate sexual interactions or attempting to tame their chatbots. To capture
and reflect these in-the-wild interactions into chatbot designs, we propose
RICoTA, a Korean red teaming dataset that consists of 609 prompts challenging
LLMs with in-the-wild user-made dialogues capturing jailbreak attempts. We
utilize user-chatbot conversations that were self-posted on a Korean
Reddit-like community, containing specific testing and gaming intentions with a
social chatbot. With these prompts, we aim to evaluate LLMs' ability to
identify the type of conversation and users' testing purposes to derive chatbot
design implications for mitigating jailbreaking risks. Our dataset will be made
publicly available via GitHub.
|
2501.17718
|
Learning Semantic Facial Descriptors for Accurate Face Animation
|
cs.CV
|
Face animation is a challenging task. Existing model-based methods (utilizing
3DMMs or landmarks) often result in a model-like reconstruction effect, which
doesn't effectively preserve identity. Conversely, model-free approaches face
challenges in attaining a decoupled and semantically rich feature space,
thereby making accurate motion transfer difficult to achieve. We introduce the
semantic facial descriptors in learnable disentangled vector space to address
the dilemma. The approach involves decoupling the facial space into identity
and motion subspaces while endowing each of them with semantics by learning
complete orthogonal basis vectors. We obtain basis vector coefficients by
employing an encoder on the source and driving faces, leading to effective
facial descriptors in the identity and motion subspaces. Ultimately, these
descriptors can be recombined as latent codes to animate faces. Our approach
successfully addresses the issue of model-based methods' limitations in
high-fidelity identity and the challenges faced by model-free methods in
accurate motion transfer. Extensive experiments are conducted on three
challenging benchmarks (i.e. VoxCeleb, HDTF, CelebV). Comprehensive
quantitative and qualitative results demonstrate that our model outperforms
SOTA methods with superior identity preservation and motion transfer.
|
2501.17720
|
Parsimonious Hawkes Processes for temporal networks modelling
|
cs.SI physics.data-an
|
Temporal networks are characterised by interdependent link events between
nodes, forming ordered sequences of links that may represent specific
information flows in the system. Nevertheless, representing temporal networks
using discrete snapshots in time partially cancels the effect of time-ordered
links on each other, while continuous time models, such as Poisson or Hawkes
processes, can describe the full influence between all the potential pairs of
links at all times. In this paper, we introduce a continuous Hawkes temporal
network model which accounts for both a community structure in the aggregate
network and strong heterogeneity in the activity of individual nodes, thus
capturing highly heterogeneous clusters with isolated high-activity influencer
nodes, communities, and low-activity nodes. Our model
improves the prediction performance of previously available continuous time
network models, and obtains a systematic increase in log-likelihood.
Characterising the direct interaction between influencer nodes and communities,
we can provide a more detailed description of the system that can better
outline the sequence of activations in the components of the system
represented by the temporal network.
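The building block of such models is the conditional intensity. A minimal sketch of a multivariate Hawkes intensity with an exponential kernel follows; the node-level baselines and pairwise excitation matrix assumed here are a generic stand-in for the paper's community and influencer structure.

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a multivariate Hawkes process with an
    exponential kernel:

        lambda_i(t) = mu_i + sum over past events (j, t_j) of
                      alpha[i][j] * beta * exp(-beta * (t - t_j))

    events: list of (node, time) pairs with time < t
    mu: baseline rate per node; alpha: excitation matrix between nodes."""
    lam = list(mu)
    for j, tj in events:
        decay = beta * math.exp(-beta * (t - tj))
        for i in range(len(mu)):
            lam[i] += alpha[i][j] * decay
    return lam
```

Each past link event raises the intensity of the nodes it excites, with an influence that decays exponentially in time; this is the mechanism through which time-ordered links influence each other in continuous time.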
|
2501.17725
|
Using Code Generation to Solve Open Instances of Combinatorial Design
Problems
|
cs.AI cs.CL cs.DM math.CO
|
The Handbook of Combinatorial Designs catalogs many types of combinatorial
designs, together with lists of open instances for which existence has not yet
been determined. We develop a constructive protocol CPro1, which uses Large
Language Models (LLMs) to generate code that constructs combinatorial designs
and resolves some of these open instances. The protocol starts from a
definition of a particular type of design, and a verifier that reliably
confirms whether a proposed design is valid. The LLM selects strategies and
implements them in code, and scaffolding provides automated hyperparameter
tuning and execution feedback using the verifier. Most generated code fails,
but by generating many candidates, the protocol automates exploration of a
variety of standard methods (e.g. simulated annealing, genetic algorithms) and
experimentation with variations (e.g. cost functions) to find successful
approaches. Testing on 16 different types of designs, CPro1 constructs
solutions to open instances for 6 of them: Symmetric and Skew Weighing
Matrices, Equidistant Permutation Arrays, Packing Arrays, Balanced Ternary
Designs, and Florentine Rectangles.
|
2501.17726
|
VICCA: Visual Interpretation and Comprehension of Chest X-ray Anomalies
in Generated Report Without Human Feedback
|
cs.CV cs.CL
|
As artificial intelligence (AI) becomes increasingly central to healthcare,
the demand for explainable and trustworthy models is paramount. Current report
generation systems for chest X-rays (CXR) often lack mechanisms for validating
outputs without expert oversight, raising concerns about reliability and
interpretability. To address these challenges, we propose a novel multimodal
framework designed to enhance the semantic alignment and localization accuracy
of AI-generated medical reports. Our framework integrates two key modules: a
Phrase Grounding Model, which identifies and localizes pathologies in CXR
images based on textual prompts, and a Text-to-Image Diffusion Module, which
generates synthetic CXR images from prompts while preserving anatomical
fidelity. By comparing features between the original and generated images, we
introduce a dual-scoring system: one score quantifies localization accuracy,
while the other evaluates semantic consistency. This approach significantly
outperforms existing methods, achieving state-of-the-art results in pathology
localization and text-to-image alignment. The integration of phrase grounding
with diffusion models, coupled with the dual-scoring evaluation system,
provides a robust mechanism for validating report quality, paving the way for
more trustworthy and transparent AI in medical imaging.
|
2501.17727
|
Sparse Autoencoders Can Interpret Randomly Initialized Transformers
|
cs.LG
|
Sparse autoencoders (SAEs) are an increasingly popular technique for
interpreting the internal representations of transformers. In this paper, we
apply SAEs to 'interpret' random transformers, i.e., transformers where the
parameters are sampled IID from a Gaussian rather than trained on text data. We
find that random and trained transformers produce similarly interpretable SAE
latents, and we confirm this finding quantitatively using an open-source
auto-interpretability pipeline. Further, we find that SAE quality metrics are
broadly similar for random and trained transformers. We find that these results
hold across model sizes and layers. We discuss a number of interesting
questions that this work raises for the use of SAEs and auto-interpretability
in the context of mechanistic interpretability.
|
2501.17731
|
Exact characterization of {\epsilon}-Safe Decision Regions for
exponential family distributions and Multi Cost SVM approximation
|
stat.ML cs.AI cs.LG
|
Probabilistic guarantees on the prediction of data-driven classifiers are
necessary to define models that can be considered reliable. This is a key
requirement for modern machine learning in which the goodness of a system is
measured in terms of trustworthiness, clearly dividing what is safe from what
is unsafe. The spirit of this paper is exactly in this direction. First, we
introduce a formal definition of {\epsilon}-Safe Decision Region, a subset of
the input space in which the prediction of a target (safe) class is
probabilistically guaranteed. Second, we prove that, when data come from
exponential family distributions, the form of such a region is analytically
determined and controllable by design parameters, i.e. the probability of
sampling the target class and the confidence on the prediction. However, the
request of having exponential data is not always possible. Inspired by this
limitation, we developed Multi Cost SVM, an SVM based algorithm that
approximates the safe region and is also able to handle unbalanced data. The
research is complemented by experiments and code available for reproducibility.
|
2501.17736
|
Winning Rates of $(n,k)$ Quantum Coset Monogamy Games
|
quant-ph cs.IT math.IT
|
We formulate the $(n,k)$ Coset Monogamy Game, in which two players must
extract complementary information of unequal size ($k$ bits vs. $n-k$ bits)
from a random coset state without communicating. The complementary information
takes the form of random Pauli-X and Pauli-Z errors on subspace states. Our
game generalizes those considered in previous works that deal with the case of
equal information size $(k=\frac{n}{2})$. We prove a convex upper bound of the
information-theoretic winning rate of the $(n,k)$ Coset Monogamy Game in terms
of the subspace rate $R=\frac{k}{n}\in [0,1]$. This bound improves upon
previous results for the case of $R=\frac{1}{2}$. We also prove the
achievability of an optimal winning probability upper bound for the class of
unentangled strategies of the $(n,k)$ Coset Monogamy Game.
|
2501.17737
|
Sparser, Better, Faster, Stronger: Efficient Automatic Differentiation
for Sparse Jacobians and Hessians
|
cs.LG cs.MS
|
From implicit differentiation to probabilistic modeling, Jacobians and
Hessians have many potential use cases in Machine Learning (ML), but
conventional wisdom views them as computationally prohibitive. Fortunately,
these matrices often exhibit sparsity, which can be leveraged to significantly
speed up the process of Automatic Differentiation (AD). This paper presents
advances in Automatic Sparse Differentiation (ASD), starting with a new
perspective on sparsity detection. Our refreshed exposition is based on
operator overloading, able to detect both local and global sparsity patterns,
and naturally avoids dead ends in the control flow graph. We also describe a
novel ASD pipeline in Julia, consisting of independent software packages for
sparsity detection, matrix coloring, and differentiation, which together enable
ASD based on arbitrary AD backends. Our pipeline is fully automatic and
requires no modification of existing code, making it compatible with existing
ML codebases. We demonstrate that this pipeline unlocks Jacobian and Hessian
matrices at scales where they were considered too expensive to compute. On
real-world problems from scientific ML and optimization, we show significant
speed-ups of up to three orders of magnitude. Notably, our ASD pipeline often
outperforms standard AD for one-off computations, once thought impractical due
to slower sparsity detection methods.
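The two core ingredients, sparsity detection via operator overloading and matrix coloring, can be sketched in a few lines. This toy Python version is our own illustration of the general technique, not the Julia packages described in the paper.

```python
class Tracer:
    """Operator-overloading tracer: carries the set of input indices a value
    depends on, so evaluating f on tracers yields a Jacobian sparsity pattern."""
    def __init__(self, deps):
        self.deps = frozenset(deps)
    def _lift(self, other):
        return other.deps if isinstance(other, Tracer) else frozenset()
    def __add__(self, other):
        return Tracer(self.deps | self._lift(other))
    __radd__ = __mul__ = __rmul__ = __sub__ = __add__

def jacobian_sparsity(f, n):
    """Conservative sparsity pattern (one index list per output) of f's Jacobian."""
    outputs = f([Tracer({i}) for i in range(n)])
    return [sorted(t.deps) for t in outputs]

def greedy_coloring(pattern, n):
    """Columns that share no output row receive the same color, so a single
    directional derivative per color recovers all their nonzero entries."""
    rows = [{r for r, row in enumerate(pattern) if j in row} for j in range(n)]
    color = [0] * n
    for j in range(n):
        used = {color[k] for k in range(j) if rows[j] & rows[k]}
        color[j] = min(c for c in range(n) if c not in used)
    return color
```

After coloring, the Jacobian needs only one AD pass per color instead of one per column, which is the source of the reported speed-ups when the pattern is sparse.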
|
2501.17745
|
Dynamics of Transient Structure in In-Context Linear Regression
Transformers
|
cs.LG
|
Modern deep neural networks display striking examples of rich internal
computational structure. Uncovering principles governing the development of
such structure is a priority for the science of deep learning. In this paper,
we explore the transient ridge phenomenon: when transformers are trained on
in-context linear regression tasks with intermediate task diversity, they
initially behave like ridge regression before specializing to the tasks in
their training distribution. This transition from a general solution to a
specialized solution is revealed by joint trajectory principal component
analysis. Further, we draw on the theory of Bayesian internal model selection
to suggest a general explanation for the phenomena of transient structure in
transformers, based on an evolving tradeoff between loss and complexity. We
empirically validate this explanation by measuring the model complexity of our
transformers as defined by the local learning coefficient.
|
2501.17746
|
Predictive Beamforming with Distributed MIMO
|
eess.SP cs.IT math.IT
|
In vehicle-to-everything (V2X) applications, roadside units (RSUs) can be
tasked with both sensing and communication functions to enable sensing-assisted
communications. Recent studies have demonstrated that distance, angle, and
velocity information obtained through sensing can be leveraged to reduce the
overhead associated with communication beam tracking. In this work, we extend
this concept to scenarios involving multiple distributed RSUs and distributed
MIMO (multiple-input multiple-output) systems. We derive the state evolution
model, formulate the extended Kalman-filter equations, and implement predictive
beamforming for distributed MIMO. Simulation results indicate that, when
compared with a co-located massive MIMO antenna array, distributed antennas
lead to more uniform and robust sensing performance, coverage, and data rates,
while the vehicular user is in motion.
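The sensing-assisted pointing step can be sketched as follows: a constant-velocity state prediction followed by a per-RSU azimuth computation. The Kalman-filter covariance bookkeeping is omitted, and all names are our illustrative assumptions rather than the paper's implementation.

```python
import math

def predict_state(pos, vel, dt):
    """Constant-velocity state evolution used by the filter's prediction step:
    the vehicle is assumed to move in a straight line between updates."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def beam_azimuth(rsu, target):
    """Azimuth (radians, from the x-axis) for an RSU to steer its beam
    toward a target position."""
    return math.atan2(target[1] - rsu[1], target[0] - rsu[0])

def predictive_beams(rsus, pos, vel, dt):
    """One pointing angle per distributed RSU, computed from the vehicle
    position predicted dt seconds ahead, avoiding beam-training overhead."""
    nxt = predict_state(pos, vel, dt)
    return [beam_azimuth(r, nxt) for r in rsus]
```

Because each distributed RSU sees the vehicle from a different geometry, the predicted position yields a distinct steering angle per unit, which is what gives distributed antennas their more uniform coverage.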
|
2501.17749
|
Early External Safety Testing of OpenAI's o3-mini: Insights from the
Pre-Deployment Evaluation
|
cs.SE cs.AI
|
Large Language Models (LLMs) have become an integral part of our daily lives.
However, they impose certain risks, including those that can harm individuals'
privacy, perpetuate biases and spread misinformation. These risks highlight the
need for robust safety mechanisms, ethical guidelines, and thorough testing to
ensure their responsible deployment. Safety of LLMs is a key property that
needs to be thoroughly tested before a model is deployed and made accessible
to general users. This paper reports the external safety testing experience
conducted by researchers from Mondragon University and University of Seville on
OpenAI's new o3-mini LLM as part of OpenAI's early access for safety testing
program. In particular, we apply our tool, ASTRAL, to automatically and
systematically generate up-to-date unsafe test inputs (i.e., prompts) that
help us test and assess different safety categories of LLMs. We automatically
generate and execute a total of 10,080 unsafe test inputs on an early o3-mini
beta version. After manually verifying the test cases classified as unsafe by
ASTRAL, we identify a total of 87 actual instances of unsafe LLM behavior. We
highlight key insights and findings uncovered during the pre-deployment
external testing phase of OpenAI's latest LLM.
|
2501.17754
|
Analysis of the navigation of magnetic microrobots through cerebral
bifurcations
|
math.NA cs.NA cs.RO cs.SY eess.SY physics.bio-ph
|
Local administration of thrombolytics in ischemic stroke could accelerate
clot lysis and the ensuing reperfusion while minimizing the side effects of
systemic administration. Medical microrobots could be injected into the
bloodstream and magnetically navigated to the clot for administering the drugs
directly to the target. The magnetic manipulation required to navigate medical
microrobots will depend on various parameters such as the microrobot's size, the
blood velocity, and the imposed magnetic field gradients. Numerical simulation
was used to study the motion of magnetically controlled microrobots flowing
through representative cerebral bifurcations, for predicting the magnetic
gradients required to navigate the microrobots from the injection point until
the target location. Upon thorough validation of the model against several
independent analytical and experimental results, the model was used to generate
maps and a predictive equation providing quantitative information on the
required magnetic gradients, for different scenarios. The developed maps and
predictive equation are crucial to inform the design, operation and
optimization of magnetic navigation systems for healthcare applications.
|
2501.17755
|
AI Governance through Markets
|
econ.GN cs.AI q-fin.EC
|
This paper argues that market governance mechanisms should be considered a
key approach in the governance of artificial intelligence (AI), alongside
traditional regulatory frameworks. While current governance approaches have
predominantly focused on regulation, we contend that market-based mechanisms
offer effective incentives for responsible AI development. We examine four
emerging vectors of market governance: insurance, auditing, procurement, and
due diligence, demonstrating how these mechanisms can affirm the relationship
between AI risk and financial risk while addressing capital allocation
inefficiencies. While we do not claim that market forces alone can adequately
protect societal interests, we maintain that standardised AI disclosures and
market mechanisms can create powerful incentives for safe and responsible AI
development. This paper urges regulators, economists, and machine learning
researchers to investigate and implement market-based approaches to AI
governance.
|
2501.17758
|
Glioma Multimodal MRI Analysis System for Tumor Layered Diagnosis via
Multi-task Semi-supervised Learning
|
eess.IV cs.CV
|
Gliomas are the most common primary tumors of the central nervous system.
Multimodal MRI is widely used for the preliminary screening of gliomas and
plays a crucial role in auxiliary diagnosis, therapeutic efficacy, and
prognostic evaluation. Currently, the computer-aided diagnostic studies of
gliomas using MRI have focused on independent analysis events such as tumor
segmentation, grading, and radiogenomic classification, without studying
inter-dependencies among these events. In this study, we propose a Glioma
Multimodal MRI Analysis System (GMMAS) that utilizes a deep learning network
for processing multiple events simultaneously, leveraging their
inter-dependencies through an uncertainty-based multi-task learning
architecture and synchronously outputting tumor region segmentation, glioma
histological subtype, IDH mutation genotype, and 1p/19q chromosome disorder
status. Compared with the reported single-task analysis models, GMMAS improves
the precision across tumor layered diagnostic tasks. Additionally, we have
employed a two-stage semi-supervised learning method, enhancing model
performance by fully exploiting both labeled and unlabeled MRI samples.
Further, by utilizing an adaptation module based on knowledge self-distillation
and contrastive learning for cross-modal feature extraction, GMMAS exhibited
robustness in situations of modality absence and revealed the differing
significance of each MRI modality. Finally, based on the analysis outputs of the
GMMAS, we created a visual and user-friendly platform for doctors and patients,
introducing GMMAS-GPT to generate personalized prognosis evaluations and
suggestions.
|
2501.17759
|
Yin-Yang: Developing Motifs With Long-Term Structure And Controllability
|
cs.SD cs.AI cs.SC
|
Transformer models have made great strides in generating symbolically
represented music with local coherence. However, controlling the development of
motifs in a structured way with global form remains an open research area. One
of the reasons for this challenge is the note-by-note autoregressive
generation of such models, which lack the ability to correct themselves after
deviations from the motif. In addition, their structural performance on
datasets with shorter durations has not been studied in the literature. In this
study, we propose Yin-Yang, a framework consisting of a phrase generator,
phrase refiner, and phrase selector models for the development of motifs into
melodies with long-term structure and controllability. The phrase refiner is
trained on a novel corruption-refinement strategy which allows it to produce
melodic and rhythmic variations of an original motif at generation time,
thereby rectifying deviations of the phrase generator. We also introduce a new
objective evaluation metric for quantifying how smoothly the motif manifests
itself within the piece. Evaluation results show that our model achieves better
performance compared to state-of-the-art transformer models while having the
advantage of being controllable and making the generated musical structure
semi-interpretable, paving the way for musical analysis. Our code and demo page
can be found at https://github.com/keshavbhandari/yinyang.
|
2501.17762
|
Improving Privacy Benefits of Redaction
|
cs.CR cs.CL cs.LG
|
We propose a novel redaction methodology that can be used to sanitize natural
text data. Our new technique provides better privacy benefits than other
state-of-the-art techniques while maintaining lower redaction levels.
|
2501.17767
|
Hybrid Graphs for Table-and-Text based Question Answering using LLMs
|
cs.CL cs.AI
|
Answering questions that require reasoning and aggregation across both
structured (tables) and unstructured (raw text) data sources presents
significant challenges. Current methods rely on fine-tuning and high-quality,
human-curated data, which is difficult to obtain. Recent advances in Large
Language Models (LLMs) have shown promising results for multi-hop question
answering (QA) over single-source text data in a zero-shot setting, yet
exploration into multi-source Table-Text QA remains limited. In this paper, we
present a novel Hybrid Graph-based approach for Table-Text QA that leverages
LLMs without fine-tuning. Our method constructs a unified Hybrid Graph from
textual and tabular data, pruning information based on the input question to
provide the LLM with relevant context concisely. We evaluate our approach on
the challenging Hybrid-QA and OTT-QA datasets using state-of-the-art LLMs,
including GPT-3.5, GPT-4, and LLaMA-3. Our method achieves the best zero-shot
performance on both datasets, improving Exact Match scores by up to 10% on
Hybrid-QA and 5.4% on OTT-QA. Moreover, our approach reduces token usage by up
to 53% compared to the original context.
|
2501.17770
|
Generative Unordered Flow for Set-Structured Data Generation
|
cs.LG
|
Flow-based generative models have demonstrated promising performance across a
broad spectrum of data modalities (e.g., image and text). However, there are
few works exploring their extension to unordered data (e.g., spatial point
set), which is not trivial because previous models are mostly designed for
vector data that are naturally ordered. In this paper, we present unordered
flow, a type of flow-based generative model for set-structured data generation.
Specifically, we convert unordered data into an appropriate function
representation, and learn the probability measure of such representations
through function-valued flow matching. For the inverse map from a function
representation to unordered data, we propose a method similar to particle
filtering, with Langevin dynamics to first warm-up the initial particles and
gradient-based search to update them until convergence. We have conducted
extensive experiments on multiple real-world datasets, showing that our
unordered flow model is very effective in generating set-structured data and
significantly outperforms previous baselines.
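The inverse map described above can be sketched in one dimension, with a known Gaussian score standing in for the learned function representation; this substitution, and all names below, are illustrative assumptions for the sketch.

```python
import math
import random

def langevin_warmup(particles, grad_log_p, steps, eps):
    """Langevin dynamics: gradient ascent on the log-density plus Gaussian
    noise, spreading the initial particles toward high-density regions."""
    for _ in range(steps):
        particles = [x + eps * grad_log_p(x)
                       + math.sqrt(2 * eps) * random.gauss(0.0, 1.0)
                     for x in particles]
    return particles

def gradient_refine(particles, grad_log_p, steps, eps):
    """Noise-free gradient updates until the particles settle on the modes,
    recovering the unordered point set from the function representation."""
    for _ in range(steps):
        particles = [x + eps * grad_log_p(x) for x in particles]
    return particles
```

The warm-up phase keeps particles from collapsing prematurely, while the deterministic refinement drives each particle to a mode of the represented density; the recovered set is unordered by construction, since particle identity carries no meaning.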
|
2501.17771
|
2SSP: A Two-Stage Framework for Structured Pruning of LLMs
|
cs.CL cs.AI cs.LG
|
We propose a novel Two-Stage framework for Structured Pruning (2SSP) for
pruning Large Language Models (LLMs), which combines two different strategies
of pruning, namely Width and Depth Pruning. The first stage (Width Pruning)
removes entire neurons, hence their corresponding rows and columns, aiming to
preserve the connectivity among the pruned structures in the intermediate state
of the Feed-Forward Networks in each Transformer block. This is done based on
an importance score measuring the impact of each neuron over the output
magnitude. The second stage (Depth Pruning), instead, removes entire Attention
submodules. This is done by applying an iterative process that removes the
Attention submodules with the minimum impact on a given metric of interest (in
our case, perplexity). We also propose a novel mechanism to balance the
sparsity rate of the two stages w.r.t. the desired global sparsity. We test
2SSP on four LLM families and three sparsity rates (25\%, 37.5\%, and 50\%),
measuring the resulting perplexity over three language modeling datasets as
well as the performance over six downstream tasks. Our method consistently
outperforms five state-of-the-art competitors over three language modeling and
six downstream tasks, with an up to two-order-of-magnitude gain in terms of
pruning time. The code is available at
\url{https://github.com/FabrizioSandri/2SSP}.
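The width-pruning stage can be illustrated with a NumPy toy: score each hidden neuron of a feed-forward block and drop its row in the up-projection together with its column in the down-projection. The score used here (mean activation magnitude times outgoing-weight norm) is an assumption standing in for the paper's output-magnitude importance score.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 64                          # model dim, FFN hidden dim
W_in  = rng.standard_normal((h, d))    # up-projection
W_out = rng.standard_normal((d, h))    # down-projection
X = rng.standard_normal((128, d))      # calibration inputs

# Illustrative importance: mean |activation| of each hidden neuron times
# the norm of its outgoing column (its leverage on the block output).
acts = np.maximum(X @ W_in.T, 0.0)             # (128, h), ReLU stand-in
score = acts.mean(axis=0) * np.linalg.norm(W_out, axis=0)

keep = np.argsort(score)[h // 2:]              # prune 50% lowest-scoring
W_in_p, W_out_p = W_in[keep], W_out[:, keep]   # drop rows and columns together
```

Removing the row and the matching column jointly is what preserves the connectivity of the surviving intermediate neurons.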
|
2501.17772
|
Self-Supervised Frameworks for Speaker Verification via Bootstrapped
Positive Sampling
|
eess.AS cs.LG cs.SD
|
Recent developments in Self-Supervised Learning (SSL) have demonstrated
significant potential for Speaker Verification (SV), but closing the
performance gap with supervised systems remains an ongoing challenge. Standard
SSL frameworks rely on anchor-positive pairs extracted from the same audio
utterances. Hence, positives have channel characteristics similar to those of
their corresponding anchors, even with extensive data-augmentation. Therefore,
this positive sampling strategy is a fundamental limitation as it encodes too
much information regarding the recording source in the learned representations.
This article introduces Self-Supervised Positive Sampling (SSPS), a
bootstrapped technique for sampling appropriate and diverse positives in SSL
frameworks for SV. SSPS samples positives close to their anchor in the
representation space, as we assume that these pseudo-positives belong to the
same speaker identity but correspond to different recording conditions. This
method demonstrates consistent improvements in SV performance on VoxCeleb
benchmarks when implemented in major SSL frameworks, such as SimCLR, SwAV,
VICReg, and DINO. With SSPS, SimCLR and DINO achieve 2.57% and 2.53% EER on
VoxCeleb1-O. SimCLR yields a 58% relative reduction in EER, reaching performance
comparable to DINO with a simpler training framework. Furthermore, SSPS lowers
intra-class variance and reduces channel information in speaker representations
while exhibiting greater robustness without data-augmentation.
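The core sampling rule, drawing a pseudo-positive from the anchor's neighbourhood in representation space rather than from the same utterance, can be sketched as follows. This is a toy NumPy version; the function name and the choice of `k` are illustrative.

```python
import numpy as np

def sample_positive(Z, anchor, k=5, rng=None):
    # Pick a pseudo-positive uniformly among the k nearest neighbours of
    # the anchor in representation space (excluding the anchor itself),
    # assuming nearby embeddings share the speaker identity but come
    # from different recording conditions.
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(Z - Z[anchor], axis=1)
    d[anchor] = np.inf                   # never pick the anchor itself
    neighbours = np.argsort(d)[:k]
    return int(rng.choice(neighbours))

rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 16))       # toy embedding bank
pos = sample_positive(Z, anchor=0, k=5, rng=rng)
```

Because the positive comes from a different recording, the contrastive objective can no longer shortcut through channel characteristics.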
|
2501.17773
|
SafePR: Unified Approach for Safe Parallel Robots by Contact Detection
and Reaction with Redundancy Resolution
|
cs.RO cs.SY eess.SY
|
Fast and safe motion is crucial for the successful deployment of physically
interactive robots. Parallel robots (PRs) offer the potential for higher speeds
while maintaining the same energy limits due to their low moving masses.
However, they require methods for contact detection and reaction while avoiding
singularities and self-collisions. We address this issue and present SafePR - a
unified approach for the detection and localization, including the distinction
between collision and clamping to perform a reaction that is safe for humans
and feasible for PRs. Our approach uses information from the encoders and motor
currents to estimate forces via a generalized-momentum observer. Neural
networks and particle filters classify and localize the contacts. We introduce
reactions with redundancy resolution to avoid type-II singularities and
self-collisions. Our approach detected and terminated 72 real-world collision
and clamping contacts with end-effector speeds of up to 1.5 m/s, each within
25-275 ms. The forces were below the thresholds from ISO/TS 15066. By using
built-in sensors, SafePR enables safe interaction with already assembled PRs
without the need for new hardware components.
|
2501.17774
|
Percolation and localisation: Sub-leading eigenvalues of the
nonbacktracking matrix
|
physics.soc-ph cs.SI
|
The spectrum of the nonbacktracking matrix associated to a network is known
to contain fundamental information regarding percolation properties of the
network. Indeed, the inverse of its leading eigenvalue is often used as an
estimate for the percolation threshold. However, for many networks with
nonbacktracking centrality localised on a few nodes, such as networks with a
core-periphery structure, this spectral approach badly underestimates the
threshold. In this work, we study networks that exhibit this localisation
effect by looking beyond the leading eigenvalue and searching deeper into the
spectrum of the nonbacktracking matrix. We identify that, when localisation is
present, the threshold often more closely aligns with the inverse of one of the
sub-leading real eigenvalues: the largest real eigenvalue with a "delocalised"
corresponding eigenvector. We investigate a core-periphery network model and
determine, both theoretically and experimentally, a regime of parameters for
which our approach closely approximates the threshold, while the estimate
derived using the leading eigenvalue does not. We further present experimental
results on large scale real-world networks that showcase the usefulness of our
approach.
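For a small graph, the nonbacktracking (Hashimoto) matrix and the classical leading-eigenvalue threshold estimate can be computed directly. The sketch below uses the 3-regular complete graph K4, where the leading eigenvalue equals d - 1 = 2, so the estimate is 1/2; it illustrates only the classical estimate, not the paper's sub-leading-eigenvalue refinement.

```python
import numpy as np

# Directed edge list of K4: each undirected edge gives two directed edges.
nodes = range(4)
edges = [(u, v) for u in nodes for v in nodes if u != v]
idx = {e: i for i, e in enumerate(edges)}

# Hashimoto / nonbacktracking matrix: B[(u,v),(v,w)] = 1 iff w != u,
# i.e. a walk may continue from v anywhere except back to u.
m = len(edges)
B = np.zeros((m, m))
for (u, v) in edges:
    for (v2, w) in edges:
        if v2 == v and w != u:
            B[idx[(u, v)], idx[(v2, w)]] = 1.0

lam1 = max(np.linalg.eigvals(B).real)   # leading eigenvalue (= 2 for K4)
threshold_estimate = 1.0 / lam1         # classical percolation estimate
```

Under localisation, the paper's point is precisely that `1 / lam1` computed this way underestimates the true threshold, and a sub-leading real eigenvalue with a delocalised eigenvector should be used instead.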
|
2501.17777
|
On decoding hyperbolic codes
|
cs.IT math.CO math.IT
|
This work studies several decoding algorithms for hyperbolic codes. We use
some previous ideas to describe how to decode a hyperbolic code using the
largest Reed-Muller code contained in it or using the smallest Reed-Muller code
that contains it. A combination of these two algorithms is proposed when
hyperbolic codes are defined by polynomials in two variables. Then, we compare
hyperbolic codes and Cube codes (tensor product of Reed-Solomon codes) and
propose decoding algorithms of hyperbolic codes based on their closest Cube
codes. Finally, we adapt to hyperbolic codes Geil and Matsumoto's
generalization of Sudan's list decoding algorithm.
|
2501.17781
|
Long-term prediction of El Ni\~no-Southern Oscillation using reservoir
computing with data-driven realtime filter
|
physics.comp-ph cs.LG physics.ao-ph
|
In recent years, the application of machine learning approaches to
time-series forecasting of climate dynamical phenomena has become increasingly
active. It is known that applying a band-pass filter to a time-series data is a
key to obtaining a high-quality data-driven model. Here, to obtain longer-term
predictability of machine learning models, we introduce a new type of band-pass
filter. It can be applied to realtime operational prediction workflows since it
relies solely on past time series. We combine the filter with reservoir
computing, which is a machine-learning technique that employs a data-driven
dynamical system. As an application, we predict the multi-year dynamics of the
El Ni\~no-Southern Oscillation with the prediction horizon of 24 months using
only past time series.
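A causal band-pass of the kind described, one relying only on past samples, can be built for illustration as the difference of two one-sided exponential moving averages. The spans here are arbitrary; the paper's data-driven filter is more sophisticated.

```python
import numpy as np

def causal_bandpass(x, fast=12, slow=60):
    # Difference of two one-sided (causal) exponential moving averages:
    # the slow EMA removes the mean and slow trend, the fast EMA removes
    # high-frequency noise. Only past samples are used, so the filter is
    # usable in realtime operational prediction workflows.
    def ema(x, span):
        a = 2.0 / (span + 1.0)
        y = np.empty_like(x, dtype=float)
        y[0] = x[0]
        for t in range(1, len(x)):
            y[t] = a * x[t] + (1 - a) * y[t - 1]
        return y
    return ema(x, fast) - ema(x, slow)

t = np.arange(600)
x = 5.0 + np.sin(2 * np.pi * t / 40)   # offset plus an oscillation
y = causal_bandpass(x)                 # offset removed, oscillation kept
```

The filtered series could then be fed to the reservoir-computing model in place of the raw data.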
|
2501.17782
|
Picard-KKT-hPINN: Enforcing Nonlinear Enthalpy Balances for Physically
Consistent Neural Networks
|
cs.LG
|
Neural networks are widely used as surrogate models, but they do not guarantee
physically consistent predictions, which hinders their adoption in various
applications. We propose a method that can enforce NNs to satisfy physical laws
that are nonlinear in nature, such as enthalpy balances. Our approach, inspired
by Picard's method of successive approximations, enforces multiplicatively
separable constraints by sequentially freezing and projecting a set of the
participating variables. We demonstrate our Picard-KKT-hPINN for surrogate
modeling of a catalytic packed bed reactor for methanol synthesis. Our results
show that the method efficiently enforces nonlinear enthalpy and linear atomic
balances at machine-level precision. Additionally, we show that enforcing
conservation laws can improve accuracy in data-scarce conditions compared to
a vanilla multilayer perceptron.
|
2501.17784
|
AdditiveLLM: Large Language Models Predict Defects in Additive
Manufacturing
|
cs.LG
|
In this work we investigate the ability of large language models to predict
additive manufacturing defect regimes given a set of process parameter inputs.
For this task we utilize a process parameter defect dataset to fine-tune a
collection of models, titled AdditiveLLM, for the purpose of predicting
potential defect regimes including Keyholing, Lack of Fusion, and Balling. We
compare different methods of input formatting in order to gauge the model's
performance to correctly predict defect regimes on our sparse Baseline dataset
and our natural language Prompt dataset. The model displays robust predictive
capability, achieving an accuracy of 93\% when asked to provide the defect
regimes associated with a set of process parameters. The incorporation of
natural language input further simplifies the task of process parameters
selection, enabling users to identify optimal settings specific to their build.
|
2501.17785
|
Reasoning Over the Glyphs: Evaluation of LLM's Decipherment of Rare
Scripts
|
cs.CL cs.LG
|
We explore the capabilities of LVLMs and LLMs in deciphering rare scripts not
encoded in Unicode. We introduce a novel approach to construct a multimodal
dataset of linguistic puzzles involving such scripts, utilizing a tokenization
method for language glyphs. Our methods include the Picture Method for LVLMs
and the Description Method for LLMs, enabling these models to tackle these
challenges. We conduct experiments using prominent models, GPT-4o, Gemini, and
Claude 3.5 Sonnet, on linguistic puzzles. Our findings reveal the strengths and
limitations of current AI methods in linguistic decipherment, highlighting the
impact of Unicode encoding on model performance and the challenges of modeling
visual language tokens through descriptions. Our study advances understanding
of AI's potential in linguistic decipherment and underscores the need for
further research.
|
2501.17787
|
Detecting Anomalies Using Rotated Isolation Forest
|
cs.LG
|
The Isolation Forest (iForest), proposed by Liu, Ting, and Zhou (TKDD 2012),
has become a prominent tool for unsupervised anomaly detection. However, recent
research by Hariri, Kind, and Brunner, published in TKDE 2021, has revealed
issues with iForest. They identified the presence of axis-aligned ghost
clusters that can be misidentified as normal clusters, leading to biased
anomaly scores and inaccurate predictions. In response, they developed the
Extended Isolation Forest (EIF), which effectively solves these issues by
eliminating the ghost clusters introduced by iForest. This enhancement results
in improved consistency of anomaly scores and superior performance. We reveal a
previously overlooked problem in the Extended Isolation Forest (EIF), showing
that it is vulnerable to ghost inter-clusters between normal clusters of data
points. In this paper, we introduce the Rotated Isolation Forest (RIF)
algorithm which effectively addresses both the axis-aligned ghost clusters
observed in iForest and the ghost inter-clusters seen in EIF. RIF accomplishes
this by randomly rotating the dataset (using random rotation matrices and QR
decomposition) before feeding it into the iForest construction, thereby
increasing dataset variation and eliminating ghost clusters. Our experiments
conclusively demonstrate that the RIF algorithm outperforms iForest and EIF, as
evidenced by the results obtained from both synthetic datasets and real-world
datasets.
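The rotation step RIF prepends to tree construction can be sketched with NumPy; a standard iForest (e.g. scikit-learn's `IsolationForest`) would then be fit on `X_rot`. Sign-correcting the QR factors makes the matrix uniformly (Haar) distributed over rotations.

```python
import numpy as np

def random_rotation(d, rng):
    # Orthogonalize a Gaussian matrix via QR decomposition; scaling each
    # column by the sign of R's diagonal gives a Haar-distributed
    # orthogonal matrix, and flipping one column enforces det = +1
    # (a proper rotation).
    M = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(M)
    Q *= np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))   # toy dataset
Q = random_rotation(3, rng)
X_rot = X @ Q.T                     # rotate before building the iForest
```

Because the isolation trees still split on axis-aligned hyperplanes, rotating the data randomly removes any preferred axis and with it the ghost clusters.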
|
2501.17788
|
WARP: An Efficient Engine for Multi-Vector Retrieval
|
cs.IR
|
We study the efficiency of multi-vector retrieval methods like ColBERT and
its recent variant XTR. We introduce WARP, a retrieval engine that drastically
improves the efficiency of XTR-based ColBERT retrievers through three key
innovations: (1) WARP$_\text{SELECT}$ for dynamic similarity imputation, (2)
implicit decompression to bypass costly vector reconstruction, and (3) a
two-stage reduction process for efficient scoring. Combined with optimized C++
kernels and specialized inference runtimes, WARP reduces end-to-end latency by
41x compared to XTR's reference implementation and thereby achieves a 3x
speedup over PLAID from the official ColBERT implementation.
|
2501.17789
|
Propeller Motion of a Devil-Stick using Normal Forcing
|
eess.SY cs.RO cs.SY
|
The problem of realizing rotary propeller motion of a devil-stick in the
vertical plane using forces purely normal to the stick is considered. This
problem represents a nonprehensile manipulation task of an underactuated
system. In contrast with previous approaches, the devil-stick is manipulated by
controlling the normal force and its point of application. Virtual holonomic
constraints are used to design the trajectory of the center-of-mass of the
devil-stick in terms of its orientation angle, and conditions for stable
propeller motion are derived. Intermittent large-amplitude forces are used to
asymptotically stabilize a desired propeller motion. Simulations demonstrate
the efficacy of the approach in realizing stable propeller motion without loss
of contact between the actuator and devil-stick.
|
2501.17790
|
BreezyVoice: Adapting TTS for Taiwanese Mandarin with Enhanced Polyphone
Disambiguation -- Challenges and Insights
|
cs.CL cs.AI
|
We present BreezyVoice, a Text-to-Speech (TTS) system specifically adapted
for Taiwanese Mandarin, highlighting phonetic control abilities to address the
unique challenges of polyphone disambiguation in the language. Building upon
CosyVoice, we incorporate a $S^{3}$ tokenizer, a large language model (LLM), an
optimal-transport conditional flow matching model (OT-CFM), and a
grapheme-to-phoneme prediction model, to generate realistic speech that closely mimics
human utterances. Our evaluation demonstrates BreezyVoice's superior
performance in both general and code-switching contexts, highlighting its
robustness and effectiveness in generating high-fidelity speech. Additionally,
we address the challenges of generalizability in modeling long-tail speakers
and polyphone disambiguation. Our approach significantly enhances performance
and offers valuable insights into the workings of neural codec TTS systems.
|
2501.17792
|
CrowdSplat: Exploring Gaussian Splatting For Crowd Rendering
|
cs.CV
|
We present CrowdSplat, a novel approach that leverages 3D Gaussian Splatting
for real-time, high-quality crowd rendering. Our method utilizes 3D Gaussian
functions to represent animated human characters in diverse poses and outfits,
which are extracted from monocular videos. We integrate Level of Detail (LoD)
rendering to optimize computational efficiency and quality. The CrowdSplat
framework consists of two stages: (1) avatar reconstruction and (2) crowd
synthesis. The framework is also optimized for GPU memory usage to enhance
scalability. Quantitative and qualitative evaluations show that CrowdSplat
achieves good levels of rendering quality, memory efficiency, and computational
performance. Through these experiments, we demonstrate that CrowdSplat is a
viable solution for dynamic, realistic crowd simulation in real-time
applications.
|
2501.17799
|
Leveraging Multimodal LLM for Inspirational User Interface Search
|
cs.HC cs.IR
|
Inspirational search, the process of exploring designs to inform and inspire
new creative work, is pivotal in mobile user interface (UI) design. However,
exploring the vast space of UI references remains a challenge. Existing
AI-based UI search methods often miss crucial semantics like target users or
the mood of apps. Additionally, these models typically require metadata like
view hierarchies, limiting their practical use. We used a multimodal large
language model (MLLM) to extract and interpret semantics from mobile UI images.
We identified key UI semantics through a formative study and developed a
semantic-based UI search system. Through computational and human evaluations,
we demonstrate that our approach significantly outperforms existing UI
retrieval methods, offering UI designers a more enriched and contextually
relevant search experience. We enhance the understanding of mobile UI design
semantics and highlight MLLMs' potential in inspirational search, providing a
rich dataset of UI semantics for future studies.
|
2501.17802
|
LEKA: LLM-Enhanced Knowledge Augmentation
|
cs.LG
|
Humans excel in analogical learning and knowledge transfer and, more
importantly, possess a unique understanding of identifying appropriate sources
of knowledge. From a model's perspective, this presents an interesting
challenge. If models could autonomously retrieve knowledge useful for transfer
or decision-making to solve problems, they would transition from passively
acquiring to actively accessing and learning from knowledge. However, filling
models with knowledge is relatively straightforward -- it simply requires more
training and accessible knowledge bases. The more complex task is teaching
models about which knowledge can be analogized and transferred. Therefore, we
design a knowledge augmentation method LEKA for knowledge transfer that
actively searches for suitable knowledge sources that can enrich the target
domain's knowledge. LEKA extracts key information from the target domain's
text, retrieves pertinent data from external data
libraries, and harmonizes retrieved data with the target domain data in feature
space and marginal probability measures. We validate the effectiveness of our
approach through extensive experiments across various domains and demonstrate
significant improvements over traditional methods in reducing computational
costs, automating data alignment, and optimizing transfer learning outcomes.
|
2501.17804
|
Recyclable Thin-Film Soft Electronics for Smart Packaging and E-Skins
|
eess.SY cond-mat.mtrl-sci cs.SY
|
Despite advances in soft, sticker-like electronics, few efforts have dealt
with the challenge of electronic waste. Here, this is addressed by introducing
an eco-friendly conductive ink for thin-film circuitry composed of silver
flakes and a water-based polyurethane dispersion. This ink uniquely combines
high electrical conductivity (1.6 x 10^5 S m^-1), high-resolution digital
printability, robust adhesion for microchip integration, mechanical resilience,
and recyclability. Recycling is achieved with an ecologically friendly
processing method to decompose the circuits into constituent elements and
recover the conductive ink with a decrease of only 2.4% in
conductivity. Moreover, adding liquid metal enables stretchability of up to
200% strain, although this introduces the need for more complex recycling
steps. Finally, on-skin electrophysiological monitoring biostickers, along with
a recyclable smart package with integrated sensors for monitoring safe storage
of perishable foods, are demonstrated.
|
2501.17805
|
International AI Safety Report
|
cs.CY cs.AI cs.LG
|
The first International AI Safety Report comprehensively synthesizes the
current evidence on the capabilities, risks, and safety of advanced AI systems.
The report was mandated by the nations attending the AI Safety Summit in
Bletchley, UK. Thirty nations, the UN, the OECD, and the EU each nominated a
representative to the report's Expert Advisory Panel. A total of 100 AI experts
contributed, representing diverse perspectives and disciplines. Led by the
report's Chair, these independent experts collectively had full discretion over
the report's content.
|
2501.17808
|
Replacing the Gallium Oxide Shell with Conductive Ag: Toward a Printable
and Recyclable Composite for Highly Stretchable Electronics, Electromagnetic
Shielding, and Thermal Interfaces
|
eess.SY cs.SY
|
Liquid metal (LM)-based composites hold promise for soft electronics due to
their high conductivity and fluidic nature. However, the presence of
{\alpha}-Ga2O3 and GaOOH layers around LM droplets impairs conductivity and
performance. We tackle this issue by replacing the oxide layer with conductive
silver (Ag) using an ultrasonic-assisted galvanic replacement reaction. The
Ag-coated nanoparticles form aggregated, porous microparticles that are mixed
with styrene-isoprene-styrene (SIS) polymers, resulting in a digitally
printable composite with superior electrical conductivity and electromechanical
properties compared to conventional fillers. Adding more LM enhances these
properties further. The composite achieves EMI shielding effectiveness (SE)
exceeding 75 dB in the X-band frequency range, even at 200% strain,
meeting stringent military and medical standards. It is applicable in wireless
communications and Bluetooth signal blocking and as a thermal interface
material (TIM). Additionally, we highlight its recyclability using a
biodegradable solvent, underscoring its eco-friendly potential. This composite
represents a significant advancement in stretchable electronics and EMI
shielding, with implications for wearable and bioelectronic applications.
|
2501.17811
|
Janus-Pro: Unified Multimodal Understanding and Generation with Data and
Model Scaling
|
cs.AI cs.CL cs.CV
|
In this work, we introduce Janus-Pro, an advanced version of the previous
work Janus. Specifically, Janus-Pro incorporates (1) an optimized training
strategy, (2) expanded training data, and (3) scaling to larger model size.
With these improvements, Janus-Pro achieves significant advancements in both
multimodal understanding and text-to-image instruction-following capabilities,
while also enhancing the stability of text-to-image generation. We hope this
work will inspire further exploration in the field. Code and models are
publicly available.
|
2501.17813
|
P-TAME: Explain Any Image Classifier with Trained Perturbations
|
cs.CV cs.AI
|
The adoption of Deep Neural Networks (DNNs) in critical fields where
predictions need to be accompanied by justifications is hindered by their
inherent black-box nature. In this paper, we introduce P-TAME
(Perturbation-based Trainable Attention Mechanism for Explanations), a
model-agnostic method for explaining DNN-based image classifiers. P-TAME
employs an auxiliary image classifier to extract features from the input image,
bypassing the need to tailor the explanation method to the internal
architecture of the backbone classifier being explained. Unlike traditional
perturbation-based methods, which have high computational requirements, P-TAME
offers an efficient alternative by generating high-resolution explanations in a
single forward pass during inference. We apply P-TAME to explain the decisions
of VGG-16, ResNet-50, and ViT-B-16, three distinct and widely used image
classifiers. Quantitative and qualitative results show that our method matches
or outperforms previous explainability methods, including model-specific
approaches. Code and trained models will be released upon acceptance.
|
2501.17817
|
Improving community detection via community association strength scores
|
cs.SI
|
Community detection methods play a central role in understanding complex
networks by revealing highly connected subsets of entities. However, most
community detection algorithms generate partitions of the nodes, thus (i)
forcing every node to be part of a community and (ii) ignoring the possibility
that some nodes may be part of multiple communities. In our work, we
investigate three simple community association strength (CAS) scores and their
usefulness as post-processing tools given some partition of the nodes. We show
that these measures can be used to improve node partitions, detect outlier
nodes (not part of any community), and help find nodes with multiple community
memberships.
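One natural association-strength score of this kind is the fraction of a node's edges that fall inside its own community: nodes scoring low are outlier candidates, and nodes with sizeable fractions toward several communities suggest overlapping membership. This is an illustrative stand-in, not necessarily one of the paper's three scores.

```python
# Community association strength as the fraction of a node's edges that
# stay inside its assigned community.
def internal_edge_fraction(adj, membership, node):
    neigh = adj[node]
    if not neigh:
        return 0.0
    inside = sum(1 for v in neigh if membership[v] == membership[node])
    return inside / len(neigh)

# Toy graph: community A = {0, 1, 2}, community B = {3, 4}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
membership = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
scores = {u: internal_edge_fraction(adj, membership, u) for u in adj}
```

Here node 3 scores 0.5 (half its edges leave community B), flagging it as a candidate for reassignment or multiple membership.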
|
2501.17821
|
SSF: Sparse Long-Range Scene Flow for Autonomous Driving
|
cs.CV
|
Scene flow enables an understanding of the motion characteristics of the
environment in the 3D world. It gains particular significance in the
long-range, where object-based perception methods might fail due to sparse
observations far away. Although significant advancements have been made in
scene flow pipelines to handle large-scale point clouds, a gap remains in
scalability with respect to long-range. We attribute this limitation to the
common design choice of using dense feature grids, which scale quadratically
with range. In this paper, we propose Sparse Scene Flow (SSF), a general
pipeline for long-range scene flow, adopting a sparse convolution based
backbone for feature extraction. This approach introduces a new challenge: a
mismatch in size and ordering of sparse feature maps between time-sequential
point scans. To address this, we propose a sparse feature fusion scheme that
augments the feature maps with virtual voxels at missing locations.
Additionally, we propose a range-wise metric that implicitly gives greater
importance to faraway points. Our method, SSF, achieves state-of-the-art
results on the Argoverse2 dataset, demonstrating strong performance in
long-range scene flow estimation. Our code will be released at
https://github.com/KTH-RPL/SSF.git.
|
2501.17822
|
Aggregation Schemes for Single-Vector WSI Representation Learning in
Digital Pathology
|
eess.IV cs.AI cs.CV cs.IR q-bio.QM
|
A crucial step to efficiently integrate Whole Slide Images (WSIs) in
computational pathology is assigning a single high-quality feature vector,
i.e., one embedding, to each WSI. With the existence of many pre-trained deep
neural networks and the emergence of foundation models, extracting embeddings
for sub-images (i.e., tiles or patches) is straightforward. However, for WSIs,
given their high resolution and gigapixel nature, inputting them into existing
GPUs as a single image is not feasible. As a result, WSIs are usually split
into many patches. By feeding each patch to a pre-trained model, each WSI can
then be represented by a set of patches and, hence, a set of embeddings. In such
a setup, WSI representation learning reduces to set representation learning
where for each WSI we have access to a set of patch embeddings. To obtain a
single embedding from a set of patch embeddings for each WSI, multiple
set-based learning schemes have been proposed in the literature. In this paper,
we evaluate the WSI search performance of multiple recently developed
aggregation techniques (mainly set representation learning techniques)
including simple average or max pooling operations, Deep Sets, Memory networks,
Focal attention, Gaussian Mixture Model (GMM) Fisher Vector, and deep sparse
and binary Fisher Vector on four different primary sites including bladder,
breast, kidney, and colon from TCGA. Further, we benchmark the search
performance of these methods against the median of minimum distances of patch
embeddings, a non-aggregating approach used for WSI retrieval.
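The two simplest aggregation baselines evaluated, average and max pooling over patch embeddings, reduce to one line each (toy shapes assumed):

```python
import numpy as np

# A WSI represented as a set of patch embeddings (here 350 patches,
# each a 512-dimensional vector), aggregated into one WSI embedding.
rng = np.random.default_rng(0)
patches = rng.standard_normal((350, 512))

wsi_mean = patches.mean(axis=0)    # average pooling
wsi_max  = patches.max(axis=0)     # max pooling
```

Both are permutation-invariant over the patch set, which is the defining requirement for any of the aggregation schemes compared in the paper.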
|
2501.17823
|
U2A: Unified Unimodal Adaptation for Robust and Efficient Multimodal
Learning
|
cs.CV cs.AI cs.LG
|
Multimodal learning often relies on designing new models and complex training
strategies to achieve optimal performance. We present Unified Unimodal
Adaptation (U2A), which jointly fine-tunes pretrained unimodal encoders using
low-rank adaptation (LoRA) for various multimodal tasks. Our method
significantly reduces the number of learnable parameters and eliminates the
need for complex training strategies, such as alternating training, gradient
modifications, or unimodal fine-tuning. To address missing modalities during
both training and testing, we introduce Mask Tokens (MT), which generate
missing modality features from available modalities using a single token per
modality. This simplifies the process, removing the need for specialized
feature estimation or prompt-tuning methods. Our evaluation demonstrates that
U2A matches or outperforms state-of-the-art methods in both complete and
missing modality settings, showcasing strong performance and robustness across
various modalities, tasks, and datasets. We also analyze and report the
effectiveness of Mask Tokens in different missing modality scenarios. Overall,
our method provides a robust, flexible, and efficient solution for multimodal
learning, with minimal computational overhead.
|
2501.17827
|
Langevin Soft Actor-Critic: Efficient Exploration through
Uncertainty-Driven Critic Learning
|
cs.LG
|
Existing actor-critic algorithms, which are popular for continuous control
reinforcement learning (RL) tasks, suffer from poor sample efficiency due to
the lack of a principled exploration mechanism. Motivated by the success
of Thompson sampling for efficient exploration in RL, we propose a novel
model-free RL algorithm, Langevin Soft Actor Critic (LSAC), which prioritizes
enhancing critic learning through uncertainty estimation over policy
optimization. LSAC employs three key innovations: approximate Thompson sampling
through distributional Langevin Monte Carlo (LMC) based $Q$ updates, parallel
tempering for exploring multiple modes of the posterior of the $Q$ function,
and diffusion synthesized state-action samples regularized with $Q$ action
gradients. Our extensive experiments demonstrate that LSAC outperforms or
matches the performance of mainstream model-free RL algorithms for continuous
control tasks. Notably, LSAC marks the first successful application of
LMC-based Thompson sampling in continuous control tasks with continuous action
spaces.
|
2501.17830
|
A Comprehensive Survey on Legal Summarization: Challenges and Future
Directions
|
cs.CL
|
This article provides a systematic up-to-date survey of automatic
summarization techniques, datasets, models, and evaluation methods in the legal
domain. Through specific source selection criteria, we thoroughly review over
120 papers spanning the modern `transformer' era of natural language processing
(NLP), thus filling a gap in existing systematic surveys on the matter. We
present existing research along several axes and discuss trends, challenges,
and opportunities for future research.
|
2501.17831
|
TikTok's recommendations skewed towards Republican content during the
2024 U.S. presidential race
|
cs.SI cs.CY
|
TikTok is a major force among social media platforms with over a billion
monthly active users worldwide and 170 million in the United States. The
platform's status as a key news source, particularly among younger
demographics, raises concerns about its potential influence on politics in the
U.S. and globally. Despite these concerns, there is scant research
investigating TikTok's recommendation algorithm for political biases. We fill
this gap by conducting 323 independent algorithmic audit experiments testing
partisan content recommendations in the lead-up to the 2024 U.S. presidential
elections. Specifically, we create hundreds of "sock puppet" TikTok accounts in
Texas, New York, and Georgia, seeding them with varying partisan content and
collecting algorithmic content recommendations for each of them. Collectively,
these accounts viewed ~394,000 videos from April 30th to November 11th, 2024,
which we label for political and partisan content. Our analysis reveals
significant asymmetries in content distribution: Republican-seeded accounts
received ~11.8% more party-aligned recommendations compared to their
Democratic-seeded counterparts, and Democratic-seeded accounts were exposed to
~7.5% more opposite-party recommendations on average. These asymmetries exist
across all three states and persist when accounting for video- and
channel-level engagement metrics such as likes, views, shares, comments, and
followers, and are driven primarily by negative partisanship content. Our
findings provide insights into the inner workings of TikTok's recommendation
algorithm during a critical election period, raising fundamental questions
about platform neutrality.
|
2501.17834
|
Hierarchical Fallback Architecture for High Risk Online Machine Learning
Inference
|
cs.LG cs.CE cs.SE
|
Open Banking powered machine learning applications require novel robustness
approaches to deal with challenging stress and failure scenarios. In this paper
we propose a hierarchical fallback architecture for improving robustness in
high-risk machine learning applications, with a focus on the financial domain.
We define generic failure scenarios often found in online inference that depend
on external data providers and we describe in detail how to apply the
hierarchical fallback architecture to address them. Finally, we offer a
real-world example of its applicability in the industry for near-real-time
transactional fraud risk evaluation using Open Banking data and under extreme
stress scenarios.
|
2501.17836
|
Matrix Product Sketching via Coordinated Sampling
|
cs.DS cs.DB cs.LG
|
We revisit the well-studied problem of approximating a matrix product,
$\mathbf{A}^T\mathbf{B}$, based on small-space sketches
$\mathcal{S}(\mathbf{A})$ and $\mathcal{S}(\mathbf{B})$ of $\mathbf{A} \in
\mathbb{R}^{n \times d}$ and $\mathbf{B}\in \mathbb{R}^{n \times m}$. We are interested in the
setting where the sketches must be computed independently of each other, except
for the use of a shared random seed. We prove that, when $\mathbf{A}$ and
$\mathbf{B}$ are sparse, methods based on \emph{coordinated random sampling}
can outperform classical linear sketching approaches, like
Johnson-Lindenstrauss Projection or CountSketch. For example, to obtain
Frobenius norm error $\epsilon\|\mathbf{A}\|_F\|\mathbf{B}\|_F$, coordinated
sampling requires sketches of size $O(s/\epsilon^2)$ when $\mathbf{A}$ and
$\mathbf{B}$ have at most $s \leq d,m$ non-zeros per row. In contrast, linear
sketching leads to sketches of size $O(d/\epsilon^2)$ and $O(m/\epsilon^2)$ for
$\mathbf{A}$ and $\mathbf{B}$. We empirically evaluate our approach on two
applications: 1) distributed linear regression in databases, a problem
motivated by tasks like dataset discovery and augmentation, and 2)
approximating attention matrices in transformer-based language models. In both
cases, our sampling algorithms yield an order of magnitude improvement over
linear sketching.
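The coordinated-sampling idea can be illustrated with a minimal sketch (uniform coordinated row sampling; the paper's norm-aware sampling probabilities and sparse-row optimizations are not reproduced here): both parties keep exactly the rows selected by a shared random seed, so the sampled rows line up and an unbiased estimate of $\mathbf{A}^T\mathbf{B}$ needs no coordination beyond that seed.

```python
import numpy as np

def coordinated_sample(M, p, seed=0):
    # Keep row i iff a shared-seed "hash" h(i) < p. With the same seed, both
    # parties keep exactly the same row indices -- the coordination.
    h = np.random.default_rng(seed).random(M.shape[0])
    return M[h < p]

def estimate_product(Sa, Sb, p):
    # Unbiased: E[(1/p) * sum_{sampled i} a_i b_i^T] = sum_i a_i b_i^T = A^T B.
    return (Sa.T @ Sb) / p

rng = np.random.default_rng(1)
A, B = rng.random((2000, 5)), rng.random((2000, 3))
p = 0.5
est = estimate_product(coordinated_sample(A, p), coordinated_sample(B, p), p)
exact = A.T @ B
rel_err = np.linalg.norm(est - exact) / np.linalg.norm(exact)
```

With 2000 rows and p = 0.5 the relative Frobenius error is typically a few percent; this dense toy example does not exploit the row sparsity that drives the paper's advantage over linear sketching.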
|
2501.17840
|
Learning Beyond the Surface: How Far Can Continual Pre-Training with
LoRA Enhance LLMs' Domain-Specific Insight Learning?
|
cs.CL cs.LG
|
Large Language Models (LLMs) have demonstrated remarkable performance on
various tasks, yet their ability to extract and internalize deeper insights
from domain-specific datasets remains underexplored. In this study, we
investigate how continual pre-training can enhance LLMs' capacity for insight
learning across three distinct forms: declarative, statistical, and
probabilistic insights. Focusing on two critical domains: medicine and finance,
we employ LoRA to train LLMs on two existing datasets. To evaluate each insight
type, we create benchmarks to measure how well continual pre-training helps
models go beyond surface-level knowledge. We also assess the impact of document
modification on capturing insights. The results show that, while continual
pre-training on original documents has a marginal effect, modifying documents
to retain only essential information significantly enhances the
insight-learning capabilities of LLMs.
|
2501.17841
|
acoupi: An Open-Source Python Framework for Deploying Bioacoustic AI
Models on Edge Devices
|
cs.SD cs.LG eess.AS
|
1. Passive acoustic monitoring (PAM) coupled with artificial intelligence
(AI) is becoming an essential tool for biodiversity monitoring. Traditional PAM
systems require manual data offloading and impose substantial demands on
storage and computing infrastructure. The combination of on-device AI-based
processing and network connectivity enables local data analysis and
transmission of only relevant information, greatly reducing storage needs.
However, programming these devices for robust operation is challenging,
requiring expertise in embedded systems and software engineering. Despite the
increase in AI-based models for bioacoustics, their full potential remains
unrealized without accessible tools to deploy them on custom hardware and
tailor device behaviour to specific monitoring goals. 2. To address this
challenge, we develop acoupi, an open-source Python framework that simplifies
the creation and deployment of smart bioacoustic devices. acoupi integrates
audio recording, AI-based data processing, data management, and real-time
wireless messaging into a unified and configurable framework. By modularising
key elements of the bioacoustic monitoring workflow, acoupi allows users to
easily customise, extend, or select specific components to fit their unique
monitoring needs. 3. We demonstrate the flexibility of acoupi by integrating
two bioacoustic classifiers: BirdNET, for the classification of bird species,
and BatDetect2, for the classification of UK bat species. We test the
reliability of acoupi over a month-long deployment of two acoupi-powered
devices in a UK urban park. 4. acoupi can be deployed on low-cost hardware such
as the Raspberry Pi and can be customised for various applications. acoupi's
standardised framework and simplified tools facilitate the adoption of
AI-powered PAM systems for researchers and conservationists. acoupi is on
GitHub at https://github.com/acoupi/acoupi.
|
2501.17842
|
From Sparse to Dense: Toddler-inspired Reward Transition in
Goal-Oriented Reinforcement Learning
|
cs.LG cs.AI cs.RO
|
Reinforcement learning (RL) agents often face challenges in balancing
exploration and exploitation, particularly in environments where sparse or
dense rewards bias learning. Biological systems, such as human toddlers,
naturally navigate this balance by transitioning from free exploration with
sparse rewards to goal-directed behavior guided by increasingly dense rewards.
Inspired by this natural progression, we investigate the Toddler-Inspired
Reward Transition in goal-oriented RL tasks. Our study focuses on transitioning
from sparse to potential-based dense (S2D) rewards while preserving optimal
strategies. Through experiments on dynamic robotic arm manipulation and
egocentric 3D navigation tasks, we demonstrate that effective S2D reward
transitions significantly enhance learning performance and sample efficiency.
Additionally, using a Cross-Density Visualizer, we show that S2D transitions
smooth the policy loss landscape, resulting in wider minima that improve
generalization in RL models. In addition, we reinterpret Tolman's maze
experiments, underscoring the critical role of early free exploratory learning
in the context of S2D rewards.
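The sparse-to-dense transition hinges on the dense reward being *potential-based*, which is what preserves optimal policies (Ng et al., 1999). A minimal sketch with a hypothetical distance-to-goal potential (not the paper's task-specific design):

```python
import numpy as np

def potential(state, goal):
    # Hypothetical potential: negative Euclidean distance to the goal, so the
    # shaped reward becomes denser as the agent approaches it.
    return -float(np.linalg.norm(np.asarray(state, float) - np.asarray(goal, float)))

def shaped_reward(sparse_r, s, s_next, goal, gamma=0.99):
    # F(s, s') = gamma * Phi(s') - Phi(s): adding F to any reward function
    # leaves the optimal policy unchanged, so the S2D switch is "safe".
    return sparse_r + gamma * potential(s_next, goal) - potential(s, goal)

# A step toward the goal earns a positive shaping bonus; a step away, a penalty.
toward = shaped_reward(0.0, s=[0.0, 0.0], s_next=[1.0, 0.0], goal=[2.0, 0.0])
away   = shaped_reward(0.0, s=[0.0, 0.0], s_next=[-1.0, 0.0], goal=[2.0, 0.0])
```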
|
2501.17845
|
Private Information Retrieval on Multigraph-Based Replicated Storage
|
cs.IT cs.CR cs.NI eess.SP math.IT
|
We consider the private information retrieval (PIR) problem for a
multigraph-based replication system, where each set of $r$ files is stored on
two of the servers according to an underlying $r$-multigraph. Our goal is to
establish upper and lower bounds on the PIR capacity of the $r$-multigraph.
Specifically, we first propose a construction for multigraph-based PIR systems
that leverages the symmetry of the underlying graph-based PIR scheme, deriving
a capacity lower bound for such multigraphs. Then, we establish a general upper
bound using linear programming, expressed as a function of the underlying graph
parameters. Our bounds are demonstrated to be tight for PIR systems on
multipaths with an even number of vertices.
|
2501.17848
|
Improving Genetic Programming for Symbolic Regression with Equality
Graphs
|
cs.LG
|
The search for symbolic regression models with genetic programming (GP) tends
to revisit expressions, either in their original or in equivalent forms.
Repeatedly evaluating equivalent expressions is inefficient, as it does not
immediately lead to better solutions. However, evolutionary algorithms require
diversity and should allow the accumulation of inactive building blocks that
can play an important role at a later point. The equality graph (e-graph) is a
data structure that compactly stores expressions together with their
equivalent forms, allowing efficient verification of whether an expression has
already been visited in any of its stored equivalent forms. We exploit the
e-graph to adapt the subtree operators so as to reduce the chances of
revisiting expressions. Our
adaptation, called eggp, stores every visited expression in the e-graph,
allowing us to filter out from the available selection of subtrees all the
combinations that would create already visited expressions. Results show that,
for small expressions, this approach improves the performance of a simple GP
algorithm to compete with PySR and Operon without increasing computational
cost. As a highlight, eggp was capable of reliably delivering short and at the
same time accurate models for a selected set of benchmarks from SRBench and a
set of real-world datasets.
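The filtering idea can be sketched with a much-simplified stand-in for the e-graph: canonicalise expressions so that trivially equivalent forms collide, and reject offspring whose canonical form was already evaluated. (A real e-graph also applies rewrite rules such as associativity and algebraic identities via equality saturation; the names below are illustrative, not eggp's API.)

```python
def canonical(expr):
    # Expressions are nested tuples like ("+", "x", ("*", "y", "z")).
    # Sorting the arguments of commutative operators makes x+y and y+x collide.
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [canonical(a) for a in args]
    if op in ("+", "*"):
        args.sort(key=repr)
    return (op, *args)

class VisitedFilter:
    """Reject candidate offspring whose canonical form was already visited."""
    def __init__(self):
        self._seen = set()

    def admit(self, expr):
        key = canonical(expr)
        if key in self._seen:
            return False          # an equivalent form was already evaluated
        self._seen.add(key)
        return True

f = VisitedFilter()
first = f.admit(("+", "x", "y"))   # new expression -> admitted
dup = f.admit(("+", "y", "x"))     # commuted duplicate -> rejected
other = f.admit(("*", "x", "y"))   # different operator -> admitted
```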
|
2501.17851
|
UGSim: Autonomous Buoyancy-Driven Underwater Glider Simulator with LQR
Control Strategy and Recursive Guidance System
|
cs.RO cs.SE
|
This paper presents UGSim, a simulator for buoyancy-driven gliders with an LQR
control strategy and a recursive guidance system. Built on top of DAVE and
UUVsim, it is designed to address the unique challenges posed by the complex
hydrodynamic and hydrostatic effects on buoyancy-driven gliders, which
conventional robotics simulators cannot handle. Because of the distinguishing
features of this class of vehicles, general controllers and guidance systems
developed for underwater robotics are infeasible. The simulator is provided to
accelerate the development and evaluation of algorithms that would otherwise
require expensive and time-consuming operations at sea. It consists of a basic
kinetic module, an LQR control module, and a recursive guidance module,
allowing the user to concentrate on a single problem rather than the whole
robotics system and its software infrastructure. We demonstrate the use of the
simulator through an example: loading the configuration of the buoyancy-driven
glider Petrel-II, simulating its dynamics, and presenting the performance of
the control strategy and the guidance system.
|
2501.17855
|
GRACE: Generalizing Robot-Assisted Caregiving with User Functionality
Embeddings
|
cs.RO cs.AI cs.HC
|
Robot caregiving should be personalized to meet the diverse needs of care
recipients -- assisting with tasks as needed, while taking user agency in
action into account. In physical tasks such as handover, bathing, dressing, and
rehabilitation, a key aspect of this diversity is the functional range of
motion (fROM), which can vary significantly between individuals. In this work,
we learn to predict personalized fROM as a way to generalize robot
decision-making in a wide range of caregiving tasks. We propose a novel
data-driven method for predicting personalized fROM using functional assessment
scores from occupational therapy. We develop a neural model that learns to
embed functional assessment scores into a latent representation of the user's
physical function. The model is trained using motion capture data collected
from users with emulated mobility limitations. After training, the model
predicts personalized fROM for new users without motion capture. Through
simulated experiments and a real-robot user study, we show that the
personalized fROM predictions from our model enable the robot to provide
personalized and effective assistance while improving the user's agency in
action. See our website for more visualizations:
https://emprise.cs.cornell.edu/grace/.
|
2501.17858
|
Improving Your Model Ranking on Chatbot Arena by Vote Rigging
|
cs.CL cs.AI cs.CR cs.LG
|
Chatbot Arena is a popular platform for evaluating LLMs by pairwise battles,
where users vote for their preferred response from two randomly sampled
anonymous models. While Chatbot Arena is widely regarded as a reliable LLM
ranking leaderboard, we show that crowdsourced voting can be rigged to improve
(or decrease) the ranking of a target model $m_{t}$. We first introduce a
straightforward target-only rigging strategy that focuses on new battles
involving $m_{t}$, identifying it via watermarking or a binary classifier, and
exclusively voting for $m_{t}$ wins. However, this strategy is practically
inefficient because there are over $190$ models on Chatbot Arena and on average
only about $1\%$ of new battles will involve $m_{t}$. To overcome this, we
propose omnipresent rigging strategies, exploiting the Elo rating mechanism of
Chatbot Arena that any new vote on a battle can influence the ranking of the
target model $m_{t}$, even if $m_{t}$ is not directly involved in the battle.
We conduct experiments on around $1.7$ million historical votes from the
Chatbot Arena Notebook, showing that omnipresent rigging strategies can improve
model rankings by rigging only hundreds of new votes. While we have evaluated
several defense mechanisms, our findings highlight the importance of continued
efforts to prevent vote rigging. Our code is available at
https://github.com/sail-sg/Rigging-ChatbotArena.
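The "omnipresent" observation can be illustrated with a plain online Elo update (a simplification: Chatbot Arena's leaderboard is fit from all votes jointly, but the mechanism is analogous): a rigged vote in a battle between two *other* models still re-orders the leaderboard around the target.

```python
def elo_update(r_a, r_b, score_a, k=4.0):
    # Standard Elo: logistic expected score, then a K-scaled correction.
    # score_a is 1.0 if model A wins the vote, 0.0 if it loses.
    exp_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    return (r_a + k * (score_a - exp_a),
            r_b + k * ((1.0 - score_a) - (1.0 - exp_a)))

# The target trails a rival; the rigged vote involves neither the target...
ratings = {"target": 1000.0, "rival": 1002.0, "weak": 900.0}
ratings["rival"], ratings["weak"] = elo_update(
    ratings["rival"], ratings["weak"], score_a=0.0)  # vote against the rival
```

...yet after that single vote the rival's rating falls below the target's, improving the target's rank without it ever appearing in the battle.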
|
2501.17859
|
rEGGression: an Interactive and Agnostic Tool for the Exploration of
Symbolic Regression Models
|
cs.LG
|
Regression analysis is used for prediction and to understand the effect of
independent variables on dependent variables. Symbolic regression (SR)
automates the search for non-linear regression models, delivering a set of
hypotheses that balances accuracy with the possibility to understand the
phenomena. Many SR implementations return a Pareto front, allowing the choice
of the best trade-off. However, this hides alternatives that are close to
non-domination, limiting these choices. Equality graphs (e-graphs) make it
possible to represent large sets of expressions compactly by efficiently
handling the duplicated parts occurring in multiple expressions. E-graphs
allow all SR solution candidates visited in one or multiple GP runs to be
stored and queried efficiently, opening the possibility to analyse much larger
sets of SR solution candidates. We introduce rEGGression, a tool that uses
e-graphs to enable the exploration of a large set of symbolic expressions,
providing querying, filtering, and pattern-matching features that create an
interactive experience for gaining insights about SR models. The main
highlight is its focus on the exploration of the building blocks found during
the search, which can help experts find insights about the studied phenomena.
This is possible by exploiting the pattern-matching capability of the e-graph
data structure.
|
2501.17860
|
Dialogue is Better Than Monologue: Instructing Medical LLMs via
Strategical Conversations
|
cs.CL cs.AI
|
Current medical AI systems often fail to replicate real-world clinical
reasoning, as they are predominantly trained and evaluated on static text and
question-answer tasks. These tuning methods and benchmarks overlook critical
aspects like evidence-based reasoning and handling distracting information. To
bridge this gap, we introduce a novel benchmark that simulates real-world
diagnostic scenarios, integrating noise and difficulty levels aligned with
USMLE standards. Moreover, we explore dialogue-based fine-tuning, which
transforms static datasets into conversational formats to better capture
iterative reasoning processes. Experiments show that dialogue-tuned models
outperform traditional methods, with improvements of $9.64\%$ in multi-round
reasoning scenarios and $6.18\%$ in accuracy in a noisy environment. Our
findings highlight dialogue tuning as a promising approach for advancing
clinically aligned and robust medical AI systems.
|
2501.17867
|
Low-Thrust Many-Revolution Trajectory Design Under Operational
Uncertainties for DESTINY+ Mission
|
astro-ph.IM astro-ph.EP cs.SY eess.SY math.OC
|
DESTINY+ is a planned JAXA medium-class Epsilon mission from Earth to deep
space using a low-thrust, many-revolution orbit. Such a trajectory is
challenging not only to design but also to operate in flight; in particular,
it is essential to evaluate the impact of
operational uncertainties to ensure mission success. In this study, we design
the low-thrust trajectory from Earth orbit to a lunar transfer orbit by
differential dynamic programming using the Sundman transformation. The results
of Monte Carlo simulations with operational uncertainties confirm that the
spacecraft can be successfully guided to the lunar transfer orbit by using the
feedback control law of differential dynamic programming in the angular domain.
|
2501.17871
|
On the challenges of detecting MCI using EEG in the wild
|
eess.SP cs.LG
|
Recent studies have shown promising results in the detection of Mild
Cognitive Impairment (MCI) using easily accessible Electroencephalogram (EEG)
data which would help administer early and effective treatment for dementia
patients. However, the reliability and practicality of such systems remains
unclear. In this work, we investigate the potential limitations and challenges
in developing a robust MCI detection method using two contrasting datasets: 1)
CAUEEG, collected and annotated by expert neurologists in controlled settings
and 2) GENEEG, a new dataset collected and annotated in general practice
clinics, a setting where routine MCI diagnoses are typically made. We find that
training on small datasets, as is done in most previous works, tends to produce
high-variance models that make overconfident predictions and are unreliable in
practice. Additionally, distribution shifts between datasets make cross-domain
generalization challenging. Finally, we show that MCI detection using EEG may
suffer from fundamental limitations because of the overlapping nature of
feature distributions with control groups. We call for more effort in
high-quality data collection in actionable settings (like general practice
clinics) to make progress towards this salient goal of non-invasive MCI
detection.
|
2501.17876
|
SCDM: Score-Based Channel Denoising Model for Digital Semantic
Communications
|
eess.SP cs.IT math.IT
|
Score-based diffusion models represent a significant variant within the
diffusion model family and have seen extensive application in the increasingly
popular domain of generative tasks. Recent investigations have explored the
denoising potential of diffusion models in semantic communications. However, in
previous paradigms, the noise distortion in the diffusion process does not
precisely match the characteristics of digital channel noise. In this work, we
introduce the Score-Based Channel Denoising Model (SCDM) for Digital Semantic
Communications (DSC). SCDM views the distortion of constellation symbol
sequences in digital transmission as a score-based forward diffusion process.
We design a tailored forward noise corruption to align with digital channel
noise properties in the training phase. During the inference stage, the well-trained
SCDM can effectively denoise received semantic symbols under various SNR
conditions, reducing the difficulty for the semantic decoder in extracting
semantic information from the received noisy symbols and thereby enhancing the
robustness of the reconstructed semantic information. Experimental results show
that SCDM outperforms the baseline model in PSNR, SSIM, and MSE metrics,
particularly at low SNR levels. Moreover, SCDM reduces storage requirements by
a factor of 7.8. This efficiency in storage, combined with its robust denoising
capability, makes SCDM a practical solution for DSC across diverse channel
conditions.
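The tailored forward corruption can be sketched as follows: instead of a generic diffusion variance schedule, the training-time noise is complex AWGN whose variance is set by the channel SNR, so the forward process matches what the constellation symbols actually experience. (A minimal illustration with QPSK symbols; this is not the trained SCDM itself.)

```python
import numpy as np

def awgn_corrupt(symbols, snr_db, rng):
    # Complex AWGN whose variance follows the target channel SNR, so the
    # forward "diffusion" matches the digital channel rather than a generic
    # image-diffusion noise schedule.
    snr_lin = 10.0 ** (snr_db / 10.0)
    sig_power = np.mean(np.abs(symbols) ** 2)
    sigma = np.sqrt(sig_power / (2.0 * snr_lin))   # per real dimension
    noise = sigma * (rng.standard_normal(symbols.shape)
                     + 1j * rng.standard_normal(symbols.shape))
    return symbols + noise

rng = np.random.default_rng(0)
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)  # QPSK
tx = constellation[rng.integers(0, 4, size=4096)]
rx = awgn_corrupt(tx, snr_db=10.0, rng=rng)
measured_snr_db = 10.0 * np.log10(np.mean(np.abs(tx) ** 2)
                                  / np.mean(np.abs(rx - tx) ** 2))
```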
|
2501.17878
|
Collaborative Channel Access and Transmission for NR Sidelink and Wi-Fi
Coexistence over Unlicensed Spectrum
|
eess.SP cs.LG
|
With the rapid development of various internet of things (IoT) applications,
including industrial IoT (IIoT) and visual IoT (VIoT), the demand for direct
device-to-device communication to support high data rates continues to grow. To
address this demand, 5G-Advanced has introduced sidelink communication over the
unlicensed spectrum (SL-U) to increase data rates. However, the primary
challenge of SL-U in the unlicensed spectrum is ensuring fair coexistence with
other incumbent systems, such as Wi-Fi. In this paper, we address the challenge
by designing channel access mechanisms and power control strategies to mitigate
interference and ensure fair coexistence. First, we propose a novel
collaborative channel access (CCHA) mechanism that integrates channel access
with resource allocation through collaborative interactions between base
stations (BS) and SL-U users. This mechanism ensures fair coexistence with
incumbent systems while improving resource utilization. Second, to further
enhance the performance of the coexistence system, we develop a cooperative
subgoal-based hierarchical deep reinforcement learning (C-GHDRL) algorithm
framework. The framework enables SL-U users to make globally optimal decisions
by leveraging cooperative operations between the BS and SL-U users, effectively
overcoming the limitations of traditional optimization methods in solving joint
optimization problems with nonlinear constraints. Finally, we mathematically
model the joint channel access and power control problem and balance the
trade-off between fairness and transmission rate in the coexistence system by
defining a suitable reward function in the C-GHDRL algorithm. Simulation
results demonstrate that the proposed scheme significantly enhances the
performance of the coexistence system while ensuring fair coexistence between
SL-U and Wi-Fi users.
|
2501.17879
|
Task and Perception-aware Distributed Source Coding for Correlated
Speech under Bandwidth-constrained Channels
|
cs.IT cs.AI cs.SD eess.AS eess.SP math.IT
|
Emerging wireless AR/VR applications require real-time transmission of
correlated high-fidelity speech from multiple resource-constrained devices over
unreliable, bandwidth-limited channels. Existing autoencoder-based speech
source coding methods fail to address the combination of the following - (1)
dynamic bitrate adaptation without retraining the model, (2) leveraging
correlations among multiple speech sources, and (3) balancing downstream task
loss with realism of reconstructed speech. We propose a neural distributed
principal component analysis (NDPCA)-aided distributed source coding algorithm
for correlated speech sources transmitting to a central receiver. Our method
includes a perception-aware downstream task loss function that balances
perceptual realism with task-specific performance. Experiments show significant
PSNR improvements under bandwidth constraints over naive autoencoder methods in
task-agnostic (19%) and task-aware settings (52%). It also approaches the
theoretical upper bound, where all correlated sources are sent to a single
encoder, especially in low-bandwidth scenarios. Additionally, we present a
rate-distortion-perception trade-off curve, enabling adaptive decisions based
on application-specific realism needs.
|
2501.17880
|
Assessment of the January 2025 Los Angeles County wildfires: A
multi-modal analysis of impact, response, and population exposure
|
eess.SP cs.AI cs.LG cs.NA math.NA
|
This study presents a comprehensive analysis of four significant California
wildfires: Palisades, Eaton, Kenneth, and Hurst, examining their impacts
through multiple dimensions, including land cover change, jurisdictional
management, structural damage, and demographic vulnerability. Using the
Chebyshev-Kolmogorov-Arnold network model applied to Sentinel-2 imagery, the
extent of burned areas was mapped, ranging from 315.36 to 10,960.98 hectares.
Our analysis revealed that shrubland ecosystems were consistently the most
affected, comprising 57.4-75.8% of burned areas across all events. The
jurisdictional assessment demonstrated varying management complexities, from
singular authority (98.7% in the Palisades Fire) to distributed management
across multiple agencies. A structural impact analysis revealed significant
disparities between urban interface fires (Eaton: 9,869 structures; Palisades:
8,436 structures) and rural events (Kenneth: 24 structures; Hurst: 17
structures). The demographic analysis showed consistent gender distributions,
with 50.9% of the population identified as female and 49.1% as male.
Working-age populations made up the majority of the affected populations,
ranging from 53.7% to 54.1%, with notable temporal shifts in post-fire periods.
The study identified strong correlations between urban interface proximity,
structural damage, and population exposure. The Palisades and Eaton fires
affected over 20,000 people each, compared to fewer than 500 in rural events.
These findings offer valuable insights for the development of targeted wildfire
management strategies, particularly in wildland urban interface zones, and
emphasize the need for age- and gender-conscious approaches in emergency
response planning.
|
2501.17881
|
RayLoc: Wireless Indoor Localization via Fully Differentiable
Ray-tracing
|
eess.SP cs.AI cs.LG cs.NI
|
Wireless indoor localization has been a pivotal area of research over the
last two decades, becoming a cornerstone for numerous sensing applications.
However, conventional wireless localization methods rely on channel state
information (CSI) to perform blind modelling and estimation of a limited set of
localization parameters. This oversimplification neglects many sensing scene
details, resulting in suboptimal localization accuracy. To address this
limitation, this paper presents a novel approach to wireless indoor
localization by reformulating it as an inverse problem of wireless ray-tracing,
inferring the scene parameters that generate the measured CSI. At the core of our
solution is a fully differentiable ray-tracing simulator that enables
backpropagation to comprehensive parameters of the sensing scene, allowing for
precise localization. To establish a robust localization context, RayLoc
constructs a high-fidelity sensing scene by refining a coarse-grained background
model. Furthermore, RayLoc overcomes the challenges of sparse gradients and
local minima by convolving the signal generation process with a Gaussian
kernel. Extensive experiments showcase that RayLoc outperforms traditional
localization baselines and is able to generalize to different sensing
environments.
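The Gaussian-kernel trick can be illustrated generically: convolving an objective with a Gaussian yields a smoothed surrogate whose gradient can be estimated purely by sampling, one standard way to turn sparse or vanishing gradients into usable descent directions. (This sketch smooths a toy objective directly; RayLoc convolves the signal generation process inside its differentiable ray-tracer, which is not reproduced here.)

```python
import numpy as np

def smoothed_grad(f, x, sigma=0.1, n=2000, seed=0):
    # Score-function estimator of grad_x E_eps[f(x + sigma*eps)]:
    #   grad ~= mean( (f(x + sigma*eps) - baseline) * eps ) / sigma
    # The Gaussian convolution fills in flat/sparse regions of f's landscape.
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n, x.size))
    vals = np.array([f(x + sigma * e) for e in eps])
    centered = vals - vals.mean()                  # baseline reduces variance
    return (centered[:, None] * eps).mean(axis=0) / sigma

# Toy objective: squared distance, whose Gaussian-smoothed gradient is 2x.
f = lambda v: float(v @ v)
g = smoothed_grad(f, np.array([1.0, 0.0]))
```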
|
2501.17882
|
Heterogeneous Multi-Player Multi-Armed Bandits Robust To Adversarial
Attacks
|
stat.ML cs.LG
|
We consider a multi-player multi-armed bandit setting in the presence of
adversaries that attempt to negatively affect the rewards received by the
players in the system. The reward distributions for any given arm are
heterogeneous across the players. In the event of a collision (more than one
player choosing the same arm), all the colliding users receive zero rewards.
The adversaries use collisions to affect the rewards received by the players,
i.e., if an adversary attacks an arm, any player choosing that arm will receive
zero reward. At any time step, the adversaries may attack more than one arm. It
is assumed that the players in the system do not deviate from a pre-determined
policy used by all the players, and that the probability that none of the arms
face adversarial attacks is strictly positive at every time step. In order to
combat the adversarial attacks, the players are allowed to communicate using a
single bit for $O(\log T)$ time units, where $T$ is the time horizon, and each
player can only observe their own actions and rewards at all time steps. We
propose a policy, used by all the players, that achieves near order-optimal
regret of order $O(\log^{1+\delta}T + W)$, where $W$ is the total number of
time units for which there was an adversarial attack on at least one arm.
|
2501.17883
|
Explainable and Robust Millimeter Wave Beam Alignment for AI-Native 6G
Networks
|
eess.SP cs.AI
|
Integrated artificial intelligence (AI) and communication has been recognized
as a key pillar of 6G and beyond networks. In line with AI-native 6G vision,
explainability and robustness in AI-driven systems are critical for
establishing trust and ensuring reliable performance in diverse and evolving
environments. This paper addresses these challenges by developing a robust and
explainable deep learning (DL)-based beam alignment engine (BAE) for
millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. The
proposed convolutional neural network (CNN)-based BAE utilizes received signal
strength indicator (RSSI) measurements over a set of wide beams to accurately
predict the best narrow beam for each user equipment (UE), significantly reducing the overhead
associated with exhaustive codebook-based narrow beam sweeping for initial
access (IA) and data transmission. To ensure transparency and resilience, the
Deep k-Nearest Neighbors (DkNN) algorithm is employed to assess the internal
representations of the network via a nearest-neighbor approach, providing
human-interpretable explanations and confidence metrics for detecting
out-of-distribution inputs. Experimental results demonstrate that the proposed
DL-based BAE exhibits robustness to measurement noise, reduces beam training
overhead by 75% compared to the exhaustive search while maintaining
near-optimal performance in terms of spectral efficiency. Moreover, the
proposed framework improves outlier detection robustness by up to 5x and offers
clearer insights into beam prediction decisions compared to traditional
softmax-based classifiers.
|
2501.17884
|
Ranging Performance Analysis in Automotive DToF Lidars
|
eess.SP cs.RO
|
In recent years, achieving full autonomy in driving has emerged as a
paramount objective for both the industry and academia. Among various
perception technologies, Lidar (Light detection and ranging) stands out for its
high-precision and high-resolution capabilities, based on the principle of light
propagation and the coupling of a ranging module with an imaging module. Lidar is a
sophisticated system that integrates multiple technologies such as optics,
mechanics, circuits, and algorithms. Therefore, there are various feasible
Lidar schemes to meet the needs of autonomous driving in different scenarios.
The ranging performance of Lidar is a key factor that determines the overall
performance of autonomous driving systems. As such, it is necessary to conduct
a systematic analysis of the ranging performance of different Lidar schemes. In
this paper, we present ranging performance analysis methods corresponding
to different optical designs, device selections, and measurement mechanisms. By
using these methods, we compare the ranging performance of several typical
commercial Lidars. Our findings provide a reference framework for designing
Lidars with various trade-offs between cost and performance, and offer insights
into how Lidar schemes can be further improved.
|
2501.17887
|
Docling: An Efficient Open-Source Toolkit for AI-driven Document
Conversion
|
cs.CL cs.CV cs.SE
|
We introduce Docling, an easy-to-use, self-contained, MIT-licensed,
open-source toolkit for document conversion, that can parse several types of
popular document formats into a unified, richly structured representation. It
is powered by state-of-the-art specialized AI models for layout analysis
(DocLayNet) and table structure recognition (TableFormer), and runs efficiently
on commodity hardware in a small resource budget. Docling is released as a
Python package and can be used as a Python API or as a CLI tool. Docling's
modular architecture and efficient document representation make it easy to
implement extensions, new features, models, and customizations. Docling has
already been integrated into other popular open-source frameworks (e.g.,
LangChain, LlamaIndex, spaCy), making it a natural fit for the processing of
documents and the development of high-end applications. The open-source
community has fully engaged in using, promoting, and developing for Docling,
which gathered 10k stars on GitHub in less than a month and was reported as the
No. 1 trending repository in GitHub worldwide in November 2024.
|
2501.17888
|
RadioLLM: Introducing Large Language Model into Cognitive Radio via
Hybrid Prompt and Token Reprogrammings
|
eess.SP cs.AI cs.LG
|
The increasing scarcity of spectrum resources and the rapid growth of
wireless devices have made efficient management of radio networks a critical
challenge. Cognitive Radio Technology (CRT), when integrated with deep learning
(DL), offers promising solutions for tasks such as radio signal classification
(RSC), signal denoising, and spectrum allocation. However, existing DL-based
CRT frameworks are often task-specific and lack scalability to diverse
real-world scenarios. Meanwhile, Large Language Models (LLMs) have demonstrated
exceptional generalization capabilities across multiple domains, making them a
potential candidate for advancing CRT technologies. In this paper, we introduce
RadioLLM, a novel framework that incorporates Hybrid Prompt and Token
Reprogramming (HPTR) and a Frequency Attuned Fusion (FAF) module to enhance
LLMs for CRT tasks. HPTR enables the integration of radio signal features with
expert knowledge, while FAF improves the modeling of high-frequency features
critical for precise signal processing. These innovations allow RadioLLM to
handle diverse CRT tasks, bridging the gap between LLMs and traditional signal
processing methods. Extensive empirical studies on multiple benchmark datasets
demonstrate that the proposed RadioLLM achieves superior performance over
current baselines.
|
2501.17889
|
Knoop: Practical Enhancement of Knockoff with Over-Parameterization for
Variable Selection
|
stat.ML cs.AI cs.LG
|
Variable selection plays a crucial role in enhancing modeling effectiveness
across diverse fields, addressing the challenges posed by high-dimensional
datasets of correlated variables. This work introduces a novel approach,
Knockoff with over-parameterization (Knoop), to enhance Knockoff filters for
variable selection. Specifically, Knoop first generates multiple knockoff
variables for each original variable and integrates them with the original
variables into an over-parameterized Ridgeless regression model. For each
original variable, Knoop evaluates the coefficient distribution of its
knockoffs and compares these with the original coefficients to conduct an
anomaly-based significance test, ensuring robust variable selection. Extensive
experiments demonstrate superior performance compared to existing methods in
both simulation and real-world datasets. In controlled simulations, Knoop
achieves a notably higher Area Under the Receiver Operating Characteristic
(ROC) Curve (AUC) in identifying relevant variables against the ground truth,
while showcasing enhanced predictive accuracy across diverse regression and
classification tasks. The analytical results further support our observations.
|