| id | title | categories | abstract |
|---|---|---|---|
2502.10012
|
Dream to Drive: Model-Based Vehicle Control Using Analytic World Models
|
cs.AI cs.RO
|
Differentiable simulators have recently shown great promise for training
autonomous vehicle controllers. Because gradients can be backpropagated through
them, they can be placed into an end-to-end training loop where their known
dynamics become useful priors for the policy to learn, removing the typical
black-box assumption about the environment. So far, these systems have only been used to
train policies. However, this is not the end of the story in terms of what they
can offer. Here, for the first time, we use them to train world models.
Specifically, we present three new task setups that allow us to learn next
state predictors, optimal planners, and optimal inverse states. Unlike analytic
policy gradients (APG), which require the gradient of the next simulator state
with respect to the current actions, our proposed setups rely on the gradient
of the next state with respect to the current state. We call this approach
Analytic World Models (AWMs) and showcase its applications, including how to
use it for planning in the Waymax simulator. Apart from pushing the limits of
what is possible with such simulators, we offer an improved training recipe
that increases performance on the large-scale Waymo Open Motion dataset by up
to 12% compared to baselines at essentially no additional cost.
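The gradient distinction drawn above can be made concrete with a toy linear simulator, where both Jacobians exist in closed form; the matrices below are hypothetical and unrelated to Waymax:

```python
import numpy as np

# Toy differentiable simulator with linear dynamics s' = A s + B a, so both
# Jacobians are available in closed form (matrices here are hypothetical).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # d s'/d s : the gradient the AWM setups rely on
B = np.array([[0.0],
              [0.1]])        # d s'/d a : the gradient APG relies on

def step(s, a):
    # One simulator step: next state from current state and action.
    return A @ s + B @ a

s_next = step(np.array([1.0, 0.0]), np.array([2.0]))  # -> [1.0, 0.2]
```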
|
2502.10013
|
Probabilistic Lexical Manifold Construction in Large Language Models via
Hierarchical Vector Field Interpolation
|
cs.CL
|
Hierarchical vector field interpolation introduces a structured probabilistic
framework for lexical representation, ensuring that word embeddings transition
smoothly across a continuous manifold rather than being constrained to discrete
token mappings. The proposed methodology constructs a probabilistic function
space where word representations adhere to topological consistency, mitigating
representational discontinuities commonly observed in transformer-based
embeddings. Empirical evaluations reveal that probabilistic constraints enhance
lexical coherence by refining contextual relationships, leading to improvements
in semantic stability across multiple linguistic distributions. The application
of divergence minimization techniques ensures that interpolated embeddings
maintain probabilistic consistency while preserving computational feasibility
for large-scale implementations. Experimental findings demonstrate that
interpolated lexical manifolds improve representation density alignment,
reducing anisotropic distortions in contextual embedding distributions.
Comparative analyses with standard transformer-based models highlight that
structured interpolation yields more stable representations, particularly in
tasks requiring fine-grained semantic differentiation. The statistical
evaluation of embedding divergence confirms that probabilistic lexical
manifolds reduce representational inconsistencies while maintaining coherence
across varying scales of contextual abstraction. An assessment of computational
efficiency reveals that while interpolation introduces minor processing
overhead, the structured representation learning approach remains scalable for
practical deployment.
|
2502.10014
|
Recovering nonlinear dynamics from non-uniform observations: A
physics-based identification approach with practical case studies
|
eess.SY cs.SY
|
Uniform and smooth data collection is often infeasible in real-world
scenarios. In this paper, we propose an identification framework to effectively
handle the so-called non-uniform observations, i.e., data scenarios that
include missing measurements, multiple runs, or aggregated observations. The
goal is to provide a general approach for accurately recovering the overall
dynamics of possibly nonlinear systems, allowing the capture of the system
behavior over time from non-uniform observations. The proposed approach
exploits prior knowledge by integrating domain-specific, interpretable,
physical principles with black-box approximators, providing significant
flexibility and adaptability in handling different types of non-uniform
measurements, and addressing the limitations of traditional linear and
black-box methods. The description of this novel framework is supported by a
theoretical study on the effect of non-uniform observations on the accuracy of
parameter estimation. Specifically, we demonstrate the existence of upper
bounds on the parametric error resulting from missing measurements and
aggregated observations. Then, the effectiveness of the approach is
demonstrated through two case studies. These include a practical application
with missing samples, i.e., the identification of a continuous stirred-tank
reactor using real data, and a simulated Lotka-Volterra system under aggregated
observations. The results highlight the ability of the framework to robustly
estimate the system parameters and to accurately reconstruct the model dynamics
despite the availability of non-uniform measurements.
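As a concrete illustration of the "aggregated observations" scenario, the sketch below rolls out a Lotka-Volterra system with forward Euler and keeps only window averages of the trajectory; parameters, step size, and window length are illustrative choices, not the paper's settings:

```python
import numpy as np

def lotka_volterra(z, p):
    # Prey/predator vector field: dx = a*x - b*x*y, dy = -c*y + d*x*y.
    x, y = z
    a, b, c, d = p
    return np.array([a * x - b * x * y, -c * y + d * x * y])

# Forward-Euler rollout (hypothetical parameters, step size, and horizon).
p_true = (1.0, 0.5, 1.0, 0.5)
dt, T = 0.01, 500
Z = np.empty((T, 2))
Z[0] = (2.0, 1.0)
for t in range(T - 1):
    Z[t + 1] = Z[t] + dt * lotka_volterra(Z[t], p_true)

# "Aggregated observations": the identification data are window averages of
# the trajectory rather than the uniform state samples themselves.
window = 50
obs = Z.reshape(-1, window, 2).mean(axis=1)   # 500 steps -> 10 aggregated points
```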
|
2502.10019
|
A Differential Equation Approach to the Most-Informative Boolean
Function Conjecture
|
cs.IT math.IT
|
We study the most-informative Boolean function conjecture using a
differential equation approach. This leads to a formulation of a functional
inequality on finite-dimensional random variables. We also develop a similar
inequality in the case of the Hellinger conjecture. Finally, we conjecture a
specific finite-dimensional inequality that, if proved, would lead to a proof of
the Boolean function conjecture in the balanced case. We further show that the
above inequality holds modulo four explicit inequalities (all of which seem to
hold in numerical simulations), the first three containing just two
variables and the final one involving four variables.
|
2502.10020
|
Improved Online Confidence Bounds for Multinomial Logistic Bandits
|
stat.ML cs.LG
|
In this paper, we propose an improved online confidence bound for multinomial
logistic (MNL) models and apply this result to MNL bandits, achieving
variance-dependent optimal regret. Recently, Lee & Oh (2024) established an
online confidence bound for MNL models and achieved nearly minimax-optimal
regret in MNL bandits. However, their results still depend on the
norm-boundedness of the unknown parameter $B$ and the maximum size of possible
outcomes $K$. To address this, we first derive an online confidence bound of
$O\left(\sqrt{d \log t} + B \right)$, which is a significant improvement over
the previous bound of $O (B \sqrt{d} \log t \log K )$ (Lee & Oh, 2024). This is
mainly achieved by establishing tighter self-concordant properties of the MNL
loss and introducing a novel intermediary term to bound the estimation error.
Using this new online confidence bound, we propose a constant-time algorithm,
OFU-MNL++, which achieves a variance-dependent regret bound of $O \Big( d \log
T \sqrt{ \smash[b]{\sum_{t=1}^T} \sigma_t^2 } \Big) $ for sufficiently large
$T$, where $\sigma_t^2$ denotes the variance of the rewards at round $t$, $d$
is the dimension of the contexts, and $T$ is the total number of rounds.
Furthermore, we introduce a Maximum Likelihood Estimation (MLE)-based
algorithm, OFU-MN$^2$L, which achieves an anytime poly(B)-free regret of $O
\Big( d \log (BT) \sqrt{ \smash[b]{\sum_{t=1}^T} \sigma_t^2 } \Big) $.
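For context, a generic multinomial logistic choice model (not the authors' estimator; feature shapes are assumptions) computes outcome probabilities as follows:

```python
import numpy as np

def mnl_probs(theta, X):
    # MNL choice probabilities over K offered outcomes plus an outside
    # option with utility 0: P(k) = exp(x_k^T theta) / (1 + sum_j exp(x_j^T theta)).
    u = X @ theta                       # utilities of the K outcomes
    m = max(u.max(), 0.0)               # max-shift for numerical stability
    e = np.exp(u - m)
    return e / (np.exp(-m) + e.sum())   # outside option contributes exp(0 - m)

d, K = 3, 4                             # context dimension and outcome count (assumed)
rng = np.random.default_rng(0)
probs = mnl_probs(rng.standard_normal(d), rng.standard_normal((K, d)))
```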
|
2502.10027
|
Heterogeneous Resource Allocation with Multi-task Learning for Wireless
Networks
|
cs.LG
|
The optimal solution to an optimization problem depends on the problem's
objective function, constraints, and size. While deep neural networks (DNNs)
have proven effective in solving optimization problems, changes in the
problem's size, objectives, or constraints often require adjustments to the DNN
architecture to maintain effectiveness, or even retraining a new DNN from
scratch. Given the dynamic nature of wireless networks, which involve multiple
and diverse objectives that can have conflicting requirements and constraints,
we propose a multi-task learning (MTL) framework to enable a single DNN to
jointly solve a range of diverse optimization problems. In this framework,
optimization problems with varying dimensionality values, objectives, and
constraints are treated as distinct tasks. To jointly address these tasks, we
propose a conditional computation-based MTL approach with routing. The
multi-task DNN consists of two components, the base DNN (bDNN), which is the
single DNN used to extract the solutions for all considered optimization
problems, and the routing DNN (rDNN), which determines which nodes and layers of
the bDNN are used during the forward propagation of each task. The output of
the rDNN is a binary vector that is multiplied with the bDNN's weights during
the forward propagation, creating a unique computational path through the bDNN
for each task. This setup allows the tasks to either share parameters or use
independent ones, with the decision controlled by the rDNN. The proposed
framework supports both supervised and unsupervised learning scenarios.
Numerical results demonstrate the efficiency of the proposed MTL approach in
solving diverse optimization problems. In contrast, benchmark DNNs lacking the
rDNN mechanism were unable to achieve similar levels of performance,
highlighting the effectiveness of the proposed architecture.
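The routing mechanism described above can be sketched in a few lines; the layer size and mask values below are hypothetical, and a real rDNN would produce the masks rather than hard-coding them:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared bDNN layer and two task-specific binary routing masks
# (sizes and mask values are hypothetical stand-ins for rDNN outputs).
W = rng.standard_normal((4, 4))
mask_task_a = np.array([1.0, 1.0, 0.0, 1.0])  # routing vector for task A
mask_task_b = np.array([0.0, 1.0, 1.0, 1.0])  # routing vector for task B

def forward(x, mask):
    # Multiplying the mask into the weights cuts the zeroed connections,
    # carving a task-specific computational path through the shared bDNN.
    return np.maximum((W * mask) @ x, 0.0)     # masked ReLU layer

x = rng.standard_normal(4)
y_a = forward(x, mask_task_a)
y_b = forward(x, mask_task_b)
```

Tasks share parameters wherever their masks overlap and use independent ones elsewhere, matching the parameter-sharing behavior described above.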
|
2502.10028
|
ManiTrend: Bridging Future Generation and Action Prediction with 3D Flow
for Robotic Manipulation
|
cs.CV cs.RO
|
Language-conditioned manipulation is a vital but challenging robotic task due
to the high-level abstraction of language. To address this, researchers have
sought improved goal representations derived from natural language. In this
paper, we highlight 3D flow - representing the motion trend of 3D particles
within a scene - as an effective bridge between language-based future image
generation and fine-grained action prediction. To this end, we develop
ManiTrend, a unified framework that models the dynamics of 3D particles, vision
observations and manipulation actions with a causal transformer. Within this
framework, features for 3D flow prediction serve as additional conditions for
future image generation and action prediction, alleviating the complexity of
pixel-wise spatiotemporal modeling and providing seamless action guidance.
Furthermore, 3D flow can substitute for missing or heterogeneous action labels
during large-scale pretraining on cross-embodiment demonstrations. Experiments
on two comprehensive benchmarks demonstrate that our method achieves
state-of-the-art performance with high efficiency. Our code and model
checkpoints will be available upon acceptance.
|
2502.10038
|
POI-Enhancer: An LLM-based Semantic Enhancement Framework for POI
Representation Learning
|
cs.AI
|
POI representation learning plays a crucial role in handling tasks related to
user mobility data. Recent studies have shown that enriching POI
representations with multimodal information can significantly enhance their
task performance. Previously, the textual information incorporated into POI
representations typically involved only POI categories or check-in content,
leading to relatively weak textual features in existing methods. In contrast,
large language models (LLMs) trained on extensive text data have been found to
possess rich textual knowledge. However, leveraging such knowledge to enhance
POI representation learning presents two key challenges: first, how to extract
POI-related knowledge from LLMs effectively, and second, how to integrate the
extracted information to enhance POI representations. To address these
challenges, we propose POI-Enhancer, a portable framework that leverages LLMs
to improve POI representations produced by classic POI learning models. We
first design three specialized prompts to extract semantic information from
LLMs efficiently. Then, the Dual Feature Alignment module enhances the quality
of the extracted information, while the Semantic Feature Fusion module
preserves its integrity. The Cross Attention Fusion module then adaptively
integrates this high-quality information into POI representations, and
Multi-View Contrastive Learning further injects human-understandable
semantic information into these representations. Extensive experiments on three
real-world datasets demonstrate the effectiveness of our framework, showing
significant improvements across all baseline representations.
|
2502.10040
|
Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation
|
cs.RO
|
Recently, Vision-Language-Action (VLA) models have advanced robot imitation
learning, but high data collection costs and limited demonstrations hinder
generalization, and current imitation learning methods struggle in
out-of-distribution scenarios, especially for long-horizon tasks. A key
challenge is how to mitigate compounding errors in imitation learning, which
lead to cascading failures over extended trajectories. To address these
challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework,
which generates 2D trajectories through a diffusion model to guide policy
learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP
provides trajectory-level guidance to reduce error accumulation. Our two-stage
approach first trains a generative vision-language model to create
diffusion-based trajectories, then refines the imitation policy using them.
Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art
baselines by 25% in success rate, starting from scratch without external
pretraining. Moreover, DTP significantly improves real-world robot performance.
|
2502.10042
|
Scaling Law Tradeoff Between Throughput and Sensing Distance in Large
ISAC Networks
|
cs.IT math.IT
|
In this paper, we investigate the fundamental tradeoff between communication
and sensing performance of \emph{ad hoc} integrated sensing and communication
(ISAC) wireless networks. Specifically, we consider that $n$ nodes are randomly
located in an extended network with area $n$ and transmit ISAC signals. Under
the pure path loss channel gain model and the condition that the transmission
power scales according to the communication distance, we fully characterize the
optimal scaling law tradeoff between throughput and sensing distance by
proposing an achievable scheme and proving its converse. Our results can be
interpreted as follows: by reducing the throughput by a factor of a function of
$n$, the sensing range order improves according to the same function of $n$,
raised to the power of the ratio between the path loss factors in communication
and sensing. We prove that the same result also holds true for ISAC networks
with random fading, despite the uncertainty on the connectivity and power level
created by random fading. In addition, we show that the scaling law tradeoff
cannot be improved by allowing the transmission power and communication
distance to scale freely. To the best of our knowledge, this is the first work
formally formulating and characterizing the communication and sensing
performance scaling law tradeoff of \emph{ad hoc} ISAC networks.
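In symbols, the stated tradeoff can be written as follows, where $f(n)$ is the throughput reduction factor and $\alpha_c$, $\alpha_s$ are the communication and sensing path-loss exponents (this notation is ours, introduced for illustration, not the paper's):

```latex
T(n) \;\sim\; \frac{T_0(n)}{f(n)}
\quad\Longrightarrow\quad
d_{\mathrm{sense}}(n) \;\sim\; d_0(n)\,\big(f(n)\big)^{\alpha_c/\alpha_s}
```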
|
2502.10044
|
Unsupervised Entity Alignment Based on Personalized Discriminative
Rooted Tree
|
cs.AI
|
Entity Alignment (EA) aims to link potentially equivalent entities across
different knowledge graphs (KGs). Most existing EA methods are supervised, as
they require the supervision of seed alignments, i.e., manually specified
aligned entity pairs. Very recently, several EA studies have attempted
to get rid of seed alignments. Despite achieving preliminary progress, they
still suffer from two limitations: (1) The entity embeddings produced by their
GNN-like encoders lack personalization since some of the aggregation subpaths
are shared between different entities. (2) They cannot fully alleviate the
distribution distortion issue between candidate KGs due to the absence of the
supervised signal. In this work, we propose a novel unsupervised entity
alignment approach called UNEA to address the above two issues. First, we
parametrically sample a tree neighborhood rooted at each entity, and
accordingly develop a tree attention aggregation mechanism to extract a
personalized embedding for each entity. Second, we introduce an auxiliary task
of maximizing the mutual information between the input and the output of the KG
encoder, to regularize the model and prevent the distribution distortion.
Extensive experiments show that our UNEA achieves a new state-of-the-art for
the unsupervised EA task, and can even outperform many existing supervised EA
baselines.
|
2502.10046
|
ViRAC: A Vision-Reasoning Agent Head Movement Control Framework in
Arbitrary Virtual Environments
|
cs.GR cs.CV
|
Creating lifelike virtual agents capable of interacting with their
environments is a longstanding goal in computer graphics. This paper addresses
the challenge of generating natural head rotations, a critical aspect of
believable agent behavior for visual information gathering and dynamic
responses to environmental cues. Although earlier methods have made significant
strides, many rely on data-driven or saliency-based approaches, which often
underperform in diverse settings and fail to capture deeper cognitive factors
such as risk assessment, information seeking, and contextual prioritization.
Consequently, generated behaviors can appear rigid or overlook critical scene
elements, thereby diminishing the sense of realism. In this paper, we propose
\textbf{ViRAC}, a \textbf{Vi}sion-\textbf{R}easoning \textbf{A}gent Head
Movement \textbf{C}ontrol framework, which exploits the common-sense knowledge
and reasoning capabilities of large-scale models, including Vision-Language
Models (VLMs) and Large-Language Models (LLMs). Rather than explicitly modeling
every cognitive mechanism, ViRAC leverages the biases and patterns internalized
by these models from extensive training, thus emulating human-like perceptual
processes without hand-tuned heuristics. Experimental results in multiple
scenarios reveal that ViRAC produces more natural and context-aware head
rotations than recent state-of-the-art techniques. Quantitative evaluations
show a closer alignment with real human head-movement data, while user studies
confirm improved realism and cognitive plausibility.
|
2502.10047
|
Janus: Collaborative Vision Transformer Under Dynamic Network
Environment
|
cs.DC cs.AI
|
Vision Transformers (ViTs) have outperformed traditional Convolutional Neural
Network architectures and achieved state-of-the-art results in various computer
vision tasks. Since ViTs are computationally expensive, the models either have
to be pruned to run on resource-limited edge devices only or have to be
executed on remote cloud servers after receiving the raw data transmitted over
fluctuating networks. The resulting degraded performance and high latency both
hinder their widespread application. In this paper, we present Janus, the
first framework for low-latency cloud-device collaborative Vision Transformer
inference over dynamic networks. Janus overcomes the intrinsic model
limitations of ViTs and realizes collaboratively executing ViT models on both
cloud and edge devices, achieving low latency, high accuracy, and low
communication overhead. Specifically, Janus judiciously combines token pruning
techniques with a carefully designed fine-to-coarse model splitting policy and
non-static mixed pruning policy. It attains a balance between accuracy and
latency by dynamically selecting the optimal pruning level and split point.
Experimental results across various tasks demonstrate that Janus enhances
throughput by up to 5.15 times and reduces latency violation ratios by up to
98.7% when compared with baseline approaches under various network
environments.
|
2502.10050
|
A Survey on LLM-powered Agents for Recommender Systems
|
cs.IR cs.AI
|
Recommender systems are essential components of many online platforms, yet
traditional approaches still struggle with understanding complex user
preferences and providing explainable recommendations. The emergence of Large
Language Model (LLM)-powered agents offers a promising approach by enabling
natural language interactions and interpretable reasoning, potentially
transforming research in recommender systems. This survey provides a systematic
review of the emerging applications of LLM-powered agents in recommender
systems. We identify and analyze three key paradigms in current research: (1)
Recommender-oriented approaches, which leverage intelligent agents to enhance
the fundamental recommendation mechanisms; (2) Interaction-oriented approaches,
which facilitate dynamic user engagement through natural dialogue and
interpretable suggestions; and (3) Simulation-oriented approaches, which employ
multi-agent frameworks to model complex user-item interactions and system
dynamics. Beyond paradigm categorization, we analyze the architectural
foundations of LLM-powered recommendation agents, examining their essential
components: profile construction, memory management, strategic planning, and
action execution. Our investigation extends to a comprehensive analysis of
benchmark datasets and evaluation frameworks in this domain. This systematic
examination not only illuminates the current state of LLM-powered agent
recommender systems but also charts critical challenges and promising research
directions in this transformative field.
|
2502.10051
|
ORI: O Routing Intelligence
|
cs.CL
|
Single large language models (LLMs) often fall short when faced with the
ever-growing range of tasks, making a single-model approach insufficient. We
address this challenge by proposing ORI (O Routing Intelligence), a dynamic
framework that leverages a set of LLMs. By intelligently routing incoming
queries to the most suitable model, ORI not only improves task-specific
accuracy, but also maintains efficiency. Comprehensive evaluations across
diverse benchmarks demonstrate consistent accuracy gains while controlling
computational overhead. ORI outperforms the
strongest individual models by up to 2.7 points on MMLU and 1.8 points on MuSR,
and matches the top performance on ARC and BBH. These results underscore the
benefits of a multi-model strategy and demonstrate how ORI's adaptive
architecture can more effectively handle diverse tasks, offering a scalable,
high-performance solution for a system of multiple large language models.
|
2502.10054
|
Towards Polyp Counting In Full-Procedure Colonoscopy Videos
|
cs.CV
|
Automated colonoscopy reporting holds great potential for enhancing quality
control and improving cost-effectiveness of colonoscopy procedures. A major
challenge lies in the automated identification, tracking, and re-association
(ReID) of polyp tracklets across full-procedure colonoscopy videos. This is
essential for precise polyp counting and enables automated computation of key
quality metrics, such as Adenoma Detection Rate (ADR) and Polyps Per
Colonoscopy (PPC). However, polyp ReID is challenging due to variations in
polyp appearance, frequent disappearance from the field of view, and
occlusions. In this work, we leverage the REAL-Colon dataset, the first
open-access dataset providing full-procedure videos, to define tasks, data
splits, and metrics for the problem of automatically counting polyps in
full-procedure videos, establishing an open-access framework. We re-implement
previously proposed SimCLR-based methods for learning representations of polyp
tracklets, both single-frame and multi-view, and adapt them to the polyp
counting task. We then propose an Affinity Propagation-based clustering method
to further improve ReID based on these learned representations, ultimately
enhancing polyp counting. Our approach achieves state-of-the-art performance,
with a polyp fragmentation rate of 6.30 and a false positive rate (FPR) below
5% on the REAL-Colon dataset. We release code at
https://github.com/lparolari/towards-polyp-counting.
|
2502.10057
|
A Generalized Modeling Approach to Liquid-driven Ballooning Membranes
|
cs.RO
|
Soft robotics is advancing the use of flexible materials for adaptable
robotic systems. Membrane-actuated soft robots address the limitations of
traditional soft robots by using pressurized, extensible membranes to achieve
stable, large deformations, yet control and state estimation remain challenging
due to their complex deformation dynamics. This paper presents a novel modeling
approach for liquid-driven ballooning membranes, employing an ellipsoid
approximation to model shape and stretch under planar deformation. Relying
solely on intrinsic feedback from pressure data and controlled liquid volume,
this approach enables accurate membrane state estimation. We demonstrate the
effectiveness of the proposed model for ballooning membrane-based actuators by
experimental validation, obtaining an indentation-depth error of
$RMSE_{h_2}=0.80\;$mm, which is $23\%$ of the indentation range and $6.67\%$ of
the unindented actuator height range. For force estimation, the error is
$RMSE_{F}=0.15\;$N, which is $10\%$ of the measured force
range.
|
2502.10058
|
MTLM: an Innovative Language Model Training Paradigm for ASR
|
cs.CL eess.AS
|
Pre-training Transformer-based language models (LMs) on a large amount of
text has proven crucial for improving automatic speech recognition (ASR)
performance. Generally, traditional LMs are unidirectional and unable to access
context on the right. This paper proposes a training method that enables
traditional unidirectional LMs to fully utilize both left and right contexts.
Compared with unidirectional LMs, ours helps ASR transcribe
hypotheses more consistently and in a more semantically unambiguous way, as it
incorporates richer contextual representations. Finally, our experimental
results on the LibriSpeech corpus demonstrate that our model outperforms
traditional unidirectional LMs, whether n-best rescoring or shallow fusion is
used as the decoding algorithm.
|
2502.10059
|
RealCam-I2V: Real-World Image-to-Video Generation with Interactive
Complex Camera Control
|
cs.CV
|
Recent advancements in camera-trajectory-guided image-to-video generation
offer higher precision and better support for complex camera control compared
to text-based approaches. However, they also introduce significant usability
challenges, as users often struggle to provide precise camera parameters when
working with arbitrary real-world images without knowledge of their depth or
scene scale. To address these real-world application issues, we propose
RealCam-I2V, a novel diffusion-based video generation framework that integrates
monocular metric depth estimation to establish 3D scene reconstruction in a
preprocessing step. During training, the reconstructed 3D scene enables scaling
camera parameters from relative to absolute values, ensuring compatibility and
scale consistency across diverse real-world images. In inference, RealCam-I2V
offers an intuitive interface where users can precisely draw camera
trajectories by dragging within the 3D scene. To further enhance precise camera
control and scene consistency, we propose scene-constrained noise shaping,
which shapes high-level noise and also allows the framework to maintain
dynamic, coherent video generation in lower noise stages. RealCam-I2V achieves
significant improvements in controllability and video quality on
RealEstate10K and out-of-domain images. It further enables applications such as
camera-controlled looping video generation and generative frame interpolation.
We will release our absolute-scale annotations, code, and all checkpoints.
Please see dynamic results in https://zgctroy.github.io/RealCam-I2V.
|
2502.10060
|
DiSciPLE: Learning Interpretable Programs for Scientific Visual
Discovery
|
cs.CV cs.LG
|
Visual data is used in numerous different scientific workflows ranging from
remote sensing to ecology. As the amount of observation data increases, the
challenge is not just to make accurate predictions but also to understand the
underlying mechanisms for those predictions. Good interpretation is important
in scientific workflows, as it allows for better decision-making by providing
insights into the data. This paper introduces an automatic way of obtaining
such interpretable-by-design models, by learning programs that interleave
neural networks. We propose DiSciPLE (Discovering Scientific Programs using
LLMs and Evolution), an evolutionary algorithm that leverages common sense and
prior knowledge of large language models (LLMs) to create Python programs
explaining visual data. Additionally, we propose two improvements, a program
critic and a program simplifier, that further help the method synthesize
good programs. On three different real-world problems, DiSciPLE learns
state-of-the-art programs on novel tasks with no prior literature. For example,
we can learn programs with 35% lower error than the closest non-interpretable
baseline for population density estimation.
|
2502.10061
|
Annotating Compositionality Scores for Irish Noun Compounds is Hard Work
|
cs.CL
|
Noun compounds constitute a challenging construction for NLP applications,
given their variability in idiomaticity and interpretation. In this paper, we
present an analysis of compound nouns identified in Irish text of varied
domains by expert annotators, focusing on compositionality as a key feature,
but also domain specificity, as well as familiarity and confidence of the
annotator giving the ratings. Our findings and the ensuing discussion
contribute to a greater understanding of how these constructions appear
in the Irish language, and how they might be treated separately from English noun
compounds.
|
2502.10062
|
Adaptive Bi-Level Multi-Robot Task Allocation and Learning under
Uncertainty with Temporal Logic Constraints
|
cs.RO cs.AI cs.FL
|
This work addresses the problem of multi-robot coordination under unknown
robot transition models, ensuring that tasks specified by Time Window Temporal
Logic are satisfied with user-defined probability thresholds. We present a
bi-level framework that integrates (i) high-level task allocation, where tasks
are assigned based on the robots' estimated task completion probabilities and
expected rewards, and (ii) low-level distributed policy learning and execution,
where robots independently optimize auxiliary rewards while fulfilling their
assigned tasks. To handle uncertainty in robot dynamics, our approach leverages
real-time task execution data to iteratively refine expected task completion
probabilities and rewards, enabling adaptive task allocation without explicit
robot transition models. We theoretically validate the proposed algorithm,
demonstrating that the task assignments meet the desired probability thresholds
with high confidence. Finally, we demonstrate the effectiveness of our
framework through comprehensive simulations.
|
2502.10063
|
Strassen Multisystolic Array Hardware Architectures
|
cs.AR cs.AI cs.PF
|
While Strassen's matrix multiplication algorithm reduces the complexity of
naive matrix multiplication, general-purpose hardware is not suitable for
achieving the algorithm's promised theoretical speedups. This leaves open the
question of whether it could be better exploited in custom hardware architectures
designed specifically for executing the algorithm. However, there is limited
prior work on this, and it is not immediately clear how to derive such
architectures or whether they can ultimately lead to real improvements. We bridge
this gap, presenting and evaluating new systolic array architectures that
efficiently translate the theoretical complexity reductions of Strassen's
algorithm directly into hardware resource savings. Furthermore, the
architectures are multisystolic array designs that can multiply smaller
matrices with higher utilization than single-systolic array designs. The
proposed designs implemented on FPGA reduce DSP requirements by a factor of
$1.14^r$ for $r$ implemented Strassen recursion levels, and otherwise require
overall similar soft logic resources when instantiated to support matrix sizes
down to 32x32 and 24x24 at 1-2 levels of Strassen recursion, respectively. We
evaluate the proposed designs both in isolation and in an end-to-end machine
learning accelerator compared to baseline designs and prior works, achieving
state-of-the-art performance.
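The DSP reduction factor of $1.14^r$ follows from Strassen's recursion, which computes a 2x2 block product with seven multiplications instead of eight ($8/7 \approx 1.143$ per level). As context, a minimal software reference for the algorithm itself (not the paper's hardware architecture), assuming square power-of-two matrices:

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen recursion for square power-of-two matrices.
    Falls back to ordinary multiplication below `cutoff` (a tuning choice)."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven products instead of eight -- the source of the complexity reduction.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Each recursion level multiplies the count of block products by 7 rather than 8, which is where the per-level $8/7 \approx 1.14$ hardware-resource saving comes from.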
|
2502.10064
|
Hands-off Image Editing: Language-guided Editing without any
Task-specific Labeling, Masking or even Training
|
cs.CL cs.CV
|
Instruction-guided image editing consists of taking an image and an
instruction and delivering that image altered according to that instruction.
State-of-the-art approaches to this task suffer from the typical scaling-up and
domain-adaptation hindrances related to supervision, as they eventually resort
to some kind of task-specific labelling, masking or training. We propose a
novel approach that dispenses with any such task-specific supervision and thus
offers better potential for improvement. Its assessment demonstrates that it is
highly effective, achieving very competitive performance.
|
2502.10070
|
Topological Neural Networks over the Air
|
cs.IT cs.LG math.IT
|
Topological neural networks (TNNs) are information processing architectures
that model representations from data lying over topological spaces (e.g.,
simplicial or cell complexes) and allow for decentralized implementation
through localized communications over different neighborhoods. Existing TNN
architectures have not yet been considered in realistic communication
scenarios, where channel effects typically introduce disturbances such as
fading and noise. This paper aims to propose a novel TNN design, operating on
regular cell complexes, that performs over-the-air computation, incorporating
the wireless communication model into its architecture. Specifically, during
training and inference, the proposed method considers channel impairments such
as fading and noise in the topological convolutional filtering operation, which
takes place over different signal orders and neighborhoods. Numerical results
illustrate the architecture's robustness to channel impairments during testing
and the superior performance with respect to existing architectures, which are
either communication-agnostic or graph-based.
|
2502.10072
|
LifeSaver: Predictive Load Limit Estimation for Transport Vehicles in
Hilly Areas
|
eess.SY cs.SY
|
The transportation of essential goods in mountainous regions faces severe
logistical challenges and frequent disruptions. To mitigate these difficulties,
transport companies often overload trucks, which, though cost-saving,
significantly heightens the risk of accidents and mechanical failures. This
paper presents the development of a device that detects overloaded and
insecurely fastened loads on trucks and commercial vehicles. Using advanced
load sensors, the device offers real-time monitoring of cargo weight
distribution, alerting drivers and authorities to unsafe conditions. The
initial prototype utilised two basic load cells and an Arduino microcontroller.
The second version was enhanced with four load cells and extended sensors. This
version was tested by placing an electric golf cart onto the prototype. Various
loads were then added to the cart in different orientations to assess whether
the system could accurately detect improper or excessive load conditions.
|
2502.10076
|
Classification of Temporal Graphs using Persistent Homology
|
cs.LG cs.CG math.AT
|
Temporal graphs effectively model dynamic systems by representing
interactions as timestamped edges. However, analytical tools for temporal
graphs are limited compared to static graphs. We propose a novel method for
analyzing temporal graphs using Persistent Homology. Our approach leverages
$\delta$-temporal motifs (recurrent subgraphs) to capture temporal dynamics
without aggregation. By evolving these motifs, we define the \textit{average
filtration} and
compute PH on the associated clique complex. This method captures both local
and global temporal structures and is stable with respect to reference models.
We demonstrate the applicability of our approach to the temporal graph
classification task. Experiments verify the effectiveness of our approach,
achieving over 92\% accuracy, with some cases reaching 100\%. Unlike existing
methods that require node classes, our approach is node class free, offering
flexibility for a wide range of temporal graph analysis.
|
2502.10077
|
Towards Empowerment Gain through Causal Structure Learning in
Model-Based RL
|
cs.AI cs.LG
|
In Model-Based Reinforcement Learning (MBRL), incorporating causal structures
into dynamics models provides agents with a structured understanding of their
environments, enabling efficient decision-making. Empowerment as an intrinsic
motivation enhances the ability of agents to actively control their
environments by maximizing the mutual information between future states and
actions. We posit that empowerment coupled with causal understanding can
improve controllability, while enhanced empowerment gain can further facilitate
causal reasoning in MBRL. To improve learning efficiency and controllability,
we propose a novel framework, Empowerment through Causal Learning (ECL), where
an agent with the awareness of causal dynamics models achieves
empowerment-driven exploration and optimizes its causal structure for task
learning. Specifically, ECL operates by first training a causal dynamics model
of the environment based on collected data. We then maximize empowerment under
the causal structure for exploration, simultaneously using data gathered
through exploration to update causal dynamics model to be more controllable
than dense dynamics model without causal structure. In downstream task
learning, an intrinsic curiosity reward is included to balance the causality,
mitigating overfitting. Importantly, ECL is method-agnostic and is capable of
integrating various causal discovery methods. We evaluate ECL combined with 3
causal discovery methods across 6 environments including pixel-based tasks,
demonstrating its superior performance compared to other causal MBRL methods,
in terms of causal discovery, sample efficiency, and asymptotic performance.
|
2502.10080
|
Coordinated control of multiple autonomous surface vehicles: challenges
and advances -- a systematic review
|
cs.RO cs.SY eess.SY
|
The increasing use and implementation of Autonomous Surface Vessels (ASVs)
for various activities in maritime environments is expected to drive a rise in
developments and research on their control. Particularly, the coordination of
multiple ASVs presents novel challenges and opportunities, requiring
interdisciplinary research efforts at the intersection of robotics, control
theory, communication systems, and marine sciences. The wide variety of
missions or objectives for which these vessels can be collectively used allows
for the application and combination of different control techniques. This
includes the exploration of machine learning to consider aspects previously
deemed infeasible. This review provides a comprehensive exploration of
coordinated ASV control while addressing critical gaps left by previous
reviews. Unlike previous works, we adopt a systematic approach to ensure
integrity and minimize bias in article selection. We delve into the complex
world of underactuated ASVs with a focus on customized control strategies and
the integration of machine learning techniques for increased autonomy. By
synthesizing recent advances and identifying emerging trends, we offer insights
that drive this field forward, providing both a comprehensive overview of
state-of-the-art techniques and guidance for future research efforts.
|
2502.10089
|
A Hybrid Edge Classifier: Combining TinyML-Optimised CNN with RRAM-CMOS
ACAM for Energy-Efficient Inference
|
cs.LG cs.AI cs.AR
|
In recent years, the development of smart edge computing systems to process
information locally is on the rise. Many near-sensor machine learning (ML)
approaches have been implemented to introduce accurate and energy efficient
template matching operations in resource-constrained edge sensing systems, such
as wearables. To introduce novel solutions that can be viable for extreme edge
cases, hybrid solutions combining conventional and emerging technologies have
started to be proposed. Deep Neural Networks (DNNs) optimised for edge
applications, alongside new computing approaches (both device- and
architecture-wise), are strong candidates for implementing edge ML solutions
that aim at competitive classification accuracy while using a fraction of the
power of conventional ML solutions. In this work, we propose a hybrid
software-hardware edge classifier aimed at the extreme edge near-sensor
systems. The classifier consists of two parts: (i) an optimised digital tinyML
network, working as a front-end feature extractor, and (ii) a back-end
RRAM-CMOS analogue content addressable memory (ACAM), working as a final stage
template matching system. The combined hybrid system exhibits a competitive
trade-off in the accuracy-versus-energy metric, with $E_{\text{front-end}} =
96.23$ nJ and $E_{\text{back-end}} = 1.45$ nJ per classification operation
compared with 78.06 $\mu$J for the original teacher model, representing a
792-fold reduction, making it a viable solution for extreme edge applications.
|
2502.10090
|
Manual2Skill: Learning to Read Manuals and Acquire Robotic Skills for
Furniture Assembly Using Vision-Language Models
|
cs.RO cs.AI
|
Humans possess an extraordinary ability to understand and execute complex
manipulation tasks by interpreting abstract instruction manuals. For robots,
however, this capability remains a substantial challenge, as they cannot
interpret abstract instructions and translate them into executable actions. In
this paper, we present Manual2Skill, a novel framework that enables robots to
perform complex assembly tasks guided by high-level manual instructions. Our
approach leverages a Vision-Language Model (VLM) to extract structured
information from instructional images and then uses this information to
construct hierarchical assembly graphs. These graphs represent parts,
subassemblies, and the relationships between them. To facilitate task
execution, a pose estimation model predicts the relative 6D poses of components
at each assembly step. At the same time, a motion planning module generates
actionable sequences for real-world robotic implementation. We demonstrate the
effectiveness of Manual2Skill by successfully assembling several real-world
IKEA furniture items. This application highlights its ability to manage
long-horizon manipulation tasks with both efficiency and precision,
significantly enhancing the practicality of robot learning from instruction
manuals. This work marks a step forward in advancing robotic systems capable of
understanding and executing complex manipulation tasks in a manner akin to
human capabilities.
|
2502.10091
|
ELAA-ISAC: Environmental Mapping Utilizing the LoS State of
Communication Channel
|
cs.IT eess.SP math.IT
|
In this paper, a novel environmental mapping method is proposed to outline
the indoor layout utilizing the line-of-sight (LoS) state information of
extremely large aperture array (ELAA) channels. It leverages the spatial
resolution provided by ELAA and the mobile terminal (MT)'s mobility to infer
the presence and location of obstacles in the environment. The LoS state
estimation is formulated as a binary hypothesis testing problem, and the
optimal decision rule is derived based on the likelihood ratio test.
Subsequently, the theoretical error probability of LoS estimation is derived,
showing close alignment with simulation results. Then, an environmental mapping
method is proposed, which progressively outlines the layout by combining LoS
state information from multiple MT locations. It is demonstrated that the
proposed method can accurately outline the environment layout, with the mapping
accuracy improving as the number of service antennas and MT locations
increases. This paper also investigates the impact of channel estimation error
and non-LoS (NLoS) components on the quality of environmental mapping. The
proposed method exhibits particularly promising performance in LoS dominated
wireless environments characterized by high Rician K-factor. Specifically, it
achieves an average intersection over union (IoU) exceeding 80% when utilizing
256 service antennas and 18 MT locations.
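The binary hypothesis test at the core of the LoS estimator can be illustrated with a generic Gaussian likelihood-ratio test; the statistics below are placeholders, since the paper derives the optimal rule from the actual ELAA channel model:

```python
import numpy as np

def lrt_decision(y, mu0, mu1, sigma, threshold=1.0):
    """Generic likelihood-ratio test between two Gaussian hypotheses.
    Illustrative stand-in for the paper's channel-specific test.
    H0: y ~ N(mu0, sigma^2) (e.g. NLoS); H1: y ~ N(mu1, sigma^2) (e.g. LoS).
    Returns 1 if H1 is accepted."""
    # Log-likelihood ratio; with threshold=1 this is the equal-prior MAP rule.
    llr = (np.sum((y - mu0) ** 2) - np.sum((y - mu1) ** 2)) / (2 * sigma**2)
    return int(llr > np.log(threshold))
```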
|
2502.10092
|
A novel approach to data generation in generative model
|
cs.LG cs.AI
|
Variational Autoencoders (VAEs) and other generative models are widely
employed in artificial intelligence to synthesize new data. However, current
approaches rely on Euclidean geometric assumptions and statistical
approximations that fail to capture the structured and emergent nature of data
generation. This paper introduces the Convergent Fusion Paradigm (CFP) theory,
a novel geometric framework that redefines data generation by integrating
dimensional expansion accompanied by qualitative transformation. By modifying
the latent space geometry to interact with emergent high-dimensional
structures, CFP theory addresses key challenges such as identifiability issues
and unintended artifacts like hallucinations in Large Language Models (LLMs).
CFP theory is based on two key conceptual hypotheses that redefine how
generative models structure relationships between data and algorithms. Through
the lens of CFP theory, we critically examine existing metric-learning
approaches. CFP theory advances this perspective by introducing time-reversed
metric embeddings and structural convergence mechanisms, leading to a novel
geometric approach that better accounts for data generation as a structured
epistemic process. Beyond its computational implications, CFP theory provides
philosophical insights into the ontological underpinnings of data generation.
By offering a systematic framework for high-dimensional learning dynamics, CFP
theory contributes to establishing a theoretical foundation for understanding
the data-relationship structures in AI. Finally, future research on CFP theory
will address its implications for fully realizing qualitative transformations,
introducing the potential of Hilbert spaces in generative modeling.
|
2502.10095
|
Representation Learning on Out of Distribution in Tabular Data
|
cs.LG
|
The open-world assumption in model development suggests that a model might
lack sufficient information to adequately handle data that is entirely distinct
or out of distribution (OOD). While deep learning methods have shown promising
results in handling OOD data through generalization techniques, they often
require specialized hardware that may not be accessible to all users. We
present TCL, a lightweight yet effective solution that operates efficiently on
standard CPU hardware. Our approach adapts contrastive learning principles
specifically for tabular data structures, incorporating full matrix
augmentation and simplified loss calculation. Through comprehensive experiments
across 10 diverse datasets, we demonstrate that TCL outperforms existing
models, including FT-Transformer and ResNet, particularly in classification
tasks, while maintaining competitive performance in regression problems. TCL
achieves these results with significantly reduced computational requirements,
making it accessible to users with limited hardware capabilities. This study
also provides practical guidance for detecting and evaluating OOD data through
straightforward experiments and visualizations. Our findings show that TCL
offers a promising balance between performance and efficiency in handling OOD
prediction tasks, which is particularly beneficial for general machine learning
practitioners working with computational constraints.
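TCL's contrastive principle, pulling two augmented views of the same row together while pushing other rows apart, can be sketched with a generic InfoNCE loss in NumPy; TCL's actual augmentation and simplified loss calculation differ:

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Generic InfoNCE contrastive loss over two augmented views.
    Rows of z1 and z2 are paired (row i of z1 is the positive for row i of z2).
    Illustrative only: not the exact simplified loss used by TCL."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # pairwise cosine similarities
    # Cross-entropy with the matching row as the positive class.
    logexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logexp - np.diag(sim)))
```

Correctly paired views yield a lower loss than mismatched pairings, which is the signal contrastive pretraining exploits.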
|
2502.10097
|
Causal Information Prioritization for Efficient Reinforcement Learning
|
cs.AI cs.LG
|
Current Reinforcement Learning (RL) methods often suffer from
sample-inefficiency, resulting from blind exploration strategies that neglect
causal relationships among states, actions, and rewards. Although recent causal
approaches aim to address this problem, they lack grounded modeling of
reward-guided causal understanding of states and actions for goal-orientation,
thus impairing learning efficiency. To tackle this issue, we propose a novel
method named Causal Information Prioritization (CIP) that improves sample
efficiency by leveraging factored MDPs to infer causal relationships between
different dimensions of states and actions with respect to rewards, enabling
the prioritization of causal information. Specifically, CIP identifies and
leverages causal relationships between states and rewards to execute
counterfactual data augmentation to prioritize high-impact state features under
the causal understanding of the environments. Moreover, CIP integrates a
causality-aware empowerment learning objective, which significantly enhances
the agent's execution of reward-guided actions for more efficient exploration
in complex environments. To fully assess the effectiveness of CIP, we conduct
extensive experiments across 39 tasks in 5 diverse continuous control
environments, encompassing both locomotion and manipulation skills learning
with pixel-based and sparse reward settings. Experimental results demonstrate
that CIP consistently outperforms existing RL methods across a wide range of
scenarios.
|
2502.10100
|
Statistical data analysis for Tourism in Poland in R Programming
Environment
|
math.NA cs.CE cs.ET cs.NA cs.PL
|
This study utilises the R programming language for statistical data analysis
to understand Tourism dynamics in Poland. It focuses on methods for data
visualisation, multivariate statistics, and hypothesis testing. To investigate
the expenditure behavior of tourists, spending patterns, correlations, and
associations among variables in the dataset were analysed. The results revealed
a significant relationship between accommodation type and the purpose of the
trip, showing that the purpose of a trip impacts the selection of
accommodation. A strong correlation was observed between organizer expenditure
and private expenditure, indicating that individual spending is higher when
spending on organizing the trip is higher. However, no significant difference
was observed in total expenditure across accommodation types and trip purposes,
revealing that travelers tend to spend similar amounts regardless of their
reason for travel or choice of accommodation. Although significant
relationships were observed among certain variables, ANOVA could not be applied
because the dataset did not satisfy the normality assumption. In future work,
the dataset can be explored further to find more meaningful insights.
The developed code is available on GitHub:
https://github.com/SaadAhmedJamal/DataAnalysis RProgEnv.
|
2502.10106
|
Data-Adaptive Low-Rank Sparse Subspace Clustering
|
cs.LG
|
Low-rank sparse subspace clustering (LRSSC) algorithms built on the
self-expressive model effectively capture both the global and local structure
of the data. However, existing solutions, primarily based on proximal operators
associated with $S_p/\ell_p$, $p \in \{0, 1/2, 2/3, 1\}$, norms, are not
data-adaptive. In this work, we propose an LRSSC algorithm incorporating a
data-adaptive surrogate for the $S_0/\ell_0$ quasi-norm. We provide a numerical
solution for the corresponding proximal operator in cases where an analytical
expression is unavailable. The proposed LRSSC algorithm is formulated within
the proximal mapping framework, and we present a theoretical proof of its
global convergence toward a stationary point. We evaluate the performance of
the proposed method on three well-known datasets, comparing it against LRSSC
algorithms constrained by $S_p/\ell_p$, $p \in \{0, 1/2, 2/3, 1\}$, norms.
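For context, the fixed (non-data-adaptive) proximal operators the abstract contrasts against have simple closed forms at the endpoints of the $p$ range: soft-thresholding for $p = 1$ and hard-thresholding for $p = 0$. A minimal sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l0(x, lam):
    """Proximal operator of lam * ||.||_0: hard-thresholding.
    Keeps entries whose magnitude exceeds sqrt(2 * lam)."""
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)
```

The data-adaptive surrogate proposed in the paper has no such closed form, which is why a numerical solution of its proximal operator is needed.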
|
2502.10108
|
NeuroXVocal: Detection and Explanation of Alzheimer's Disease through
Non-invasive Analysis of Picture-prompted Speech
|
cs.LG q-bio.NC
|
The early diagnosis of Alzheimer's Disease (AD) through non-invasive methods
remains a significant healthcare challenge. We present NeuroXVocal, a novel
dual-component system that not only classifies but also explains potential AD
cases through speech analysis. The classification component (Neuro) processes
three distinct data streams: acoustic features capturing speech patterns and
voice characteristics, textual features extracted from speech transcriptions,
and precomputed embeddings representing linguistic patterns. These streams are
fused through a custom transformer-based architecture that enables robust
cross-modal interactions. The explainability component (XVocal) implements a
Retrieval-Augmented Generation (RAG) approach, leveraging Large Language Models
combined with a domain-specific knowledge base of AD research literature. This
architecture enables XVocal to retrieve relevant clinical studies and research
findings to generate evidence-based context-sensitive explanations of the
acoustic and linguistic markers identified in patient speech. Using the IS2021
ADReSSo Challenge benchmark dataset, our system achieved state-of-the-art
performance with 95.77% accuracy in AD classification, significantly
outperforming previous approaches. The explainability component was
qualitatively evaluated using a structured questionnaire completed by medical
professionals, validating its clinical relevance. NeuroXVocal's unique
combination of high-accuracy classification and interpretable,
literature-grounded explanations demonstrates its potential as a practical tool
for supporting clinical AD diagnosis.
|
2502.10111
|
COMBINEX: A Unified Counterfactual Explainer for Graph Neural Networks
via Node Feature and Structural Perturbations
|
cs.LG
|
Counterfactual explanations have emerged as a powerful tool to unveil the
opaque decision-making processes of graph neural networks (GNNs). However,
existing techniques primarily focus on edge modifications, often overlooking
the crucial role of node feature perturbations in shaping model predictions. To
address this limitation, we propose COMBINEX, a novel GNN explainer that
generates counterfactual explanations for both node and graph classification
tasks. Unlike prior methods, which treat structural and feature-based changes
independently, COMBINEX optimally balances modifications to edges and node
features by jointly optimizing these perturbations. This unified approach
ensures minimal yet effective changes required to flip a model's prediction,
resulting in realistic and interpretable counterfactuals. Additionally,
COMBINEX seamlessly handles both continuous and discrete node features,
enhancing its versatility across diverse datasets and GNN architectures.
Extensive experiments on real-world datasets and various GNN architectures
demonstrate the effectiveness and robustness of our approach over existing
baselines.
|
2502.10112
|
Accelerometry-based Energy Expenditure Estimation During Activities of
Daily Living: A Comparison Among Different Accelerometer Compositions
|
cs.LG
|
Physical activity energy expenditure (PAEE) can be measured from
breath-by-breath respiratory data, which can serve as a reference.
Alternatively, PAEE can be predicted from the body movements, which can be
measured and estimated with accelerometers. The body center of mass (COM)
acceleration reflects the movements of the whole body and thus serves as a good
predictor for PAEE. However, the wrist has also become a popular location due
to recent advancements in wrist-worn devices. Therefore, in this work, using
the respiratory data measured by COSMED K5 as the reference, we evaluated and
compared the performances of COM-based settings and wrist-based settings. The
COM-based settings include two different accelerometer compositions, using only
the pelvis accelerometer (pelvis-acc) and the pelvis accelerometer with two
accelerometers from two thighs (3-acc). The wrist-based settings include using
only the left wrist accelerometer (l-wrist-acc) and only the right wrist
accelerometer (r-wrist-acc). We implemented two existing PAEE estimation
methods on our collected dataset, where 9 participants performed activities of
daily living while wearing 5 accelerometers (i.e., pelvis, two thighs, and two
wrists). These two methods include a linear regression (LR) model and a
CNN-LSTM model. Both models yielded the best results with the COM-based 3-acc
setting (LR: $R^2$ = 0.41, CNN-LSTM: $R^2$ = 0.53). No significant difference
was found between the 3-acc and pelvis-acc settings (p-value = 0.278). For both
models, neither the l-wrist-acc nor the r-wrist-acc settings demonstrated
predictive power on PAEE with $R^2$ values close to 0, significantly
outperformed by the two COM-based settings (p-values $<$ 0.05). No significant
difference was found between the two wrists (p-value = 0.329).
|
2502.10113
|
Strain-Induced Optical and Molecular Transformations in PET Films for
Organic Electronic Applications
|
physics.app-ph cs.SY eess.SY physics.optics
|
Poly(ethylene terephthalate) (PET) films are widely used in flexible
electronics and optoelectronics, where their mechanical durability and optical
performance under strain are essential for device reliability. This study
investigates the impact of applied mechanical strain on the optical and
molecular properties of PET at room temperature, using UV-Vis absorption and
Raman spectroscopy. The work explores how varying strain levels, from 0%
(unstretched) to 30%, affect the transparency, vibrational modes, and molecular
reorganization within PET films. UV-Vis absorbance measurements reveal that
strain induces significant changes in the light transmission properties of PET,
particularly in the visible range, and increases absorption in the UVA and
visible region by up to 100%. Raman spectra indicate that strain levels higher
than 5% lead to irreversible shifts of vibrational lines, accompanied by an
increase of their full width at half maximum (FWHM), suggesting molecular
reorientation and crystallinity changes. The phonon mode coupled with C-O
stretching [O-CH2] shows the strongest response to applied mechanical stress.
This study provides a comprehensive understanding of strain-induced optical and
structural alterations in PET, with implications for improving the mechanical
and optical performance of PET-based devices in strain-sensitive applications,
such as organic solar cells (OSCs), organic light-emitting diodes (OLEDs), and
flexible sensors.
|
2502.10118
|
Image Embedding Sampling Method for Diverse Captioning
|
cs.CV cs.AI
|
Image captioning with state-of-the-art VLMs has significantly improved over
time; however, this comes at the cost of increased computational complexity,
making them less accessible for resource-constrained applications such as
mobile devices and assistive technologies. Alternatively, smaller VLMs
prioritize high-level scene descriptions, overlooking finer details that
contribute to a richer understanding of an image. In this paper, we introduce a
training-free framework that enhances caption diversity and informativeness by
explicitly attending to distinct image regions using a comparably small VLM,
BLIP, as the backbone. Our approach leverages structured segmentation to
produce hierarchical representations that capture both global and localized
semantics. Without requiring additional model training, we demonstrate that our
method allows smaller VLMs to achieve performance comparable to larger models
in terms of image-caption alignment, semantic integrity, and diversity. We
evaluate our framework on MSCOCO, Flickr30k, and Nocaps test datasets,
achieving a Div-2 score of 0.735, 0.750, and 0.748 for each dataset
respectively, while maintaining strong image-caption relevancy and semantic
integrity with the human-annotated captions.
|
2502.10119
|
SeWA: Selective Weight Average via Probabilistic Masking
|
cs.LG
|
Weight averaging has become a standard technique for enhancing model
performance. However, methods such as Stochastic Weight Averaging (SWA) and
Latest Weight Averaging (LAWA) often require manually designed procedures to
sample from the training trajectory, and the results depend heavily on
hyperparameter tuning. To minimize human effort, this paper proposes a simple
yet efficient algorithm called Selective Weight Averaging (SeWA), which
adaptively selects checkpoints during the final stages of training for
averaging. Based on SeWA, we show that only a few points are needed to achieve
better generalization and faster convergence. Theoretically, solving the
discrete subset selection problem is inherently challenging. To address this,
we transform it into a continuous probabilistic optimization framework and
employ the Gumbel-Softmax estimator to learn the non-differentiable mask for
each checkpoint. Further, we theoretically derive the SeWA's stability-based
generalization bounds, which are sharper than that of SGD under both convex and
non-convex assumptions. Finally, solid extended experiments in various domains,
including behavior cloning, image classification, and text classification,
further validate the effectiveness of our approach.
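The Gumbel-Softmax relaxation described above can be sketched for a binary checkpoint mask as follows; the parameterization is illustrative, and SeWA's exact formulation may differ:

```python
import numpy as np

def gumbel_softmax_mask(logits, tau=0.5, rng=None):
    """Relaxed binary mask via the binary Gumbel-Softmax (Concrete) trick.
    logits[i] is a learnable score for keeping checkpoint i; the output is a
    differentiable value in (0, 1) standing in for a hard 0/1 selection."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-8, 1 - 1e-8, size=logits.shape)
    g = np.log(u) - np.log(1 - u)  # logistic noise (difference of two Gumbels)
    return 1.0 / (1.0 + np.exp(-(logits + g) / tau))

def averaged_weights(checkpoints, mask):
    """Mask-weighted average over checkpoint weight vectors (rows)."""
    w = mask / (mask.sum() + 1e-12)
    return np.tensordot(w, checkpoints, axes=1)
```

In training, gradients flow through the relaxed mask into the logits, so the selection itself is learned rather than hand-designed.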
|
2502.10120
|
Compress image to patches for Vision Transformer
|
cs.CV
|
The Vision Transformer (ViT) has made significant strides in the field of
computer vision. However, as the depth of the model and the resolution of the
input images increase, the computational cost associated with training and
running ViT models has surged dramatically. This paper proposes a hybrid model
based on CNN and Vision Transformer, named CI2P-ViT. The model incorporates a
module called CI2P, which utilizes the CompressAI encoder to compress images
and subsequently generates a sequence of patches through a series of
convolutions. CI2P can replace the Patch Embedding component in the ViT model,
enabling seamless integration into existing ViT models. Compared to ViT-B/16,
CI2P-ViT reduces the number of patches input to the self-attention layer to a
quarter of the original. This design not only significantly reduces the
computational cost of the ViT model but also effectively enhances the model's
accuracy by introducing the inductive bias properties of CNNs. When trained
from scratch on the
Animals-10 dataset, CI2P-ViT achieved an accuracy rate of 92.37%, representing
a 3.3% improvement over the ViT-B/16 baseline. Additionally, the model's
computational cost, measured in floating-point operations (FLOPs), was reduced
by 63.35%, and it exhibited a 2-fold increase in training speed on identical
hardware.
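The savings follow from self-attention's quadratic cost in sequence length: a quarter of the patches means roughly a sixteenth of the attention-term FLOPs. A back-of-the-envelope check (196 is the standard ViT-B/16 patch count at 224x224 input):

```python
def attention_cost_ratio(n_patches_baseline, reduction=4):
    """Self-attention FLOPs scale ~O(n^2) in sequence length, so reducing
    the patch count by `reduction` cuts the attention term quadratically."""
    n = n_patches_baseline // reduction
    return (n ** 2) / (n_patches_baseline ** 2)
```

Note the overall FLOP reduction reported (63.35%) is smaller than 16x, since the MLP blocks scale only linearly in sequence length and the CI2P module adds its own cost.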
|
2502.10122
|
Modern Hopfield Networks with Continuous-Time Memories
|
cs.LG
|
Recent research has established a connection between modern Hopfield networks
(HNs) and transformer attention heads, with guarantees of exponential storage
capacity. However, these models still face challenges scaling storage
efficiently. Inspired by psychological theories of continuous neural resource
allocation in working memory, we propose an approach that compresses large
discrete Hopfield memories into smaller, continuous-time memories. Leveraging
continuous attention, our new energy function modifies the update rule of HNs,
replacing the traditional softmax-based probability mass function with a
probability density over the continuous memory. This formulation aligns with
modern perspectives on human executive function, offering a principled link
between attractor dynamics in working memory and resource-efficient memory
allocation. Our framework maintains competitive performance with HNs while
leveraging a compressed memory, reducing computational costs across synthetic
and video datasets.
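The softmax-based update rule that the continuous formulation replaces is the standard modern Hopfield retrieval step (the Ramsauer et al. form the abstract connects to attention); a minimal sketch with memories stored as rows:

```python
import numpy as np

def hopfield_update(memories, query, beta=1.0):
    """One modern Hopfield update: xi_new = X^T softmax(beta * X @ xi),
    with memory patterns stored as rows of X. High beta sharpens retrieval
    toward the closest stored pattern. The paper replaces this discrete
    softmax (a probability mass function) with a continuous density."""
    scores = beta * memories @ query
    p = np.exp(scores - scores.max())  # numerically stable softmax
    p /= p.sum()
    return memories.T @ p
```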
|
2502.10125
|
Learning Relational Tabular Data without Shared Features
|
cs.LG cs.AI
|
Learning relational tabular data has gained significant attention recently,
but most studies focus on single tables, overlooking the potential of
cross-table learning. Cross-table learning, especially in scenarios where
tables lack shared features and pre-aligned data, offers vast opportunities but
also introduces substantial challenges. The alignment space is immense, and
determining accurate alignments between tables is highly complex. We propose
Latent Entity Alignment Learning (Leal), a novel framework enabling effective
cross-table training without requiring shared features or pre-aligned data.
Leal operates on the principle that properly aligned data yield lower loss than
misaligned data, a concept embodied in its soft alignment mechanism. This
mechanism is coupled with a differentiable cluster sampler module, ensuring
efficient scaling to large relational tables. Furthermore, we provide a
theoretical proof of the cluster sampler's approximation capacity. Extensive
experiments on five real-world and five synthetic datasets show that Leal
achieves up to a 26.8% improvement in predictive performance compared to
state-of-the-art methods, demonstrating its effectiveness and scalability.
|
2502.10127
|
Leveraging V2X for Collaborative HD Maps Construction Using Scene Graph
Generation
|
cs.CV
|
High-Definition (HD) maps play a crucial role in autonomous vehicle
navigation, complementing onboard perception sensors for improved accuracy and
safety. Traditional HD map generation relies on dedicated mapping vehicles,
which are costly and fail to capture real-time infrastructure changes. This
paper presents HDMapLaneNet, a novel framework leveraging V2X communication and
Scene Graph Generation to collaboratively construct a localized geometric layer
of HD maps. The approach extracts lane centerlines from front-facing camera
images, represents them as graphs, and transmits the data for global
aggregation to the cloud via V2X. Preliminary results on the nuScenes dataset
demonstrate superior association prediction performance compared to a
state-of-the-art method.
|
2502.10138
|
Provably Efficient RL under Episode-Wise Safety in Constrained MDPs with
Linear Function Approximation
|
cs.LG
|
We study the reinforcement learning (RL) problem in a constrained Markov
decision process (CMDP), where an agent explores the environment to maximize
the expected cumulative reward while satisfying a single constraint on the
expected total utility value in every episode. While this problem is well
understood in the tabular setting, theoretical results for function
approximation remain scarce. This paper closes the gap by proposing an RL
algorithm for linear CMDPs that achieves $\tilde{\mathcal{O}}(\sqrt{K})$ regret
with an episode-wise zero-violation guarantee. Furthermore, our method is
computationally efficient, scaling polynomially with problem-dependent
parameters while remaining independent of the state space size. Our results
significantly improve upon recent linear CMDP algorithms, which either violate
the constraint or incur exponential computational costs.
|
2502.10140
|
Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of
Small Multilingual Language Models for Low-Resource Languages
|
cs.CL
|
Low-resource languages (LRLs) face significant challenges in natural language
processing (NLP) due to limited data. While current state-of-the-art large
language models (LLMs) still struggle with LRLs, smaller multilingual models
(mLMs) such as mBERT and XLM-R offer greater promise because their capacity is
better matched to small training datasets. This study systematically
investigates parameter-efficient adapter-based methods for adapting mLMs to
LRLs, evaluating three architectures: Sequential Bottleneck, Invertible
Bottleneck, and Low-Rank Adaptation. Using unstructured text from GlotCC and
structured knowledge from ConceptNet, we show that small adaptation datasets
(e.g., up to 1 GB of free-text or a few MB of knowledge graph data) yield gains
in intrinsic (masked language modeling) and extrinsic tasks (topic
classification, sentiment analysis, and named entity recognition). We find that
Sequential Bottleneck adapters excel in language modeling, while Invertible
Bottleneck adapters slightly outperform other methods on downstream tasks due
to better embedding alignment and larger parameter counts. Adapter-based
methods match or outperform full fine-tuning while using far fewer parameters,
and smaller mLMs prove more effective for LRLs than massive LLMs like LLaMA-3,
GPT-4, and DeepSeek-R1-based distilled models. While adaptation improves
performance, pre-training data size remains the dominant factor, especially for
languages with extensive pre-training coverage.
|
2502.10141
|
Pangraphs as models of higher-order interactions
|
physics.soc-ph cs.SI math.CO q-bio.MN q-bio.PE
|
Graphs depict pairwise relationships between objects within a system.
Higher-order interactions (HOIs), which involve more than two objects
simultaneously, are common in nature. Such interactions can change the
stability of a complex system. Hypergraphs can represent an HOI as an arbitrary
subset of vertices. However, they fail to capture the specific roles of the
vertices involved, which can be highly asymmetric, particularly in the case of
interaction modifications.
We introduce pangraphs, a robust and quantitative generalisation of graphs
that accurately captures arbitrarily complex higher-order interactions. We
demonstrate that several higher-order representations proposed in the
literature are specific instances of pangraphs. Additionally, we introduce an
incidence multilayer digraph representation of a pangraph, referred to as Levi
digraph. We adapt degree and Katz centrality measures to the pangraph framework
and show that a consistent generalisation of recursive graph measures cannot be
simplified to a Levi digraph of a pangraph.
We construct a pangraph for a real-world coffee agroecosystem and compare
Katz centrality between its dihypergraph and pangraph representations, both
analytically and numerically. The choice of representation significantly
affects centrality values and alters vertex ranks. Additionally, we emphasise
the use of real-valued incidence matrices to quantify interaction strengths and
the roles of vertices within the system.
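For reference, one common form of ordinary Katz centrality on a digraph is sketched below; the paper's generalisation of this measure to pangraphs is not reproduced here:

```python
import numpy as np

# Katz centrality (one common variant): x = (I - alpha * A^T)^{-1} @ 1, which
# sums attenuated counts of incoming walks; it converges for alpha < 1/rho(A).
def katz_centrality(A: np.ndarray, alpha: float) -> np.ndarray:
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A.T, np.ones(n))

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)   # a small DAG: 0 -> 1, 0 -> 2, 1 -> 2
x = katz_centrality(A, alpha=0.5)
print(x)   # -> [1.   1.5  2.25]: vertex 2 receives the most walks
```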
|
2502.10145
|
Interpretable Concept-based Deep Learning Framework for Multimodal Human
Behavior Modeling
|
cs.CV cs.MM
|
In the contemporary era of intelligent connectivity, Affective Computing
(AC), which enables systems to recognize, interpret, and respond to human
behavior states, has become an integrated part of many AI systems. As one of
the most critical components of responsible AI and trustworthiness in all
human-centered systems, explainability has been a major concern in AC.
Particularly, the recently released EU General Data Protection Regulation
requires any high-risk AI systems to be sufficiently interpretable, including
biometric-based systems and emotion recognition systems widely used in the
affective computing field. Existing explainable methods often compromise
between interpretability and performance. Most of them focus only on
highlighting key network parameters without offering meaningful,
domain-specific explanations to the stakeholders. Additionally, they also face
challenges in effectively co-learning and explaining insights from multimodal
data sources. To address these limitations, we propose a novel and
generalizable framework, namely the Attention-Guided Concept Model (AGCM),
which provides learnable conceptual explanations by identifying which concepts
lead to the predictions and where they are observed. AGCM is extendable to
any spatial and temporal signals through multimodal concept alignment and
co-learning, empowering stakeholders with deeper insights into the model's
decision-making process. We validate the efficacy of AGCM on well-established
Facial Expression Recognition benchmark datasets while also demonstrating its
generalizability on more complex real-world human behavior understanding
applications.
|
2502.10148
|
Cooperative Multi-Agent Planning with Adaptive Skill Synthesis
|
cs.AI cs.MA
|
Despite much progress in training distributed artificial intelligence (AI),
building cooperative multi-agent systems with multi-agent reinforcement
learning (MARL) faces challenges in sample efficiency, interpretability, and
transferability. Unlike traditional learning-based methods that require
extensive interaction with the environment, large language models (LLMs)
demonstrate remarkable capabilities in zero-shot planning and complex
reasoning. However, existing LLM-based approaches heavily rely on text-based
observations and struggle with the non-Markovian nature of multi-agent
interactions under partial observability. We present COMPASS, a novel
multi-agent architecture that integrates vision-language models (VLMs) with a
dynamic skill library and structured communication for decentralized
closed-loop decision-making. The skill library, bootstrapped from
demonstrations, evolves via planner-guided tasks to enable adaptive strategies.
COMPASS propagates entity information through multi-hop communication under
partial observability. Evaluations on the improved StarCraft Multi-Agent
Challenge (SMACv2) demonstrate COMPASS achieves up to 30\% higher win rates
than state-of-the-art MARL algorithms in symmetric scenarios.
|
2502.10151
|
Semantica: Decentralized Search using a LLM-Guided Semantic Tree Overlay
|
cs.IR cs.DC cs.NI cs.SY eess.SY
|
Centralized search engines are key for the Internet, but lead to undesirable
concentration of power. Decentralized alternatives fail to offer equal document
retrieval accuracy and speed. Nevertheless, Semantic Overlay Networks can come
close to the performance of centralized solutions when the semantics of
documents are properly captured. This work uses embeddings from Large Language
Models to capture semantics and fulfill the promise of Semantic Overlay
Networks. Our proposed algorithm, called Semantica, constructs a prefix tree
(trie) utilizing document embeddings calculated by a language model. Users
connect to each other based on the embeddings of their documents, ensuring that
semantically similar users are directly linked. Thereby, this construction
makes it more likely for user searches to be answered by the users that they
are directly connected to, or by the users they are close to in the network
connection graph. The implementation of our algorithm also accommodates the
semantic diversity of individual users by spawning "clone" user identifiers in
the tree. Our experiments use emulation with a real-world workload to show
Semantica's ability to identify and connect to similar users quickly. Semantica
finds up to ten times more semantically similar users than current
state-of-the-art approaches. At the same time, Semantica can retrieve more than
two times the number of relevant documents given the same network load. We also
make our code publicly available to facilitate further research in the area.
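One simple way to key a prefix tree by document embeddings, sketched under assumptions of our own (this binarisation scheme is illustrative, not necessarily Semantica's actual construction):

```python
# Binarise an embedding by the sign of each dimension and use the bit string
# as the trie key, so users with similar embeddings share long prefixes and
# land close together in the tree.
def embedding_to_key(embedding: list[float], depth: int = 8) -> str:
    return "".join("1" if x >= 0 else "0" for x in embedding[:depth])

a = embedding_to_key([0.9, -0.2, 0.1, 0.4])
b = embedding_to_key([0.7, -0.1, 0.3, 0.5])   # a semantically similar user
print(a == b)  # -> True: both map to "1011", the same trie path
```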
|
2502.10154
|
Video Soundtrack Generation by Aligning Emotions and Temporal Boundaries
|
cs.SD cs.AI cs.LG cs.MM eess.AS eess.IV
|
We introduce EMSYNC, a video-based symbolic music generation model that
aligns music with a video's emotional content and temporal boundaries. It
follows a two-stage framework, where a pretrained video emotion classifier
extracts emotional features, and a conditional music generator produces MIDI
sequences guided by both emotional and temporal cues. We introduce boundary
offsets, a novel temporal conditioning mechanism that enables the model to
anticipate and align musical chords with scene cuts. Unlike existing models,
our approach retains event-based encoding, ensuring fine-grained timing control
and expressive musical nuances. We also propose a mapping scheme to bridge the
video emotion classifier, which produces discrete emotion categories, with the
emotion-conditioned MIDI generator, which operates on continuous-valued
valence-arousal inputs. In subjective listening tests, EMSYNC outperforms
state-of-the-art models across all subjective metrics, for both
music-theory-aware participants and general listeners.
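The discrete-to-continuous bridge described above can be sketched as a lookup from emotion categories to valence-arousal coordinates; the coordinates below are hypothetical placeholders, not the paper's calibrated mapping:

```python
# Map each discrete emotion label to a (valence, arousal) point in [-1, 1]^2,
# so a categorical classifier can condition a continuous-valued generator.
# These coordinates are illustrative assumptions only.
EMOTION_TO_VA = {
    "joy":     (0.8, 0.6),
    "sadness": (-0.7, -0.4),
    "anger":   (-0.6, 0.7),
    "calm":    (0.4, -0.5),
}

def to_valence_arousal(label: str) -> tuple[float, float]:
    return EMOTION_TO_VA[label]

print(to_valence_arousal("joy"))  # -> (0.8, 0.6)
```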
|
2502.10156
|
MonoForce: Learnable Image-conditioned Physics Engine
|
cs.RO cs.CV
|
We propose a novel model for the prediction of robot trajectories on rough
offroad terrain from the onboard camera images. This model enforces the laws of
classical mechanics through a physics-aware neural symbolic layer while
preserving the ability to learn from large-scale data as it is end-to-end
differentiable. The proposed hybrid model integrates a black-box component that
predicts robot-terrain interaction forces with a neural-symbolic layer. This
layer includes a differentiable physics engine that computes the robot's
trajectory by querying these forces at the points of contact with the terrain.
As the proposed architecture comprises substantial geometrical and physics
priors, the resulting model can also be seen as a learnable physics engine
conditioned on real images that delivers $10^4$ trajectories per second. We
argue and empirically demonstrate that this architecture reduces the
sim-to-real gap and mitigates out-of-distribution sensitivity. The
differentiability, in conjunction with the rapid simulation speed, makes the
model well-suited for various applications including model predictive control,
trajectory shooting, supervised and reinforcement learning or SLAM. The codes
and data are publicly available.
|
2502.10157
|
SessionRec: Next Session Prediction Paradigm For Generative Sequential
Recommendation
|
cs.IR cs.AI
|
We introduce SessionRec, a novel next-session prediction paradigm (NSPP) for
generative sequential recommendation, addressing the fundamental misalignment
between conventional next-item prediction paradigm (NIPP) and real-world
recommendation scenarios. Unlike NIPP's item-level autoregressive generation
that contradicts actual session-based user interactions, our framework
introduces a session-aware representation learning through hierarchical
sequence aggregation (intra/inter-session), reducing attention computation
complexity while enabling implicit modeling of massive negative interactions,
and a session-based prediction objective that better captures users' diverse
interests through multi-item recommendation in next sessions. Moreover, we
found that incorporating a rank loss for items within the session under the
next session prediction paradigm can significantly improve the ranking
effectiveness of generative sequence recommendation models. We also verified
that SessionRec exhibits clear power-law scaling laws similar to those observed
in LLMs. Extensive experiments conducted on public datasets and online A/B test
in Meituan App demonstrate the effectiveness of SessionRec. The proposed
paradigm establishes new foundations for developing industrial-scale generative
recommendation systems through its model-agnostic architecture and
computational efficiency.
|
2502.10158
|
Combinatorial Reinforcement Learning with Preference Feedback
|
stat.ML cs.LG
|
In this paper, we consider combinatorial reinforcement learning with
preference feedback, where a learning agent sequentially offers an action -- an
assortment of multiple items -- to a user, whose preference feedback follows a
multinomial logistic (MNL) model. This framework allows us to model real-world
scenarios, particularly those involving long-term user engagement, such as in
recommender systems and online advertising. However, this framework faces two
main challenges: (1) the unknown value of each item, unlike traditional MNL
bandits that only address single-step preference feedback, and (2) the
difficulty of ensuring optimism while maintaining tractable assortment
selection in the combinatorial action space with unknown values. In this paper,
we assume a contextual MNL preference model, where the mean utilities are
linear, and the value of each item is approximated by a general function. We
propose an algorithm, MNL-VQL, that addresses these challenges, making it both
computationally and statistically efficient. As a special case, for linear MDPs
(with the MNL preference feedback), we establish the first regret lower bound
in this framework and show that MNL-VQL achieves nearly minimax-optimal regret.
To the best of our knowledge, this is the first work to provide statistical
guarantees in combinatorial RL with preference feedback.
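The MNL preference model assumed above gives closed-form choice probabilities over an offered assortment; a minimal sketch (item names and utilities are illustrative):

```python
import math

# Multinomial logistic (MNL) choice: given assortment S with mean utilities
# u_i, the user picks item i with probability exp(u_i) / (1 + sum_j exp(u_j));
# the "1" in the denominator is the no-purchase (outside) option.
def mnl_choice_probs(utilities: dict[str, float]) -> dict[str, float]:
    denom = 1.0 + sum(math.exp(u) for u in utilities.values())
    probs = {item: math.exp(u) / denom for item, u in utilities.items()}
    probs["no_choice"] = 1.0 / denom
    return probs

p = mnl_choice_probs({"a": 1.0, "b": 0.0})
print(sum(p.values()))  # -> 1.0 (up to float rounding)
```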
|
2502.10162
|
Revisiting Generalization Power of a DNN in Terms of Symbolic
Interactions
|
cs.LG cs.AI cs.CL cs.CV
|
This paper aims to analyze the generalization power of deep neural networks
(DNNs) from the perspective of interactions. Unlike previous analysis of a
DNN's generalization power in a high-dimensional feature space, we find that
the generalization power of a DNN can be explained as the generalization power
of the interactions. We find that the generalizable interactions follow a
decay-shaped distribution, while non-generalizable interactions follow a
spindle-shaped distribution. Furthermore, our theory can effectively
disentangle these two types of interactions from a DNN. We have verified that
our theory can well match real interactions in a DNN in experiments.
|
2502.10163
|
Enhancing anomaly detection with topology-aware autoencoders
|
hep-ph cs.LG hep-ex
|
Anomaly detection in high-energy physics is essential for identifying new
physics beyond the Standard Model. Autoencoders provide a signal-agnostic
approach but are limited by the topology of their latent space. This work
explores topology-aware autoencoders, embedding phase-space distributions onto
compact manifolds that reflect energy-momentum conservation. We construct
autoencoders with spherical ($S^n$), product ($S^2 \otimes S^2$), and
projective ($\mathbb{RP}^2$) latent spaces and compare their anomaly detection
performance against conventional Euclidean embeddings. Our results show that
autoencoders with topological priors significantly improve anomaly separation
by preserving the global structure of the data manifold and reducing spurious
reconstruction errors. Applying our approach to simulated hadronic top-quark
decays, we show that latent spaces with appropriate topological constraints
enhance sensitivity and robustness in detecting anomalous events. This study
establishes topology-aware autoencoders as a powerful tool for unsupervised
searches for new physics in particle-collision data.
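One common way to equip an autoencoder with a spherical latent space $S^n$ is to L2-normalise the encoder output; a minimal sketch of that constraint (an illustrative mechanism, not necessarily the paper's exact architecture):

```python
import numpy as np

# Project raw encoder outputs onto the unit sphere so every latent code lies
# on S^n; downstream reconstruction then sees a compact latent manifold.
def to_sphere(z: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)

z = np.array([[3.0, 4.0], [0.0, 2.0]])   # raw 2-D encoder outputs
s = to_sphere(z)                          # points on the circle S^1
print(np.linalg.norm(s, axis=-1))         # -> [1. 1.]
```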
|
2502.10173
|
Agentic End-to-End De Novo Protein Design for Tailored Dynamics Using a
Language Diffusion Model
|
q-bio.BM cond-mat.mes-hall cond-mat.mtrl-sci cs.LG
|
Proteins are dynamic molecular machines whose biological functions, spanning
enzymatic catalysis, signal transduction, and structural adaptation, are
intrinsically linked to their motions. Designing proteins with targeted dynamic
properties, however, remains a challenge due to the complex, degenerate
relationships between sequence, structure, and molecular motion. Here, we
introduce VibeGen, a generative AI framework that enables end-to-end de novo
protein design conditioned on normal mode vibrations. VibeGen employs an
agentic dual-model architecture, comprising a protein designer that generates
sequence candidates based on specified vibrational modes and a protein
predictor that evaluates their dynamic accuracy. This approach synergizes
diversity, accuracy, and novelty during the design process. Using full-atom
molecular simulations as direct validation, we demonstrate that the designed
proteins accurately reproduce the prescribed normal mode amplitudes across the
backbone while adopting various stable, functionally relevant structures.
Notably, generated sequences are de novo, exhibiting no significant similarity
to natural proteins, thereby expanding the accessible protein space beyond
evolutionary constraints. Our work integrates protein dynamics into generative
protein design, and establishes a direct, bidirectional link between sequence
and vibrational behavior, unlocking new pathways for engineering biomolecules
with tailored dynamical and functional properties. This framework holds broad
implications for the rational design of flexible enzymes, dynamic scaffolds,
and biomaterials, paving the way toward dynamics-informed AI-driven protein
engineering.
|
2502.10174
|
Technical Risks of (Lethal) Autonomous Weapons Systems
|
cs.CY cs.AI cs.SY eess.SY
|
The autonomy and adaptability of (Lethal) Autonomous Weapons Systems, (L)AWS
in short, promise unprecedented operational capabilities, but they also
introduce profound risks that challenge the principles of control,
accountability, and stability in international security. This report outlines
the key technological risks associated with (L)AWS deployment, emphasizing
their unpredictability, lack of transparency, and operational unreliability,
which can lead to severe unintended consequences.
Key Takeaways:
1. Proposed advantages of (L)AWS can only be achieved through objectification
and classification, but a range of systematic risks limit the reliability and
predictability of classifying algorithms.
2. These systematic risks include the black-box nature of AI decision-making,
susceptibility to reward hacking, goal misgeneralization and potential for
emergent behaviors that escape human control.
3. (L)AWS could act in ways that are not just unexpected but also
uncontrollable, undermining mission objectives and potentially escalating
conflicts.
4. Even rigorously tested systems may behave unpredictably and harmfully in
real-world conditions, jeopardizing both strategic stability and humanitarian
principles.
|
2502.10177
|
STMA: A Spatio-Temporal Memory Agent for Long-Horizon Embodied Task
Planning
|
cs.AI
|
A key objective of embodied intelligence is enabling agents to perform
long-horizon tasks in dynamic environments while maintaining robust
decision-making and adaptability. To achieve this goal, we propose the
Spatio-Temporal Memory Agent (STMA), a novel framework designed to enhance task
planning and execution by integrating spatio-temporal memory. STMA is built
upon three critical components: (1) a spatio-temporal memory module that
captures historical and environmental changes in real time, (2) a dynamic
knowledge graph that facilitates adaptive spatial reasoning, and (3) a
planner-critic mechanism that iteratively refines task strategies. We evaluate
STMA in the TextWorld environment on 32 tasks, involving multi-step planning
and exploration under varying levels of complexity. Experimental results
demonstrate that STMA achieves a 31.25% improvement in success rate and a 24.7%
increase in average score compared to the state-of-the-art model. The results
highlight the effectiveness of spatio-temporal memory in advancing the memory
capabilities of embodied agents.
|
2502.10178
|
From Markov to Laplace: How Mamba In-Context Learns Markov Chains
|
cs.LG cs.AI cs.IT math.IT
|
While transformer-based language models have driven the AI revolution thus
far, their computational complexity has spurred growing interest in viable
alternatives, such as structured state space sequence models (SSMs) and
Selective SSMs. Among these, Mamba (S6) and its variant Mamba-2 have shown
remarkable inference speed ups over transformers while achieving comparable or
superior performance on complex language modeling tasks. However, despite these
architectural innovations and empirical successes, the fundamental learning
capabilities of Mamba remain poorly understood. In this paper, we address this
gap by studying in-context learning (ICL) on Markov chains and uncovering a
surprising phenomenon: unlike transformers, even a single-layer Mamba
efficiently learns the in-context Laplacian smoothing estimator, which is both
Bayes and minimax optimal, for all Markovian orders. To explain this, we
theoretically characterize the representation capacity of Mamba and reveal the
fundamental role of convolution in enabling it to represent the optimal
Laplacian smoothing. These theoretical insights align strongly with empirical
results and, to the best of our knowledge, represent the first formal
connection between Mamba and optimal statistical estimators. Finally, we
outline promising research directions inspired by these findings.
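The Laplacian (add-delta) smoothing estimator referenced above has a simple closed form for transition probabilities; a first-order sketch (counts are illustrative):

```python
# Add-delta Laplacian smoothing for a first-order Markov chain over a
# vocabulary of size V: P(j | i) = (c_ij + delta) / (sum_k c_ik + delta * V).
# delta = 1 gives classic add-one (Laplace) smoothing.
def laplacian_smoothing(counts: list[list[float]], delta: float = 1.0):
    V = len(counts)
    return [[(counts[i][j] + delta) / (sum(counts[i]) + delta * V)
             for j in range(V)] for i in range(V)]

counts = [[3, 1], [0, 0]]          # observed transition counts from states 0, 1
P = laplacian_smoothing(counts)
# unseen rows fall back to the uniform distribution: P[1] == [0.5, 0.5]
```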
|
2502.10180
|
Safe platooning control of connected and autonomous vehicles on curved
multi-lane roads
|
eess.SY cs.SY
|
This paper investigates the safe platoon formation tracking and merging
control problem of connected and automated vehicles (CAVs) on curved multi-lane
roads. The first novelty is the separation of the control designs into two
distinct parts: a lateral control law that ensures a geometrical convergence
towards the reference path regardless of the translational velocity, and a
longitudinal control design for each vehicle to achieve the desired relative
arc length and velocity with respect to its neighboring vehicle. The second
novelty is exploiting the constructive barrier feedback as an additive term to
the nominal tracking control, ensuring both lateral and longitudinal collision
avoidance. This constructive barrier feedback acts as a dissipative term,
slowing down the relative velocity toward obstacles without affecting the
nominal controller's performance. Consequently, our proposed control method
enables safe platoon formation of vehicles on curved multi-lane roads, with
theoretical guarantees for safety invariance and stability analysis. Simulation
and experimental results on connected vehicles are provided to further validate
the effectiveness of the proposed method.
|
2502.10183
|
Doing More With Less: Towards More Data-Efficient Syndrome-Based Neural
Decoders
|
cs.IT math.IT
|
While significant research efforts have been directed toward developing more
capable neural decoding architectures, comparatively little attention has been
paid to the quality of training data. In this study, we address the challenge
of constructing effective training datasets to maximize the potential of
existing syndrome-based neural decoder architectures. We emphasize the
advantages of using fixed datasets over generating training data dynamically
and explore the problem of selecting appropriate training targets within this
framework. Furthermore, we propose several heuristics for selecting training
samples and present experimental evidence demonstrating that, with carefully
curated datasets, it is possible to train neural decoders to achieve superior
performance while requiring fewer training examples.
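The syndrome that such decoders take as input is a parity check on the received word; a minimal sketch using the (7,4) Hamming code's parity-check matrix (an illustrative choice, not necessarily the codes studied in the paper):

```python
import numpy as np

# Syndrome computation s = H @ y mod 2: it depends only on the error pattern,
# not on the transmitted codeword, which is what makes it a natural decoder
# input. H below is the standard (7,4) Hamming parity-check matrix, whose
# i-th column is the binary expansion of i+1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(y: np.ndarray) -> np.ndarray:
    return (H @ y) % 2

codeword = np.zeros(7, dtype=int)   # the all-zero codeword
noisy = codeword.copy()
noisy[4] = 1                        # flip bit at index 4 (position 5)
print(syndrome(noisy))              # -> [1 0 1]: binary 5, locating the flip
```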
|
2502.10184
|
Realistic Evaluation of Deep Partial-Label Learning Algorithms
|
cs.LG
|
Partial-label learning (PLL) is a weakly supervised learning problem in which
each example is associated with multiple candidate labels and only one is the
true label. In recent years, many deep PLL algorithms have been developed to
improve model performance. However, we find that some early developed
algorithms are often underestimated and can outperform many later algorithms
with complicated designs. In this paper, we delve into the empirical
perspective of PLL and identify several critical but previously overlooked
issues. First, model selection for PLL is non-trivial, but has never been
systematically studied. Second, the experimental settings are highly
inconsistent, making it difficult to evaluate the effectiveness of the
algorithms. Third, there is a lack of real-world image datasets that can be
compatible with modern network architectures. Based on these findings, we
propose PLENCH, the first Partial-Label learning bENCHmark to systematically
compare state-of-the-art deep PLL algorithms. We investigate the model
selection problem for PLL for the first time, and propose novel model selection
criteria with theoretical guarantees. We also create Partial-Label CIFAR-10
(PLCIFAR10), an image dataset of human-annotated partial labels collected from
Amazon Mechanical Turk, to provide a testbed for evaluating the performance of
PLL algorithms in more realistic scenarios. Researchers can quickly and
conveniently perform a comprehensive and fair evaluation and verify the
effectiveness of newly developed algorithms based on PLENCH. We hope that
PLENCH will facilitate standardized, fair, and practical evaluation of PLL
algorithms in the future.
|
2502.10185
|
A Powerful Random Forest Featuring Linear Extensions (RaFFLE)
|
cs.LG
|
Random forests are widely used in regression. However, the decision trees
used as base learners are poor approximators of linear relationships. To
address this limitation we propose RaFFLE (Random Forest Featuring Linear
Extensions), a novel framework that integrates the recently developed PILOT
trees (Piecewise Linear Organic Trees) as base learners within a random forest
ensemble. PILOT trees combine the computational efficiency of traditional
decision trees with the flexibility of linear model trees. To ensure sufficient
diversity of the individual trees, we introduce an adjustable regularization
parameter and use node-level feature sampling. These modifications improve the
accuracy of the forest. We establish theoretical guarantees for the consistency
of RaFFLE under weak conditions, and its faster convergence when the data are
generated by a linear model. Empirical evaluations on 136 regression datasets
demonstrate that RaFFLE outperforms the classical CART and random forest
methods, the regularized linear methods Lasso and Ridge, and the
state-of-the-art XGBoost algorithm, across both linear and nonlinear datasets.
By balancing predictive accuracy and computational efficiency, RaFFLE proves to
be a versatile tool for tackling a wide variety of regression problems.
|
2502.10187
|
Reinforcement Learning based Constrained Optimal Control: an
Interpretable Reward Design
|
eess.SY cs.SY
|
This paper presents an interpretable reward design framework for
reinforcement learning based constrained optimal control problems with state
and terminal constraints. The problem is formalized within a standard partially
observable Markov decision process framework. The reward function is
constructed from four weighted components: a terminal constraint reward, a
guidance reward, a penalty for state constraint violations, and a cost
reduction incentive reward. A theoretically justified reward design is then
presented, which establishes bounds on the weights of the components. This
approach ensures that constraints are satisfied and objectives are optimized
while mitigating numerical instability. Acknowledging the importance of prior
knowledge in reward design, we sequentially solve two subproblems, using each
solution to inform the reward design for the subsequent problem. Subsequently,
we integrate reinforcement learning with curriculum learning, utilizing
policies derived from simpler subproblems to assist in tackling more complex
challenges, thereby facilitating convergence. The framework is evaluated
against original and randomly weighted reward designs in a multi-agent particle
environment. Experimental results demonstrate that the proposed approach
significantly enhances satisfaction of terminal and state constraints and
optimization of control cost.
|
2502.10192
|
A Note on "Constructing Bent Functions Outside the Maiorana-McFarland
Class Using a General Form of Rothaus"
|
cs.IT math.IT
|
In 2017, Zhang et al. proposed a question (not open problem) and two open
problems in [IEEE TIT 63 (8): 5336--5349, 2017] about constructing bent
functions by using Rothaus' construction. In this note, we prove that the
sufficient conditions of Rothaus' construction are also necessary, which
answers their question. Besides, we demonstrate that the second open problem,
which considers the iterative method of constructing bent functions by using
Rothaus' construction, has only a trivial solution. It indicates that all bent
functions obtained by using Rothaus' construction iteratively can be generated
from the direct sum of an initial bent function and a quadratic bent function.
This directly means that Zhang et al.'s construction idea makes no contribution
to the construction of bent functions. To compensate for the weakness of their
work, we propose an iterative construction of bent functions by using a
secondary construction in [DCC 88: 2007--2035, 2020].
|
2502.10193
|
Merging public elementary schools to reduce racial/ethnic segregation
|
cs.CY cs.AI
|
Diverse schools can help address implicit biases and increase empathy, mutual
respect, and reflective thought by fostering connections between students from
different racial/ethnic, socioeconomic, and other backgrounds. Unfortunately,
demographic segregation remains rampant in US public schools, despite over 70
years since the passing of federal legislation formally outlawing segregation
by race. However, changing how students are assigned to schools can help foster
more integrated learning environments. In this paper, we explore "school
mergers" as one such under-explored, yet promising, student assignment policy
change. School mergers involve merging the school attendance boundaries, or
catchment areas, of schools and subsequently changing the grades each school
offers. We develop an algorithm to simulate elementary school mergers across
200 large school districts serving 4.5 million elementary school students and
find that pairing or tripling schools in this way could reduce racial/ethnic
segregation by a median relative 20% -- and as much as nearly 60% in some
districts -- while increasing driving times to schools by an average of a few
minutes each way. Districts with many interfaces between
racially/ethnically-disparate neighborhoods tend to be prime candidates for
mergers. We also compare the expected results of school mergers to other
typical integration policies, like redistricting, and find that different
policies may be more or less suitable in different places. Finally, we make our
results available through a public dashboard for policymakers and community
members to explore further (https://mergers.schooldiversity.org). Together, our
study offers new findings and tools to support integration policy-making across
US public school districts.
|
2502.10195
|
Exploring the Camera Bias of Person Re-identification
|
cs.CV cs.AI cs.LG
|
We empirically investigate the camera bias of person re-identification (ReID)
models. Previously, camera-aware methods have been proposed to address this
issue, but they are largely confined to training domains of the models. We
measure the camera bias of ReID models on unseen domains and reveal that camera
bias becomes more pronounced under data distribution shifts. As a debiasing
method for unseen domain data, we revisit feature normalization on embedding
vectors. While the normalization has been used as a straightforward solution,
its underlying causes and broader applicability remain unexplored. We analyze
why this simple method is effective at reducing bias and show that it can be
applied to detailed bias factors such as low-level image properties and body
angle. Furthermore, we validate its generalizability across various models and
benchmarks, highlighting its potential as a simple yet effective test-time
postprocessing method for ReID. In addition, we explore the inherent risk of
camera bias in unsupervised learning of ReID models. The unsupervised models
remain highly biased towards camera labels even for seen domain data,
indicating substantial room for improvement. Based on observations of the
negative impact of camera-biased pseudo labels on training, we suggest simple
training strategies to mitigate the bias. By applying these strategies to
existing unsupervised learning algorithms, we show that significant performance
improvements can be achieved with minor modifications.
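The feature normalization revisited above is simply L2 normalization of the embedding vectors at test time. A minimal sketch (the helper name `l2_normalize` is ours, not the paper's):

```python
import numpy as np

def l2_normalize(embeddings, eps=1e-12):
    """Test-time debiasing sketch: scale each ReID embedding to unit L2 norm,
    removing per-camera differences in embedding magnitude."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)
```

After normalization, cosine similarity and Euclidean distance become monotonically related, which is one reason this simple postprocessing step is broadly applicable across models and benchmarks.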
|
2502.10197
|
MathConstruct: Challenging LLM Reasoning with Constructive Proofs
|
cs.AI
|
While Large Language Models (LLMs) demonstrate impressive performance in
mathematics, existing math benchmarks come with significant limitations. Many
focus on problems with fixed ground-truth answers, and are often saturated due
to problem simplicity or the viability of guessing or memorization. Crucially,
they capture only a narrow subset of relevant math problems. To address this
research gap, we introduce MathConstruct, a new benchmark of 126 challenging problems
sourced from various math competitions, which targets constructive proofs, a
widely encountered problem type requiring the construction of mathematical
objects with specific properties. These proofs are particularly suitable for
LLM evaluation, as solution correctness can be easily verified. Our automated
verifiers also enable MathConstruct to generate problem variations, used to
evaluate robustness. State-of-the-art LLMs solve only 54% of MathConstruct
problems, highlighting its complexity and importance for LLM evaluation.
|
2502.10200
|
Dynamic Reinforcement Learning for Actors
|
cs.LG cs.AI cs.NE
|
Dynamic Reinforcement Learning (Dynamic RL), proposed in this paper, directly
controls system dynamics, instead of the actor (action-generating neural
network) outputs at each moment, bringing about a major qualitative shift in
reinforcement learning (RL) from static to dynamic. The actor is initially
designed to generate chaotic dynamics through the loop with its environment,
enabling the agent to perform flexible and deterministic exploration. Dynamic
RL controls global system dynamics using a local index called "sensitivity,"
which indicates how much the input neighborhood contracts or expands into the
corresponding output neighborhood through each neuron's processing. While
sensitivity adjustment learning (SAL) prevents excessive convergence of the
dynamics, sensitivity-controlled reinforcement learning (SRL) adjusts them --
to converge more to improve reproducibility around better state transitions
with positive TD error and to diverge more to enhance exploration around worse
transitions with negative TD error. Dynamic RL was applied only to the actor in
an Actor-Critic RL architecture, while applying it to the critic remains a
challenge. It was tested on two dynamic tasks and functioned effectively
without external exploration noise or backward computation through time.
Moreover, it exhibited excellent adaptability to new environments, although
some problems remain. Drawing parallels between 'exploration' and 'thinking,'
the author hypothesizes that "exploration grows into thinking through learning"
and believes this RL could be a key technique for the emergence of thinking,
including inspiration that cannot be reconstructed from massive existing text
data. Finally, despite being presumptuous, the author presents the argument
that this research should not proceed due to its potentially fatal risks,
aiming to encourage discussion.
|
2502.10201
|
Prediction hubs are context-informed frequent tokens in LLMs
|
cs.CL cs.AI
|
Hubness, the tendency for few points to be among the nearest neighbours of a
disproportionate number of other points, commonly arises when applying standard
distance measures to high-dimensional data, often negatively impacting
distance-based analysis. As autoregressive large language models (LLMs) operate
on high-dimensional representations, we ask whether they are also affected by
hubness. We first show, theoretically, that the only representation comparison
operation performed by LLMs, namely that between context and unembedding
vectors to determine continuation probabilities, is not characterized by the
concentration of distances phenomenon that typically causes the appearance of
nuisance hubness. We then empirically show that this comparison still leads to
a high degree of hubness, but the hubs in this case do not constitute a
disturbance. They are rather the result of context-modulated frequent tokens
often appearing in the pool of likely candidates for next token prediction. On
the other hand, when other distance computations involving LLM representations
are performed, we do not have the same theoretical guarantees, and, indeed, we
see nuisance hubs appear. In summary, our work highlights, on the one hand, how
hubness, while omnipresent in high-dimensional spaces, is not always a negative
property that needs to be mitigated, and, on the other hand, it shows that
various widely-used LLMs have developed a guessing strategy that consists in
constantly assigning a high probability to frequent tokens.
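Hubness as discussed above is usually diagnosed via the k-occurrence distribution: how often each point appears among the k nearest neighbours of the others. A plain numpy sketch of this standard diagnostic (not the paper's code):

```python
import numpy as np

def k_occurrence(X, k=5):
    """k-occurrence counts: how often each row of X appears among the k
    nearest neighbours of the other rows. Hubs have counts far above k."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-neighbourhood
    nn = np.argsort(d, axis=1)[:, :k]         # k nearest neighbours per point
    return np.bincount(nn.ravel(), minlength=len(X))
```

A heavily right-skewed k-occurrence distribution signals hubness; the paper's point is that for context/unembedding comparisons in LLMs such hubs are frequent tokens rather than nuisance points.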
|
2502.10202
|
Can Post-Training Quantization Benefit from an Additional QLoRA
Integration?
|
cs.CL
|
Large language models (LLMs) have transformed natural language processing but
pose significant challenges for real-world deployment. These models necessitate
considerable computing resources, which can be costly and frequently
unavailable. Model compression techniques such as quantization are often
leveraged to alleviate resource demand, but they may have a negative impact on
the generation quality. In this study, we explore the integration of 4-bit
Post-training Quantization (PTQ) with QLoRA to address these issues. We
demonstrate through extensive experiments that this integration outperforms
standard PTQ, and in some cases even 16-bit full-parameter fine-tuning on LLMs,
validated across proprietary and public datasets with different quantization
algorithms. The results demonstrate the efficacy of PTQ-QLoRA integration,
offering a viable solution for deploying powerful LLMs in resource-constrained
environments without compromising on performance.
|
2502.10203
|
AI-in-the-Loop Sensing and Communication Joint Design for Edge
Intelligence
|
cs.LG cs.DC
|
Recent breakthroughs in artificial intelligence (AI), wireless
communications, and sensing technologies have accelerated the evolution of edge
intelligence. However, conventional systems still grapple with issues such as
low communication efficiency, redundant data acquisition, and poor model
generalization. To overcome these challenges, we propose an innovative
framework that enhances edge intelligence through AI-in-the-loop joint sensing
and communication (JSAC). This framework features an AI-driven closed-loop
control architecture that jointly optimizes system resources, thereby
delivering superior system-level performance. A key contribution of our work is
establishing an explicit relationship between validation loss and the system's
tunable parameters. This insight enables dynamic reduction of the
generalization error through AI-driven closed-loop control. Specifically, for
sensing control, we introduce an adaptive data collection strategy based on
gradient importance sampling, allowing edge devices to autonomously decide when
to terminate data acquisition and how to allocate sample weights based on
real-time model feedback. For communication control, drawing inspiration from
stochastic gradient Langevin dynamics (SGLD), our joint optimization of
transmission power and batch size converts channel and data noise into gradient
perturbations that help mitigate overfitting. Experimental evaluations
demonstrate that our framework reduces communication energy consumption by up
to 77 percent, and sensing costs, measured by the number of collected samples,
by up to 52 percent, while significantly improving model generalization -- with
up to 58 percent reductions in the final validation loss. This validates that the
proposed scheme can harvest the mutual benefit of AI and JSAC systems by
incorporating the model itself into the control loop of the system.
|
2502.10205
|
Looking around you: external information enhances representations for
event sequences
|
cs.LG
|
Representation learning produces models in different domains, such as store
purchases, client transactions, and general people's behaviour. However, such
models for sequential data usually process a single sequence, ignoring context
from other relevant sequences. This is limiting in domains with rapidly changing
external environments, such as finance, and can misguide predictions for users
with no recent events.
We are the first to propose a method that aggregates information from
multiple user representations, augmenting a specific user's representation, for
a scenario of
multiple co-occurring event sequences. Our study considers diverse aggregation
approaches, ranging from simple pooling techniques to trainable attention-based
approaches, especially Kernel attention aggregation, that can highlight more
complex information flow from other users. The proposed method operates atop an
existing encoder and supports its efficient fine-tuning. Across considered
datasets of financial transactions and downstream tasks, Kernel attention
improves ROC AUC scores, both with and without fine-tuning, while mean pooling
yields a smaller but still significant gain.
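The kernel attention aggregation mentioned above can be illustrated with an RBF-kernel-weighted average of other users' representations, concatenated to the target user's own. This is our simplified sketch; the paper's trainable attention form may differ.

```python
import numpy as np

def kernel_attention_aggregate(target, others, bandwidth=1.0):
    """Augment one user's representation with an RBF-kernel-weighted
    average of other users' representations (illustrative sketch)."""
    d2 = np.sum((others - target) ** 2, axis=1)   # squared distances to target
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # RBF kernel weights
    w = w / w.sum()                               # normalize to a distribution
    context = w @ others                          # weighted context vector
    return np.concatenate([target, context])      # augmented representation
```

Because the weights depend on similarity to the target user, more relevant co-occurring sequences contribute more to the context, unlike plain mean pooling.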
|
2502.10207
|
RIPOST: Two-Phase Private Decomposition for Multidimensional Data
|
cs.DB
|
Differential privacy (DP) is considered as the gold standard for data
privacy. While the problem of answering simple queries and functions under DP
guarantees has been thoroughly addressed in recent years, the problem of
releasing multidimensional data under DP remains challenging. In this paper, we
focus on this problem, in particular on how to construct privacy-preserving
views using a domain decomposition approach. The main idea is to recursively
split the domain into sub-domains until a convergence condition is met. The
resulting sub-domains are perturbed and then published in order to be used to
answer arbitrary queries. Existing methods that have addressed this problem
using domain decomposition face two main challenges: (i) efficient privacy
budget management over a variable and undefined decomposition depth $h$; and
(ii) defining an optimal data-dependent splitting strategy that minimizes the
error in the sub-domains while ensuring the smallest possible decomposition. To
address these challenges, we present RIPOST, a multidimensional data
decomposition algorithm that bypasses the constraint of predefined depth $h$
and applies a data-aware splitting strategy to optimize the quality of the
decomposition results. The core of RIPOST is a two-phase strategy that separates
non-empty sub-domains at an early stage from empty sub-domains by exploiting
the properties of multidimensional datasets, and then decomposes the resulting
sub-domains with minimal inaccuracies using the mean function. Moreover, RIPOST
introduces a privacy budget distribution that allows decomposition without
requiring prior computation of the depth $h$. Through extensive experiments, we
demonstrate that RIPOST outperforms state-of-the-art methods in terms
of data utility and accuracy on a variety of datasets and test cases.
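The recursive domain-decomposition idea can be sketched in one dimension: split a domain until a noisy convergence test fires, publishing Laplace-perturbed counts for the leaves. This toy version is ours; RIPOST's two-phase empty/non-empty separation and its depth-free budget schedule are more elaborate.

```python
import numpy as np

def dp_decompose(values, lo, hi, epsilon, depth, rng,
                 min_width=1.0, threshold=5.0):
    """Toy 1-D DP decomposition: recursively split [lo, hi), stop when the
    Laplace-noised count is small, the cell is narrow, or depth runs out.
    Budget handling is simplified for illustration."""
    count = np.sum((values >= lo) & (values < hi))
    noisy = count + rng.laplace(scale=1.0 / epsilon)   # perturbed count
    if hi - lo <= min_width or noisy <= threshold or depth == 0:
        return [(lo, hi, noisy)]                        # publish this leaf
    mid = (lo + hi) / 2.0
    return (dp_decompose(values, lo, mid, epsilon, depth - 1, rng) +
            dp_decompose(values, mid, hi, epsilon, depth - 1, rng))
```

The published leaves tile the domain, so arbitrary range queries can be answered by summing overlapping noisy counts.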
|
2502.10208
|
SGS-GNN: A Supervised Graph Sparsification method for Graph Neural
Networks
|
cs.LG
|
We propose SGS-GNN, a novel supervised graph sparsifier that learns the
sampling probability distribution of edges and samples sparse subgraphs of a
user-specified size to reduce the computational costs required by GNNs for
inference tasks on large graphs. SGS-GNN employs regularizers in the loss
function to enhance homophily in sparse subgraphs, boosting the accuracy of
GNNs on heterophilic graphs, where a significant number of the neighbors of a
node have dissimilar labels. SGS-GNN also supports conditional updates of the
probability distribution learning module based on a prior, which helps narrow
the search space for sparse graphs. SGS-GNN requires fewer epochs to obtain
high accuracies since it learns the search space of subgraphs more effectively
than methods using fixed distributions such as random sampling. Extensive
experiments using 33 homophilic and heterophilic graphs demonstrate the
following: (i) with only 20% of edges retained in the sparse subgraphs, SGS-GNN
improves the F1-scores by a geometric mean of 4% relative to the original
graph; on heterophilic graphs, prediction accuracy improves by up to 30%.
(ii) SGS-GNN outperforms state-of-the-art methods with improvement in F1-scores
of 4-7% in geometric mean with similar sparsities in the sampled subgraphs, and
(iii) compared to sparsifiers that employ fixed distributions, SGS-GNN requires
about half the number of epochs to converge.
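Given a learned probability over edges, sampling a sparse subgraph of a user-specified size reduces, in its simplest form, to multinomial sampling without replacement. A minimal sketch (the helper name is ours; SGS-GNN additionally trains the distribution with homophily regularizers):

```python
import numpy as np

def sample_sparse_subgraph(edge_index, edge_probs, budget, rng):
    """Sample `budget` edges (columns of a 2 x E edge_index) according to a
    learned, unnormalized probability per edge; simplified illustration."""
    p = edge_probs / edge_probs.sum()
    keep = rng.choice(edge_index.shape[1], size=budget, replace=False, p=p)
    return edge_index[:, np.sort(keep)]
```

In training, gradients through the loss push probability mass toward edges whose retention helps downstream GNN accuracy, which is what distinguishes this from fixed-distribution sparsifiers.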
|
2502.10209
|
Mutual Coupling in Holographic MIMO: Physical Modeling and
Information-Theoretic Analysis
|
cs.IT math.IT
|
This paper presents a comprehensive framework for holographic multiantenna
communication, a paradigm that integrates both wide apertures and closely
spaced antennas relative to the wavelength. The presented framework is
physically grounded, enabling information-theoretic analyses that inherently
incorporate correlation and mutual coupling among the antennas. This
establishes the combined effects of correlation and coupling on the
information-theoretic performance limits across SNR levels. Additionally, it
reveals that, by suitably selecting the individual antenna patterns, mutual
coupling can be harnessed to either reinforce or counter spatial correlations
as appropriate for specific SNRs, thereby improving the performance.
|
2502.10211
|
Control-flow anomaly detection by process mining-based feature
extraction and dimensionality reduction
|
cs.LG
|
The business processes of organizations may deviate from normal control flow
due to disruptive anomalies, including unknown, skipped, and wrongly-ordered
activities. To identify these control-flow anomalies, process mining can check
control-flow correctness against a reference process model through conformance
checking, an explainable set of algorithms that allows linking any deviations
with model elements. However, the effectiveness of conformance checking-based
techniques is negatively affected by noisy event data and low-quality process
models. To address these shortcomings and support the development of
competitive and explainable conformance checking-based techniques for
control-flow anomaly detection, we propose a novel process mining-based feature
extraction approach with alignment-based conformance checking. This variant
aligns the deviating control flow with a reference process model; the resulting
alignment can be inspected to extract additional statistics such as the number
of times a given activity caused mismatches. We integrate this approach into a
flexible and explainable framework for developing techniques for control-flow
anomaly detection. The framework combines process mining-based feature
extraction and dimensionality reduction to handle high-dimensional feature
sets, achieve detection effectiveness, and support explainability. The results
show that the framework techniques implementing our approach outperform the
baseline conformance checking-based techniques while maintaining the
explainable nature of conformance checking. We also provide an explanation of
why existing conformance checking-based techniques may be ineffective.
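The per-activity alignment statistics described above can be illustrated by counting deviating moves in an alignment. This sketch uses a hypothetical representation of alignments as (log step, model step) pairs with `'>>'` marking a skipped side, a common convention in process mining tooling; the helper name is ours.

```python
from collections import Counter

def mismatch_counts(alignment):
    """Count per activity how many alignment moves are deviations (log-only
    or model-only moves) rather than synchronous moves."""
    counts = Counter()
    for log_step, model_step in alignment:
        if log_step == '>>':
            counts[model_step] += 1     # model move: activity missing from the log
        elif model_step == '>>':
            counts[log_step] += 1       # log move: activity not allowed by the model
    return counts
```

Feature vectors built from such counts are what the framework then passes through dimensionality reduction before anomaly detection.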
|
2502.10214
|
Mapping bathymetry of inland water bodies on the North Slope of Alaska
with Landsat using Random Forest
|
cs.CV cs.LG
|
The North Slope of Alaska is dominated by small waterbodies that provide
critical ecosystem services for local population and wildlife. Detailed
information on the depth of the waterbodies is scarce due to the challenges
with collecting such information. In this work we have trained a machine
learning (Random Forest Regressor) model to predict depth from multispectral
Landsat data in waterbodies across the North Slope of Alaska. The greatest
challenge is the scarcity of in situ data, which is expensive and difficult to
obtain, to train the model. We overcame this challenge by using modeled depth
predictions from a prior study as synthetic training data to provide a more
diverse training data pool for the Random Forest. The final Random Forest model
was more robust than models trained directly on the in situ data and, when
applied to 208 Landsat 8 scenes from 2016 to 2018, yielded a map with an overall
$r^{2}$ value of 0.76 on validation. The final map has been made available
through the Oak Ridge National Laboratory Distributed Active Archive Center
(ORNL-DAAC). This map represents a first of its kind regional assessment of
waterbody depth with per pixel estimates of depth for the entire North Slope of
Alaska.
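The key move above is pooling scarce in-situ labels with modeled (synthetic) depths before training a regressor on multispectral features. The study used a Random Forest; as a dependency-free stand-in, the sketch below uses a k-NN regressor to show the same pool-then-train pattern (all names and data here are illustrative):

```python
import numpy as np

def knn_depth_predict(train_X, train_y, query_X, k=3):
    """Stand-in regressor (k-NN instead of the study's RandomForestRegressor)
    mapping multispectral features to depth; used only to illustrate training
    on pooled in-situ + synthetic samples."""
    d = np.linalg.norm(query_X[:, None, :] - train_X[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest training samples
    return train_y[nn].mean(axis=1)            # mean depth of neighbours

# Pooling pattern: stack scarce in-situ labels with modeled synthetic depths,
# e.g. X = np.vstack([X_insitu, X_modeled]); y = np.concatenate([y_insitu, y_modeled])
```

In the actual pipeline one would substitute scikit-learn's `RandomForestRegressor` for the stand-in and fit it on the pooled arrays.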
|
2502.10215
|
Do Large Language Models Reason Causally Like Us? Even Better?
|
cs.AI cs.LG
|
Causal reasoning is a core component of intelligence. Large language models
(LLMs) have shown impressive capabilities in generating human-like text,
raising questions about whether their responses reflect true understanding or
statistical patterns. We compared causal reasoning in humans and four LLMs
using tasks based on collider graphs, rating the likelihood of a query variable
occurring given evidence from other variables. We find that LLMs reason
causally along a spectrum from human-like to normative inference, with
alignment shifting based on model, context, and task. Overall, GPT-4o and
Claude showed the most normative behavior, including "explaining away", whereas
Gemini-Pro and GPT-3.5 did not. Although all agents deviated from the expected
independence of causes - Claude the least - they exhibited strong associative
reasoning and predictive inference when assessing the likelihood of the effect
given its causes. These findings underscore the need to assess AI biases as
they increasingly assist human decision-making.
|
2502.10216
|
Forget the Data and Fine-Tuning! Just Fold the Network to Compress
|
cs.LG cs.AI
|
We introduce model folding, a novel data-free model compression technique
that merges structurally similar neurons across layers, significantly reducing
the model size without the need for fine-tuning or access to training data.
Unlike existing methods, model folding preserves data statistics during
compression by leveraging k-means clustering, and using novel data-free
techniques to prevent variance collapse or explosion. Our theoretical framework
and experiments across standard benchmarks, including ResNet18 and LLaMA-7B,
demonstrate that model folding achieves comparable performance to data-driven
compression techniques and outperforms recently proposed data-free methods,
especially at high sparsity levels. This approach is particularly effective for
compressing large-scale models, making it suitable for deployment in
resource-constrained environments.
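The core folding step, clustering structurally similar output channels of a weight matrix and replacing each cluster by its centroid, can be sketched with a small data-free k-means. This is our simplified illustration only: the paper's method additionally repairs downstream layers and normalization statistics to prevent variance collapse or explosion.

```python
import numpy as np

def fold_layer(W, n_clusters, n_iter=20):
    """Fold a layer's weight matrix W (out_channels x in_features) from
    len(W) rows to n_clusters centroid rows via k-means; data-free sketch."""
    # deterministic farthest-point initialization
    centroids = [W[0]]
    for _ in range(n_clusters - 1):
        d = np.min([((W - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(W[np.argmax(d)])
    centroids = np.array(centroids, dtype=float)
    for _ in range(n_iter):
        assign = np.argmin(((W[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = W[assign == c].mean(axis=0)
    return centroids, assign
```

The returned `assign` mapping is what a full implementation would use to rewire the next layer's input weights after merging.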
|
2502.10218
|
Integrated Multi-Simulation Environments for Aerial Robotics Research
|
cs.RO
|
Simulation frameworks play a pivotal role in the safe development of robotic
applications. However, often different components of an envisioned robotic
system are best simulated in different environments/simulators. This poses a
significant challenge in simulating the entire project into a single integrated
robotic framework. Specifically, for partially-open or closed-source
simulators, two core limitations often arise: (i) actors in the scene other than
the designated robots cannot be controlled at runtime via interfaces such as
ROS, and (ii) real-time state information (such as pose and velocity) of objects
in the scene cannot be retrieved. In this work, we address these
limitations and describe our solution for the use case of integrating aerial
drones simulated by the powerful simulator Sphinx (provided by Parrot Drone)
into the Gazebo simulator. We achieve this by means of a mirrored instance of a
drone that is included into existing Gazebo-based environments. A promising
application of our integrated simulation environment is the task of target
tracking that is common in aerial multi-robot scenarios. Therefore, to
demonstrate the effectiveness of our integrated simulation, we also implement
a model predictive controller (MPC) that outperforms the default PID-based
controller framework provided with Parrot's popular Anafi drone in various
dynamic tracking scenarios thus enhancing the utility of the overall system. We
test our solution by including the Anafi drone in an existing Gazebo-based
simulation and evaluate the performance of the MPC through rigorous testing in
simulated and real-world tracking experiments against a customized PID
controller baseline. Source code is published on
https://github.com/robot-perception-group/anafi_sim.
|
2502.10220
|
Optimal and Coordinated Voltage Control: Case Study on a 132 kV
Norwegian Grid Subsystem
|
eess.SY cs.SY
|
This work presents a framework for dynamic performance assessment of the
higher layers in the hierarchical voltage regulation scheme, with case studies
applied to specific areas of the Norwegian grid. Unlike the primary (PVR)
level, the secondary (SVR) and tertiary (TVR) levels are not tuned to a single
device at a time, handling instead several reactive power resources available
within a control zone including generator units, static VAr compensators and
others. Proper SVR-TVR coordination for realistic transmission systems is a
challenging topic at the core of many ongoing discussions in voltage control
literature. Special focus is placed on practical considerations from the system
operator perspective, since this research is also aimed at simplifying daily
control centre routines. Dynamic simulation results concern a 21-bus equivalent
of a 132 kV network model that accurately represents a Norwegian grid
subsystem. Case studies address daily grid operation with real-life load demand
and wind power generation profiles, showing that the proposed strategy is
effective not only to minimize total active power losses as much as possible
within system-wide limitations, but also to maintain adequate voltage profiles
and reactive power flows. Findings pertaining to this work showcase the
benefits of applying hierarchical voltage regulation layers as an asset to
day-to-day control center management of a realistic transmission network.
|
2502.10224
|
Comparison of Deep Recurrent Neural Networks and Bayesian Neural
Networks for Detecting Electric Motor Damage Through Sound Signal Analysis
|
cs.LG
|
Fault detection in electric motors is a critical challenge in various
industries, where failures can result in significant operational disruptions.
This study investigates the use of Recurrent Neural Networks (RNNs) and
Bayesian Neural Networks (BNNs) for diagnosing motor damage using acoustic
signal analysis. A novel approach is proposed, leveraging frequency domain
representation of sound signals for enhanced diagnostic accuracy. The
architectures of both RNNs and BNNs are designed and evaluated on real-world
acoustic data collected from household appliances using smartphones.
Experimental results demonstrate that BNNs provide superior fault detection
performance, particularly for imbalanced datasets, offering more robust and
interpretable predictions compared to traditional methods. The findings suggest
that BNNs, with their ability to incorporate uncertainty, are well-suited for
industrial diagnostic applications. Further analysis and benchmarks are
suggested to explore resource efficiency and classification capabilities of
these architectures.
|
2502.10226
|
A Multiagent Path Search Algorithm for Large-Scale Coalition Structure
Generation
|
cs.MA cs.AI cs.GT
|
Coalition structure generation (CSG), i.e. the problem of optimally
partitioning a set of agents into coalitions to maximize social welfare, is a
fundamental computational problem in multiagent systems. This problem is
important for many applications where small run times are necessary, including
transportation and disaster response. In this paper, we develop SALDAE, a
multiagent path finding algorithm for CSG that operates on a graph of coalition
structures. Our algorithm utilizes a variety of heuristics and strategies to
perform the search and guide it. It is an anytime algorithm that can handle
large problems with hundreds and thousands of agents. We show empirically on
nine standard value distributions, including disaster response and electric
vehicle allocation benchmarks, that our algorithm enables rapid discovery of
high-quality solutions and compares favorably with other state-of-the-art
methods.
|
2502.10230
|
ProReco: A Process Discovery Recommender System
|
cs.LG cs.IR
|
Process discovery aims to automatically derive process models from historical
execution data (event logs). While various process discovery algorithms have
been proposed in the last 25 years, there is no consensus on a dominating
discovery algorithm. Selecting the most suitable discovery algorithm remains a
challenge due to competing quality measures and diverse user requirements.
Manually selecting the most suitable process discovery algorithm from a range
of options for a given event log is a time-consuming and error-prone task. This
paper introduces ProReco, a Process discovery Recommender system designed to
recommend the most appropriate algorithm based on user preferences and event
log characteristics. ProReco incorporates state-of-the-art discovery
algorithms, extends the feature pools from previous work, and utilizes
eXplainable AI (XAI) techniques to provide explanations for its
recommendations.
|
2502.10233
|
Learning to Solve the Min-Max Mixed-Shelves Picker-Routing Problem via
Hierarchical and Parallel Decoding
|
cs.MA cs.LG stat.ML
|
The Mixed-Shelves Picker Routing Problem (MSPRP) is a fundamental challenge
in warehouse logistics, where pickers must navigate a mixed-shelves environment
to retrieve SKUs efficiently. Traditional heuristics and optimization-based
approaches struggle with scalability, while recent machine learning methods
often rely on sequential decision-making, leading to high solution latency and
suboptimal agent coordination. In this work, we propose a novel hierarchical
and parallel decoding approach for solving the min-max variant of the MSPRP via
multi-agent reinforcement learning. While our approach generates a joint
distribution over agent actions, allowing for fast decoding and effective
picker coordination, our method introduces a sequential action selection to
avoid conflicts in the multi-dimensional action space. Experiments show
state-of-the-art performance in both solution quality and inference speed,
particularly for large-scale and out-of-distribution instances. Our code is
publicly available at http://github.com/LTluttmann/marl4msprp.
|
2502.10235
|
AdaPTS: Adapting Univariate Foundation Models to Probabilistic
Multivariate Time Series Forecasting
|
stat.ML cs.LG
|
Pre-trained foundation models (FMs) have shown exceptional performance in
univariate time series forecasting tasks. However, several practical challenges
persist, including managing intricate dependencies among features and
quantifying uncertainty in predictions. This study aims to tackle these
critical limitations by introducing adapters: feature-space transformations
that facilitate the effective use of pre-trained univariate time series FMs for
multivariate tasks. Adapters operate by projecting multivariate inputs into a
suitable latent space and applying the FM independently to each dimension.
Inspired by the literature on representation learning and partially stochastic
Bayesian neural networks, we present a range of adapters and
optimization/inference strategies. Experiments conducted on both synthetic and
real-world datasets confirm the efficacy of adapters, demonstrating substantial
enhancements in forecasting accuracy and uncertainty quantification compared to
baseline methods. Our framework, AdaPTS, positions adapters as a modular,
scalable, and effective solution for leveraging time series FMs in multivariate
contexts, thereby promoting their wider adoption in real-world applications. We
release the code at https://github.com/abenechehab/AdaPTS.
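The adapter mechanism described above, projecting multivariate inputs into a latent space and applying the univariate FM independently per dimension, can be sketched as follows. The function name and the linear projection are our illustrative assumptions; `univariate_fm` stands in for any pre-trained univariate forecaster.

```python
import numpy as np

def adapter_forecast(X, P, univariate_fm):
    """Adapter sketch: map a multivariate series X (time x features) into a
    latent space with projection P, forecast each latent dimension with a
    univariate model, then map the forecasts back to feature space."""
    Z = X @ P                                            # (time, latent)
    z_next = np.array([univariate_fm(Z[:, j])            # one forecast per
                       for j in range(Z.shape[1])])      # latent dimension
    return z_next @ np.linalg.pinv(P)                    # back to feature space
```

Replacing the fixed `P` with a learned (possibly stochastic) transformation is what yields the probabilistic variants studied in the paper.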
|
2502.10236
|
Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise
Control
|
cs.LG cs.AI
|
Diffusion Probabilistic Models (DPMs) are powerful generative models that
have achieved unparalleled success in a number of generative tasks. In this
work, we aim to build inductive biases into the training and sampling of
diffusion models to better accommodate the target distribution of the data to
model. For topologically structured data, we devise a frequency-based noising
operator to purposefully manipulate, and set, these inductive biases. We first
show that appropriate manipulations of the noising forward process can lead
DPMs to focus on particular aspects of the distribution to learn. We show that
different datasets necessitate different inductive biases, and that appropriate
frequency-based noise control induces increased generative performance compared
to standard diffusion. Finally, we demonstrate the possibility of ignoring
information at particular frequencies while learning. We show this in an image
corruption and recovery task, where we train a DPM to recover the original
target distribution after severe noise corruption.
|
2502.10239
|
Efficient Zero-Order Federated Finetuning of Language Models for
Resource-Constrained Devices
|
cs.LG cs.AI
|
Federated fine-tuning offers a promising approach for tuning Large Language
Models (LLMs) on edge devices while preserving data privacy. However,
fine-tuning these models on edge devices remains challenging due to high
memory, communication, and computational demands. Zero-order optimization with
task alignment provides a potential solution, enabling fine-tuning with
inference-level memory requirements but requires a longer convergence time. In
this paper, we propose Federated Split-Perturbation Zero-order Optimization
(FedSPZO) that divides the network into two blocks, applying a different number
of perturbations per block in a computationally effective way, achieving faster
convergence. Our evaluation shows a $2.5 - 7\times $ reduction in computation
overhead compared to state-of-the-art zero-order techniques in federated
learning.
|
2502.10243
|
Safety Blind Spot in Remote Driving: Considerations for Risk Assessment
of Connection Loss Fallback Strategies
|
eess.SY cs.SY
|
As part of the overall goal of driverless road vehicles, remote driving is a
major emerging field of research of its own. Current remote driving concepts
for public road traffic often establish a fallback strategy of immediate
braking to a standstill in the event of a connection loss. This may seem like
the most logical option when human control of the vehicle is lost. However, our
simulation results from hundreds of scenarios based on naturalistic traffic
scenes indicate high collision rates for any immediate substantial deceleration
to a standstill in urban settings. We show that such a fallback strategy can
result in a SOTIF-relevant hazard, making it questionable whether such a design
decision can be considered acceptable. Therefore, from a safety perspective, we
would call this problem a safety blind spot, as safety analyses in this regard
seem to be very rare.
In this article, we first present a simulation on a naturalistic dataset that
shows a high probability of collision in the described case. Second, we discuss
the severity of the resulting potential rear-end collisions and provide an even
more severe example by including a large commercial vehicle in the potential
collision.
|
2502.10248
|
Step-Video-T2V Technical Report: The Practice, Challenges, and Future of
Video Foundation Model
|
cs.CV cs.CL
|
We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model
with 30B parameters and the ability to generate videos up to 204 frames in
length. A deep compression Variational Autoencoder, Video-VAE, is designed for
video generation tasks, achieving 16x16 spatial and 8x temporal compression
ratios, while maintaining exceptional video reconstruction quality. User
prompts are encoded using two bilingual text encoders to handle both English
and Chinese. A DiT with 3D full attention is trained using Flow Matching and is
employed to denoise input noise into latent frames. A video-based DPO approach,
Video-DPO, is applied to reduce artifacts and improve the visual quality of the
generated videos. We also detail our training strategies and share key
observations and insights. Step-Video-T2V's performance is evaluated on a novel
video generation benchmark, Step-Video-T2V-Eval, demonstrating its
state-of-the-art text-to-video quality when compared with both open-source and
commercial engines. Additionally, we discuss the limitations of the current
diffusion-based model paradigm and outline future directions for video
foundation models. We make both Step-Video-T2V and Step-Video-T2V-Eval
available at https://github.com/stepfun-ai/Step-Video-T2V. The online version
can be accessed from https://yuewen.cn/videos as well. Our goal is to
accelerate the innovation of video foundation models and empower video content
creators.
|
2502.10250
|
VisCon-100K: Leveraging Contextual Web Data for Fine-tuning Vision
Language Models
|
cs.CL cs.CV
|
Vision-language models (VLMs) excel in various visual benchmarks but are
often constrained by the lack of high-quality visual fine-tuning data. To
address this challenge, we introduce VisCon-100K, a novel dataset derived from
interleaved image-text web documents. Our approach transforms 45K web documents
from the OBELICS dataset into 100K image conversation samples. We utilize
GPT-4V to generate image-contextual captions and OpenChat 3.5 model to convert
these captions into diverse free-form and multiple-choice question-answer
pairs. Integrating this dataset for fine-tuning considerably enhances VLM
performance across multiple benchmarks. Unlike methods that focus solely on
fine-grained visual content, our approach leverages accompanying web context,
yielding superior results. We also discover that a "leaky modality mix," where
conversation samples contain questions answerable from both the image and its
contextual caption, outperforms non-leaky combinations of captions and Q&A
pairs. The VisCon-100K dataset shows strong performance with two popular VLM
approaches: text-only large language model (LLM) aligned with a vision encoder
using image captions data (ShareGPT4V-7b) and multimodally pretrained LLM
(IDEFICS2-8b) using interleaved image-text data. In addition to releasing the
VisCon-100K dataset, we provide a contextual captioner trained on this dataset,
facilitating scalable fine-tuning data generation for future research and
open-source applications. Using the same pipeline, but substituting our trained
contextual captioner for GPT-4V, we also release the larger VisCon-1M dataset.
|
2502.10258
|
PromptArtisan: Multi-instruction Image Editing in Single Pass with
Complete Attention Control
|
cs.CV cs.HC
|
We present PromptArtisan, a groundbreaking approach to multi-instruction
image editing that achieves remarkable results in a single pass, eliminating
the need for time-consuming iterative refinement. Our method empowers users to
provide multiple editing instructions, each associated with a specific mask
within the image. This flexibility allows for complex edits involving mask
intersections or overlaps, enabling the realization of intricate and nuanced
image transformations. PromptArtisan leverages a pre-trained InstructPix2Pix
model in conjunction with a novel Complete Attention Control Mechanism (CACM).
This mechanism ensures precise adherence to user instructions, granting
fine-grained control over the editing process. Furthermore, our approach is
zero-shot, requiring no additional training, and boasts improved processing
complexity compared to traditional iterative methods. By seamlessly integrating
multi-instruction capabilities, single-pass efficiency, and complete attention
control, PromptArtisan unlocks new possibilities for creative and efficient
image editing workflows, catering to both novice and expert users alike.
|
2502.10259
|
MITO: Enabling Non-Line-of-Sight Perception using Millimeter-waves
through Real-World Datasets and Simulation Tools
|
cs.CV
|
We present MITO, the first dataset of multi-spectral millimeter-wave (mmWave)
images of everyday objects. Unlike visible light, mmWave signals can image
through everyday occlusions (e.g., cardboard boxes, fabric, plastic). However,
due to the dearth of publicly-available mmWave images and the interdisciplinary
challenges in collecting and processing mmWave signals, it remains difficult
today for computer vision researchers to develop mmWave-based non-line-of-sight
perception algorithms and models.
To overcome these challenges, we introduce a real-world dataset and
open-source simulation tool for mmWave imaging. The dataset is acquired using a
UR5 robotic arm with two mmWave radars operating at different frequencies and
an RGB-D camera. Through a signal processing pipeline, we capture and create
over 580 real-world 3D mmWave images from over 76 different objects in the YCB
dataset, a standard dataset for robotics manipulation. We provide real-world
mmWave images in line-of-sight and non-line-of-sight, as well as RGB-D images
and ground truth segmentation masks. We also develop an open-source simulation
tool that can be used to generate synthetic mmWave images for any 3D triangle
mesh, which achieves a median F-Score of 94% when compared to real-world mmWave
images.
We show the usefulness of this dataset and simulation tool in multiple CV
tasks in non-line-of-sight. First, we perform object segmentation for mmWave
images using the segment anything model (SAM), and achieve a median precision
and recall of 92.6% and 64%. Second, we train a classifier that can recognize
objects in non-line-of-sight. It is trained on synthetic images and can
classify real-world images with 85% accuracy.
We believe MITO will be a valuable resource for computer vision researchers
in developing non-line-of-sight perception, similar to how early camera-based
datasets shaped the field.
|
2502.10263
|
Large Language Models and Synthetic Data for Monitoring Dataset Mentions
in Research Papers
|
cs.CL cs.AI cs.CY cs.DB cs.LG
|
Tracking how data is mentioned and used in research papers provides critical
insights for improving data discoverability, quality, and production. However,
manually identifying and classifying dataset mentions across vast academic
literature is resource-intensive and not scalable. This paper presents a
machine learning framework that automates dataset mention detection across
research domains by leveraging large language models (LLMs), synthetic data,
and a two-stage fine-tuning process. We employ zero-shot extraction from
research papers, an LLM-as-a-Judge for quality assessment, and a reasoning
agent for refinement to generate a weakly supervised synthetic dataset. The
Phi-3.5-mini instruct model is pre-fine-tuned on this dataset, followed by
fine-tuning on a manually annotated subset. At inference, a ModernBERT-based
classifier efficiently filters dataset mentions, reducing computational
overhead while maintaining high recall. Evaluated on a held-out manually
annotated sample, our fine-tuned model outperforms NuExtract-v1.5 and
GLiNER-large-v2.1 in dataset extraction accuracy. Our results highlight how
LLM-generated synthetic data can effectively address training data scarcity,
improving generalization in low-resource settings. This framework offers a
pathway toward scalable monitoring of dataset usage, enhancing transparency,
and supporting researchers, funders, and policymakers in identifying data gaps
and strengthening data accessibility for informed decision-making.
|
2502.10266
|
Are Large Language Models the future crowd workers of Linguistics?
|
cs.CL cs.AI
|
Data elicitation from human participants is one of the core data collection
strategies used in empirical linguistic research. The number of participants in
such studies may vary considerably, ranging from a handful to crowdsourcing
dimensions. Even though they provide rich, extensive data, both of these
settings come with many disadvantages, such as low control of
participants' attention during task completion, precarious working conditions
in crowdsourcing environments, and time-consuming experimental designs. For
these reasons, this research aims to answer the question of whether Large
Language Models (LLMs) may overcome those obstacles if included in empirical
linguistic pipelines. Two reproduction case studies are conducted to gain
clarity into this matter: Cruz (2023) and Lombard et al. (2021). The two forced
elicitation tasks, originally designed for human participants, are reproduced
in the proposed framework with the help of OpenAI's GPT-4o-mini model. Its
performance with our zero-shot prompting baseline shows the effectiveness and
high versatility of LLMs, which tend to outperform human informants in
linguistic tasks. The findings of the second replication further highlight the
need to explore additional prompting techniques, such as Chain-of-Thought (CoT)
prompting, which, in a second follow-up experiment, demonstrates higher
alignment to human performance on both critical and filler items. Given the
limited scale of this study, it is worthwhile to further explore the
performance of LLMs in empirical Linguistics and in other future applications
in the humanities.
|