id | title | categories | abstract
|---|---|---|---|
2502.03119
|
Comparison of the Cox proportional hazards model and Random Survival
Forest algorithm for predicting patient-specific survival probabilities in
clinical trial data
|
stat.ML cs.LG
|
The Cox proportional hazards model is often used for model development in
data from randomized controlled trials (RCT) with time-to-event outcomes.
Random survival forests (RSF) is a machine-learning algorithm known for its
high predictive performance. We conduct a comprehensive neutral comparison
study to compare the predictive performance of Cox regression and RSF in
real-world as well as simulated data. Performance is compared using multiple
performance measures according to recommendations for the comparison of
prognostic prediction models. We found that while the RSF usually outperforms
the Cox model when using the $C$ index, Cox model predictions may be better
calibrated. With respect to overall performance, the Cox model often exceeds
the RSF in nonproportional hazards settings, while otherwise the RSF typically
performs better especially for smaller sample sizes. Overall performance of the
RSF is more affected by higher censoring rates, while overall performance of
the Cox model suffers more from smaller sample sizes.
|
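The $C$ index mentioned above can be computed directly. Below is a minimal pure-Python sketch of Harrell's C index (ignoring refinements such as inverse-probability-of-censoring weights, which a full comparison study may use):

```python
from itertools import combinations

def c_index(times, events, risk_scores):
    """Harrell's C index: fraction of comparable pairs whose risk
    ordering matches their event-time ordering. A pair is comparable
    when the subject with the shorter observed time had the event;
    tied risk scores count as 0.5."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:   # make i the earlier subject
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue              # pair not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0     # higher risk failed earlier
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable

print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

A value of 0.5 corresponds to random ordering, 1.0 to perfect discrimination.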
2502.03120
|
At the Mahakumbh, Faith Met Tragedy: Computational Analysis of Stampede
Patterns Using Machine Learning and NLP
|
cs.LG cs.AI cs.CY cs.SI
|
This study employs machine learning, historical analysis, and natural
language processing (NLP) to examine recurring lethal stampedes at India's mass
religious gatherings, focusing on the 2025 Mahakumbh tragedy in Prayagraj (48+
deaths) and its 1954 predecessor (700+ casualties). Through computational
modeling of crowd dynamics and administrative records, it investigates how
systemic vulnerabilities contribute to these disasters. Temporal trend analysis
identifies persistent choke points, with narrow riverbank access routes linked
to 92% of past stampede sites and lethal crowd densities (eight or more persons
per square meter) recurring during spiritually significant moments like Mauni
Amavasya. NLP analysis of seven decades of inquiry reports reveals cyclical
administrative failures, where VIP route prioritization diverted safety
resources in both 1954 and 2025, exacerbating fatalities. Statistical modeling
demonstrates how ritual urgency overrides risk perception, leading to panic
propagation patterns that mirror historical incidents. Findings support the
Institutional Amnesia Theory, highlighting how disaster responses remain
reactionary rather than preventive. By correlating archival patterns with
computational crowd behavior analysis, this study frames stampedes as a
collision of infrastructure limitations, socio-spiritual urgency, and
governance inertia, challenging disaster discourse to address how spiritual
economies normalize preventable mortality.
|
2502.03122
|
HiLo: Learning Whole-Body Human-like Locomotion with Motion Tracking
Controller
|
cs.RO
|
Deep Reinforcement Learning (RL) has emerged as a promising method to develop
humanoid robot locomotion controllers. Despite the robust and stable locomotion
demonstrated by previous RL controllers, their behavior often lacks the natural
and agile motion patterns necessary for human-centric scenarios. In this work,
we propose HiLo (human-like locomotion with motion tracking), an effective
framework designed to learn RL policies that perform human-like locomotion. The
primary challenges of human-like locomotion are complex reward engineering and
domain randomization. HiLo overcomes these issues by developing an RL-based
motion tracking controller and simple domain randomization through random force
injection and action delay. Within the framework of HiLo, the whole-body
control problem can be decomposed into two components: One part is solved using
an open-loop control method, while the residual part is addressed with RL
policies. A distributional value function is also implemented to stabilize the
training process by improving the estimation of cumulative rewards under
perturbed dynamics. Our experiments demonstrate that the motion tracking
controller trained using HiLo can perform natural and agile human-like
locomotion while exhibiting resilience to external disturbances in real-world
systems. Furthermore, we show that the motion patterns of humanoid robots can
be adapted through the residual mechanism without fine-tuning, allowing quick
adjustments to task requirements.
|
2502.03123
|
Disentanglement in Difference: Directly Learning Semantically
Disentangled Representations by Maximizing Inter-Factor Differences
|
cs.LG cs.AI
|
In this study, Disentanglement in Difference (DiD) is proposed to address the
inherent inconsistency between the statistical independence of latent variables
and the goal of semantic disentanglement in disentanglement representation
learning. Conventional disentanglement methods achieve disentanglement
representation by improving statistical independence among latent variables.
However, the statistical independence of latent variables does not necessarily
imply that they are semantically unrelated, thus, improving statistical
independence does not always enhance disentanglement performance. To address
the above issue, DiD is proposed to directly learn semantic differences rather
than the statistical independence of latent variables. In the DiD, a Difference
Encoder is designed to measure the semantic differences; a contrastive loss
function is established to facilitate inter-dimensional comparison. Both of
them allow the model to directly differentiate and disentangle distinct
semantic factors, thereby resolving the inconsistency between statistical
independence and semantic disentanglement. Experimental results on the dSprites
and 3DShapes datasets demonstrate that the proposed DiD outperforms existing
mainstream methods across various disentanglement metrics.
|
2502.03124
|
Levelised Cost of Demand Response: Estimating the Cost-Competitiveness
of Flexible Demand
|
eess.SY cs.SY
|
To make well-informed investment decisions, energy system stakeholders
require reliable cost frameworks for demand response (DR) and storage
technologies. While the levelised cost of storage (LCOS) permits comprehensive
cost comparisons between different storage technologies, no generic cost
measure for the comparison of different DR schemes exists. This paper
introduces the levelised cost of demand response (LCODR) which is an analogous
measure to the LCOS but crucially differs from it by considering consumer
reward payments. Additionally, the value factor from cost estimations of
variable renewable energy is adapted to account for the variable availability
of DR. The LCODRs for four direct load control (DLC) schemes and twelve storage
applications are estimated and contrasted against LCOS literature values for
the most competitive storage technologies. The DLC schemes are vehicle-to-grid,
smart charging, smart heat pumps, and heat pumps with thermal storage. The
results show that only heat pumps with thermal storage consistently outcompete
storage technologies, with EV-based DR schemes being competitive for some
applications. The results and the underlying methodology offer a tool for
energy system stakeholders to assess the competitiveness of DR schemes even
with limited user data.
|
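As a rough illustration of a levelised-cost measure of this kind (the function name and inputs are illustrative; the paper's exact LCODR definition may differ), discounted lifetime costs, including consumer reward payments, are divided by discounted energy shifted:

```python
def levelised_cost(capex, annual_cost, annual_reward, annual_energy,
                   rate, years):
    """Generic levelised-cost sketch: discounted lifetime costs,
    including consumer reward payments (the LCODR's key addition over
    the LCOS), divided by discounted energy shifted. All argument
    names are illustrative."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum((annual_cost + annual_reward) * d for d in disc)
    energy = sum(annual_energy * d for d in disc)
    return costs / energy

# hypothetical DLC scheme: 500 upfront, 10-year horizon, 5% discount rate
print(round(levelised_cost(500, 20, 30, 1000, 0.05, 10), 4))
```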
2502.03125
|
Double Distillation Network for Multi-Agent Reinforcement Learning
|
cs.MA cs.LG
|
Multi-agent reinforcement learning typically employs a centralized
training-decentralized execution (CTDE) framework to alleviate the
non-stationarity in the environment. However, the partial observability during
execution may lead to cumulative gap errors gathered by agents, impairing the
training of effective collaborative policies. To overcome this challenge, we
introduce the Double Distillation Network (DDN), which incorporates two
distillation modules aimed at enhancing robust coordination and facilitating
the collaboration process under constrained information. The external
distillation module uses a global guiding network and a local policy network,
employing distillation to reconcile the gap between global training and local
execution. In addition, the internal distillation module introduces intrinsic
rewards, drawn from state information, to enhance the exploration capabilities
of agents. Extensive experiments demonstrate that DDN significantly improves
performance across multiple scenarios.
|
2502.03128
|
Metis: A Foundation Speech Generation Model with Masked Generative
Pre-training
|
cs.SD cs.AI cs.LG eess.AS eess.SP
|
We introduce Metis, a foundation model for unified speech generation. Unlike
previous task-specific or multi-task models, Metis follows a pre-training and
fine-tuning paradigm. It is pre-trained on large-scale unlabeled speech data
using masked generative modeling and then fine-tuned to adapt to diverse speech
generation tasks. Specifically, 1) Metis utilizes two discrete speech
representations: SSL tokens derived from speech self-supervised learning (SSL)
features, and acoustic tokens directly quantized from waveforms. 2) Metis
performs masked generative pre-training on SSL tokens, utilizing 300K hours of
diverse speech data, without any additional condition. 3) Through fine-tuning
with task-specific conditions, Metis achieves efficient adaptation to various
speech generation tasks while supporting multimodal input, even when using
limited data and trainable parameters. Experiments demonstrate that Metis can
serve as a foundation model for unified speech generation: Metis outperforms
state-of-the-art task-specific or multi-task systems across five speech
generation tasks, including zero-shot text-to-speech, voice conversion, target
speaker extraction, speech enhancement, and lip-to-speech, even with fewer than
20M trainable parameters or 300 times less training data. Audio samples are
available at https://metis-demo.github.io/.
|
2502.03129
|
Teaching Large Language Models Number-Focused Headline Generation With
Key Element Rationales
|
cs.CL cs.LG
|
Number-focused headline generation is a summarization task requiring both
high textual quality and precise numerical accuracy, which poses a unique
challenge for Large Language Models (LLMs). Existing studies in the literature
focus only on either textual quality or numerical reasoning and thus are
inadequate to address this challenge. In this paper, we propose a novel
chain-of-thought framework for using rationales comprising key elements of the
Topic, Entities, and Numerical reasoning (TEN) in news articles to enhance the
capability for LLMs to generate topic-aligned high-quality texts with precise
numerical accuracy. Specifically, a teacher LLM is employed to generate TEN
rationales as supervision data, which are then used to teach and fine-tune a
student LLM. Our approach teaches the student LLM automatic generation of
rationales with enhanced capability for numerical reasoning and topic-aligned
numerical headline generation. Experiments show that our approach achieves
superior performance in both textual quality and numerical accuracy.
|
2502.03132
|
SPARK: A Modular Benchmark for Humanoid Robot Safety
|
cs.RO cs.SY eess.SY
|
This paper introduces the Safe Protective and Assistive Robot Kit (SPARK), a
comprehensive benchmark designed to ensure safety in humanoid autonomy and
teleoperation. Humanoid robots pose significant safety risks due to their
physical capability to interact with complex environments. The physical
structures of humanoid robots further add complexity to the design of general
safety solutions. To facilitate the safe deployment of complex robot systems,
SPARK can be used as a toolbox that comes with state-of-the-art safe control
algorithms in a modular and composable robot control framework. Users can
easily configure safety criteria and sensitivity levels to optimize the balance
between safety and performance. To accelerate humanoid safety research and
development, SPARK provides a simulation benchmark that compares safety
approaches in a variety of environments, tasks, and robot models. Furthermore,
SPARK allows quick deployment of synthesized safe controllers on real robots.
For hardware deployment, SPARK supports Apple Vision Pro (AVP) or a Motion
Capture System as external sensors, while also offering interfaces for seamless
integration with alternative hardware setups. This paper demonstrates SPARK's
capability with both simulation experiments and case studies with a Unitree G1
humanoid robot. Leveraging these advantages of SPARK, users and researchers can
significantly improve the safety of their humanoid systems as well as
accelerate relevant research. The open-source code is available at
https://github.com/intelligent-control-lab/spark.
|
2502.03134
|
Gotham Dataset 2025: A Reproducible Large-Scale IoT Network Dataset for
Intrusion Detection and Security Research
|
cs.CR cs.AI
|
In this paper, a dataset of IoT network traffic is presented. Our dataset was
generated by utilising the Gotham testbed, an emulated large-scale Internet of
Things (IoT) network designed to provide a realistic and heterogeneous
environment for network security research. The testbed includes 78 emulated IoT
devices operating on various protocols, including MQTT, CoAP, and RTSP. Network
traffic was captured in Packet Capture (PCAP) format using tcpdump, and both
benign and malicious traffic were recorded. Malicious traffic was generated
through scripted attacks, covering a variety of attack types, such as Denial of
Service (DoS), Telnet Brute Force, Network Scanning, CoAP Amplification, and
various stages of Command and Control (C&C) communication. The data were
subsequently processed in Python for feature extraction using the Tshark tool,
and the resulting data was converted to Comma Separated Values (CSV) format and
labelled. The data repository includes the raw network traffic in PCAP format
and the processed labelled data in CSV format. Our dataset was collected in a
distributed manner, where network traffic was captured separately for each IoT
device at the interface between the IoT gateway and the device. With its
diverse traffic patterns and attack scenarios, this dataset
provides a valuable resource for developing Intrusion Detection Systems and
security mechanisms tailored to complex, large-scale IoT environments. The
dataset is publicly available at Zenodo.
|
2502.03135
|
Underwater Soft Fin Flapping Motion with Deep Neural Network Based
Surrogate Model
|
cs.RO cs.LG
|
This study presents a novel framework for precise force control of
fin-actuated underwater robots by integrating a deep neural network (DNN)-based
surrogate model with reinforcement learning (RL). To address the complex
interactions with the underwater environment and the high experimental costs, a
DNN surrogate model acts as a simulator for enabling efficient training for the
RL agent. Additionally, grid-switching control is applied to select optimized
models for specific force reference ranges, improving control accuracy and
stability. Experimental results show that the RL agent, trained in the
surrogate simulation, generates complex thrust motions and achieves precise
control of a real soft fin actuator. This approach provides an efficient
control solution for fin-actuated robots in challenging underwater
environments.
|
2502.03139
|
Fast Sampling of Cosmological Initial Conditions with Gaussian Neural
Posterior Estimation
|
astro-ph.CO astro-ph.IM cs.LG
|
Knowledge of the primordial matter density field from which the large-scale
structure of the Universe emerged over cosmic time is of fundamental importance
for cosmology. However, reconstructing these cosmological initial conditions
from late-time observations is a notoriously difficult task, which requires
advanced cosmological simulators and sophisticated statistical methods to
explore a multi-million-dimensional parameter space. We show how
simulation-based inference (SBI) can be used to tackle this problem and to
obtain data-constrained realisations of the primordial dark matter density
field in a simulation-efficient way with general non-differentiable simulators.
Our method is applicable to full high-resolution dark matter $N$-body
simulations and is based on modelling the posterior distribution of the
constrained initial conditions to be Gaussian with a diagonal covariance matrix
in Fourier space. As a result, we can generate thousands of posterior samples
within seconds on a single GPU, orders of magnitude faster than existing
methods, paving the way for sequential SBI for cosmological fields.
Furthermore, we perform an analytical fit of the estimated dependence of the
covariance on the wavenumber, effectively transforming any point-estimator of
initial conditions into a fast sampler. We test the validity of our obtained
samples by comparing them to the true values with summary statistics and
performing a Bayesian consistency test.
|
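The speed-up rests on the diagonal-covariance assumption: each Fourier mode can be sampled independently. A toy sketch of this modelling choice (illustrative names, plain Python in place of an FFT-based pipeline):

```python
import random

def sample_fourier_posterior(mean, sigma, n_samples, seed=0):
    """With a Gaussian posterior and a *diagonal* covariance in
    Fourier space, every mode is sampled independently, so drawing
    thousands of realisations is embarrassingly cheap. `mean[k]` and
    `sigma[k]` are illustrative per-mode posterior parameters."""
    rng = random.Random(seed)
    return [[m + s * rng.gauss(0.0, 1.0) for m, s in zip(mean, sigma)]
            for _ in range(n_samples)]

samples = sample_fourier_posterior([1.0, -2.0, 0.5], [0.1, 0.2, 0.0], 1000)
```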
2502.03143
|
Machine Learning-Driven Student Performance Prediction for Enhancing
Tiered Instruction
|
cs.LG cs.CY
|
Student performance prediction is one of the most important subjects in
educational data mining. As a modern technology, machine learning offers
powerful capabilities in feature extraction and data modeling, providing
essential support for diverse application scenarios, as evidenced by recent
studies confirming its effectiveness in educational data mining. However,
despite extensive prediction experiments, machine learning methods have not
been effectively integrated into practical teaching strategies, hindering their
application in modern education. In addition, using a massive number of features
as input variables for machine learning algorithms often leads to information
redundancy, which can negatively impact prediction accuracy. Therefore, how to
effectively use machine learning methods to predict student performance and
integrate the prediction results with actual teaching scenarios is a worthy
research subject. To this end, this study integrates the results of machine
learning-based student performance prediction with tiered instruction, aiming
to enhance student outcomes in the target course, which is significant for the
application of educational data mining in contemporary teaching scenarios.
Specifically, we collect original educational data and perform feature
selection to reduce information redundancy. Then, the performance of five
representative machine learning methods is analyzed and discussed, with Random
Forest showing the best performance. Furthermore, based on the results of the
classification of students, tiered instruction is applied accordingly, and
different teaching objectives and contents are set for all levels of students.
The comparison of teaching outcomes between the control and experimental
classes, along with the analysis of questionnaire results, demonstrates the
effectiveness of the proposed framework.
|
2502.03144
|
Group Trip Planning Query Problem with Multimodal Journey
|
cs.MA cs.DB cs.DS
|
In Group Trip Planning (GTP) Query Problem, we are given a city road network
where a number of Points of Interest (PoI) have been marked with their
respective categories (e.g., Cafeteria, Park, Movie Theater, etc.). A group of
agents want to visit one PoI from every category from their respective starting
location and once finished, they want to reach their respective destinations.
This problem asks which PoI from every category should be chosen so that the
aggregated travel cost of the group is minimized. This problem has been studied
extensively in the last decade, and several solution approaches have been
proposed. However, to the best of our knowledge, none of the existing studies
have considered the different modalities of the journey, which makes the
problem more practical. To bridge this gap, we introduce and study the GTP
Query Problem with Multimodal Journey in this paper. Along with the other
inputs of the GTP Query Problem, we are also given the different modalities of
the journey that are available and their respective costs. Now, the problem is
not only to select the PoIs from respective categories but also to select the
modality of the journey. For this problem, we have proposed an efficient
solution approach, which has been analyzed to understand its time and space
requirements. A large number of experiments have been conducted using real-life
datasets and the results have been reported. From the results, we observe that
the PoIs and modality of journey recommended by the proposed solution approach
lead to much less time and cost than the baseline methods.
|
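A brute-force sketch of the underlying selection problem (illustrative only: the paper proposes an efficient approach, and the multimodal variant would additionally enumerate a journey mode per leg):

```python
from itertools import product

def plan_group_trip(categories, agents, dist):
    """Exhaustive sketch: choose one PoI per category so that the sum
    over agents of start -> PoIs (in category order) -> destination is
    minimal. `dist` maps unordered location pairs to travel cost."""
    def d(a, b):
        return dist[frozenset((a, b))]

    best_pick, best_cost = None, float("inf")
    for pick in product(*categories.values()):
        cost = 0.0
        for start, dest in agents:
            route = [start, *pick, dest]
            cost += sum(d(a, b) for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_pick, best_cost = pick, cost
    return best_pick, best_cost

categories = {"cafeteria": ["c1", "c2"], "park": ["p1"]}
agents = [("s", "t")]  # one agent: start s, destination t
dist = {frozenset(p): w for p, w in [
    (("s", "c1"), 1), (("s", "c2"), 5),
    (("c1", "p1"), 1), (("c2", "p1"), 1), (("p1", "t"), 1)]}
print(plan_group_trip(categories, agents, dist))
```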
2502.03146
|
Symmetry-Aware Bayesian Flow Networks for Crystal Generation
|
cs.LG cond-mat.mtrl-sci
|
The discovery of new crystalline materials is essential to scientific and
technological progress. However, traditional trial-and-error approaches are
inefficient due to the vast search space. Recent advancements in machine
learning have enabled generative models to predict new stable materials by
incorporating structural symmetries and to condition the generation on desired
properties. In this work, we introduce SymmBFN, a novel symmetry-aware Bayesian
Flow Network (BFN) for crystalline material generation that accurately
reproduces the distribution of space groups found in experimentally observed
crystals. SymmBFN substantially improves efficiency, generating stable
structures at least 50 times faster than the next-best method. Furthermore, we
demonstrate its capability for property-conditioned generation, enabling the
design of materials with tailored properties. Our findings establish BFNs as an
effective tool for accelerating the discovery of crystalline materials.
|
2502.03147
|
Scalable In-Context Learning on Tabular Data via Retrieval-Augmented
Large Language Models
|
cs.CL cs.AI
|
Recent studies have shown that large language models (LLMs), when customized
with post-training on tabular data, can acquire general tabular in-context
learning (TabICL) capabilities. These models are able to transfer effectively
across diverse data schemas and different task domains. However, existing
LLM-based TabICL approaches are constrained to few-shot scenarios due to the
sequence length limitations of LLMs, as tabular instances represented in plain
text consume substantial tokens. To address this limitation and enable scalable
TabICL for any data size, we propose retrieval-augmented LLMs tailored to
tabular data. Our approach incorporates a customized retrieval module, combined
with retrieval-guided instruction-tuning for LLMs. This enables LLMs to
effectively leverage larger datasets, achieving significantly improved
performance across 69 widely recognized datasets and demonstrating promising
scaling behavior. Extensive comparisons with state-of-the-art tabular models
reveal that, while LLM-based TabICL still lags behind well-tuned numeric models
in overall performance, it uncovers powerful algorithms under limited contexts,
enhances ensemble diversity, and excels on specific datasets. These unique
properties underscore the potential of language as a universal and accessible
interface for scalable tabular data learning.
|
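The retrieval idea can be sketched in a few lines (a toy version with illustrative names; the paper's customized retrieval module is more sophisticated):

```python
def retrieve_examples(query, table, k=3):
    """Nearest-neighbour retrieval sketch: pick the k labelled rows
    closest to the query row under Euclidean distance on numeric
    features, to be serialised into the LLM prompt as in-context
    examples instead of the whole table."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(table, key=lambda row: d2(row[0], query))[:k]

table = [([0.0, 0.0], "A"), ([1.0, 1.0], "B"),
         ([0.1, 0.0], "A"), ([5.0, 5.0], "C")]
print(retrieve_examples([0.0, 0.1], table, k=2))
```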
2502.03148
|
Abnormal Mutations: Evolution Strategies Don't Require Gaussianity
|
cs.NE
|
The mutation process in evolution strategies has been interlinked with the
normal distribution since its inception. Many lines of reasoning have been
given for this strong dependency, ranging from maximum entropy arguments to the
need for isotropy. However, some theoretical results suggest that other
distributions might lead to similar local convergence properties. This paper
empirically shows that a wide range of evolution strategies, from the
(1+1)-ES to CMA-ES, show comparable optimization performance when using a
mutation distribution other than the standard Gaussian. Replacing it with,
e.g., uniformly distributed mutations, does not deteriorate the performance of
ES, when using the default adaptation mechanism for the strategy parameters. We
observe that these results hold not only for the sphere model but also for a
wider range of benchmark problems.
|
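A minimal (1+1)-ES sketch with uniform instead of Gaussian mutations and a 1/5th success rule for step-size adaptation (illustrative; the paper's experiments cover standard ES variants up to CMA-ES):

```python
import random

def one_plus_one_es(f, x, step=1.0, iters=3000, seed=0):
    """(1+1)-ES with uniform (non-Gaussian) mutations and a 1/5th
    success rule for the step size: widen the step on success,
    shrink it on failure."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            step *= 1.5       # success: widen
        else:
            step *= 0.9       # failure: shrink
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, val = one_plus_one_es(sphere, [5.0, -3.0])
print(val)  # far below the starting value of 34.0
```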
2502.03159
|
PICBench: Benchmarking LLMs for Photonic Integrated Circuits Design
|
cs.LG cs.AR
|
While large language models (LLMs) have shown remarkable potential in
automating various tasks in digital chip design, the field of Photonic
Integrated Circuits (PICs)-a promising solution to advanced chip
designs-remains relatively unexplored in this context. The design of PICs is
time-consuming and prone to errors due to the extensive and repetitive nature
of code involved in photonic chip design. In this paper, we introduce PICBench,
the first benchmarking and evaluation framework specifically designed to
automate PIC design generation using LLMs, where the generated output takes the
form of a netlist. Our benchmark consists of dozens of meticulously crafted PIC
design problems, spanning from fundamental device designs to more complex
circuit-level designs. It automatically evaluates both the syntax and
functionality of generated PIC designs by comparing simulation outputs with
expert-written solutions, leveraging an open-source simulator. We evaluate a
range of existing LLMs, while also conducting comparative tests on various
prompt engineering techniques to enhance LLM performance in automated PIC
design. The results reveal the challenges and potential of LLMs in the PIC
design domain, offering insights into the key areas that require further
research and development to optimize automation in this field. Our benchmark
and evaluation code are available at https://github.com/PICDA/PICBench.
|
2502.03162
|
Low-Complexity Cram\'er-Rao Lower Bound and Sum Rate Optimization in
ISAC Systems
|
cs.IT eess.SP math.IT
|
While the Cram\'er-Rao lower bound (CRLB) is an important metric for sensing functions in
integrated sensing and communications (ISAC) designs, its optimization usually
involves a computationally expensive solution such as semidefinite relaxation.
In this paper, we aim to develop a low-complexity yet efficient algorithm for
CRLB optimization. We focus on a beamforming design that maximizes the weighted
sum between the communications sum rate and the sensing CRLB, subject to a
transmit power constraint. Given the non-convexity of this problem, we propose
a novel method that combines successive convex approximation (SCA) with a
shifted generalized power iteration (SGPI) approach, termed SCA-SGPI. The SCA
technique is utilized to approximate the non-convex objective function with
convex surrogates, while the SGPI efficiently solves the resulting quadratic
subproblems. Simulation results demonstrate that the proposed SCA-SGPI
algorithm not only achieves superior tradeoff performance compared to existing
methods but also significantly reduces computational time, making it a promising
solution for practical ISAC applications.
|
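The power-iteration core of such a scheme can be sketched as follows (illustrative only; the paper's SGPI operates on the quadratic subproblems produced by SCA, not on a raw symmetric matrix):

```python
def shifted_power_iteration(A, shift=1.0, iters=200):
    """Power iteration on A + shift*I for symmetric A: a large enough
    shift makes the iteration matrix positive definite, and the
    iterate converges to the dominant eigenvector of A. Returns the
    Rayleigh quotient of A and the eigenvector estimate."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) + shift * v[i]
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(vi * ai for vi, ai in zip(v, Av))
    return lam, v

lam, v = shifted_power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(lam)  # ~3.0, the dominant eigenvalue
```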
2502.03163
|
Signature Reconstruction from Randomized Signatures
|
math.CA cs.LG math.PR stat.ML
|
Controlled ordinary differential equations driven by continuous bounded
variation curves can be considered a continuous time analogue of recurrent
neural networks for the construction of expressive features of the input
curves. We ask to what extent well-known signature features of such curves
can be reconstructed from controlled ordinary differential equations with
(untrained) random vector fields. The answer turns out to be algebraically
involved, but essentially the number of signature features, which can be
reconstructed from the non-linear flow of the controlled ordinary differential
equation, is exponential in its hidden dimension, when the vector fields are
chosen to be neural with depth two. Moreover, we characterize a general linear
independence condition on arbitrary vector fields, under which the signature
features up to some fixed order can always be reconstructed. Algebraically
speaking, this complements in a quantitative manner several well-known results
from the theory of Lie algebras of vector fields and puts them in a context of
machine learning.
|
2502.03183
|
MaxInfo: A Training-Free Key-Frame Selection Method Using Maximum Volume
for Enhanced Video Understanding
|
cs.CV cs.LG
|
Modern Video Large Language Models (VLLMs) often rely on uniform frame
sampling for video understanding, but this approach frequently fails to capture
critical information due to frame redundancy and variations in video content.
We propose MaxInfo, a training-free method based on the maximum volume
principle, which selects and retains the most representative frames from the
input video. By maximizing the geometric volume formed by selected embeddings,
MaxInfo ensures that the chosen frames cover the most informative regions of
the embedding space, effectively reducing redundancy while preserving
diversity. This method enhances the quality of input representations and
improves long video comprehension performance across benchmarks. For instance,
MaxInfo achieves a 3.28% improvement on LongVideoBench and a 6.4% improvement
on EgoSchema for LLaVA-Video-7B. It also achieves a 3.47% improvement for
LLaVA-Video-72B. The approach is simple to implement and works with existing
VLLMs without the need for additional training, making it a practical and
effective alternative to traditional uniform sampling methods.
|
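The maximum-volume principle admits a simple greedy reading (a sketch under the assumption that frames are picked one at a time; the paper's exact procedure may differ):

```python
def select_max_volume(frames, k):
    """Greedy volume maximisation: repeatedly pick the frame embedding
    with the largest component orthogonal to the span of the frames
    already chosen, i.e. the one that grows the volume of the selected
    set the most. Returns sorted indices of the chosen frames."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    selected, basis = [], []   # basis: orthonormalised picks
    for _ in range(min(k, len(frames))):
        best_i, best_res, best_vec = None, -1.0, None
        for i, f in enumerate(frames):
            if i in selected:
                continue
            r = list(f)
            for b in basis:    # subtract components along picks
                c = dot(r, b)
                r = [x - c * y for x, y in zip(r, b)]
            res = dot(r, r) ** 0.5
            if res > best_res:
                best_i, best_res, best_vec = i, res, r
        if best_res <= 1e-12:
            break              # no remaining frame adds volume
        selected.append(best_i)
        basis.append([x / best_res for x in best_vec])
    return sorted(selected)

frames = [[1, 0, 0], [0.99, 0.1, 0], [0, 1, 0], [0, 0, 2]]
print(select_max_volume(frames, 2))  # the near-duplicate frame 1 is skipped
```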
2502.03188
|
Euska\~nolDS: A Naturally Sourced Corpus for Basque-Spanish
Code-Switching
|
cs.CL cs.AI
|
Code-switching (CS) remains a significant challenge in Natural Language
Processing (NLP), mainly due to a lack of relevant data. In the context of the
contact between the Basque and Spanish languages in the north of the Iberian
Peninsula, CS frequently occurs in both formal and informal spontaneous
interactions. However, resources to analyse this phenomenon and support the
development and evaluation of models capable of understanding and generating
code-switched language for this language pair are almost non-existent. We
introduce a first approach to develop a naturally sourced corpus for
Basque-Spanish code-switching. Our methodology consists of identifying CS texts
from previously available corpora using language identification models, which
are then manually validated to obtain a reliable subset of CS instances. We
present the properties of our corpus and make it available under the name
Euska\~nolDS.
|
2502.03198
|
SimSort: A Powerful Framework for Spike Sorting by Large-Scale
Electrophysiology Simulation
|
q-bio.NC cs.LG
|
Spike sorting is an essential process in neural recording, which identifies
and separates electrical signals from individual neurons recorded by electrodes
in the brain, enabling researchers to study how specific neurons communicate
and process information. Although there exist a number of spike sorting methods
which have contributed to significant neuroscientific breakthroughs, many are
heuristically designed, making it challenging to verify their correctness due
to the difficulty of obtaining ground truth labels from real-world neural
recordings. In this work, we explore a data-driven, deep learning-based
approach. We begin by creating a large-scale dataset through electrophysiology
simulations using biologically realistic computational models. We then present
\textbf{SimSort}, a pretraining framework for spike sorting. Remarkably, when
trained on our simulated dataset, SimSort demonstrates strong zero-shot
generalization to real-world spike sorting tasks, significantly outperforming
existing methods. Our findings underscore the potential of data-driven
techniques to enhance the reliability and scalability of spike sorting in
experimental neuroscience.
|
2502.03199
|
Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large
Language Models
|
cs.CL cs.AI
|
Despite their impressive capacities, Large language models (LLMs) often
struggle with the hallucination issue of generating inaccurate or fabricated
content even when they possess correct knowledge. In this paper, we extend the
exploration of the correlation between hidden-state prediction changes and
output factuality to a deeper, token-wise level. Based on these insights, we
propose cross-layer Entropy eNhanced Decoding (END), a decoding method that
mitigates hallucinations without requiring extra training. END leverages inner
probability changes across layers to individually quantify the factual
knowledge required for each candidate token, and adjusts the final predicting
distribution to prioritize tokens with higher factuality. Experiments on both
hallucination and QA benchmarks demonstrate that END significantly enhances the
truthfulness and informativeness of generated content while maintaining robust
QA accuracy. Moreover, our work provides a deeper perspective on understanding
the correlations between inherent knowledge and output factuality.
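As a rough illustration of the entropy quantity such cross-layer analysis builds on (a hand-rolled sketch; the function names, the per-token layer trajectories, and the normalization over layers are illustrative assumptions, not the paper's actual END formulation):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def cross_layer_entropy(layer_probs):
    """Entropy of a candidate token's normalized probability trajectory
    across layers. A token whose probability settles early yields a
    near-uniform trajectory (high entropy); one that only spikes in the
    final layer yields a peaked trajectory (low entropy)."""
    total = sum(layer_probs)
    return entropy([p / total for p in layer_probs])

stable  = [0.30, 0.32, 0.31, 0.33]   # probability settled across layers
spiking = [0.01, 0.01, 0.02, 0.90]   # probability emerges only at the end
```

Under this toy measure, `cross_layer_entropy(stable)` exceeds `cross_layer_entropy(spiking)`, giving one way to score how consistently a token's probability is supported by inner layers.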
|
2502.03200
|
CORTEX: A Cost-Sensitive Rule and Tree Extraction Method
|
cs.AI cs.LG
|
Tree-based and rule-based machine learning models play pivotal roles in
explainable artificial intelligence (XAI) due to their unique ability to
provide explanations in the form of tree or rule sets that are easily
understandable and interpretable, making them essential for applications in
which trust in model decisions is necessary. These transparent models are
typically used in surrogate modeling, a post-hoc XAI approach for explaining
the logic of black-box models, enabling users to comprehend and trust complex
predictive systems while maintaining competitive performance. This study
proposes the Cost-Sensitive Rule and Tree Extraction (CORTEX) method, a novel
rule-based XAI algorithm grounded in the multi-class cost-sensitive decision
tree (CSDT) method. The original version of the CSDT is extended to
classification problems with more than two classes by introducing the concept of
an n-dimensional class-dependent cost matrix. The performance of CORTEX as a
rule-extractor XAI method is compared to other post-hoc tree and rule
extraction methods across several datasets with different numbers of classes.
Several quantitative evaluation metrics are employed to assess the
explainability of generated rule sets. Our findings demonstrate that CORTEX is
competitive with other tree-based methods and can be superior to other
rule-based methods across different datasets. The extracted rule sets suggest
the advantages of using the CORTEX method over other methods by producing
smaller rule sets with shorter rules on average across datasets with a diverse
number of classes. Overall, the results underscore the potential of CORTEX as a
powerful XAI tool for scenarios that require the generation of clear,
human-understandable rules while maintaining good predictive performance.
|
2502.03201
|
SpaceGNN: Multi-Space Graph Neural Network for Node Anomaly Detection
with Extremely Limited Labels
|
cs.LG
|
Node Anomaly Detection (NAD) has gained significant attention in the deep
learning community due to its diverse applications in real-world scenarios.
Existing NAD methods primarily embed graphs within a single Euclidean space,
while overlooking the potential of non-Euclidean spaces. Besides, to address
the prevalent issue of limited supervision in real NAD tasks, previous methods
tend to leverage synthetic data to collect auxiliary information, which is not
an effective solution as shown in our experiments. To overcome these
challenges, we introduce a novel SpaceGNN model designed for NAD tasks with
extremely limited labels. Specifically, we provide deeper insights into a
task-relevant framework by empirically analyzing the benefits of different
spaces for node representations, based on which, we design a Learnable Space
Projection function that effectively encodes nodes into suitable spaces.
Besides, we introduce the concept of weighted homogeneity, which we empirically
and theoretically validate as an effective coefficient during information
propagation. This concept inspires the design of the Distance Aware Propagation
module. Furthermore, we propose the Multiple Space Ensemble module, which
extracts comprehensive information for NAD under conditions of extremely
limited supervision. Our findings indicate that this module is more beneficial
than data augmentation techniques for NAD. Extensive experiments conducted on 9
real datasets confirm the superiority of SpaceGNN, which outperforms the best
rival by an average of 8.55% in AUC and 4.31% in F1 scores. Our code is
available at https://github.com/xydong127/SpaceGNN.
|
2502.03202
|
Low-cost analog signal chain for transmit-receive circuits of passive
induction-based resonators
|
eess.SP cs.SY eess.SY
|
Passive wireless sensors are crucial in modern medical and industrial
settings to monitor procedures and conditions. We demonstrate a circuit to
inductively excite passive resonators and to conduct their decaying signal
response to a low noise amplifier. Two design variations of a generic
transmit-receive signal chain are proposed, measured, and described in detail
for the purpose of facilitating replication. Instrumentation and design aim to
be scalable for multi-channel array configurations, using either off-the-shelf
class-D audio amplifiers or a custom full H-bridge. Measurements are conducted
on miniature magneto-mechanical resonators in the ultra low frequency range to
enable sensing and tracking applications of such devices in different
environments.
|
2502.03206
|
A Unified and General Humanoid Whole-Body Controller for Fine-Grained
Locomotion
|
cs.RO cs.AI
|
Locomotion is a fundamental skill for humanoid robots. However, most existing
works treat locomotion as a single, rigid, unextendable, and passive movement,
which limits the kinematic capabilities of humanoid robots. In contrast, humans
possess versatile athletic abilities: running, jumping, hopping, and finely
adjusting walking parameters such as frequency and foot height. In this paper,
we investigate solutions to bring such versatility into humanoid locomotion and
thereby propose HUGWBC: a unified and general humanoid whole-body controller
for fine-grained locomotion. By designing a general command space in the aspect
of tasks and behaviors, along with advanced techniques like symmetrical loss
and intervention training for learning a whole-body humanoid controlling policy
in simulation, HugWBC enables real-world humanoid robots to produce various
natural gaits, including walking (running), jumping, standing, and hopping,
with customizable parameters such as frequency, foot swing height, further
combined with different body height, waist rotation, and body pitch, all in one
single policy. Beyond locomotion, HUGWBC also supports real-time interventions
from external upper-body controllers like teleoperation, enabling
loco-manipulation while maintaining precise control under any locomotive
behavior. Our experiments validate the high tracking accuracy and robustness of
HUGWBC with/without upper-body intervention for all commands, and we further
provide an in-depth analysis of how the various commands affect humanoid
movement and offer insights into the relationships between these commands. To
our knowledge, HugWBC is the first humanoid whole-body controller that supports
such fine-grained locomotion behaviors with high robustness and flexibility.
|
2502.03207
|
MotionAgent: Fine-grained Controllable Video Generation via Motion Field
Agent
|
cs.CV cs.GR
|
We propose MotionAgent, enabling fine-grained motion control for text-guided
image-to-video generation. The key technique is the motion field agent that
converts motion information in text prompts into explicit motion fields,
providing flexible and precise motion guidance. Specifically, the agent
extracts the object movement and camera motion described in the text and
converts them into object trajectories and camera extrinsics, respectively. An
analytical optical flow composition module integrates these motion
representations in 3D space and projects them into a unified optical flow. An
optical flow adapter takes the flow to control the base image-to-video
diffusion model for generating fine-grained controlled videos. The significant
improvement in the Video-Text Camera Motion metrics on VBench indicates that
our method achieves precise control over camera motion. We construct a subset
of VBench to evaluate the alignment of motion information in the text and the
generated video, outperforming other advanced models on motion generation
accuracy.
|
2502.03210
|
From Kernels to Features: A Multi-Scale Adaptive Theory of Feature
Learning
|
cond-mat.dis-nn cs.LG stat.ML
|
Theoretically describing feature learning in neural networks is crucial for
understanding their expressive power and inductive biases, motivating various
approaches. Some approaches describe network behavior after training through a
simple change in kernel scale from initialization, resulting in a
generalization power comparable to a Gaussian process. Conversely, in other
approaches training results in the adaptation of the kernel to the data,
involving complex directional changes to the kernel. While these approaches
capture different facets of network behavior, their relationship and respective
strengths across scaling regimes remains an open question. This work presents a
theoretical framework of multi-scale adaptive feature learning bridging these
approaches. Using methods from statistical mechanics, we derive analytical
expressions for network output statistics which are valid across scaling
regimes and in the continuum between them. A systematic expansion of the
network's probability distribution reveals that mean-field scaling requires
only a saddle-point approximation, while standard scaling necessitates
additional correction terms. Remarkably, we find across regimes that kernel
adaptation can be reduced to an effective kernel rescaling when predicting the
mean network output of a linear network. However, even in this case, the
multi-scale adaptive approach captures directional feature learning effects,
providing richer insights than what could be recovered from a rescaling of the
kernel alone.
|
2502.03214
|
iVISPAR -- An Interactive Visual-Spatial Reasoning Benchmark for VLMs
|
cs.CL cs.AI cs.CV
|
Vision-Language Models (VLMs) are known to struggle with spatial reasoning
and visual alignment. To help overcome these limitations, we introduce iVISPAR,
an interactive multi-modal benchmark designed to evaluate the spatial reasoning
capabilities of VLMs acting as agents. iVISPAR is based on a variant of the
sliding tile puzzle-a classic problem that demands logical planning, spatial
awareness, and multi-step reasoning. The benchmark supports visual 2D, 3D, and
text-based input modalities, enabling comprehensive assessments of VLMs'
planning and reasoning skills. We evaluate a broad suite of state-of-the-art
open-source and closed-source VLMs, comparing their performance while also
providing optimal path solutions and a human baseline to assess the task's
complexity and feasibility for humans. Results indicate that while some VLMs
perform well on simple spatial tasks, they encounter difficulties with more
complex configurations and problem properties. Notably, while VLMs generally
perform better in 2D vision compared to 3D or text-based representations, they
consistently fall short of human performance, illustrating the persistent
challenge of visual alignment. This highlights critical gaps in current VLM
capabilities, highlighting their limitations in achieving human-level
cognition.
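For context, the sliding tile puzzle underlying the benchmark admits optimal solutions via breadth-first search, which is one standard way reference paths like those the benchmark provides can be computed (a generic 8-puzzle sketch, not the benchmark's own code):

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank on a 3x3 board

def neighbors(state):
    """States reachable by sliding one adjacent tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def optimal_moves(start):
    """Breadth-first search: length of a shortest solution, or None."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == GOAL:
            return depth
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None  # unsolvable configuration
```

For instance, `optimal_moves((1, 2, 3, 4, 5, 6, 0, 7, 8))` finds the two-slide solution; such optimal path lengths give the yardstick against which agent and human performance can be measured.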
|
2502.03218
|
Data Dams: A Novel Framework for Regulating and Managing Data Flow in
Large-Scale Systems
|
cs.IR cs.DB cs.DC
|
In the era of big data, managing dynamic data flows efficiently is crucial as
traditional storage models struggle with real-time regulation and risk
overflow. This paper introduces Data Dams, a novel framework designed to
optimize data inflow, storage, and outflow by dynamically adjusting flow rates
to prevent congestion while maximizing resource utilization. Inspired by
physical dam mechanisms, the framework employs intelligent sluice controls and
predictive analytics to regulate data flow based on system conditions such as
bandwidth availability, processing capacity, and security constraints.
Simulation results demonstrate that the Data Dam significantly reduces average
storage levels (371.68 vs. 426.27 units) and increases total outflow (7999.99
vs. 7748.76 units) compared to static baseline models. By ensuring stable and
adaptive outflow rates under fluctuating data loads, this approach enhances
system efficiency, mitigates overflow risks, and outperforms existing static
flow control strategies. The proposed framework presents a scalable solution
for dynamic data management in large-scale distributed systems, paving the way
for more resilient and efficient real-time processing architectures.
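A minimal sketch of the core intuition, assuming a dam-like buffer whose outflow is proportional to its fill level; this is an illustrative stand-in, not the paper's actual sluice-control or predictive-analytics logic:

```python
def peak_level(inflows, capacity=100.0, gain=0.3, static_rate=5.0,
               adaptive=True):
    """Toy buffer fed by a stream of inflows. With adaptive=True the
    outflow is proportional to the current fill level (a stand-in for
    dynamic sluice control); otherwise a fixed rate is drained, mimicking
    a static baseline. Returns the peak fill level reached."""
    level, peak = 0.0, 0.0
    for inflow in inflows:
        level += inflow
        drain = gain * level if adaptive else static_rate
        level = max(0.0, level - drain)
        peak = max(peak, level)
    return peak

bursty = [20.0] * 40                      # sustained burst of data
adaptive_peak = peak_level(bursty, adaptive=True)
static_peak = peak_level(bursty, adaptive=False)
```

Under this sustained burst, the adaptive drain settles well below the 100-unit capacity, while the static drain lets the buffer grow without bound, mirroring the paper's qualitative comparison of adaptive versus static flow control.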
|
2502.03220
|
Mitigating Language Bias in Cross-Lingual Job Retrieval: A Recruitment
Platform Perspective
|
cs.CL
|
Understanding the textual components of resumes and job postings is critical
for improving job-matching accuracy and optimizing job search systems in online
recruitment platforms. However, existing works primarily focus on analyzing
individual components within this information, requiring multiple specialized
tools to analyze each aspect. Such disjointed methods could potentially hinder
overall generalizability in recruitment-related text processing. Therefore, we
propose a unified sentence encoder that uses a multi-task dual-encoder
framework to jointly learn multiple recruitment-text components. The
results show that our method outperforms other state-of-the-art
models, despite its smaller model size. Moreover, we propose a novel metric,
Language Bias Kullback-Leibler Divergence (LBKL), to evaluate language bias in
the encoder, demonstrating significant bias reduction and superior
cross-lingual performance.
|
2502.03221
|
Information Theoretic Analysis of PUF-Based Tamper Protection
|
cs.IT cs.CR math.IT
|
Physical Unclonable Functions (PUFs) enable physical tamper protection for
high-assurance devices without needing a continuous power supply that is active
over the entire lifetime of the device. Several methods for PUF-based tamper
protection have been proposed together with practical quantization and error
correction schemes. In this work we take a step back from the implementation to
analyze theoretical properties and limits. We apply zero leakage output
quantization to existing quantization schemes and minimize the reconstruction
error probability under zero leakage. We apply wiretap coding within a helper
data algorithm to enable a reliable key reconstruction for the legitimate user
while guaranteeing a selectable reconstruction complexity for an attacker,
analogously to the security level for a cryptographic algorithm for the
attacker models considered in this work. We present lower bounds on the
achievable key rates depending on the attacker's capabilities in the asymptotic
and finite blocklength regime to give fundamental security guarantees even if
the attacker gets partial information about the PUF response and the helper
data. Furthermore, we present converse bounds on the number of PUF cells. Our
results show for example that for a practical scenario one needs at least 459
PUF cells using 3 bit quantization to achieve a security level of 128 bit.
|
2502.03227
|
Adversarial Dependence Minimization
|
cs.LG
|
Many machine learning techniques rely on minimizing the covariance between
output feature dimensions to extract minimally redundant representations from
data. However, these methods do not eliminate all dependencies/redundancies, as
linearly uncorrelated variables can still exhibit nonlinear relationships. This
work provides a differentiable and scalable algorithm for dependence
minimization that goes beyond linear pairwise decorrelation. Our method employs
an adversarial game where small networks identify dependencies among feature
dimensions, while the encoder exploits this information to reduce dependencies.
We provide empirical evidence of the algorithm's convergence and demonstrate
its utility in three applications: extending PCA to nonlinear decorrelation,
improving the generalization of image classification methods, and preventing
dimensional collapse in self-supervised representation learning.
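The motivating fact that linear decorrelation misses nonlinear dependence is easy to check numerically: below, y is a deterministic function of x, yet their covariance is essentially zero (a self-contained numerical check, not the paper's algorithm):

```python
import random

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(100_000)]
y = [v * v for v in x]          # y is fully determined by x

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

# cov(x, y) is ~0 because E[x^3] = 0 for a symmetric distribution,
# so a decorrelation objective alone would never penalize this pair.
```

Here `cov(x, y)` lands near zero while the dependence is perfect, which is exactly the gap that an adversarial dependence-minimization objective is meant to close.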
|
2502.03228
|
GARAD-SLAM: 3D GAussian splatting for Real-time Anti Dynamic SLAM
|
cs.RO cs.CV
|
The 3D Gaussian Splatting (3DGS)-based SLAM system has garnered widespread
attention due to its excellent performance in real-time high-fidelity
rendering. However, in real-world environments with dynamic objects, existing
3DGS-based SLAM systems often face mapping errors and tracking drift issues. To
address these problems, we propose GARAD-SLAM, a real-time 3DGS-based SLAM
system tailored for dynamic scenes. In terms of tracking, unlike traditional
methods, we directly perform dynamic segmentation on Gaussians and map them
back to the front-end to obtain dynamic point labels through a Gaussian pyramid
network, achieving precise dynamic removal and robust tracking. For mapping, we
impose rendering penalties on dynamically labeled Gaussians, which are updated
through the network, to avoid irreversible erroneous removal caused by simple
pruning. Our results on real-world datasets demonstrate that our method is
competitive in tracking compared to baseline methods, generating fewer
artifacts and higher-quality reconstructions in rendering.
|
2502.03229
|
A Unified Framework for Semi-Supervised Image Segmentation and
Registration
|
cs.CV
|
Semi-supervised learning, which leverages both annotated and unannotated
data, is an efficient approach for medical image segmentation, where obtaining
annotations for the whole dataset is time-consuming and costly. Traditional
semi-supervised methods primarily focus on extracting features and learning
data distributions from unannotated data to enhance model training. In this
paper, we introduce a novel approach incorporating an image registration model
to generate pseudo-labels for the unannotated data, producing more
geometrically correct pseudo-labels to improve the model training. Our method
was evaluated on a 2D brain data set, showing excellent performance even using
only 1\% of the annotated data. The results show that our approach outperforms
conventional semi-supervised segmentation methods (e.g. teacher-student model),
particularly in a low percentage of annotation scenario. GitHub:
https://github.com/ruizhe-l/UniSegReg.
|
2502.03230
|
Efficient Vision Language Model Fine-tuning for Text-based Person
Anomaly Search
|
cs.CV cs.MM
|
This paper presents the HFUT-LMC team's solution to the WWW 2025 challenge on
Text-based Person Anomaly Search (TPAS). The primary objective of this
challenge is to accurately identify pedestrians exhibiting either normal or
abnormal behavior within a large library of pedestrian images. Unlike
traditional video analysis tasks, TPAS significantly emphasizes understanding
and interpreting the subtle relationships between text descriptions and visual
data. The complexity of this task lies in the model's need to not only match
individuals to text descriptions in massive image datasets but also accurately
differentiate between search results when faced with similar descriptions. To
overcome these challenges, we introduce the Similarity Coverage Analysis (SCA)
strategy to address the recognition difficulty caused by similar text
descriptions. This strategy effectively enhances the model's capacity to manage
subtle differences, thus improving both the accuracy and reliability of the
search. Our proposed solution demonstrated excellent performance in this
challenge.
|
2502.03231
|
The Other Side of the Coin: Unveiling the Downsides of Model Aggregation
in Federated Learning from a Layer-peeled Perspective
|
cs.LG cs.AI
|
In federated learning (FL), model aggregation is a critical step by which
multiple clients share their knowledge with one another. However, it is also
widely recognized that the aggregated model, when sent back to each client,
performs poorly on local data until after several rounds of local training.
This temporary performance drop can potentially slow down the convergence of
the FL model. Most research in FL regards this performance drop as an inherent
cost of knowledge sharing among clients and does not give it special attention.
While some studies directly focus on designing techniques to alleviate the
issue, an in-depth investigation of the reasons behind this performance drop
has yet to be conducted. To address this gap, we conduct a layer-peeled analysis
of model aggregation across various datasets and model architectures. Our
findings reveal that the performance drop can be attributed to two major
consequences of the aggregation process: (1) it disrupts feature variability
suppression in deep neural networks (DNNs), and (2) it weakens the coupling
between features and subsequent parameters. Based on these findings, we propose
several simple yet effective strategies to mitigate the negative impacts of
model aggregation while still enjoying the benefit it brings. To the best of
our knowledge, our work is the first to conduct a layer-peeled analysis of
model aggregation, potentially paving the way for the development of more
effective FL algorithms.
|
2502.03232
|
JAMMit! Monolithic 3D-Printing of a Bead Jamming Soft Pneumatic Arm
|
cs.RO
|
3D-printed bellow soft pneumatic arms are widely adopted for their flexible
design, ease of fabrication, and large deformation capabilities. However, their
low stiffness limits their real-world applications. Although several methods
exist to enhance the stiffness of soft actuators, many involve complex
manufacturing processes not in line with modern goals of monolithic and
automated additive manufacturing. With its simplicity, bead-jamming represents
a simple and effective solution to these challenges. This work introduces a
method for monolithic printing of a bellow soft pneumatic arm, integrating a
tendon-driven central spine of bowl-shaped beads. We experimentally
characterized the arm's range of motion in both unjammed and jammed states, as
well as its stiffness under various actuation and jamming conditions. As a
result, we provide an optimal jamming policy as a trade-off between preserving
the range of motion and maximizing stiffness. The proposed design was further
demonstrated in a switch-toggling task, showing its potential for practical
applications.
|
2502.03236
|
Pioneer: Physics-informed Riemannian Graph ODE for Entropy-increasing
Dynamics
|
cs.LG
|
Dynamic interacting system modeling is important for understanding and
simulating real world systems. The system is typically described as a graph,
where multiple objects dynamically interact with each other and evolve over
time. In recent years, graph Ordinary Differential Equations (ODEs) have
received increasing research attention. While achieving encouraging results, existing
solutions prioritize the traditional Euclidean space, and neglect the intrinsic
geometry of the system and physics laws, e.g., the principle of entropy
increase. These limitations motivate us to rethink the system dynamics
from a fresh perspective of Riemannian geometry, and pose a more realistic
problem of physics-informed dynamic system modeling, considering the underlying
geometry and physics law for the first time. In this paper, we present a novel
physics-informed Riemannian graph ODE for a wide range of entropy-increasing
dynamic systems (termed as Pioneer). In particular, we formulate a differential
system on the Riemannian manifold, where a manifold-valued graph ODE is
governed by the proposed constrained Ricci flow, and a manifold preserving
Gyro-transform aware of system geometry. Theoretically, we report the provable
entropy non-decreasing of our formulation, obeying the physics laws. Empirical
results show the superiority of Pioneer on real datasets.
|
2502.03238
|
Long-tailed Medical Diagnosis with Relation-aware Representation
Learning and Iterative Classifier Calibration
|
cs.CV cs.AI cs.LG cs.MM
|
Recently computer-aided diagnosis has demonstrated promising performance,
effectively alleviating the workload of clinicians. However, the inherent
sample imbalance among different diseases biases algorithms toward the
majority categories, resulting in poor performance on rare categories. Existing
works formulated this challenge as a long-tailed problem and attempted to
tackle it by decoupling the feature representation and classification. Yet, due
to the imbalanced distribution and limited samples from tail classes, these
works are prone to biased representation learning and insufficient classifier
calibration. To tackle these problems, we propose a new Long-tailed Medical
Diagnosis (LMD) framework for balanced medical image classification on
long-tailed datasets. In the initial stage, we develop a Relation-aware
Representation Learning (RRL) scheme to boost the representation ability by
encouraging the encoder to capture intrinsic semantic features through
different data augmentations. In the subsequent stage, we propose an Iterative
Classifier Calibration (ICC) scheme to calibrate the classifier iteratively.
This is achieved by generating a large number of balanced virtual features and
fine-tuning the encoder in an Expectation-Maximization manner. The proposed
ICC compensates for minority categories to facilitate unbiased classifier
optimization while maintaining the diagnostic knowledge in majority classes.
Comprehensive experiments on three public long-tailed medical datasets
demonstrate that our LMD framework significantly surpasses state-of-the-art
approaches. The source code can be accessed at
https://github.com/peterlipan/LMD.
|
2502.03244
|
Analysis of Value Iteration Through Absolute Probability Sequences
|
cs.LG
|
Value Iteration is a widely used algorithm for solving Markov Decision
Processes (MDPs). While previous studies have extensively analyzed its
convergence properties, they primarily focus on convergence with respect to the
infinity norm. In this work, we use absolute probability sequences to develop a
new line of analysis and examine the algorithm's convergence in terms of the
$L^2$ norm, offering a new perspective on its behavior and performance.
|
2502.03245
|
Calibrated Unsupervised Anomaly Detection in Multivariate Time-series
using Reinforcement Learning
|
cs.LG cs.SY eess.SP eess.SY
|
This paper investigates unsupervised anomaly detection in multivariate
time-series data using reinforcement learning (RL) in the latent space of an
autoencoder. A significant challenge is the limited availability of anomalous
data, often leading to misclassifying anomalies as normal events, thus raising
false negatives. RL can help overcome this limitation by promoting exploration
and balancing exploitation during training, effectively preventing overfitting.
Wavelet analysis is also utilized to enhance anomaly detection, enabling
time-series data decomposition into both time and frequency domains. This
approach captures anomalies at multiple resolutions, with wavelet coefficients
extracted to detect both sudden and subtle shifts in the data, thereby refining
the anomaly detection process. We calibrate the decision boundary by generating
synthetic anomalies and embedding a supervised framework within the model. This
supervised element aids the unsupervised learning process by fine-tuning the
decision boundary and increasing the model's capacity to distinguish between
normal and anomalous patterns effectively.
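The wavelet idea can be illustrated with one level of an unnormalized Haar transform, whose detail coefficients spike exactly at a sudden level shift (a toy sketch; the paper's wavelet family and multi-resolution setup may differ):

```python
def haar_level(signal):
    """One level of an unnormalized Haar transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# A flat series with one sudden level shift falling inside a pair
series = [1.0] * 7 + [5.0] * 9
_, detail = haar_level(series)
# the pair with the largest detail magnitude straddles the shift
shift_pair = max(range(len(detail)), key=lambda i: abs(detail[i]))
```

All detail coefficients are zero except the one covering the jump, so thresholding detail magnitudes localizes sudden shifts, while the approximation coefficients carry the slower trend used for subtler anomalies.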
|
2502.03251
|
RiemannGFM: Learning a Graph Foundation Model from Riemannian Geometry
|
cs.LG
|
The foundation model has heralded a new era in artificial intelligence,
pretraining a single model to offer cross-domain transferability on different
datasets. Graph neural networks excel at learning graph data, the omnipresent
non-Euclidean structure, but often lack the generalization capacity. Hence,
graph foundation model is drawing increasing attention, and recent efforts have
been made to leverage Large Language Models. On the one hand, existing studies
primarily focus on text-attributed graphs, while a wider range of real graphs
do not contain fruitful textual attributes. On the other hand, the sequential
graph description tailored for the Large Language Model neglects the structural
complexity, which is a predominant characteristic of the graph. Such
limitations motivate an important question: Can we go beyond Large Language
Models, and pretrain a universal model to learn the structural knowledge for
any graph? The answer in the language or vision domain is a shared vocabulary.
We observe that shared substructures also exist in the graph domain, thereby
opening a new opportunity for a graph foundation model with a structural
vocabulary. The key innovation is the discovery of a simple yet
effective structural vocabulary of trees and cycles, and we explore its
inherent connection to Riemannian geometry. Herein, we present a universal
pretraining model, RiemannGFM. Concretely, we first construct a novel product
bundle to incorporate the diverse geometries of the vocabulary. Then, on this
constructed space, we stack Riemannian layers where the structural vocabulary,
regardless of the specific graph, is learned on a Riemannian manifold, offering
cross-domain transferability. Extensive experiments show the effectiveness of
RiemannGFM on a diversity of real graphs.
|
2502.03252
|
A scale of conceptual orality and literacy: Automatic text
categorization in the tradition of "N\"ahe und Distanz"
|
cs.CL
|
Koch and Oesterreicher's model of "N\"ahe und Distanz" (N\"ahe = immediacy,
conceptual orality; Distanz = distance, conceptual literacy) is constantly used
in German linguistics. However, it lacks a statistical foundation for use in
corpus-linguistic analyses, even as it increasingly moves into empirical
corpus linguistics. Theoretically, it is stipulated, among other things, that
written texts can be rated on a scale of conceptual orality and literacy by
linguistic features. This article establishes such a scale based on PCA and
combines it with automatic analysis. Two corpora of New High German serve as
examples. When evaluating established features, a central finding is that
features of conceptual orality and literacy must be distinguished in order to
rank texts in a differentiated manner. The scale is also discussed with a view
to its use in corpus compilation and as a guide for analyses in larger corpora.
With its theory-driven starting point and "tailored" dimension, the approach
is, compared to Biber's Dimension 1, particularly suitable for these
supporting, controlling tasks.
|
2502.03253
|
How do Humans and Language Models Reason About Creativity? A Comparative
Analysis
|
cs.CL
|
Creativity assessment in science and engineering is increasingly based on
both human and AI judgment, but the cognitive processes and biases behind these
evaluations remain poorly understood. We conducted two experiments examining
how including example solutions with ratings impacts creativity evaluation,
using a finegrained annotation protocol where raters were tasked with
explaining their originality scores and rating the facets of remoteness
(whether the response is "far" from everyday ideas), uncommonness (whether the
response is rare), and cleverness. In Study 1, we analyzed creativity ratings
from 72 experts with formal science or engineering training, comparing those
who received example solutions with ratings (example) to those who did not (no
example). Computational text analysis revealed that, compared to experts with
examples, no-example experts used more comparative language (e.g.,
"better/worse") and emphasized solution uncommonness, suggesting they may have
relied more on memory retrieval for comparisons. In Study 2, parallel analyses
with state-of-the-art LLMs revealed that models prioritized uncommonness and
remoteness of ideas when rating originality, suggesting an evaluative process
rooted around the semantic similarity of ideas. In the example condition, while
LLM accuracy in predicting the true originality scores improved, the
correlations of remoteness, uncommonness, and cleverness with originality also
increased substantially - to upwards of 0.99 - suggesting a homogenization in
the LLMs' evaluation of the individual facets. These findings highlight
important implications for how humans and AI reason about creativity and
suggest diverging preferences for what different populations prioritize when
rating originality.
|
2502.03257
|
Efficient extraction of medication information from clinical notes: an
evaluation in two languages
|
cs.CL cs.IR
|
Objective: To evaluate the accuracy, computational cost and portability of a
new Natural Language Processing (NLP) method for extracting medication
information from clinical narratives. Materials and Methods: We propose an
original transformer-based architecture for the extraction of entities and
their relations pertaining to patients' medication regimen. First, we used this
approach to train and evaluate a model on French clinical notes, using a newly
annotated corpus from H\^opitaux Universitaires de Strasbourg. Second, the
portability of the approach was assessed by conducting an evaluation on
clinical documents in English from the 2018 n2c2 shared task. Information
extraction accuracy and computational cost were assessed by comparison with an
available method using transformers. Results: On the relation extraction task
itself, the proposed architecture achieves performance competitive with the
state of the art on both French and English (F-measures 0.82 and 0.96 vs. 0.81
and 0.95), while reducing the computational cost by a factor of 10.
End-to-end (Named Entity Recognition and Relation Extraction) F1 performance is
0.69 and 0.82 for the French and English corpora, respectively. Discussion: While an existing
system developed for English notes was deployed in a French hospital setting
with reasonable effort, we found that an alternative architecture offered
end-to-end drug information extraction with comparable extraction performance
and lower computational impact for both French and English clinical text
processing, respectively. Conclusion: The proposed architecture can be used to
extract medication information from clinical text with high performance and low
computational cost, and is consequently well suited to the typically limited IT
resources of hospitals.
|
2502.03261
|
CARROT: A Cost Aware Rate Optimal Router
|
stat.ML cs.LG cs.NI math.ST stat.TH
|
With the rapid growth in the number of Large Language Models (LLMs), there
has been a recent interest in LLM routing, or directing queries to the cheapest
LLM that can deliver a suitable response. Following this line of work, we
introduce CARROT, a Cost AwaRe Rate Optimal rouTer that can select models based
on any desired trade-off between performance and cost. Given a query, CARROT
selects a model based on estimates of models' cost and performance. Its
simplicity lends CARROT computational efficiency, while our theoretical
analysis demonstrates minimax rate-optimality in its routing performance.
Alongside CARROT, we also introduce the Smart Price-aware Routing (SPROUT)
dataset to facilitate routing on a wide spectrum of queries with the latest
state-of-the-art LLMs. Using SPROUT and prior benchmarks such as Routerbench
and open-LLM-leaderboard-v2 we empirically validate CARROT's performance
against several alternative routers.
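The routing rule sketched by the abstract, selecting a model from per-query cost and performance estimates under a desired trade-off, can be illustrated as below. Model names, scores, and the linear trade-off are illustrative assumptions; CARROT's actual estimators are learned, not hand-set.

```python
def route(estimates: dict, lam: float) -> str:
    """estimates: model name -> (est_performance, est_cost) for one query.
    lam >= 0 sets how strongly cost is penalized relative to performance."""
    return max(estimates, key=lambda m: estimates[m][0] - lam * estimates[m][1])

# hypothetical per-query estimates for three models
models = {"small": (0.70, 0.1), "medium": (0.85, 1.0), "large": (0.90, 5.0)}
```

Sweeping `lam` from 0 upward traces out the performance/cost trade-off curve: at `lam=0` the router always picks the strongest model, and as `lam` grows it shifts to progressively cheaper ones.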
|
2502.03263
|
Model Reference-Based Control with Guaranteed Predefined Performance for
Uncertain Strict-Feedback Systems
|
eess.SY cs.SY
|
To address the complexities posed by time- and state-varying uncertainties
and the computation of analytic derivatives in strict-feedback form (SFF)
systems, this study introduces a novel model reference-based control (MRBC)
framework which applies locally to each subsystem (SS), to ensure output
tracking performance within the specified transient and steady-state response
criteria. This framework includes 1) novel homogeneous adaptive estimators
(HAEs) designed to match the uncertain nonlinear SFF system to a reference
model, enabling easier analysis and control design at the SS level, and 2)
model-based homogeneous adaptive controllers enhanced by logarithmic barrier
Lyapunov functions (HAC-BLFs), intended to control the reference model provided
by HAEs in each SS, while ensuring the prescribed tracking responses under
control amplitude saturation. The inherently robust MRBC achieves uniformly
exponential stability using a generic stability connector term, which addresses
dynamic interactions between the adjacent SSs. The parameter sensitivities of
HAEs and HAC-BLFs in the MRBC framework are analyzed, focusing on the system's
robustness and responsiveness. The proposed MRBC framework is experimentally
validated through several scenarios involving an electromechanical linear
actuator system with an uncertain SFF, subjected to loading disturbance forces
ranging from 0 to 95% of its capacity.
|
2502.03264
|
General Time-series Model for Universal Knowledge Representation of
Multivariate Time-Series data
|
cs.LG
|
Universal knowledge representation is a central yet open problem for
multivariate time series (MTS) foundation models. This paper investigates this
problem from first principles and makes four contributions.
First, a new empirical finding is revealed: time series with different time
granularities (or corresponding frequency resolutions) exhibit distinct joint
distributions in the frequency domain. This implies a crucial aspect of
learning universal knowledge, one that has been overlooked by previous studies.
Second, a novel Fourier knowledge attention mechanism is proposed to enable
learning time granularity-aware representations from both the temporal and
frequency domains. Third, an autoregressive blank infilling pre-training
framework is incorporated into time series analysis for the first time, leading
to a generative-task-agnostic pre-training strategy. To this end, we develop
the General Time-series Model (GTM), a unified MTS foundation model that
addresses the limitation of contemporary time series models, which often
require token-, pre-training-, or model-level customizations for downstream
task adaptation. Fourth, extensive experiments show that GTM outperforms
state-of-the-art (SOTA) methods across all generative tasks, including
long-term forecasting, anomaly detection, and imputation.
|
2502.03266
|
ZISVFM: Zero-Shot Object Instance Segmentation in Indoor Robotic
Environments with Vision Foundation Models
|
cs.CV cs.RO
|
Service robots operating in unstructured environments must effectively
recognize and segment unknown objects to enhance their functionality.
Traditional supervised learning-based segmentation techniques require extensive
annotated datasets, which are impractical for the diversity of objects
encountered in real-world scenarios. Unseen Object Instance Segmentation (UOIS)
methods aim to address this by training models on synthetic data to generalize
to novel objects, but they often suffer from the simulation-to-reality gap.
This paper proposes a novel approach (ZISVFM) for solving UOIS by leveraging
the powerful zero-shot capability of the segment anything model (SAM) and
explicit visual representations from a self-supervised vision transformer (ViT).
The proposed framework operates in three stages: (1) generating object-agnostic
mask proposals from colorized depth images using SAM, (2) refining these
proposals using attention-based features from the self-supervised ViT to filter
non-object masks, and (3) applying K-Medoids clustering to generate point
prompts that guide SAM towards precise object segmentation. Experimental
validation on two benchmark datasets and a self-collected dataset demonstrates
the superior performance of ZISVFM in complex environments, including
hierarchical settings such as cabinets, drawers, and handheld objects. Our
source code is available at https://github.com/Yinmlmaoliang/zisvfm.
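Stage (3), picking representative pixel coordinates of a mask proposal as point prompts, can be sketched with a naive k-medoids loop as below. The data, distance, and initialization are illustrative; the paper's exact clustering setup is not reproduced here.

```python
import random

def k_medoids(points, k, iters=20, seed=0):
    """points: list of (x, y) tuples. Returns k medoid points, which can
    then serve as point prompts for a promptable segmenter."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for p in points:  # assign each point to its nearest medoid
            nearest = min(medoids,
                          key=lambda m: (p[0] - m[0]) ** 2 + (p[1] - m[1]) ** 2)
            clusters[nearest].append(p)
        # each medoid becomes the member minimizing total in-cluster distance
        new = [min(ms, key=lambda c: sum((c[0] - q[0]) ** 2 + (c[1] - q[1]) ** 2
                                         for q in ms))
               for ms in clusters.values()]
        if set(new) == set(medoids):
            break
        medoids = new
    return medoids
```

Unlike k-means centroids, medoids are guaranteed to lie on actual mask pixels, which is what makes them usable as point prompts.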
|
2502.03270
|
When Pre-trained Visual Representations Fall Short: Limitations in
Visuo-Motor Robot Learning
|
cs.RO cs.AI cs.CV cs.LG
|
The integration of pre-trained visual representations (PVRs) into visuo-motor
robot learning has emerged as a promising alternative to training visual
encoders from scratch. However, PVRs face critical challenges in the context of
policy learning, including temporal entanglement and an inability to generalise
even in the presence of minor scene perturbations. These limitations hinder
performance in tasks requiring temporal awareness and robustness to scene
changes. This work identifies these shortcomings and proposes solutions to
address them. First, we augment PVR features with temporal perception and a
sense of task completion, effectively disentangling them in time. Second, we
introduce a module that learns to selectively attend to task-relevant local
features, enhancing robustness when evaluated on out-of-distribution scenes.
Our experiments demonstrate significant performance improvements, particularly
in PVRs trained with masking objectives, and validate the effectiveness of our
enhancements in addressing PVR-specific limitations.
|
2502.03272
|
Deep Learning Pipeline for Fully Automated Myocardial Infarct
Segmentation from Clinical Cardiac MR Scans
|
eess.IV cs.AI cs.CV
|
Purpose: To develop and evaluate a deep learning-based method that allows
myocardial infarct segmentation to be performed in a fully automated way.
Materials and Methods: For this retrospective study, a cascaded framework of
two and three-dimensional convolutional neural networks (CNNs), specialized on
identifying ischemic myocardial scars on late gadolinium enhancement (LGE)
cardiac magnetic resonance (CMR) images, was trained on an in-house training
dataset consisting of 144 examinations. On a separate test dataset from the
same institution, including images from 152 examinations obtained between 2021
and 2023, a quantitative comparison between artificial intelligence (AI)-based
segmentations and manual segmentations was performed. Further, qualitative
assessment of segmentation accuracy was evaluated for both human and
AI-generated contours by two CMR experts in a blinded experiment.
Results: Excellent agreement could be found between manually and
automatically calculated infarct volumes ($\rho_c$ = 0.9). The qualitative
evaluation showed that, compared to human-based measurements, the experts
significantly (p < 0.001) more often rated the AI-based segmentations as better
representing the actual extent of infarction (33.4% AI, 25.1% human, 41.5% equal). On
the contrary, for segmentation of microvascular obstruction (MVO), manual
measurements were still preferred (11.3% AI, 55.6% human, 33.1% equal).
Conclusion: This fully-automated segmentation pipeline enables CMR infarct
size to be calculated in a very short time and without requiring any
pre-processing of the input images while matching the segmentation quality of
trained human observers. In a blinded experiment, experts preferred automated
infarct segmentations more often than manual segmentations, paving the way for
a potential clinical application.
|
2502.03274
|
A Scalable Approach to Probabilistic Neuro-Symbolic Verification
|
cs.AI
|
Neuro-Symbolic Artificial Intelligence (NeSy AI) has emerged as a promising
direction for integrating neural learning with symbolic reasoning. In the
probabilistic variant of such systems, a neural network first extracts a set of
symbols from sub-symbolic input, which are then used by a symbolic component to
reason in a probabilistic manner towards answering a query. In this work, we
address the problem of formally verifying the robustness of such NeSy
probabilistic reasoning systems, therefore paving the way for their safe
deployment in critical domains. We analyze the complexity of solving this
problem exactly, and show that it is $\mathrm{NP}^{\# \mathrm{P}}$-hard. To
overcome this issue, we propose the first approach for approximate,
relaxation-based verification of probabilistic NeSy systems. We demonstrate
experimentally that the proposed method scales exponentially better than
solver-based solutions and apply our technique to a real-world autonomous
driving dataset, where we verify a safety property under large input
dimensionalities and network sizes.
|
2502.03275
|
Token Assorted: Mixing Latent and Text Tokens for Improved Language
Model Reasoning
|
cs.CL cs.AI cs.LG cs.LO
|
Large Language Models (LLMs) excel at reasoning and planning when trained on
chainof-thought (CoT) data, where the step-by-step thought process is
explicitly outlined by text tokens. However, this results in lengthy inputs
where many words support textual coherence rather than core reasoning
information, and processing these inputs consumes substantial computation
resources. In this work, we propose a hybrid representation of the reasoning
process, where we partially abstract away the initial reasoning steps using
latent discrete tokens generated by VQ-VAE, significantly reducing the length
of reasoning traces. We explore the use of latent trace abstractions in two
scenarios: 1) training the model from scratch for the Keys-Finding Maze
problem, 2) fine-tuning LLMs on this hybrid data with an extended vocabulary
including unseen latent tokens, for both logical and mathematical reasoning
problems. To facilitate effective learning, we introduce a simple training
procedure that randomly mixes latent and text tokens, which enables fast
adaptation to new latent tokens. Our approach consistently outperforms the
baseline methods on various benchmarks.
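The random mixing step can be sketched as follows: abstract away a random-length prefix of the reasoning trace into latent tokens and keep the remaining text tokens. The one-latent-per-chunk correspondence and the token names are illustrative assumptions, not the paper's tokenizer or VQ-VAE setup.

```python
def mix_reasoning_tokens(text_tokens, latent_tokens, chunk, rng):
    """Replace the first k chunks of text reasoning tokens with their
    latent abstractions, with k drawn at random, so the model is trained
    on sequences mixing both token types.

    Assumes (for illustration) one latent token per `chunk` text tokens.
    """
    k = rng.randint(0, len(latent_tokens))
    return latent_tokens[:k] + text_tokens[k * chunk:]
```

During fine-tuning, the latent tokens extend the vocabulary, and because k varies per example the model sees every boundary between latent and text segments.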
|
2502.03278
|
Fault-Tolerant Control for System Availability and Continuous Operation
in Heavy-Duty Wheeled Mobile Robots
|
eess.SY cs.SY
|
When the control system in a heavy-duty wheeled mobile robot (HD-WMR)
malfunctions, deviations from ideal motion occur, significantly heightening the
risks of off-road instability and costly damage. To meet the demands for
safety, reliability, and controllability in HD-WMRs, the control system must
tolerate faults to a certain extent, ensuring continuous operation. To this
end, this paper introduces a model-free hierarchical control with fault
accommodation (MFHCA) framework designed to address sensor and actuator faults
in hydraulically powered HD-WMRs with independently controlled wheels. To
begin, a novel mathematical representation of the motion dynamics of HD-WMRs,
incorporating both sensor and actuator fault modes, is investigated.
Subsequently, the MFHCA framework is proposed to manage all wheels under
various fault modes, ensuring that each wheel tracks the reference driving
velocities and steering angles, which are inverse kinematically mapped from the
angular and linear velocities commanded in the HD-WMR's base frame. To do so,
this framework generates appropriate power efforts in independently
valve-regulated wheels to accommodate the adaptively isolated faults, thereby
ensuring exponential stability. The experimental analysis of a 6,500-kg
hydraulically powered HD-WMR under various fault modes and rough terrains
demonstrates the validity of the MFHCA framework.
|
2502.03283
|
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex
Reasoning over Knowledge Graphs
|
cs.AI cs.CL cs.LG
|
Recent advancements have highlighted that Large Language Models (LLMs) are
prone to hallucinations when solving complex reasoning problems, leading to
erroneous results. To tackle this issue, researchers incorporate Knowledge
Graphs (KGs) to improve the reasoning ability of LLMs. However, existing
methods face two limitations: 1) they typically assume that all answers to the
questions are contained in KGs, neglecting the incompleteness issue of KGs, and
2) they treat the KG as a static repository and overlook the implicit logical
reasoning structures inherent in KGs. In this paper, we introduce SymAgent, an
innovative neural-symbolic agent framework that achieves collaborative
augmentation between KGs and LLMs. We conceptualize KGs as dynamic environments
and transform complex reasoning tasks into a multi-step interactive process,
enabling KGs to participate deeply in the reasoning process. SymAgent consists
of two modules: Agent-Planner and Agent-Executor. The Agent-Planner leverages
LLM's inductive reasoning capability to extract symbolic rules from KGs,
guiding efficient question decomposition. The Agent-Executor autonomously
invokes predefined action tools to integrate information from KGs and external
documents, addressing the issues of KG incompleteness. Furthermore, we design a
self-learning framework comprising online exploration and offline iterative
policy updating phases, enabling the agent to automatically synthesize
reasoning trajectories and improve performance. Experimental results
demonstrate that SymAgent with weak LLM backbones (i.e., 7B series) yields
better or comparable performance compared to various strong baselines. Further
analysis reveals that our agent can identify missing triples, facilitating
automatic KG updates.
|
2502.03285
|
Deep Learning-based Event Data Coding: A Joint Spatiotemporal and
Polarity Solution
|
cs.CV eess.IV
|
Neuromorphic vision sensors, commonly referred to as event cameras, have
recently gained relevance for applications requiring high-speed, high dynamic
range and low-latency data acquisition. Unlike traditional frame-based cameras
that capture 2D images, event cameras generate a massive number of pixel-level
events, composed by spatiotemporal and polarity information, with very high
temporal resolution, thus demanding highly efficient coding solutions. Existing
solutions focus on lossless coding of event data, assuming that no distortion
is acceptable for the target use cases, mostly including computer vision tasks.
One promising coding approach exploits the similarity between event data and
point clouds, thus allowing current point cloud coding solutions to be used to
code event data, typically adopting a two-point-cloud representation, one for each
event polarity. This paper proposes a novel lossy Deep Learning-based Joint
Event data Coding (DL-JEC) solution adopting a single-point cloud
representation, thus enabling exploitation of the correlation between the
spatiotemporal and polarity event information. DL-JEC can achieve significant
compression performance gains when compared with relevant conventional and
DL-based state-of-the-art event data coding solutions. Moreover, it is shown
that it is possible to use lossy event data coding, with its reduced rate
relative to lossless coding, without compromising the target computer vision
task performance, notably for event classification. The use of novel adaptive voxel
binarization strategies, adapted to the target task, further enables DL-JEC to
reach superior performance.
|
2502.03286
|
Conditional Prediction by Simulation for Automated Driving
|
cs.RO cs.CV
|
Modular automated driving systems commonly handle prediction and planning as
sequential, separate tasks, thereby prohibiting cooperative maneuvers. To
enable cooperative planning, this work introduces a prediction model that
models the conditional dependencies between trajectories. For this, predictions
are generated by a microscopic traffic simulation, with the individual traffic
participants being controlled by a realistic behavior model trained via
Adversarial Inverse Reinforcement Learning. By assuming various candidate
trajectories for the automated vehicle, we generate predictions conditioned on
each of them. Furthermore, our approach allows the candidate trajectories to
adapt dynamically during the prediction rollout. Several example scenarios are
available at https://conditionalpredictionbysimulation.github.io/.
|
2502.03287
|
STEMS: Spatial-Temporal Mapping Tool For Spiking Neural Networks
|
cs.NE cs.AI cs.AR cs.DC
|
Spiking Neural Networks (SNNs) are promising bio-inspired third-generation
neural networks. Recent research has trained deep SNN models with accuracy on
par with Artificial Neural Networks (ANNs). Although the event-driven and
sparse nature of SNNs show potential for more energy efficient computation than
ANNs, SNN neurons have internal states which evolve over time. Keeping track of
SNN states can significantly increase data movement and storage requirements,
potentially losing its advantages with respect to ANNs. This paper investigates
the energy effects of neuron states, and how these effects are influenced by
the chosen mapping to realistic hardware architectures with advanced memory
hierarchies. To this end, we develop STEMS, a mapping design space exploration
tool for SNNs. STEMS models SNNs' stateful behavior and explores intra-layer
and inter-layer mapping optimizations to minimize data movement, considering
both spatial and temporal SNN dimensions. Using STEMS, we show up to 12x
reduction in off-chip data movement and 5x reduction in energy (on top of
intra-layer optimizations), on two event-based vision SNN benchmarks. Finally,
neuron states may not be needed for all SNN layers. By optimizing neuron states
for one of our benchmarks, we show 20x reduction in neuron states and 1.4x
better performance without accuracy loss.
|
2502.03292
|
ALPET: Active Few-shot Learning for Citation Worthiness Detection in
Low-Resource Wikipedia Languages
|
cs.CL cs.AI cs.LG
|
Citation Worthiness Detection (CWD) consists in determining which sentences,
within an article or collection, should be backed up with a citation to
validate the information it provides. This study introduces ALPET, a framework
combining Active Learning (AL) and Pattern-Exploiting Training (PET), to
enhance CWD for languages with limited data resources. Applied to Catalan,
Basque, and Albanian Wikipedia datasets, ALPET outperforms the existing CCW
baseline while reducing the amount of labeled data required, in some cases by
more than 80\%. ALPET's performance plateaus after 300 labeled samples, showing its suitability
for low-resource scenarios where large, labeled datasets are not common. While
specific active learning query strategies, like those employing K-Means
clustering, can offer advantages, their effectiveness is not universal and
often yields marginal gains over random sampling, particularly with smaller
datasets. This suggests that random sampling, despite its simplicity, remains a
strong baseline for CWD in resource-constrained environments. Overall, ALPET's
ability to achieve high performance with fewer labeled samples makes it a
promising tool for enhancing the verifiability of online content in
low-resource language settings.
|
2502.03297
|
IRIS: An Immersive Robot Interaction System
|
cs.RO cs.LG
|
This paper introduces IRIS, an immersive Robot Interaction System leveraging
Extended Reality (XR), designed for robot data collection and interaction
across multiple simulators, benchmarks, and real-world scenarios. While
existing XR-based data collection systems provide efficient and intuitive
solutions for large-scale data collection, they are often challenging to
reproduce and reuse. This limitation arises because current systems are highly
tailored to simulator-specific use cases and environments. IRIS is a novel,
easily extendable framework that already supports multiple simulators,
benchmarks, and even headsets. Furthermore, IRIS is able to include additional
information from real-world sensors, such as point clouds captured through
depth cameras. A unified scene specification is generated directly from
simulators or real-world sensors and transmitted to XR headsets, creating
identical scenes in XR. This specification allows IRIS to support any of the
objects, assets, and robots provided by the simulators. In addition, IRIS
introduces shared spatial anchors and a robust communication protocol that
links simulations between multiple XR headsets. This feature enables multiple
XR headsets to share a synchronized scene, facilitating collaborative and
multi-user data collection. IRIS can be deployed on any device that supports
the Unity Framework, encompassing the vast majority of commercially available
headsets. In this work, IRIS was deployed and tested on the Meta Quest 3 and
the HoloLens 2. IRIS showcased its versatility across a wide range of
real-world and simulated scenarios, using current popular robot simulators such
as MuJoCo, IsaacSim, CoppeliaSim, and Genesis. In addition, a user study
evaluates IRIS on a data collection task for the LIBERO benchmark. The study
shows that IRIS significantly outperforms the baseline in both objective and
subjective metrics.
|
2502.03298
|
MeDiSumQA: Patient-Oriented Question-Answer Generation from Discharge
Letters
|
cs.CL cs.AI cs.LG
|
While increasing patients' access to medical documents improves medical care,
this benefit is limited by varying health literacy levels and complex medical
terminology. Large language models (LLMs) offer solutions by simplifying
medical information. However, evaluating LLMs for safe and patient-friendly
text generation is difficult due to the lack of standardized evaluation
resources. To fill this gap, we developed MeDiSumQA. MeDiSumQA is a dataset
created from MIMIC-IV discharge summaries through an automated pipeline
combining LLM-based question-answer generation with manual quality checks. We
use this dataset to evaluate various LLMs on patient-oriented
question-answering. Our findings reveal that general-purpose LLMs frequently
surpass biomedical-adapted models, while automated metrics correlate with human
judgment. By releasing MeDiSumQA on PhysioNet, we aim to advance the
development of LLMs to enhance patient understanding and ultimately improve
care outcomes.
|
2502.03302
|
MAP Image Recovery with Guarantees using Locally Convex Multi-Scale
Energy (LC-MUSE) Model
|
cs.LG cs.CV eess.IV
|
We propose a multi-scale deep energy model that is strongly convex in the
local neighbourhood around the data manifold to represent its probability
density, with application in inverse problems. In particular, we represent the
negative log-prior as a multi-scale energy model parameterized by a
Convolutional Neural Network (CNN). We restrict the gradient of the CNN to be
locally monotone, which constrains the model as a Locally Convex Multi-Scale
Energy (LC-MuSE). We use the learned energy model in image-based inverse
problems, where the formulation offers several desirable properties: i)
uniqueness of the solution, ii) convergence guarantees to a minimum of the
inverse problem, and iii) robustness to input perturbations. In the context of
parallel Magnetic Resonance (MR) image reconstruction, we show that the
proposed method performs better than the state-of-the-art convex regularizers,
while the performance is comparable to plug-and-play regularizers and
end-to-end trained methods.
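The listed properties follow from local strong convexity of the learned energy. A hedged formal sketch, with generic symbols ($A$, $b$, $m$) that are not taken from the paper:

```latex
% MAP estimate with the learned multi-scale energy as the negative log-prior
\hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\|A x - b\|_2^2 \;+\; E_{\theta}(x)

% local m-monotonicity of the CNN gradient field, which makes E_theta
% locally m-strongly convex and hence the minimizer unique in that
% neighbourhood of the data manifold:
\bigl\langle \nabla E_{\theta}(x) - \nabla E_{\theta}(y),\; x - y \bigr\rangle
\;\ge\; m\,\|x - y\|_2^2
```

With the data-fidelity term convex and the prior locally $m$-strongly convex, the combined objective is locally strongly convex, which yields the uniqueness, convergence, and perturbation-robustness guarantees mentioned above.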
|
2502.03304
|
Harmony in Divergence: Towards Fast, Accurate, and Memory-efficient
Zeroth-order LLM Fine-tuning
|
cs.LG cs.AI cs.CL
|
Large language models (LLMs) excel across various tasks, but standard
first-order (FO) fine-tuning demands considerable memory, significantly
limiting real-world deployment. Recently, zeroth-order (ZO) optimization stood
out as a promising memory-efficient training paradigm, avoiding backward passes
and relying solely on forward passes for gradient estimation, making it
attractive for resource-constrained scenarios. However, ZO methods lag far
behind FO methods in both convergence speed and accuracy. To bridge the gap, we
introduce a novel layer-wise divergence analysis that uncovers the distinct
update patterns of FO and ZO optimization. Building on these findings, and
aiming to match the learning capacity of FO methods, we propose \textbf{Di}vergence-driven
\textbf{Z}eroth-\textbf{O}rder (\textbf{DiZO}) optimization. DiZO conducts
divergence-driven layer adaptation by incorporating projections to ZO updates,
generating diverse-magnitude updates precisely scaled to layer-wise individual
optimization needs. Our results demonstrate that DiZO significantly reduces the
needed iterations for convergence without sacrificing throughput, cutting
training GPU hours by up to 48\% on various datasets. Moreover, DiZO
consistently outperforms the representative ZO baselines in fine-tuning
RoBERTa-large, OPT-series, and Llama-series on downstream tasks and, in some
cases, even surpasses memory-intensive FO fine-tuning.
|
2502.03307
|
Intent Alignment between Interaction and Language Spaces for
Recommendation
|
cs.IR
|
Intent-based recommender systems have garnered significant attention for
uncovering latent fine-grained preferences. Intents, as underlying factors of
interactions, are crucial for improving recommendation interpretability. Most
methods define intents as learnable parameters updated alongside interactions.
However, existing frameworks often overlook textual information (e.g., user
reviews, item descriptions), which is crucial for alleviating the sparsity of
interaction intents. Exploring these multimodal intents, especially the
inherent differences in representation spaces, poses two key challenges: i) How
to align multimodal intents and effectively mitigate noise issues; ii) How to
extract and match latent key intents across modalities. To tackle these
challenges, we propose a model-agnostic framework, Intent Representation
Learning with Large Language Model (IRLLRec), which leverages large language
models (LLMs) to construct multimodal intents and enhance recommendations.
Specifically, IRLLRec employs a dual-tower architecture to learn multimodal
intent representations. Next, we propose pairwise and translation alignment to
eliminate inter-modal differences and enhance robustness against noisy input
features. Finally, to better match textual and interaction-based intents, we
employ momentum distillation to perform teacher-student learning on fused
intent representations. Empirical evaluations on three datasets show that our
IRLLRec framework outperforms baselines.
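The momentum distillation step rests on an exponential-moving-average (EMA) teacher. A minimal sketch of that update is below; the coefficient 0.995 and the flat parameter lists are illustrative assumptions, not the paper's configuration.

```python
def ema_update(teacher, student, m=0.995):
    """Move each teacher parameter a small step toward the student.
    The slowly evolving teacher then provides soft targets for the
    student's fused intent representations."""
    return [m * t + (1.0 - m) * s for t, s in zip(teacher, student)]
```

In training, the student is optimized against both the task loss and the teacher's outputs, while the teacher itself is never updated by gradients, only by this EMA rule.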
|
2502.03317
|
Contact-Aware Motion Planning Among Movable Objects
|
cs.RO
|
Most existing methods for motion planning of mobile robots involve generating
collision-free trajectories. However, these methods, by focusing solely on
contact avoidance, may limit the robots' locomotion and cannot be applied to tasks
where contact is inevitable or intentional. To address these issues, we propose
a novel contact-aware motion planning (CAMP) paradigm for robotic systems. Our
approach incorporates contact between robots and movable objects as
complementarity constraints in optimization-based trajectory planning. By
leveraging augmented Lagrangian methods (ALMs), we efficiently solve the
optimization problem with complementarity constraints, producing
spatial-temporal optimal trajectories of the robots. Simulations demonstrate
that, compared to the state-of-the-art method, our proposed CAMP method expands
the reachable space of mobile robots, resulting in a significant improvement in
the success rate of two types of fundamental tasks: navigation among movable
objects (NAMO) and rearrangement of movable objects (RAMO). Real-world
experiments show that the trajectories generated by our proposed method are
feasible and quickly deployed in different tasks.
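The augmented Lagrangian mechanics the abstract relies on can be shown on a toy problem. The actual CAMP formulation handles complementarity constraints over robot-object contact; this sketch only illustrates the ALM loop (inner primal minimization, outer multiplier update) on a single equality constraint: minimize (x-2)^2 subject to x - 1 = 0.

```python
def alm_step(x, lam, rho=10.0, lr=0.01, inner_steps=200):
    """One ALM outer iteration: approximately minimize the augmented
    Lagrangian L(x) = (x-2)^2 + lam*(x-1) + (rho/2)*(x-1)^2 in x by
    gradient descent, then update the multiplier lam."""
    for _ in range(inner_steps):
        g = x - 1.0                                 # constraint value
        grad = 2.0 * (x - 2.0) + (lam + rho * g)    # dL/dx
        x -= lr * grad
    lam += rho * (x - 1.0)                          # dual (multiplier) update
    return x, lam

x, lam = 0.0, 0.0
for _ in range(8):
    x, lam = alm_step(x, lam)
```

At the solution, x = 1 and lam = 2 satisfy the stationarity condition 2(x-2) + lam = 0 together with feasibility, which is what the multiplier update converges to.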
|
2502.03321
|
Simplifying Formal Proof-Generating Models with ChatGPT and Basic
Searching Techniques
|
cs.LO cs.AI
|
The challenge of formal proof generation has a rich history, but with modern
techniques, we may finally be at the stage of making actual progress in
real-life mathematical problems. This paper explores the integration of ChatGPT
and basic searching techniques to simplify generating formal proofs, with a
particular focus on the miniF2F dataset. We demonstrate how combining a large
language model like ChatGPT with a formal language such as Lean, which has the
added advantage of being verifiable, enhances the efficiency and accessibility
of formal proof generation. Despite its simplicity, our best-performing
Lean-based model surpasses all known benchmarks with a 31.15% pass rate. We
extend our experiments to include other datasets and employ alternative
language models, showcasing our models' comparable performance in diverse
settings and allowing for a more nuanced analysis of our results. Our findings
offer insights into AI-assisted formal proof generation, suggesting a promising
direction for future research in formal mathematical proof.
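The pipeline can be caricatured as verifier-in-the-loop search: a model proposes candidate proof steps and a checker accepts complete proofs. The sketch below mocks both sides with strings instead of Lean tactics, purely to show the control flow; it does not reflect the paper's prompts, dataset, or search heuristics.

```python
# Breadth-first search over model-proposed proof steps with a mock verifier.
from collections import deque

def propose_steps(state):
    # Stand-in for an LLM suggesting next tactics; here: append one of two tokens.
    return [state + t for t in ("a", "b")]

def verified(state):
    # Stand-in for Lean kernel-checking the proof; the "theorem" is the string "abba".
    return state == "abba"

def search(max_depth=4):
    queue = deque([""])
    while queue:
        state = queue.popleft()
        if verified(state):
            return state
        if len(state) < max_depth:
            queue.extend(propose_steps(state))
    return None

proof = search()
```

Because the verifier is sound, any state it accepts is a valid proof regardless of how unreliable the proposer is, which is the advantage of pairing an LLM with a formal language.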
|
2502.03322
|
An efficient end-to-end computational framework for the generation of
ECG calibrated volumetric models of human atrial electrophysiology
|
math.NA cs.CE cs.NA q-bio.TO
|
Computational models of atrial electrophysiology (EP) are increasingly
utilized for applications such as the development of advanced mapping systems,
personalized clinical therapy planning, and the generation of virtual cohorts
and digital twins. These models have the potential to establish robust causal
links between simulated in silico behaviors and observed human atrial EP,
enabling safer, cost-effective, and comprehensive exploration of atrial
dynamics. However, current state-of-the-art approaches lack the fidelity and
scalability required for regulatory-grade applications, particularly in
creating high-quality virtual cohorts or patient-specific digital twins.
Challenges include anatomically accurate model generation, calibration to
sparse and uncertain clinical data, and computational efficiency within a
streamlined workflow. This study addresses these limitations by introducing
novel methodologies integrated into an automated end-to-end workflow for
generating high-fidelity digital twin snapshots and virtual cohorts of atrial
EP. These innovations include: (i) automated multi-scale generation of
volumetric biatrial models with detailed anatomical structures and fiber
architecture; (ii) a robust method for defining space-varying atrial parameter
fields; (iii) a parametric approach for modeling inter-atrial conduction
pathways; and (iv) an efficient forward EP model for high-fidelity
electrocardiogram computation. We evaluated this workflow on a cohort of 50
atrial fibrillation patients, producing high-quality meshes suitable for
reaction-eikonal and reaction-diffusion models and demonstrating the ability to
simulate atrial ECGs under parametrically controlled conditions. These
advancements represent a critical step toward scalable, precise, and clinically
applicable digital twin models and virtual cohorts, enabling enhanced
patient-specific predictions and therapeutic planning.
|
2502.03323
|
Out-of-Distribution Detection using Synthetic Data Generation
|
cs.CL cs.AI cs.LG
|
Distinguishing in- and out-of-distribution (OOD) inputs is crucial for
reliable deployment of classification systems. However, OOD data is typically
unavailable or difficult to collect, posing a significant challenge for
accurate OOD detection. In this work, we present a method that harnesses the
generative capabilities of Large Language Models (LLMs) to create high-quality
synthetic OOD proxies, eliminating the dependency on any external OOD data
source. We study the efficacy of our method on classical text classification
tasks such as toxicity detection and sentiment classification as well as
classification tasks arising in LLM development and deployment, such as
training a reward model for RLHF and detecting misaligned generations.
Extensive experiments on nine InD-OOD dataset pairs and various model sizes
show that our approach dramatically lowers false positive rates (achieving a
perfect zero in some cases) while maintaining high accuracy on in-distribution
tasks, outperforming baseline methods by a significant margin.
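Once an LLM has produced synthetic OOD proxies, a detection threshold can be calibrated on their confidence scores. The sketch below shows only that calibration step with maximum-softmax-probability scoring; the logits are made up to keep the example self-contained, and the thresholding rule is one simple choice, not the paper's exact procedure.

```python
# Calibrating an OOD threshold from (synthetic) proxy examples.
import math

def max_softmax(logits):
    """Maximum softmax probability, a common confidence score."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return max(exps) / sum(exps)

ind_logits = [[5.0, 0.0, 0.0], [4.0, 1.0, 0.0]]    # confident in-distribution
proxy_logits = [[1.0, 0.9, 0.8], [0.5, 0.4, 0.6]]  # synthetic OOD proxies: flat

# Flag as OOD anything scoring at or below the highest proxy score.
threshold = max(max_softmax(l) for l in proxy_logits)
preds = [max_softmax(l) > threshold for l in ind_logits]  # True = in-distribution
```

Because the proxies stand in for real OOD data, the false-positive rate can be tuned without ever collecting an external OOD source.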
|
2502.03325
|
ECM: A Unified Electronic Circuit Model for Explaining the Emergence of
In-Context Learning and Chain-of-Thought in Large Language Model
|
cs.CL cs.AI
|
Recent advancements in large language models (LLMs) have led to significant
successes across various applications, most notably a series of emergent
capabilities in the areas of In-Context Learning (ICL) and Chain-of-Thought
(CoT). To better understand and control model
performance, many studies have begun investigating the underlying causes of
these phenomena and their impact on task outcomes. However, existing
explanatory frameworks predominantly focus on isolating and explaining ICL and
CoT independently, leading to an incomplete understanding of their combined
influence on model performance. To address this gap, we propose the Electronic
Circuit Model (ECM), which provides a foundation for developing scalable,
learnable policies and improving the management of AI-generated content.
Specifically, ECM conceptualizes model behavior as an electronic circuit: ICL
is represented as a semantic magnetic field that provides an additional
voltage following Faraday's Law, while CoT is modeled as series resistors
that constrain the model's output performance following Ohm's Law.
Experimental results
demonstrate that the ECM effectively predicts and explains LLM performance
across a variety of prompting strategies. Furthermore, we apply ECM to advanced
reasoning strategy optimization on a series of tasks, such as the International
Olympiad in Informatics (IOI) and the International Mathematical Olympiad
(IMO), achieving competitive performance that surpasses nearly 80% of top human
competitors.
|
2502.03327
|
Is In-Context Universality Enough? MLPs are Also Universal In-Context
|
stat.ML cs.LG cs.NA cs.NE math.NA math.PR
|
The success of transformers is often linked to their ability to perform
in-context learning. Recent work shows that transformers are universal in
context, capable of approximating any real-valued continuous function of a
context (a probability measure over $\mathcal{X}\subseteq \mathbb{R}^d$) and a
query $x\in \mathcal{X}$. This raises the question: Does in-context
universality explain their advantage over classical models? We answer this in
the negative by proving that MLPs with trainable activation functions are also
universal in-context. This suggests the transformer's success is likely due to
other factors like inductive bias or training stability.
|
2502.03330
|
Controllable GUI Exploration
|
cs.HC cs.AI cs.CV cs.GR
|
During the early stages of interface design, designers need to produce
multiple sketches to explore a design space. Design tools often fail to support
this critical stage, because they insist on specifying more details than
necessary. Although recent advances in generative AI have raised hopes of
solving this issue, in practice they fail because expressing loose ideas in a
prompt is impractical. In this paper, we propose a diffusion-based approach to
the low-effort generation of interface sketches. It breaks new ground by
allowing flexible control of the generation process via three types of inputs:
A) prompts, B) wireframes, and C) visual flows. The designer can provide any
combination of these as input at any level of detail, and will get a diverse
gallery of low-fidelity solutions in response. The unique benefit is that large
design spaces can be explored rapidly with very little effort in input
specification. We present qualitative results for various combinations of
input specifications. Additionally, we demonstrate that our model aligns more
accurately with these specifications than other models.
|
2502.03332
|
A Mixture-Based Framework for Guiding Diffusion Models
|
stat.ML cs.LG
|
Denoising diffusion models have driven significant progress in the field of
Bayesian inverse problems. Recent approaches use pre-trained diffusion models
as priors to solve a wide range of such problems, only leveraging
inference-time compute and thereby eliminating the need to retrain
task-specific models on the same dataset. To approximate the posterior of a
Bayesian inverse problem, a diffusion model samples from a sequence of
intermediate posterior distributions, each with an intractable likelihood
function. This work proposes a novel mixture approximation of these
intermediate distributions. Since direct gradient-based sampling of these
mixtures is infeasible due to intractable terms, we propose a practical method
based on Gibbs sampling. We validate our approach through extensive experiments
on image inverse problems, utilizing both pixel- and latent-space diffusion
priors, as well as on source separation with an audio diffusion model. The code
is available at https://www.github.com/badr-moufad/mgdm
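The alternating structure of Gibbs sampling on a mixture can be shown on a two-component Gaussian mixture: sample the component indicator given the current value, then resample the value from the chosen component. This stand-in mixture is not the paper's diffusion posterior; it only illustrates why Gibbs sidesteps the intractable gradient of the full mixture density.

```python
# Minimal Gibbs sampler for a two-component 1-D Gaussian mixture.
import math, random

random.seed(0)
weights, means, sigma = [0.5, 0.5], [-2.0, 2.0], 1.0

def gibbs(n_steps=5000):
    x, samples = 0.0, []
    for _ in range(n_steps):
        # 1) sample the component indicator z | x (posterior responsibilities)
        resp = [w * math.exp(-0.5 * ((x - m) / sigma) ** 2)
                for w, m in zip(weights, means)]
        z = 0 if random.random() < resp[0] / sum(resp) else 1
        # 2) sample x | z from the selected Gaussian component
        x = random.gauss(means[z], sigma)
        samples.append(x)
    return samples

samples = gibbs()  # visits both modes of the mixture
```

Each conditional is tractable even though the joint mixture score is not, which is the same reason the paper resorts to Gibbs updates for its mixture approximation.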
|
2502.03333
|
RadVLM: A Multitask Conversational Vision-Language Model for Radiology
|
cs.CV cs.AI
|
The widespread use of chest X-rays (CXRs), coupled with a shortage of
radiologists, has driven growing interest in automated CXR analysis and
AI-assisted reporting. While existing vision-language models (VLMs) show
promise in specific tasks such as report generation or abnormality detection,
they often lack support for interactive diagnostic capabilities. In this work
we present RadVLM, a compact, multitask conversational foundation model
designed for CXR interpretation. To this end, we curate a large-scale
instruction dataset comprising over 1 million image-instruction pairs
containing both single-turn tasks -- such as report generation, abnormality
classification, and visual grounding -- and multi-turn, multi-task
conversational interactions. After fine-tuning RadVLM on this instruction
dataset, we evaluate it across different tasks along with re-implemented
baseline VLMs. Our results show that RadVLM achieves state-of-the-art
performance in conversational capabilities and visual grounding while remaining
competitive in other radiology tasks. Ablation studies further highlight the
benefit of joint training across multiple tasks, particularly for scenarios
with limited annotated data. Together, these findings highlight the potential
of RadVLM as a clinically relevant AI assistant, providing structured CXR
interpretation and conversational capabilities to support more effective and
accessible diagnostic workflows.
|
2502.03335
|
Actions Speak Louder Than Words: Rate-Reward Trade-off in Markov
Decision Processes
|
cs.IT math.IT
|
The impact of communication on decision-making systems has been extensively
studied under the assumption of dedicated communication channels. We instead
consider communicating through actions, where the message is embedded into the
actions of an agent which interacts with the environment in a Markov decision
process (MDP) framework. We conceptualize the MDP environment as a finite-state
channel (FSC), where the actions of the agent serve as the channel input, while
the states of the MDP observed by another agent (i.e., receiver) serve as the
channel output. Here, we treat the environment as a communication channel over
which the agent communicates through its actions, while at the same time,
trying to maximize its reward. We first characterize the optimal information
theoretic trade-off between the average reward and the rate of reliable
communication in the infinite-horizon regime. Then, we propose a novel
framework to design a joint control/coding policy, termed Act2Comm, which
seamlessly embeds messages into actions. From a communication perspective,
Act2Comm functions as a learning-based channel coding scheme for
non-differentiable FSCs under input-output constraints. From a control
standpoint, Act2Comm learns an MDP policy that incorporates communication
capabilities, though at the cost of some control performance. Overall,
Act2Comm effectively balances the dual objectives of control and
communication in this environment. Experimental results validate
Act2Comm's capability to enable reliable communication while
maintaining a certain level of control performance.
|
2502.03338
|
Optimal PMU Placement for Kalman Filtering of DAE Power System Models
|
eess.SY cs.SY
|
Optimal sensor placement is essential for minimizing costs and ensuring
accurate state estimation in power systems. This paper introduces a novel
method for optimal sensor placement for dynamic state estimation of power
systems modeled by differential-algebraic equations. The method identifies
optimal sensor locations by minimizing the steady-state covariance matrix of
the Kalman filter, thus minimizing the error of joint differential and
algebraic state estimation. The problem is reformulated as a mixed-integer
semidefinite program and effectively solved using off-the-shelf numerical
solvers. Numerical results demonstrate the merits of the proposed approach by
benchmarking its performance in phasor measurement unit placement in comparison
to greedy algorithms.
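The greedy baseline the paper benchmarks against can be sketched on a drastically simplified system: a decoupled (diagonal) linear model where each candidate sensor observes one state, so the steady-state Kalman variance is a scalar fixed point per state. All system constants below are made up, and this is the greedy heuristic, not the paper's mixed-integer semidefinite program.

```python
# Greedy sensor placement minimizing total steady-state Kalman variance
# on a decoupled scalar system x_i' = a*x_i + noise(q_i), y_i = x_i + noise(r).

def steady_variance(a, q, r, observed, iters=500):
    p = q
    for _ in range(iters):
        p = a * a * p + q            # predict step
        if observed:
            p = p - p * p / (p + r)  # measurement update (c = 1)
    return p

def greedy_placement(a, q_list, r, budget):
    chosen = set()
    for _ in range(budget):
        best, best_cost = None, float("inf")
        for i in range(len(q_list)):
            if i in chosen:
                continue
            trial = chosen | {i}
            cost = sum(steady_variance(a, q, r, j in trial)
                       for j, q in enumerate(q_list))
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.add(best)
    return sorted(chosen)

# Three decoupled states with different process noise; two sensors available.
placement = greedy_placement(a=0.9, q_list=[0.1, 1.0, 0.5], r=0.2, budget=2)
```

Greedy picks the noisiest states first, since observing them yields the largest variance reduction; the paper's formulation instead solves for the optimal subset jointly over coupled differential-algebraic dynamics.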
|
2502.03340
|
Interaction-Aware Gaussian Weighting for Clustered Federated Learning
|
cs.LG
|
Federated Learning (FL) emerged as a decentralized paradigm to train models
while preserving privacy. However, conventional FL struggles with data
heterogeneity and class imbalance, which degrade model performance. Clustered
FL balances personalization and decentralized training by grouping clients with
analogous data distributions, enabling improved accuracy while adhering to
privacy constraints. This approach effectively mitigates the adverse impact of
heterogeneity in FL. In this work, we propose a novel clustered FL method,
FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on
their data distribution, allowing training of a more robust and personalized
model on the identified clusters. FedGWC identifies homogeneous clusters by
transforming individual empirical losses to model client interactions with a
Gaussian reward mechanism. Additionally, we introduce the Wasserstein Adjusted
Score, a new clustering metric for FL to evaluate cluster cohesion with respect
to the individual class distribution. Our experiments on benchmark datasets
show that FedGWC outperforms existing FL algorithms in cluster quality and
classification accuracy, validating the efficacy of our approach.
|
2502.03341
|
Adaptive Variational Inference in Probabilistic Graphical Models: Beyond
Bethe, Tree-Reweighted, and Convex Free Energies
|
stat.ML cs.AI cs.LG
|
Variational inference in probabilistic graphical models aims to approximate
fundamental quantities such as marginal distributions and the partition
function. Popular approaches are the Bethe approximation, tree-reweighted, and
other types of convex free energies. These approximations are efficient but can
fail if the model is complex and highly interactive. In this work, we analyze
two classes of approximations that include the above methods as special cases:
first, if the model parameters are changed; and second, if the entropy
approximation is changed. We discuss benefits and drawbacks of either approach,
and deduce from this analysis how a free energy approximation should ideally be
constructed. Based on our observations, we propose approximations that
automatically adapt to a given model and demonstrate their effectiveness for a
range of difficult problems.
|
2502.03346
|
Implicit Communication in Human-Robot Collaborative Transport
|
cs.RO
|
We focus on human-robot collaborative transport, in which a robot and a user
collaboratively move an object to a goal pose. In the absence of explicit
communication, this problem is challenging because it demands tight implicit
coordination between two heterogeneous agents, who have very different sensing,
actuation, and reasoning capabilities. Our key insight is that the two agents
can coordinate fluently by encoding subtle, communicative signals into actions
that affect the state of the transported object. To this end, we design an
inference mechanism that probabilistically maps observations of joint actions
executed by the two agents to a set of joint strategies of workspace traversal.
Based on this mechanism, we define a cost representing the human's uncertainty
over the unfolding traversal strategy and introduce it into a model predictive
controller that balances between uncertainty minimization and efficiency
maximization. We deploy our framework on a mobile manipulator (Hello Robot
Stretch) and evaluate it in a within-subjects lab study (N=24). We show that
our framework enables greater team performance and empowers the robot to be
perceived as a significantly more fluent and competent partner compared to
baselines lacking a communicative mechanism.
|
2502.03347
|
DiversityOne: A Multi-Country Smartphone Sensor Dataset for Everyday
Life Behavior Modeling
|
cs.CY cs.SI
|
Understanding everyday life behavior of young adults through personal
devices, e.g., smartphones and smartwatches, is key for various applications,
from enhancing the user experience in mobile apps to enabling appropriate
interventions in digital health apps. Towards this goal, previous studies have
relied on datasets combining passive sensor data with human-provided
annotations or self-reports. However, many existing datasets are limited in
scope, often focusing on specific countries primarily in the Global North,
involving a small number of participants, or using a limited range of
pre-processed sensors. These limitations restrict the ability to capture
cross-country variations of human behavior, including the possibility of
studying model generalization and robustness. To address this gap, we
introduce DiversityOne, a dataset which spans eight countries (China, Denmark,
India, Italy, Mexico, Mongolia, Paraguay, and the United Kingdom) and includes
data from 782 college students over four weeks. DiversityOne contains data from
26 smartphone sensor modalities and 350K+ self-reports. As of today, it is one
of the largest and most diverse publicly available datasets, while featuring
extensive demographic and psychosocial survey data. DiversityOne opens the
possibility of studying important research problems in ubiquitous computing,
particularly in domain adaptation and generalization across countries, all
research areas so far largely underexplored because of the lack of adequate
datasets.
|
2502.03349
|
Robust Autonomy Emerges from Self-Play
|
cs.LG cs.AI cs.RO
|
Self-play has powered breakthroughs in two-player and multi-player games.
Here we show that self-play is a surprisingly effective strategy in another
domain. We show that robust and naturalistic driving emerges entirely from
self-play in simulation at unprecedented scale -- 1.6 billion km of driving.
This is enabled by Gigaflow, a batched simulator that can synthesize and train
on 42 years of subjective driving experience per hour on a single 8-GPU node.
The resulting policy achieves state-of-the-art performance on three independent
autonomous driving benchmarks. The policy outperforms the prior state of the
art when tested on recorded real-world scenarios, amidst human drivers, without
ever seeing human data during training. The policy is realistic when assessed
against human references and achieves unprecedented robustness, averaging 17.5
years of continuous driving between incidents in simulation.
|
2502.03350
|
Optimal Task Order for Continual Learning of Multiple Tasks
|
stat.ML cs.LG
|
Continual learning of multiple tasks remains a major challenge for neural
networks. Here, we investigate how task order influences continual learning and
propose a strategy for optimizing it. Leveraging a linear teacher-student model
with latent factors, we derive an analytical expression relating task
similarity and ordering to learning performance. Our analysis reveals two
principles that hold under a wide parameter range: (1) tasks should be arranged
from the least representative to the most typical, and (2) adjacent tasks
should be dissimilar. We validate these rules on both synthetic data and
real-world image classification datasets (Fashion-MNIST, CIFAR-10, CIFAR-100),
demonstrating consistent performance improvements in both multilayer
perceptrons and convolutional neural networks. Our work thus presents a
generalizable framework for task-order optimization in task-incremental
continual learning.
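The two ordering principles can be turned into a simple heuristic: start with the least typical task (lowest mean similarity to the others), then greedily place the task most dissimilar to the one just placed. The similarity matrix below is made up, and this heuristic is only one way to operationalize the paper's rules.

```python
# Hypothetical task-ordering heuristic following the paper's two principles.

def order_tasks(sim):
    n = len(sim)
    typicality = [sum(sim[i]) / n for i in range(n)]
    # Rule 1: begin with the least representative task.
    order = [min(range(n), key=lambda i: typicality[i])]
    remaining = set(range(n)) - set(order)
    while remaining:
        prev = order[-1]
        # Rule 2: adjacent tasks should be dissimilar.
        order.append(min(remaining, key=lambda i: sim[prev][i]))
        remaining.remove(order[-1])
    return order

sim = [[1.0, 0.9, 0.2],
       [0.9, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
print(order_tasks(sim))  # → [2, 0, 1]
```

Here task 2 is least similar to the rest, so it goes first; task 0 follows because it is the remaining task least similar to task 2.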
|
2502.03356
|
Inverse Mixed Strategy Games with Generative Trajectory Models
|
cs.RO
|
Game-theoretic models are effective tools for modeling multi-agent
interactions, especially when robots need to coordinate with humans. However,
applying these models requires inferring their specifications from observed
behaviors -- a challenging task known as the inverse game problem. Existing
inverse game approaches often struggle to account for behavioral uncertainty
and measurement noise, and to leverage both offline and online data. To address
these limitations, we propose an inverse game method that integrates a
generative trajectory model into a differentiable mixed-strategy game
framework. By representing the mixed strategy with a conditional variational
autoencoder (CVAE), our method can infer high-dimensional, multi-modal behavior
distributions from noisy measurements while adapting in real-time to new
observations. We extensively evaluate our method in a simulated navigation
benchmark, where the observations are generated by an unknown game model.
Despite the model mismatch, our method can infer Nash-optimal actions
comparable to those of the ground-truth model and the oracle inverse game
baseline, even in the presence of uncertain agent objectives and noisy
measurements.
|
2502.03358
|
Minerva: A Programmable Memory Test Benchmark for Language Models
|
cs.CL
|
How effectively can LLM-based AI assistants utilize their memory (context) to
perform various tasks? Traditional data benchmarks, which are often manually
crafted, suffer from several limitations: they are static, susceptible to
overfitting, difficult to interpret, and lack actionable insights--failing to
pinpoint the specific capabilities a model lacks when it does not pass a test.
In this paper, we present a framework for automatically generating a
comprehensive set of tests to evaluate models' abilities to use their memory
effectively. Our framework extends the range of capability tests beyond the
commonly explored search tasks (passkey, key-value, needle-in-the-haystack)
that dominate the literature. Specifically, we evaluate models on atomic
tasks such as searching, recalling, editing, matching, comparing information in
context memory, and performing basic operations when inputs are structured into
distinct blocks, simulating real-world data. Additionally, we design composite
tests to investigate the models' ability to maintain state while operating on
memory. Our benchmark enables an interpretable, detailed assessment of memory
capabilities of LLMs.
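A programmable benchmark of this kind generates tests with known ground truth rather than hand-crafting them. The sketch below, written in the spirit of the paper, builds a synthetic key-value memory and emits one recall question; the prompt format and helper names are invented, not Minerva's.

```python
# Illustrative programmatic test generator: synthetic key-value context
# plus a recall question whose answer is known by construction.
import random

def make_kv_test(n_pairs=5, seed=0):
    rng = random.Random(seed)
    pairs = {f"key{i}": rng.randint(100, 999) for i in range(n_pairs)}
    target = rng.choice(sorted(pairs))
    context = "\n".join(f"{k} = {v}" for k, v in pairs.items())
    question = f"What value is stored under {target}?"
    return context, question, pairs[target]

context, question, answer = make_kv_test()
```

Because every test is generated from a seed, failures are reproducible and interpretable: one can tell exactly which atomic capability (search, recall, edit, compare) a model missed.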
|
2502.03359
|
GHOST: Gaussian Hypothesis Open-Set Technique
|
cs.CV cs.AI cs.LG
|
Evaluations of large-scale recognition methods typically focus on overall
performance. While this approach is common, it often fails to provide insights
into performance across individual classes, which can lead to fairness issues
and misrepresentation. Addressing these gaps is crucial for accurately
assessing how well methods handle novel or unseen classes and ensuring a fair
evaluation. To address fairness in Open-Set Recognition (OSR), we demonstrate
that per-class performance can vary dramatically. We introduce Gaussian
Hypothesis Open Set Technique (GHOST), a novel hyperparameter-free algorithm
that models deep features using class-wise multivariate Gaussian distributions
with diagonal covariance matrices. We apply Z-score normalization to logits to
mitigate the impact of feature magnitudes that deviate from the model's
expectations, thereby reducing the likelihood of the network assigning a high
score to an unknown sample. We evaluate GHOST across multiple ImageNet-1K
pre-trained deep networks and test it with four different unknown datasets.
Using standard metrics such as AUOSCR, AUROC and FPR95, we achieve
statistically significant improvements, advancing the state-of-the-art in
large-scale OSR. Source code is provided online.
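GHOST's two ingredients, class-wise diagonal Gaussians over deep features and z-score-style normalization, can be sketched on toy 2-D "features". This is an illustration of the idea only: the scoring rule below is a simplified stand-in, not the published algorithm.

```python
# Fit a diagonal Gaussian per class, then score test samples by their
# z-score distance to the nearest class; far-from-everything samples
# (unknowns) receive low "known" scores.
import math

def fit_gaussians(features_by_class):
    stats = {}
    for c, feats in features_by_class.items():
        n, d = len(feats), len(feats[0])
        mean = [sum(f[j] for f in feats) / n for j in range(d)]
        var = [sum((f[j] - mean[j]) ** 2 for f in feats) / n + 1e-6
               for j in range(d)]
        stats[c] = (mean, [math.sqrt(v) for v in var])
    return stats

def known_score(x, stats):
    # Higher is "more known": negative z-score distance to the closest class.
    best = min(sum(abs((xj - m) / s) for xj, m, s in zip(x, mean, std))
               for mean, std in stats.values())
    return -best

train = {0: [[0.0, 0.0], [0.2, -0.1], [-0.1, 0.1]],
         1: [[5.0, 5.0], [5.2, 4.9], [4.8, 5.1]]}
stats = fit_gaussians(train)
```

Normalizing by per-class standard deviations is what keeps large-magnitude feature dimensions from dominating the score, mirroring the role of z-score normalization in GHOST.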
|
2502.03360
|
A Beam's Eye View to Fluence Maps 3D Network for Ultra Fast VMAT
Radiotherapy Planning
|
eess.IV cs.AI physics.med-ph
|
Volumetric Modulated Arc Therapy (VMAT) revolutionizes cancer treatment by
precisely delivering radiation while sparing healthy tissues. Fluence map
generation, crucial in VMAT planning, traditionally involves complex,
iterative, and thus time-consuming processes. These fluence maps are
subsequently leveraged for leaf sequencing. The deep-learning approach presented
in this article aims to expedite this by directly predicting fluence maps from
patient data. We developed a 3D network which we trained in a supervised way
using a combination of L1 and L2 losses, and RT plans generated by Eclipse and
from the REQUITE dataset, taking the RT dose map as input and the fluence maps
computed from the corresponding RT plans as target. Our network predicts
jointly the 180 fluence maps corresponding to the 180 control points (CP) of
single arc VMAT plans. In order to help the network, we pre-process the input
dose by computing the projections of the 3D dose map to the beam's eye view
(BEV) of the 180 CPs, in the same coordinate system as the fluence maps. We
generated over 2000 VMAT plans using Eclipse to scale up the dataset size.
Additionally, we evaluated various network architectures and analyzed the
impact of increasing the dataset size. We measure performance in the 2D
fluence map domain using image metrics (PSNR, SSIM), as well as in the 3D
dose domain using the dose-volume histogram (DVH) on a validation dataset. The
network inference, which does not include the data loading and processing, is
less than 20ms. Using our proposed 3D network architecture as well as
increasing the dataset size using Eclipse improved the fluence map
reconstruction performance by approximately 8 dB in PSNR compared to a U-Net
architecture trained on the original REQUITE dataset. The resulting DVHs are
very close to those of the input target dose.
|
2502.03364
|
Scaling laws in wearable human activity recognition
|
cs.LG
|
Many deep architectures and self-supervised pre-training techniques have been
proposed for human activity recognition (HAR) from wearable multimodal sensors.
Scaling laws have the potential to help move towards more principled design by
linking model capacity with pre-training data volume. Yet, scaling laws have
not been established for HAR to the same extent as in language and vision. By
conducting an exhaustive grid search on both amount of pre-training data and
Transformer architectures, we establish the first known scaling laws for HAR.
We show that pre-training loss scales with a power law relationship to amount
of data and parameter count and that increasing the number of users in a
dataset results in a steeper improvement in performance than increasing data
per user, indicating that diversity of pre-training data is important, which
contrasts with some previously reported findings in self-supervised HAR. We show
that these scaling laws translate to downstream performance improvements on
three HAR benchmark datasets of postures, modes of locomotion and activities of
daily living: UCI HAR and WISDM Phone and WISDM Watch. Finally, we suggest some
previously published works should be revisited in light of these scaling laws
with more adequate model capacities.
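Establishing a scaling law amounts to fitting a power law L = a * N^(-b) to loss-versus-data points, typically by linear regression in log-log space. The sketch below recovers a known exponent from synthetic loss values; the numbers are made up and do not come from the paper's experiments.

```python
# Log-log linear regression to fit a power law L = a * N^(-b).
import math

N = [1e3, 1e4, 1e5, 1e6]
L = [2.0 * n ** -0.3 for n in N]  # synthetic losses with exponent 0.3

xs, ys = [math.log(n) for n in N], [math.log(l) for l in L]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
a, b = math.exp(my - slope * mx), -slope  # recovers a ~ 2.0, b ~ 0.3
```

The fitted exponent is what lets one extrapolate: it predicts how much extra pre-training data (or how many extra users, per the paper's finding) is needed to reach a target loss.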
|
2502.03365
|
A Match Made in Heaven? Matching Test Cases and Vulnerabilities With the
VUTECO Approach
|
cs.SE cs.CR cs.LG
|
Software vulnerabilities are commonly detected via static analysis,
penetration testing, and fuzzing. They can also be found by running unit tests
- so-called vulnerability-witnessing tests - that stimulate the
security-sensitive behavior with crafted inputs. Developing such tests is
difficult and time-consuming; thus, automated data-driven approaches could help
developers intercept vulnerabilities earlier. However, training and validating
such approaches require a lot of data, which is currently scarce. This paper
introduces VUTECO, a deep learning-based approach for collecting instances of
vulnerability-witnessing tests from Java repositories. VUTECO carries out two
tasks: (1) the "Finding" task to determine whether a test case is
security-related, and (2) the "Matching" task to relate a test case to the
exact vulnerability it is witnessing. VUTECO successfully addresses the Finding
task, achieving perfect precision and 0.83 F0.5 score on validated test cases
in VUL4J and returning 102 out of 145 (70%) correct security-related test cases
from 244 open-source Java projects. Despite showing sufficiently good
performance for the Matching task - i.e., 0.86 precision and 0.68 F0.5 score -
VUTECO failed to retrieve any valid match in the wild. Nevertheless, we
observed that in almost all of the matches, the test case was still
security-related despite being matched to the wrong vulnerability. In the end,
VUTECO can help find vulnerability-witnessing tests, though the matching with
the right vulnerability is yet to be solved; the findings obtained lay the
stepping stone for future research on the matter.
|
2502.03366
|
Rethinking Approximate Gaussian Inference in Classification
|
cs.LG stat.ML
|
In classification tasks, softmax functions are ubiquitously used as output
activations to produce predictive probabilities. Such outputs only capture
aleatoric uncertainty. To capture epistemic uncertainty, approximate Gaussian
inference methods have been proposed, which output Gaussian distributions over
the logit space. Predictives are then obtained as the expectations of the
Gaussian distributions pushed forward through the softmax. However, such
softmax Gaussian integrals cannot be solved analytically, and Monte Carlo (MC)
approximations can be costly and noisy. We propose a simple change in the
learning objective which allows the exact computation of predictives and enjoys
improved training dynamics, with no runtime or memory overhead. This framework
is compatible with a family of output activation functions that includes the
softmax, as well as element-wise normCDF and sigmoid. Moreover, it allows for
approximating the Gaussian pushforwards with Dirichlet distributions by
analytic moment matching. We evaluate our approach combined with several
approximate Gaussian inference methods (Laplace, HET, SNGP) on large- and
small-scale datasets (ImageNet, CIFAR-10), demonstrating improved uncertainty
quantification capabilities compared to softmax MC sampling. Code is available
at https://github.com/bmucsanyi/probit.
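For the element-wise normCDF activation, the Gaussian pushforward has a classical closed form: E_{x~N(mu, s^2)}[Phi(x)] = Phi(mu / sqrt(1 + s^2)), which is the kind of exact predictive the paper's framework exploits in place of Monte Carlo. The snippet below checks the identity against MC sampling; it illustrates the mathematics only, not the paper's training objective.

```python
# Closed-form Gaussian expectation of the probit, verified by Monte Carlo.
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, s = 0.7, 1.3
exact = phi(mu / math.sqrt(1.0 + s * s))  # analytic predictive

random.seed(0)
mc = sum(phi(random.gauss(mu, s)) for _ in range(200_000)) / 200_000
```

The analytic form removes both the variance and the cost of MC estimation, which is why exact predictives improve training dynamics at no runtime overhead.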
|
2502.03367
|
SyMANTIC: An Efficient Symbolic Regression Method for Interpretable and
Parsimonious Model Discovery in Science and Beyond
|
cs.LG
|
Symbolic regression (SR) is an emerging branch of machine learning focused on
discovering simple and interpretable mathematical expressions from data.
Although a wide variety of SR methods have been developed, they often face
challenges such as high computational cost, poor scalability with respect to
the number of input dimensions, fragility to noise, and an inability to balance
accuracy and complexity. This work introduces SyMANTIC, a novel SR algorithm
that addresses these challenges. SyMANTIC efficiently identifies (potentially
several) low-dimensional descriptors from a large set of candidates (from $\sim
10^5$ to $\sim 10^{10}$ or more) through a unique combination of mutual
information-based feature selection, adaptive feature expansion, and
recursively applied $\ell_0$-based sparse regression. In addition, it employs
an information-theoretic measure to produce an approximate set of
Pareto-optimal equations, each offering the best-found accuracy for a given
complexity. Furthermore, our open-source implementation of SyMANTIC, built on
the PyTorch ecosystem, facilitates easy installation and GPU acceleration. We
demonstrate the effectiveness of SyMANTIC across a range of problems, including
synthetic examples, scientific benchmarks, real-world material property
predictions, and chaotic dynamical system identification from small datasets.
Extensive comparisons show that SyMANTIC uncovers similar or more accurate
models at a fraction of the cost of existing SR methods.
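The $\ell_0$-based sparse regression step named in the abstract can be illustrated with a plain best-subset least-squares search; this is a generic sketch over a tiny candidate set, not the SyMANTIC implementation (which scales to far larger feature spaces):

```python
import itertools
import numpy as np

def l0_sparse_fit(X, y, k):
    """Exhaustive ell_0-constrained least squares: among all
    k-column subsets of X, keep the one with minimal residual."""
    _, d = X.shape
    best_err, best_cols, best_coef = np.inf, None, None
    for cols in itertools.combinations(range(d), k):
        A = X[:, cols]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = float(np.sum((A @ coef - y) ** 2))
        if err < best_err:
            best_err, best_cols, best_coef = err, cols, coef
    return best_err, best_cols, best_coef

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 1] - 3.0 * X[:, 4]        # ground truth uses features 1 and 4
err, cols, coef = l0_sparse_fit(X, y, k=2)
```

Exhaustive search is exponential in the subset size, which is why methods like SyMANTIC pair it with feature screening (e.g., mutual-information selection) before the sparse fit.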
|
2502.03368
|
PalimpChat: Declarative and Interactive AI analytics
|
cs.AI cs.DB cs.IR
|
Thanks to the advances in generative architectures and large language models,
data scientists can now code pipelines of machine-learning operations to
process large collections of unstructured data. Recent progress has seen the
rise of declarative AI frameworks (e.g., Palimpzest, Lotus, and DocETL) to
build optimized and increasingly complex pipelines, but these systems often
remain accessible only to expert programmers. In this demonstration, we present
PalimpChat, a chat-based interface to Palimpzest that bridges this gap by
letting users create and run sophisticated AI pipelines through natural
language alone. By integrating Archytas, a ReAct-based reasoning agent, and
Palimpzest's suite of relational and LLM-based operators, PalimpChat provides a
practical illustration of how a chat interface can make declarative AI
frameworks truly accessible to non-experts.
Our demo system is publicly available online. At SIGMOD'25, participants can
explore three real-world scenarios--scientific discovery, legal discovery, and
real estate search--or apply PalimpChat to their own datasets. In this paper,
we focus on how PalimpChat, supported by the Palimpzest optimizer, simplifies
complex AI workflows such as extracting and analyzing biomedical data.
|
2502.03369
|
Learning from Active Human Involvement through Proxy Value Propagation
|
cs.AI cs.RO
|
Learning from active human involvement enables the human subject to actively
intervene and demonstrate to the AI agent during training. The interaction and
corrective feedback from the human bring safety and AI alignment to the learning
process. In this work, we propose a new reward-free active human involvement
method called Proxy Value Propagation for policy optimization. Our key insight
is that a proxy value function can be designed to express human intents,
wherein state-action pairs in the human demonstration are labeled with high
values, while agent actions that trigger human intervention receive low values.
Through the TD-learning framework, labeled values of demonstrated state-action
pairs are further propagated to other unlabeled data generated from agents'
exploration. The proxy value function thus induces a policy that faithfully
emulates human behaviors. Human-in-the-loop experiments show the generality and
efficiency of our method. With minimal modification to existing reinforcement
learning algorithms, our method can learn to solve continuous and discrete
control tasks with various human control devices, including the challenging
task of driving in Grand Theft Auto V. Demo video and code are available at:
https://metadriverse.github.io/pvp
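The proxy-value labeling described above can be sketched as a tabular TD update in which demonstrated actions receive a high target, intervened actions a low target, and unlabeled transitions bootstrap as usual. This toy sketch only illustrates the labeling idea and is not the paper's implementation:

```python
import numpy as np

def pvp_td_update(Q, s, a, s_next, label, alpha=0.5, gamma=0.9):
    """One reward-free TD update with proxy value labels:
      label=+1 : human-demonstrated action  -> high proxy value target
      label=-1 : action the human intervened on -> low proxy value target
      label= 0 : unlabeled exploration -> bootstrap, propagating labels"""
    if label != 0:
        target = float(label)              # proxy value overrides bootstrap
    else:
        target = gamma * Q[s_next].max()   # labeled values flow backward
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((3, 2))
Q = pvp_td_update(Q, s=1, a=0, s_next=2, label=+1)   # demonstration
Q = pvp_td_update(Q, s=1, a=1, s_next=2, label=-1)   # intervention
Q = pvp_td_update(Q, s=0, a=0, s_next=1, label=0)    # propagation
```

After these updates the demonstrated action dominates at state 1, and its value has already propagated one step back to state 0, which is the mechanism the abstract calls proxy value propagation.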
|
2502.03370
|
Deep Learning-Based Approach for Identification of Potato Leaf Diseases
Using Wrapper Feature Selection and Feature Concatenation
|
cs.CV cs.LG
|
The potato is a widely grown crop in many regions of the world, and potato
farming has gained considerable traction in recent decades. Potatoes are
susceptible to several diseases that stunt their development, with leaf
diseases being particularly significant. Early Blight and Late Blight are two
prevalent leaf diseases that affect potato plants. The early detection of these
diseases would be beneficial for enhancing the yield of this crop. The ideal
solution is to use image processing to identify and analyze these disorders.
Here, we present an autonomous method based on image processing and machine
learning to detect late blight disease affecting potato leaves. The proposed
method comprises four different phases: (1) Histogram Equalization is used to
improve the quality of the input image; (2) feature extraction is performed
using a Deep CNN model, then these extracted features are concatenated; (3)
feature selection is performed using wrapper-based feature selection; (4)
classification is performed using an SVM classifier and its variants. This
proposed method achieves the highest accuracy of 99% using SVM by selecting 550
features.
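Phase (1) of the pipeline, histogram equalization, is standard image preprocessing and can be sketched directly; this generic CDF-remapping sketch stands in for whatever implementation the paper uses:

```python
import numpy as np

def histogram_equalization(img):
    """Spread an 8-bit grayscale histogram over the full [0, 255]
    range by remapping each level through the normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # CDF at the darkest populated level
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
equalized = histogram_equalization(low_contrast)
```

A low-contrast leaf image occupying only levels 100-139 is stretched to span the full 0-255 range, which improves input quality before the CNN feature extraction in phase (2).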
|
2502.03373
|
Demystifying Long Chain-of-Thought Reasoning in LLMs
|
cs.CL cs.LG
|
Scaling inference compute enhances reasoning in large language models (LLMs),
with long chains-of-thought (CoTs) enabling strategies like backtracking and
error correction. Reinforcement learning (RL) has emerged as a crucial method
for developing these capabilities, yet the conditions under which long CoTs
emerge remain unclear, and RL training requires careful design choices. In this
study, we systematically investigate the mechanics of long CoT reasoning,
identifying the key factors that enable models to generate long CoT
trajectories. Through extensive supervised fine-tuning (SFT) and RL
experiments, we present four main findings: (1) While SFT is not strictly
necessary, it simplifies training and improves efficiency; (2) Reasoning
capabilities tend to emerge with increased training compute, but their
development is not guaranteed, making reward shaping crucial for stabilizing
CoT length growth; (3) Scaling verifiable reward signals is critical for RL. We
find that leveraging noisy, web-extracted solutions with filtering mechanisms
shows strong potential, particularly for out-of-distribution (OOD) tasks such
as STEM reasoning; and (4) Core abilities like error correction are inherently
present in base models, but incentivizing these skills effectively for complex
tasks via RL demands significant compute, and measuring their emergence
requires a nuanced approach. These insights provide practical guidance for
optimizing training strategies to enhance long CoT reasoning in LLMs. Our code
is available at: https://github.com/eddycmu/demystify-long-cot.
|
2502.03375
|
Interactive Visualization Recommendation with Hier-SUCB
|
cs.IR
|
Visualization recommendation aims to enable rapid visual analysis of massive
datasets. In real-world scenarios, it is essential to quickly gather and
comprehend user preferences to cover users from diverse backgrounds, including
varying skill levels and analytical tasks. Previous approaches to personalized
visualization recommendations are non-interactive and rely on initial user data
for new users. As a result, these models cannot effectively explore options or
adapt to real-time feedback. To address this limitation, we propose an
interactive personalized visualization recommendation (PVisRec) system that
learns on user feedback from previous interactions. For more interactive and
accurate recommendations, we propose Hier-SUCB, a contextual combinatorial
semi-bandit in the PVisRec setting. Theoretically, we show an improved overall
regret bound with the same order in time but an improved order in the size of
the action space.
We further demonstrate the effectiveness of Hier-SUCB through extensive
experiments where it is comparable to offline methods and outperforms other
bandit algorithms in the setting of visualization recommendation.
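The bandit exploration-exploitation step underlying recommenders like Hier-SUCB can be illustrated with plain UCB1 arm selection; this is a generic stand-in, not the paper's hierarchical semi-bandit:

```python
import math

def ucb_select(counts, means, t, alpha=1.0):
    """Pick the arm maximizing empirical mean + confidence bonus
    (UCB1). Unplayed arms get an infinite bonus and are tried first."""
    best_arm, best_score = 0, -math.inf
    for arm, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            score = math.inf
        else:
            score = mu + alpha * math.sqrt(2.0 * math.log(t) / n)
        if score > best_score:
            best_arm, best_score = arm, score
    return best_arm

# An unplayed visualization configuration is explored first ...
first = ucb_select(counts=[3, 5, 0], means=[0.4, 0.6, 0.0], t=9)
# ... and with equal counts, the highest-rated one is exploited.
later = ucb_select(counts=[100, 100, 100], means=[0.4, 0.6, 0.0], t=300)
```

In the PVisRec setting each "arm" would correspond to a candidate visualization configuration, with user feedback supplying the rewards.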
|
2502.03376
|
Ethical Considerations for the Military Use of Artificial Intelligence
in Visual Reconnaissance
|
cs.CY cs.CV
|
This white paper underscores the critical importance of responsibly deploying
Artificial Intelligence (AI) in military contexts, emphasizing a commitment to
ethical and legal standards. The evolving role of AI in the military goes
beyond mere technical applications, necessitating a framework grounded in
ethical principles. The discussion within the paper delves into ethical AI
principles, particularly focusing on the Fairness, Accountability,
Transparency, and Ethics (FATE) guidelines. Noteworthy considerations encompass
transparency, justice, non-maleficence, and responsibility. Importantly, the
paper extends its examination to military-specific ethical considerations,
drawing insights from the Just War theory and principles established by
prominent entities. In addition to the identified principles, the paper
introduces further ethical considerations specifically tailored for military AI
applications. These include traceability, proportionality, governability,
responsibility, and reliability. The application of these ethical principles is
discussed on the basis of three use cases in the domains of sea, air, and land.
Methods of automated sensor data analysis, eXplainable AI (XAI), and intuitive
user experience design are used to keep the use cases close to real-world
scenarios. This comprehensive approach to ethical considerations in military AI
reflects a commitment to aligning technological advancements with established
ethical frameworks. It recognizes the need for a balance between leveraging
AI's potential benefits in military operations and upholding moral and legal
standards. The inclusion of these ethical principles serves as a foundation for
responsible and accountable use of AI in the complex and dynamic landscape of
military scenarios.
|
2502.03377
|
Energy-Efficient Flying LoRa Gateways: A Multi-Agent Reinforcement
Learning Approach
|
cs.NI cs.LG
|
With the rapid development of next-generation Internet of Things (NG-IoT)
networks, the increasing number of connected devices has led to a surge in
power consumption. This rise in energy demand poses significant challenges to
resource availability and raises sustainability concerns for large-scale IoT
deployments. Efficient energy utilization in communication networks,
particularly for power-constrained IoT devices, has thus become a critical area
of research. In this paper, we deployed flying LoRa gateways (GWs) mounted on
unmanned aerial vehicles (UAVs) to collect data from LoRa end devices (EDs) and
transmit it to a central server. Our primary objective is to maximize the
global system energy efficiency (EE) of wireless LoRa networks by joint
optimization of transmission power (TP), spreading factor (SF), bandwidth (W),
and ED association. To solve this challenging problem, we model it as
a partially observable Markov decision process (POMDP), where each flying LoRa
GW acts as a learning agent using a cooperative Multi-Agent Reinforcement
Learning (MARL) approach under centralized training and decentralized execution
(CTDE). Simulation results demonstrate that our proposed method, based on the
multi-agent proximal policy optimization (MAPPO) algorithm, significantly
improves the global system EE and surpasses the conventional MARL schemes.
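The global energy-efficiency objective can be sketched as total throughput over total transmit power, using the standard LoRa bit-rate formula $R_b = \mathrm{SF}\cdot(W/2^{\mathrm{SF}})\cdot\mathrm{CR}$. The link values below are hypothetical and the sketch only states the objective, not the MAPPO optimization:

```python
def lora_rate(sf, bandwidth_hz, coding_rate=4 / 5):
    """LoRa bit rate (bits/s): SF * (W / 2^SF) * CR."""
    return sf * (bandwidth_hz / 2 ** sf) * coding_rate

def global_energy_efficiency(links):
    """EE objective from the abstract: sum of link rates divided by
    sum of transmit powers over all GW-ED links."""
    total_rate = sum(lora_rate(sf, w) for _, sf, w in links)
    total_power = sum(tp for tp, _, _ in links)
    return total_rate / total_power

# Hypothetical links: (TP in watts, SF, W in Hz)
links = [(0.10, 7, 125e3), (0.05, 9, 125e3)]
ee = global_energy_efficiency(links)
```

Joint optimization of TP, SF, W, and ED association trades these terms against each other: raising SF improves range but cuts the rate exponentially, while raising TP adds directly to the denominator.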
|
2502.03381
|
Integrating automatic speech recognition into remote healthcare
interpreting: A pilot study of its impact on interpreting quality
|
cs.CL
|
This paper reports on the results from a pilot study investigating the impact
of automatic speech recognition (ASR) technology on interpreting quality in
remote healthcare interpreting settings. Employing a within-subjects experiment
design with four randomised conditions, this study utilises scripted medical
consultations to simulate dialogue interpreting tasks. It involves four trainee
interpreters with a language combination of Chinese and English. It also
gathers participants' experience and perceptions of ASR support through cued
retrospective reports and semi-structured interviews. Preliminary data suggest
that the availability of ASR, specifically the access to full ASR transcripts
and to ChatGPT-generated summaries based on ASR, effectively improved
interpreting quality. Varying types of ASR output had different impacts on the
distribution of interpreting error types. Participants reported similar
interactive experiences with the technology, expressing their preference for
full ASR transcripts. This pilot study shows encouraging results of applying
ASR to dialogue-based healthcare interpreting and offers insights into the
optimal ways to present ASR output to enhance interpreter experience and
performance. However, it should be emphasised that the main purpose of this
study was to validate the methodology and that further research with a larger
sample size is necessary to confirm these findings.
|