| id | title | categories | abstract |
|---|---|---|---|
2502.14619
|
Reward Models Identify Consistency, Not Causality
|
cs.LG cs.AI cs.CL
|
Reward models (RMs) play a crucial role in aligning large language models
(LLMs) with human preferences and enhancing reasoning quality. Traditionally,
RMs are trained to rank candidate outputs based on their correctness and
coherence. However, in this work, we present several surprising findings that
challenge common assumptions about RM behavior. Our analysis reveals that
state-of-the-art reward models prioritize structural consistency over causal
correctness. Specifically, removing the problem statement has minimal impact on
reward scores, whereas altering numerical values or disrupting the reasoning
flow significantly affects RM outputs. Furthermore, RMs exhibit a strong
dependence on complete reasoning trajectories: truncated or incomplete steps
lead to significant variations in reward assignments, indicating that RMs
primarily rely on learned reasoning patterns rather than explicit problem
comprehension. These findings hold across multiple architectures, datasets, and
tasks, leading to three key insights: (1) RMs primarily assess coherence rather
than true reasoning quality; (2) The role of explicit problem comprehension in
reward assignment is overstated; (3) Current RMs may be more effective at
ranking responses than verifying logical validity. Our results suggest a
fundamental limitation in existing reward modeling approaches, emphasizing the
need for a shift toward causality-aware reward models that go beyond
consistency-driven evaluation.
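The perturbation analysis described above can be sketched as a small harness: given any scoring function, measure how far its score moves under each ablation. The scorer used in the test is a deliberately naive stand-in (it only counts step markers), illustrating a consistency-only scorer that, like the RMs studied here, is unmoved by deleting the problem statement. All names are illustrative, not the paper's code.

```python
import re

def remove_problem(problem, solution):
    # Ablation 1: delete the problem statement entirely.
    return "", solution

def alter_numbers(problem, solution):
    # Ablation 2: shift every number appearing in the solution.
    return problem, re.sub(r"\d+", lambda m: str(int(m.group()) + 7), solution)

def truncate_steps(problem, solution, keep=1):
    # Ablation 3: keep only the first `keep` reasoning steps.
    return problem, "\n".join(solution.split("\n")[:keep])

def sensitivity(score_fn, problem, solution, perturb):
    # Absolute score change caused by one perturbation.
    base = score_fn(problem, solution)
    p, s = perturb(problem, solution)
    return abs(base - score_fn(p, s))
```

A structure-only scorer such as `lambda p, s: s.count("Step")` shows zero sensitivity to `remove_problem` but nonzero sensitivity to `truncate_steps`, mirroring the reported behavior.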
|
2502.14620
|
Exploring RWKV for Sentence Embeddings: Layer-wise Analysis and Baseline
Comparison for Semantic Similarity
|
cs.CL cs.AI
|
This paper investigates the efficacy of RWKV, a novel language model
architecture known for its linear attention mechanism, for generating sentence
embeddings in a zero-shot setting. I conduct a layer-wise analysis to evaluate
the semantic similarity captured by embeddings from different hidden layers of
a pre-trained RWKV model. The performance is assessed on the Microsoft Research
Paraphrase Corpus (MRPC) dataset using Spearman correlation and compared
against a GloVe-based baseline. My results indicate that while RWKV embeddings
capture some semantic relatedness, they underperform compared to the GloVe
baseline in terms of Spearman correlation. I also analyze the inference time
and GPU memory usage, highlighting the computational trade-offs associated with
RWKV embeddings. The findings suggest that while RWKV offers potential
advantages in terms of linear scaling, its zero-shot sentence embedding quality
for semantic similarity tasks requires further investigation and potential
task-specific fine-tuning to match or exceed simpler baselines.
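The evaluation described here reduces to computing the Spearman correlation between cosine similarities of sentence-pair embeddings and the gold labels, repeated per layer. A minimal sketch, assuming tie-free data (function names are illustrative):

```python
import numpy as np

def cosine_sim(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spearman(x, y):
    # Spearman's rho = Pearson correlation of rank-transformed values.
    # (argsort-of-argsort ranks assume no ties; use average ranks otherwise)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A layer-wise analysis then loops over hidden layers, computes `cosine_sim` for each sentence pair from that layer's embeddings, and reports `spearman(sims, labels)`.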
|
2502.14625
|
Multi-Record Web Page Information Extraction From News Websites
|
cs.CL cs.IR
|
In this paper, we focus on the problem of extracting information from web
pages containing many records, a task of growing importance in the era of
massive web data. Recently, the development of neural network methods has
improved the quality of information extraction from web pages. Nevertheless,
most of the research and datasets are aimed at studying detail pages. This
has left multi-record "list pages" relatively understudied, despite their
widespread presence and practical significance.
To address this gap, we created a large-scale, open-access dataset
specifically designed for list pages. This is the first dataset for this task
in the Russian language. Our dataset contains 13,120 web pages with news lists,
significantly exceeding existing datasets in both scale and complexity. Our
dataset contains attributes of various types, including optional and
multi-valued, providing a realistic representation of real-world list pages.
These features make our dataset a valuable resource for studying information
extraction from pages containing many records.
Furthermore, we propose our own multi-stage information extraction methods.
In this work, we explore and demonstrate several strategies for applying
MarkupLM to the specific challenges of multi-record web pages. Our experiments
validate the advantages of our methods.
By releasing our dataset to the public, we aim to advance the field of
information extraction from multi-record pages.
|
2502.14627
|
ATRI: Mitigating Multilingual Audio Text Retrieval Inconsistencies by
Reducing Data Distribution Errors
|
cs.SD cs.AI eess.AS
|
Multilingual audio-text retrieval (ML-ATR) is a challenging task that aims to
retrieve audio clips or multilingual texts from databases. However, existing
ML-ATR schemes suffer from inconsistencies in instance similarity matching
across languages. We theoretically analyze the inconsistency in terms of both
multilingual modal alignment direction error and weight error, and propose the
theoretical weight error upper bound for quantifying the inconsistency. Based
on the analysis of the weight error upper bound, we find that the inconsistency
problem stems from the data distribution error caused by random sampling of
languages. We propose a consistent ML-ATR scheme using 1-to-k contrastive
learning and audio-English co-anchor contrastive learning, aiming to mitigate
the negative impact of data distribution error on recall and consistency in
ML-ATR. Experimental results on the translated AudioCaps and Clotho datasets
show that our scheme achieves state-of-the-art performance on recall and
consistency metrics for eight mainstream languages, including English. Our code
will be available at https://github.com/ATRI-ACL/ATRI-ACL.
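The 1-to-k idea can be illustrated with an InfoNCE-style loss in which each audio anchor treats all k language versions of its paired caption as positives, so every language contributes to every update rather than being drawn by random sampling. This sketch assumes L2-normalized embeddings and is not the authors' implementation:

```python
import numpy as np

def one_to_k_infonce(audio, texts, pos_mask, tau=0.07):
    """audio: (n, d); texts: (n*k, d); pos_mask: (n, n*k) bool,
    True where the text is one of the k translations paired with that audio."""
    logits = audio @ texts.T / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # average negative log-likelihood over the k positives of each anchor
    return float(-(log_prob * pos_mask).sum() / pos_mask.sum())
```

The loss is lowest when every anchor is close to all k of its translations at once, which is the consistency property the scheme targets.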
|
2502.14628
|
PEARL: Towards Permutation-Resilient LLMs
|
cs.LG cs.CL
|
The in-context learning (ICL) capability of large language models (LLMs)
enables them to perform challenging tasks using provided demonstrations.
However, ICL is highly sensitive to the ordering of demonstrations, leading to
instability in predictions. This paper shows that this vulnerability can be
exploited to design a natural attack - difficult for model providers to detect
- that achieves a success rate of nearly 80% on LLaMA-3 by simply permuting the
demonstrations. Existing mitigation methods primarily rely on post-processing
and fail to enhance the model's inherent robustness to input permutations,
raising concerns about safety and reliability of LLMs. To address this issue,
we propose Permutation-resilient learning (PEARL), a novel framework based on
distributionally robust optimization (DRO), which optimizes model performance
against the worst-case input permutation. Specifically, PEARL consists of a
permutation-proposal network (P-Net) and the LLM. The P-Net generates the most
challenging permutations by treating permutation generation as an optimal
transport problem, which
is solved using an entropy-constrained Sinkhorn algorithm. Through minimax
optimization, the P-Net and the LLM iteratively optimize against each other,
progressively improving the LLM's robustness. Experiments on synthetic
pre-training and real-world instruction tuning tasks demonstrate that PEARL
effectively mitigates permutation attacks and enhances performance. Notably,
despite being trained on fewer shots and shorter contexts, PEARL achieves
performance gains of up to 40% when scaled to many-shot and long-context
scenarios, highlighting its efficiency and generalization capabilities.
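The entropy-constrained Sinkhorn step that the P-Net relies on is the standard algorithm for entropy-regularized optimal transport: alternately rescale the rows and columns of a Gibbs kernel until the plan is (approximately) doubly stochastic. A generic sketch with uniform marginals, not the paper's P-Net code:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, iters=300):
    # Entropy-regularized OT plan between uniform marginals r and c.
    n, m = cost.shape
    r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)          # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = r / (K @ v)              # match row marginals
        v = c / (K.T @ u)            # match column marginals
    return u[:, None] * K * v[None, :]
```

As `reg` shrinks, the plan approaches a hard permutation matrix, which is how a soft permutation proposal can be sharpened.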
|
2502.14630
|
Understanding long-term energy use in off-grid solar home systems in
sub-Saharan Africa
|
eess.SY cs.SY
|
Solar home systems provide low-cost electricity access for rural off-grid
communities. As access to them increases, more long-term data becomes available
on how these systems are used throughout their lifetime. This work analyses a
dataset of 1,000 systems across sub-Saharan Africa. Dynamic time warping
clustering was applied to the load demand data from the systems, identifying
five distinct archetypal daily load profiles and their occurrence across the
dataset. Temporal analysis reveals a general decline in daily energy
consumption over time, with 57% of households reducing their usage after the
first year of ownership. On average, there is a 33% decrease in daily
consumption by the end of the second year compared to the peak demand, which
occurs on the 96th day. Combining the load demand analysis with payment data
shows that this decrease in energy consumption is observed even in households
that are not experiencing economic hardship, indicating there are reasons
beyond financial constraints for decreasing energy use once energy access is
obtained.
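Dynamic time warping, the distance underlying the clustering above, aligns two daily load profiles while allowing local stretching in time. A textbook O(nm) sketch for one-dimensional series:

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic-programming DTW distance with absolute-difference cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Unlike Euclidean distance, DTW judges a time-stretched copy of the same profile as close, which suits daily load curves whose evening peaks drift by an hour or two between households.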
|
2502.14631
|
Synergistic Fusion of Multi-Source Knowledge via Evidence Theory for
High-Entropy Alloy Discovery
|
cs.LG
|
Discovering novel high-entropy alloys (HEAs) with desirable properties is
challenging due to the vast compositional space and complex phase formation
mechanisms. Efficient exploration of this space requires a strategic approach
that integrates heterogeneous knowledge sources. Here, we propose a framework
that systematically combines knowledge extracted from computational material
datasets with domain knowledge distilled from scientific literature using large
language models (LLMs). A central feature of this approach is the explicit
consideration of element substitutability, identifying chemically similar
elements that can be interchanged to potentially stabilize desired HEAs.
Dempster-Shafer theory, a mathematical framework for reasoning under
uncertainty, is employed to model and combine substitutabilities based on
aggregated evidence from multiple sources. The framework predicts the phase
stability of candidate HEA compositions and is systematically evaluated on
quaternary alloy systems, demonstrating superior performance compared to
baseline machine learning models and methods reliant on single-source evidence
in cross-validation experiments. By leveraging multi-source knowledge, the
framework retains robust predictive power even when key elements are absent
from the training data, underscoring its potential for knowledge transfer and
extrapolation. Furthermore, the enhanced interpretability of the methodology
offers insights into the fundamental factors governing HEA formation. Overall,
this work provides a promising strategy for accelerating HEA discovery by
integrating computational and textual knowledge sources, enabling efficient
exploration of vast compositional spaces with improved generalization and
interpretability.
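Dempster's rule of combination, the mechanism used here to fuse substitutability evidence, multiplies mass functions over subsets of hypotheses and renormalizes away the conflicting mass. A minimal sketch over `frozenset` focal elements, illustrative rather than the paper's implementation:

```python
def combine(m1, m2):
    # Dempster's rule: conjunctive combination with conflict renormalization.
    # m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    combined, conflict = {}, 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {A: w / (1.0 - conflict) for A, w in combined.items()}
```

Two sources each lending partial support to, say, "phase-stable" reinforce one another, while mass kept on the full frame of discernment encodes ignorance rather than disagreement.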
|
2502.14634
|
CER: Confidence Enhanced Reasoning in LLMs
|
cs.LG
|
Ensuring the reliability of Large Language Models (LLMs) in complex reasoning
tasks remains a formidable challenge, particularly in scenarios that demand
precise mathematical calculations and knowledge-intensive open-domain
generation. In this work, we introduce an uncertainty-aware framework designed
to enhance the accuracy of LLM responses by systematically incorporating model
confidence at critical decision points. We propose an approach that encourages
multi-step reasoning in LLMs and quantify the confidence of intermediate
answers such as numerical results in mathematical reasoning and proper nouns in
open-domain generation. Then, the overall confidence of each reasoning chain is
evaluated based on the confidence of these critical intermediate steps.
Finally, we aggregate the answers of the generated response paths in a way that
reflects the reliability of each generated response (as opposed to
self-consistency, in which each generated chain contributes equally to majority
voting). We conducted extensive experiments on five datasets, three
mathematical and two open-domain, using four LLMs. The results consistently validate the
effectiveness of our novel confidence aggregation method, leading to an
accuracy improvement of up to 7.4% and 5.8% over baseline approaches in math
and open-domain generation tasks, respectively. Code is publicly available at
https://github.com/Aquasar11/CER.
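The contrast with self-consistency can be made concrete: majority voting counts each sampled chain once, whereas confidence-weighted aggregation lets one high-confidence chain outvote several low-confidence ones. A minimal sketch (the paper's confidence estimator for intermediate steps is not reproduced here):

```python
from collections import defaultdict

def majority_answer(paths):
    # Self-consistency baseline: every chain contributes one equal vote.
    counts = defaultdict(int)
    for answer, _conf in paths:
        counts[answer] += 1
    return max(counts, key=counts.get)

def confidence_weighted_answer(paths):
    # Weight each chain's vote by its estimated overall confidence.
    scores = defaultdict(float)
    for answer, conf in paths:
        scores[answer] += conf
    return max(scores, key=scores.get)
```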
|
2502.14637
|
ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality
Protein Backbone Generation
|
cs.LG cs.AI
|
Protein backbone generation plays a central role in de novo protein design
and is significant for many biological and medical applications. Although
diffusion and flow-based generative models provide potential solutions to this
challenging task, they often generate proteins with undesired designability and
suffer computational inefficiency. In this study, we propose a novel rectified
quaternion flow (ReQFlow) matching method for fast and high-quality protein
backbone generation. In particular, our method generates a local translation
and a 3D rotation from random noise for each residue in a protein chain,
representing each 3D rotation as a unit quaternion and constructing its flow by
spherical linear interpolation (SLERP) in an exponential format. We train the
model by quaternion flow (QFlow) matching with guaranteed numerical stability
and rectify the QFlow model to accelerate its inference and improve the
designability of generated protein backbones, leading to the proposed ReQFlow
model. Experiments show that ReQFlow achieves state-of-the-art performance in
protein backbone generation while requiring much fewer sampling steps and
significantly less inference time (e.g., being 37x faster than RFDiffusion and
62x faster than Genie2 when generating a backbone of length 300), demonstrating
its effectiveness and efficiency. The code is available at
https://github.com/AngxiaoYue/ReQFlow.
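Spherical linear interpolation between unit quaternions, the building block named above, follows a great-circle arc on the 3-sphere. A standard sketch, not the authors' exponential-format construction:

```python
import numpy as np

def slerp(q0, q1, t):
    # Interpolate between unit quaternions q0 and q1 along the shorter arc.
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(q0 @ q1)
    if dot < 0.0:                # q and -q encode the same rotation
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-8:             # nearly identical: avoid dividing by ~0
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```

Because the interpolant stays on the unit sphere, intermediate points remain valid rotations, which linear interpolation of quaternions would not guarantee.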
|
2502.14638
|
NAVIG: Natural Language-guided Analysis with Vision Language Models for
Image Geo-localization
|
cs.CL cs.CV
|
Image geo-localization is the task of predicting the specific location of an
image and requires complex reasoning across visual, geographical, and cultural
contexts. While prior Vision Language Models (VLMs) achieve the best accuracy
on this task, there is a dearth of high-quality datasets and models for analytical
reasoning. We first create NaviClues, a high-quality dataset derived from
GeoGuessr, a popular geography game, to supply examples of expert reasoning
from language. Using this dataset, we present Navig, a comprehensive image
geo-localization framework integrating global and fine-grained image
information. By reasoning with language, Navig reduces the average distance
error by 14% compared to previous state-of-the-art models while requiring fewer
than 1000 training samples. Our dataset and code are available at
https://github.com/SparrowZheyuan18/Navig/.
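The headline metric, average distance error, is typically the mean great-circle distance between predicted and true coordinates. A self-contained haversine sketch; the paper's exact evaluation code may differ:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two (lat, lon) points in degrees.
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def mean_distance_error(preds, truths):
    # Average great-circle error over a set of (lat, lon) predictions.
    return sum(haversine_km(*p, *t) for p, t in zip(preds, truths)) / len(preds)
```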
|
2502.14642
|
How Far are LLMs from Being Our Digital Twins? A Benchmark for
Persona-Based Behavior Chain Simulation
|
cs.CL
|
Recently, LLMs have garnered increasing attention across academic disciplines
for their potential as human digital twins, virtual proxies designed to
replicate individuals and autonomously perform tasks such as decision-making,
problem-solving, and reasoning on their behalf. However, current evaluations of
LLMs primarily emphasize dialogue simulation while overlooking human behavior
simulation, which is crucial for digital twins. To address this gap, we
introduce BehaviorChain, the first benchmark for evaluating LLMs' ability to
simulate continuous human behavior. BehaviorChain comprises diverse,
high-quality, persona-based behavior chains, totaling 15,846 distinct behaviors
across 1,001 unique personas, each with detailed history and profile metadata.
For evaluation, we integrate persona metadata into LLMs and employ them to
iteratively infer contextually appropriate behaviors within dynamic scenarios
provided by BehaviorChain. Comprehensive evaluation results demonstrate that
even state-of-the-art models struggle with accurately simulating continuous
human behavior.
|
2502.14643
|
Length-Controlled Margin-Based Preference Optimization without Reference
Model
|
cs.CL
|
Direct Preference Optimization (DPO) is a widely adopted offline algorithm
for preference-based reinforcement learning from human feedback (RLHF),
designed to improve training simplicity and stability by redefining reward
functions. However, DPO is hindered by several limitations, including length
bias, memory inefficiency, and probability degradation. To address these
challenges, we propose Length-Controlled Margin-Based Preference Optimization
(LMPO), a more efficient and robust alternative. LMPO introduces a uniform
reference model as an upper bound for the DPO loss, enabling a more accurate
approximation of the original optimization objective. Additionally, an average
log-probability optimization strategy is employed to minimize discrepancies
between training and inference phases. A key innovation of LMPO lies in its
Length-Controlled Margin-Based loss function, integrated within the
Bradley-Terry framework. This loss function regulates response length while
simultaneously widening the margin between preferred and rejected outputs. By
doing so, it mitigates probability degradation for both accepted and discarded
responses, addressing a significant limitation of existing methods. We evaluate
LMPO against state-of-the-art preference optimization techniques on two
open-ended large language models, Mistral and LLaMA3, across six conditional
benchmarks. Our experimental results demonstrate that LMPO effectively controls
response length, reduces probability degradation, and outperforms existing
approaches. The code is available at https://github.com/gengxuli/LMPO.
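To make the Bradley-Terry margin idea concrete, here is a generic margin-based preference loss on average log-probabilities with a simple length-difference penalty. This is an illustrative stand-in with assumed hyperparameters `beta` and `alpha`, not LMPO's actual objective:

```python
import math

def margin_bt_loss(lp_w, lp_l, len_w, len_l, beta=1.0, alpha=0.01):
    # lp_w, lp_l: average per-token log-probs of preferred / rejected responses.
    # The length term (illustrative) shrinks the margin when the preferred
    # response wins mainly by being longer.
    margin = beta * (lp_w - lp_l) - alpha * (len_w - len_l)
    return math.log1p(math.exp(-margin))   # = -log sigmoid(margin)
```

Widening the margin drives the loss toward zero, while a length-inflated win is discounted, which is the qualitative behavior the abstract describes.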
|
2502.14644
|
LIFT: Improving Long Context Understanding of Large Language Models
through Long Input Fine-Tuning
|
cs.CL
|
Long context understanding remains challenging for large language models due
to their limited context windows. This paper presents Long Input Fine-Tuning
(LIFT), a novel framework for long-context modeling that can improve the
long-context performance of arbitrary (short-context) LLMs by dynamically
adapting model parameters based on the long input. Importantly, rather than
endlessly extending the context window size to accommodate increasingly longer
inputs, LIFT stores and absorbs the long input in the model's parameters. By
fine-tuning the long input into model parameters, LIFT allows
short-context LLMs to answer questions even when the required information is
not provided in the context during inference. Furthermore, to enhance LIFT
performance while maintaining the original in-context learning (ICL)
capabilities, we introduce Gated Memory, a specialized attention adapter that
automatically balances long input memorization and ICL. We provide a
comprehensive analysis of the strengths and limitations of LIFT on long context
understanding, offering valuable directions for future research.
|
2502.14645
|
Edit Once, Update Everywhere: A Simple Framework for Cross-Lingual
Knowledge Synchronization in LLMs
|
cs.CL cs.AI
|
Knowledge editing allows for efficient adaptation of large language models
(LLMs) to new information or corrections without requiring full retraining.
However, prior methods typically focus on either single-language editing or
basic multilingual editing, failing to achieve true cross-linguistic knowledge
synchronization. To address this, we present a simple and practical
state-of-the-art (SOTA) recipe, Cross-Lingual Knowledge Democracy Edit (X-KDE),
designed to propagate knowledge from a dominant language to other languages
effectively. Our X-KDE comprises two stages: (i) Cross-lingual Edition
Instruction Tuning (XE-IT), which fine-tunes the model on a curated parallel
dataset to modify in-scope knowledge while preserving unrelated information,
and (ii) Target-language Preference Optimization (TL-PO), which applies
advanced optimization techniques to ensure consistency across languages,
fostering the transfer of updates. Additionally, we contribute a high-quality,
cross-lingual dataset, specifically designed to enhance knowledge transfer
across languages. Extensive experiments on the Bi-ZsRE and MzsRE benchmarks
show that X-KDE significantly enhances cross-lingual performance, achieving an
average improvement of +8.19%, while maintaining high accuracy in monolingual
settings.
|
2502.14648
|
Variance Reduction Methods Do Not Need to Compute Full Gradients:
Improved Efficiency through Shuffling
|
cs.LG math.OC
|
In today's world, machine learning is hard to imagine without large training
datasets and models. This has led to the use of stochastic methods for
training, such as stochastic gradient descent (SGD). SGD provides weak
theoretical guarantees of convergence, but there are modifications, such as
Stochastic Variance Reduced Gradient (SVRG) and StochAstic Recursive grAdient
algoritHm (SARAH), that can reduce the variance. These methods require the
computation of the full gradient occasionally, which can be time consuming. In
this paper, we explore variants of variance reduction algorithms that eliminate
the need for full gradient computations. To make our approach memory-efficient
and avoid full gradient computations, we use two key techniques: the shuffling
heuristic and the idea behind SAG/SAGA methods. As a result, we improve existing
estimates for variance reduction algorithms without the full gradient
computations. Additionally, for the non-convex objective function, our estimate
matches that of classic shuffling methods, while for the strongly convex one,
it is an improvement. We conduct comprehensive theoretical analysis and provide
extensive experimental results to validate the efficiency and practicality of
our methods for large-scale machine learning problems.
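The SAG/SAGA idea referenced above replaces SVRG's periodic full-gradient pass with a running table of the last gradient seen for each sample. A scalar-parameter sketch with zero-initialized memory, so no full pass is ever taken; this illustrates the mechanism, not the paper's algorithm:

```python
import numpy as np

def saga_scalar(grad_i, x0, n, lr=0.2, epochs=100, seed=0):
    # grad_i(i, x): stochastic gradient of the i-th sample at x (scalar problem).
    rng = np.random.default_rng(seed)
    x = float(x0)
    table = np.zeros(n)       # last stored gradient per sample (no full pass)
    avg = 0.0                 # running mean of the table
    for _ in range(epochs):
        for i in rng.permutation(n):        # shuffling: sample w/o replacement
            g = grad_i(i, x)
            x -= lr * (g - table[i] + avg)  # variance-reduced update
            avg += (g - table[i]) / n
            table[i] = g
    return x
```

On the quadratic f(x) = mean_i (x - a_i)^2 / 2 the iterates converge to the mean of the a_i, and the correction term g - table[i] + avg vanishes at the optimum, which is the variance-reduction property.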
|
2502.14659
|
MAGO-SP: Detection and Correction of Water-Fat Swaps in Magnitude-Only
VIBE MRI
|
cs.CV
|
Volume Interpolated Breath-Hold Examination (VIBE) MRI generates images
suitable for water and fat signal composition estimation. While the two-point
VIBE provides water-fat-separated images, the six-point VIBE allows estimation
of the effective transverse relaxation rate R2* and the proton density fat
fraction (PDFF), which are imaging markers for health and disease. Ambiguity
during signal reconstruction can lead to water-fat swaps. This shortcoming
challenges the application of VIBE-MRI for automated PDFF analyses of
large-scale clinical data and of population studies. This study develops an
automated pipeline to detect and correct water-fat swaps in
non-contrast-enhanced VIBE images. Our three-step pipeline begins with training
a segmentation network to classify volumes as "fat-like" or "water-like," using
synthetic water-fat swaps generated by merging fat and water volumes with
Perlin noise. Next, a denoising diffusion image-to-image network predicts water
volumes as signal priors for correction. Finally, we integrate this prior into
a physics-constrained model to recover accurate water and fat signals. Our
approach achieves a < 1% error rate in water-fat swap detection for a 6-point
VIBE. Notably, swaps disproportionately affect individuals in the Underweight
and Class 3 Obesity BMI categories. Our correction algorithm ensures accurate
solution selection in chemical phase MRIs, enabling reliable PDFF estimation.
This forms a solid technical foundation for automated large-scale population
imaging analysis.
|
2502.14660
|
Beyond the Surface: Uncovering Implicit Locations with LLMs for
Personalized Local News
|
cs.LG
|
News recommendation systems personalize homepage content to boost engagement,
but factors like content type, editorial stance, and geographic focus impact
recommendations. Local newspapers balance coverage across regions, yet
identifying local articles is challenging due to implicit location cues like
slang or landmarks.
Traditional methods, such as Named Entity Recognition (NER) and Knowledge
Graphs, infer locations, but Large Language Models (LLMs) offer new
possibilities while raising concerns about accuracy and explainability.
This paper explores LLMs for local article classification in Taboola's
"Homepage For You" system, comparing them to traditional techniques. Key
findings: (1) Knowledge Graphs enhance NER models' ability to detect implicit
locations, (2) LLMs outperform traditional methods, and (3) LLMs can
effectively identify local content without requiring Knowledge Graph
integration.
Offline evaluations showed LLMs excel at implicit location classification,
while online A/B tests showed a significant increase in local views. A
scalable pipeline integrating LLM-based location classification boosted local
article distribution by 27%, preserving newspapers' brand identity and
enhancing homepage personalization.
|
2502.14662
|
InstructAgent: Building User Controllable Recommender via LLM Agent
|
cs.CL cs.IR
|
Traditional recommender systems usually take the user-platform paradigm,
where users are directly exposed under the control of the platform's
recommendation algorithms. However, the defect of recommendation algorithms may
put users in very vulnerable positions under this paradigm. First, many
sophisticated models are often designed with commercial objectives in mind,
focusing on the platform's benefits, which may hinder their ability to protect
and capture users' true interests. Second, these models are typically optimized
using data from all users, which may overlook individual user's preferences.
Due to these shortcomings, users may experience several disadvantages under the
traditional user-platform direct exposure paradigm, such as lack of control
over the recommender system, potential manipulation by the platform, echo
chamber effects, or lack of personalization for less active users due to the
dominance of active users during collaborative learning. Therefore, there is an
urgent need to develop a new paradigm to protect user interests and alleviate
these issues. Recently, some researchers have introduced LLM agents to simulate
user behaviors, these approaches primarily aim to optimize platform-side
performance, leaving core issues in recommender systems unresolved. To address
these limitations, we propose a new user-agent-platform paradigm, where an
agent serves as a protective shield between the user and the recommender
system, enabling indirect exposure.
datasets, denoted as $\dataset$, along with user instructions for each record.
|
2502.14663
|
The Restricted Isometry Property for Measurements from Group Orbits
|
cs.IT math.IT
|
It is known that sparse recovery by measurements from random circulant
matrices provides good recovery bounds. We generalize this to measurements that
arise as a random orbit of a group representation for some finite group G. We
derive estimates for the number of measurements required to guarantee the
restricted isometry property with high probability. Following this, we present
several examples highlighting the role of appropriate representation-theoretic
assumptions.
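For reference, a measurement matrix $A$ satisfies the restricted isometry property of order $s$ with constant $\delta_s \in (0,1)$ if

```latex
(1 - \delta_s)\,\lVert x \rVert_2^2 \;\le\; \lVert A x \rVert_2^2 \;\le\; (1 + \delta_s)\,\lVert x \rVert_2^2
\qquad \text{for every } s\text{-sparse } x ,
```

and the recovery bounds referenced above amount to estimating how many measurements drawn from a group orbit keep $\delta_s$ small with high probability.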
|
2502.14669
|
AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via
GRPO
|
cs.CL
|
Large Language Models (LLMs) have demonstrated impressive capabilities in
language processing, yet they often struggle with tasks requiring genuine
visual spatial reasoning. In this paper, we introduce a novel two-stage
training framework designed to equip standard LLMs with visual reasoning
abilities for maze navigation. First, we leverage Supervised Fine Tuning (SFT)
on a curated dataset of tokenized maze representations to teach the model to
predict step-by-step movement commands. Next, we apply Group Relative Policy
Optimization (GRPO), a technique used in DeepSeek-R1, with a carefully crafted
reward function to refine the model's sequential decision-making and encourage
emergent chain-of-thought behaviors. Experimental results on synthetically
generated mazes show that while a baseline model fails to navigate the maze,
the SFT-trained model achieves 86% accuracy, and further GRPO fine-tuning
boosts accuracy to 93%. Qualitative analyses reveal that GRPO fosters more
robust and self-corrective reasoning, highlighting the potential of our
approach to bridge the gap between language models and visual spatial tasks.
These findings offer promising implications for applications in robotics,
autonomous navigation, and other domains that require integrated visual and
sequential reasoning.
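The group-relative part of GRPO can be stated compactly: sample a group of completions per prompt, score each with the reward function, and use within-group standardized rewards as advantages, removing the need for a learned value model. A minimal sketch; the full method additionally uses a clipped policy ratio and a KL penalty:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    # Standardize each completion's reward against its own group's statistics.
    r = np.asarray(rewards, float)
    return (r - r.mean()) / (r.std() + eps)
```

These advantages then weight the per-token policy-gradient terms for each sampled completion.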
|
2502.14671
|
Explanations of Deep Language Models Explain Language Representations in
the Brain
|
cs.CL cs.AI q-bio.NC
|
Recent advances in artificial intelligence have given rise to large language
models (LLMs) that not only achieve human-like performance but also share
computational principles with the brain's language processing mechanisms. While
previous research has primarily focused on aligning LLMs' internal
representations with neural activity, we introduce a novel approach that
leverages explainable AI (XAI) methods to forge deeper connections between the
two domains. Using attribution methods, we quantified how preceding words
contribute to an LLM's next-word predictions and employed these explanations to
predict fMRI recordings from participants listening to the same narratives. Our
findings demonstrate that attribution methods robustly predict brain activity
across the language network, surpassing traditional internal representations in
early language areas. This alignment is hierarchical: early-layer explanations
correspond to the initial stages of language processing in the brain, while
later layers align with more advanced stages. Moreover, the layers more
influential on LLM next-word prediction (those with higher attribution
scores) exhibited stronger alignment with neural
activity. This work establishes a bidirectional bridge between AI and
neuroscience. First, we demonstrate that attribution methods offer a powerful
lens for investigating the neural mechanisms of language comprehension,
revealing how meaning emerges from preceding context. Second, we propose using
brain alignment as a metric to evaluate the validity of attribution methods,
providing a framework for assessing their biological plausibility.
|
2502.14676
|
BP-SGCN: Behavioral Pseudo-Label Informed Sparse Graph Convolution
Network for Pedestrian and Heterogeneous Trajectory Prediction
|
cs.CV cs.AI
|
Trajectory prediction allows better decision-making in applications of
autonomous vehicles or surveillance by predicting the short-term future
movement of traffic agents. It is classified into pedestrian or heterogeneous
trajectory prediction. The former exploits the relatively consistent behavior
of pedestrians, but is limited in real-world scenarios with heterogeneous
traffic agents such as cyclists and vehicles. The latter typically relies on
extra class label information to distinguish the heterogeneous agents, but such
labels are costly to annotate and cannot be generalized to represent different
behaviors within the same class of agents. In this work, we introduce the
behavioral pseudo-labels that effectively capture the behavior distributions of
pedestrians and heterogeneous agents solely based on their motion features,
significantly improving the accuracy of trajectory prediction. To implement the
framework, we propose the Behavioral Pseudo-Label Informed Sparse Graph
Convolution Network (BP-SGCN), which learns pseudo-labels and feeds them to a
trajectory predictor. For optimization, we propose a cascaded training scheme,
in which we first learn the pseudo-labels in an unsupervised manner, and then
fine-tune them end-to-end to increase the trajectory prediction accuracy.
Experiments show that our pseudo-labels
effectively model different behavior clusters and improve trajectory
prediction. Our proposed BP-SGCN outperforms existing methods using both
pedestrian (ETH/UCY, pedestrian-only SDD) and heterogeneous agent datasets
(SDD, Argoverse 1).
|
2502.14677
|
Data-Constrained Synthesis of Training Data for De-Identification
|
cs.CL cs.AI
|
Many sensitive domains -- such as the clinical domain -- lack widely
available datasets due to privacy risks. The increasing generative capabilities
of large language models (LLMs) have made synthetic datasets a viable path
forward. In this study, we domain-adapt LLMs to the clinical domain and
generate synthetic clinical texts that are machine-annotated with tags for
personally identifiable information using capable encoder-based NER models. The
synthetic corpora are then used to train synthetic NER models. The results show
that training NER models using synthetic corpora incurs only a small drop in
predictive performance. The limits of this process are investigated in a
systematic ablation study -- using both Swedish and Spanish data. Our analysis
shows that smaller datasets can be sufficient for domain-adapting LLMs for data
synthesis. Instead, the effectiveness of this process is almost entirely
contingent on the performance of the machine-annotating NER models trained
using the original data.
|
2502.14678
|
How to Get Your LLM to Generate Challenging Problems for Evaluation
|
cs.CL
|
The pace of evolution of Large Language Models (LLMs) necessitates new
approaches for rigorous and comprehensive evaluation. Traditional human
annotation is increasingly impracticable due to the complexities and costs
involved in generating high-quality, challenging problems. In this work, we
introduce CHASE, a unified framework to synthetically generate challenging
problems using LLMs without human involvement. For a given task, our approach
builds a hard problem in a bottom-up manner from simpler components. Moreover,
our framework decomposes the generation process into independently verifiable
sub-tasks, thereby ensuring a high level of quality and correctness. We
implement CHASE to create evaluation benchmarks across three diverse domains:
(1) document-based question answering, (2) repository-level code completion,
and (3) math reasoning. The performance of state-of-the-art LLMs on these
synthetic benchmarks lies in the range of 40-60% accuracy, thereby
demonstrating the effectiveness of our framework at generating challenging
problems. We publicly release our benchmarks and code.
|
2502.14679
|
Disentangled Latent Spaces for Reduced Order Models using Deterministic
Autoencoders
|
cs.LG
|
Data-driven reduced-order models based on autoencoders generally lack
interpretability compared to classical methods such as the proper orthogonal
decomposition. More interpretability can be gained by disentangling the latent
variables and analyzing the resulting modes. For this purpose, probabilistic
$\beta$-variational autoencoders ($\beta$-VAEs) are frequently used in
computational fluid dynamics and other simulation sciences. Using a benchmark
periodic flow dataset, we show that competitive results can be achieved using
non-probabilistic autoencoder approaches that either promote orthogonality or
penalize correlation between latent variables. Compared to probabilistic
autoencoders, these approaches offer more robustness with respect to the choice
of hyperparameters entering the loss function. We further demonstrate the
ability of a non-probabilistic approach to identify a reduced number of active
latent variables by introducing a correlation penalty, a mechanism also
familiar from $\beta$-VAE training. The investigated probabilistic and
non-probabilistic autoencoder models are finally used for the dimensionality
reduction of aircraft ditching loads, which serves as an industrial application
in this work.
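The correlation penalty mentioned above can be illustrated with a minimal sketch: compute the empirical correlation matrix of a batch of latent codes and penalize its off-diagonal entries. This is an assumed, simplified form of such a penalty, not the paper's exact loss; the function name is illustrative.

```python
import numpy as np

def correlation_penalty(z: np.ndarray) -> float:
    """Penalize off-diagonal correlation between latent variables.

    z: (batch, latent_dim) matrix of latent codes.
    Returns the mean squared off-diagonal entry of the empirical
    correlation matrix, which is zero iff the latent variables are
    (empirically) uncorrelated. A sketch, not the paper's exact loss.
    """
    zc = z - z.mean(axis=0, keepdims=True)
    cov = zc.T @ zc / (len(z) - 1)
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    off_diag = corr - np.diag(np.diag(corr))
    return float(np.mean(off_diag ** 2))
```

In training, such a term would be added to the reconstruction loss with a weight playing the role of $\beta$, encouraging disentangled latent modes without a probabilistic encoder.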
|
2502.14681
|
seqKAN: Sequence processing with Kolmogorov-Arnold Networks
|
cs.LG cs.AI
|
Kolmogorov-Arnold Networks (KANs) have been recently proposed as a machine
learning framework that is more interpretable and controllable than the
multi-layer perceptron. Various network architectures have been proposed within
the KAN framework targeting different tasks and application domains, including
sequence processing.
This paper proposes seqKAN, a new KAN architecture for sequence processing.
Although multiple sequence processing KAN architectures have already been
proposed, we argue that seqKAN is more faithful to the core concept of the KAN
framework. Furthermore, we empirically demonstrate that it achieves better
results.
The empirical evaluation is performed on data generated from a complex
physics problem, on both an interpolation and an extrapolation task. Using this
dataset we compared seqKAN against a prior KAN network for timeseries
prediction, recurrent deep networks, and symbolic regression. seqKAN
substantially outperforms all architectures, particularly on the extrapolation
dataset, while also being the most transparent.
|
2502.14682
|
Bridging the Gap: Transforming Natural Language Questions into SQL
Queries via Abstract Query Pattern and Contextual Schema Markup
|
cs.CL
|
Large language models have demonstrated excellent performance in many tasks,
including Text-to-SQL, due to their powerful in-context learning capabilities.
They are becoming the mainstream approach for Text-to-SQL. However, these
methods still have a significant gap compared to human performance, especially
on complex questions. As question complexity grows, so does the gap between
questions and their SQL queries. We identify two important gaps: the structural
mapping gap and the lexical mapping gap. To tackle these two gaps, we propose
PAS-SQL, an efficient SQL generation pipeline based on LLMs, which alleviates
gaps through Abstract Query Pattern (AQP) and Contextual Schema Markup (CSM).
AQP aims to obtain the structural pattern of the question by removing
database-related information, which enables us to find structurally similar
demonstrations. CSM aims to associate database-related text span in the
question with specific tables or columns in the database, which alleviates the
lexical mapping gap. Experimental results on the Spider and BIRD datasets
demonstrate the effectiveness of our proposed method. Specifically, PAS-SQL +
GPT-4o sets a new state-of-the-art on the Spider benchmark with an execution
accuracy of 87.9\%, and achieves leading results on the BIRD dataset with an
execution accuracy of 64.67\%.
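The Abstract Query Pattern idea can be sketched as masking database-related spans so only the question's structure remains, which is then used to retrieve structurally similar demonstrations. This toy function and its masking tokens are illustrative assumptions; the actual AQP procedure in PAS-SQL is more involved.

```python
import re

def abstract_query_pattern(question: str, schema_terms: set[str]) -> str:
    """Toy Abstract Query Pattern (AQP) extraction: mask schema-linked
    words and literal values so that only the structural skeleton of the
    question remains. Illustrative sketch, not the paper's algorithm.
    """
    out = []
    for tok in question.split():
        word = re.sub(r"\W", "", tok).lower()
        if word in schema_terms:
            out.append("[SCHEMA]")            # database-related span
        elif re.fullmatch(r"\d+(\.\d+)?", word):
            out.append("[VALUE]")             # literal value
        else:
            out.append(tok.lower())
    return " ".join(out)
```

Two questions over different databases can then share the same pattern, making one a useful in-context demonstration for the other.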
|
2502.14684
|
CDGS: Confidence-Aware Depth Regularization for 3D Gaussian Splatting
|
cs.GR cs.CV
|
3D Gaussian Splatting (3DGS) has shown significant advantages in novel view
synthesis (NVS), particularly in achieving high rendering speeds and
high-quality results. However, its geometric accuracy in 3D reconstruction
remains limited due to the lack of explicit geometric constraints during
optimization. This paper introduces CDGS, a confidence-aware depth
regularization approach developed to enhance 3DGS. We leverage multi-cue
confidence maps of monocular depth estimation and sparse Structure-from-Motion
depth to adaptively adjust depth supervision during the optimization process.
Our method demonstrates improved geometric detail preservation in early
training stages and achieves competitive performance in both NVS quality and
geometric accuracy. Experiments on the publicly available Tanks and Temples
benchmark dataset show that our method achieves more stable convergence
behavior and more accurate geometric reconstruction results, with improvements
of up to 2.31 dB in PSNR for NVS and consistently lower geometric errors in
M3C2 distance metrics. Notably, our method reaches comparable F-scores to the
original 3DGS with only 50% of the training iterations. We expect this work
will facilitate the development of efficient and accurate 3D reconstruction
systems for real-world applications such as digital twin creation, heritage
preservation, or forestry applications.
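The confidence-aware depth regularization described above can be illustrated as an L1 depth residual weighted per pixel by a confidence map, so unreliable depth priors contribute less to the gradient. This is an assumed, simplified form of the idea, not the exact loss used in CDGS.

```python
import numpy as np

def confidence_weighted_depth_loss(d_pred, d_prior, confidence):
    """Sketch of confidence-aware depth supervision.

    d_pred:     rendered depth from the 3DGS model, shape (H, W)
    d_prior:    monocular or sparse SfM depth prior, shape (H, W)
    confidence: per-pixel weights in [0, 1], shape (H, W); low-confidence
                prior pixels are down-weighted in the supervision signal.
    """
    residual = np.abs(d_pred - d_prior)
    return float(np.sum(confidence * residual) / (np.sum(confidence) + 1e-8))
```

Adaptively adjusting `confidence` over the course of optimization (e.g. trusting priors more in early iterations) is what lets geometric detail be preserved early without over-constraining the final reconstruction.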
|
2502.14689
|
Confidence Estimation via Sequential Likelihood Mixing
|
stat.ML cs.LG
|
We present a universal framework for constructing confidence sets based on
sequential likelihood mixing. Building upon classical results from sequential
analysis, we provide a unifying perspective on several recent lines of work,
and establish fundamental connections between sequential mixing, Bayesian
inference and regret inequalities from online estimation. The framework applies
to any realizable family of likelihood functions and allows for non-i.i.d. data
and anytime validity. Moreover, the framework seamlessly integrates standard
approximate inference techniques, such as variational inference and
sampling-based methods, and extends to misspecified model classes, while
preserving provable coverage guarantees. We illustrate the power of the
framework by deriving tighter confidence sequences for classical settings,
including sequential linear regression and sparse estimation, with simplified
proofs.
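The core construction can be sketched in standard likelihood-ratio confidence-sequence notation (the notation here is mine, assuming the classical mixture-martingale form; the paper's framework is more general):

```latex
% Running likelihood of parameter \theta after t observations:
L_t(\theta) = \prod_{s=1}^{t} p_\theta(x_s \mid x_{1:s-1}),
\qquad
M_t = \int L_t(\theta)\, \mathrm{d}\pi(\theta) \quad \text{(mixture over a prior } \pi\text{)}.
% Under the true parameter \theta^*, the ratio M_t / L_t(\theta^*) is a
% nonnegative martingale with initial value 1, so Ville's inequality gives
\Pr\!\left(\exists\, t:\ \frac{M_t}{L_t(\theta^*)} \ge \frac{1}{\delta}\right) \le \delta,
% and hence the anytime-valid confidence sequence
C_t = \left\{\theta :\ \frac{M_t}{L_t(\theta)} < \frac{1}{\delta}\right\},
\qquad \Pr\big(\forall\, t:\ \theta^* \in C_t\big) \ge 1 - \delta.
```

Replacing the exact mixture $M_t$ with a variational or sampling-based approximation is what lets the framework integrate approximate inference while retaining coverage.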
|
2502.14693
|
I-MCTS: Enhancing Agentic AutoML via Introspective Monte Carlo Tree
Search
|
cs.CL
|
Recent advancements in large language models (LLMs) have shown remarkable
potential in automating machine learning tasks. However, existing LLM-based
agents often struggle with low-diversity and suboptimal code generation. While
recent work has introduced Monte Carlo Tree Search (MCTS) to address these
issues, limitations persist in the quality and diversity of thoughts generated,
as well as in the scalar value feedback mechanisms used for node selection. In
this study, we introduce Introspective Monte Carlo Tree Search (I-MCTS), a
novel approach that iteratively expands tree nodes through an introspective
process that meticulously analyzes solutions and results from parent and
sibling nodes. This facilitates continuous refinement of nodes in the search
tree, thereby enhancing the overall decision-making process. Furthermore,
we integrate a Large Language Model (LLM)-based value model to facilitate
direct evaluation of each node's solution prior to conducting comprehensive
computational rollouts. A hybrid rewarding mechanism is implemented to
seamlessly transition the Q-value from LLM-estimated scores to actual
performance scores. This allows higher-quality nodes to be traversed
earlier. Applied to various ML tasks, our approach demonstrates a 6\%
absolute improvement in performance compared to strong open-source AutoML
agents, showcasing its effectiveness in enhancing agentic AutoML systems.
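The hybrid rewarding mechanism can be sketched as a Q-value that starts at the LLM's estimate and shifts toward observed performance as rollouts accumulate. The decay schedule `k / (k + n)` is an illustrative choice of mine, not the paper's exact mechanism.

```python
def hybrid_q_value(llm_score: float, perf_scores: list[float], k: float = 3.0) -> float:
    """Blend an LLM-estimated node value with observed performance scores.

    With no rollouts the Q-value equals the LLM's estimate; as real
    performance scores accumulate, their mean takes over. This lets
    promising nodes be traversed before any expensive rollout is run.
    """
    n = len(perf_scores)
    w = k / (k + n)  # weight on the LLM prior, decays with visit count
    empirical = sum(perf_scores) / n if n else 0.0
    return w * llm_score + (1.0 - w) * empirical
```

A node the LLM rates highly is explored early, but a string of poor actual scores quickly overrides an optimistic estimate.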
|
2502.14694
|
Revisiting Near-Far Field Boundary in Dual-Polarized XL-MIMO Systems
|
cs.IT math.IT
|
Extremely large-scale multiple-input multiple-output (XL-MIMO) is expected to
be an important technology in future sixth generation (6G) networks. Compared
with conventional single-polarized XL-MIMO, where signals are transmitted and
received in only one polarization direction, dual-polarized XL-MIMO systems
achieve higher data rates by improving multiplexing performance, and thus are
the focus of this paper. Due to enlarged aperture, near-field regions become
non-negligible in XL-MIMO communications, necessitating accurate near-far field
boundary characterizations. However, existing boundaries developed for
single-polarized systems only consider phase or power differences across array
elements while ignoring cross-polarization discrimination (XPD) variations in
dual-polarized XL-MIMO systems, which degrades transmit covariance
optimization performance. In this paper, we revisit near-far field boundaries
for dual-polarized XL-MIMO systems by taking XPD differences into account,
which faces the following challenge. Unlike existing near-far field boundaries,
which only need to consider co-polarized channel components, deriving
boundaries for dual-polarized XL-MIMO systems requires modeling joint effects
of co-polarized and cross-polarized components. To address this issue, we model
XPD variations across antennas and introduce a non-uniform XPD distance to
complement existing near-far field boundaries. Based on the new distance
criterion, we propose an efficient scheme to optimize transmit covariance.
Numerical results validate our analysis and demonstrate the proposed
algorithm's effectiveness.
|
2502.14698
|
General Uncertainty Estimation with Delta Variances
|
cs.LG cs.AI stat.AP stat.ML
|
Decision makers may suffer from uncertainty induced by limited data. This may
be mitigated by accounting for epistemic uncertainty, which is however
challenging to estimate efficiently for large neural networks. To this extent
we investigate Delta Variances, a family of algorithms for epistemic
uncertainty quantification, that is computationally efficient and convenient to
implement. It can be applied to neural networks and more general functions
composed of neural networks. As an example we consider a weather simulator with
a neural-network-based step function inside -- here Delta Variances empirically
obtain competitive results at the cost of a single gradient computation. The
approach is convenient as it requires no changes to the neural network
architecture or training procedure. We discuss multiple ways to derive Delta
Variances theoretically, noting that special cases recover popular techniques,
and present a unified perspective on multiple related methods. Finally we
observe that this general perspective gives rise to a natural extension and
empirically show its benefit.
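A generic instance of this family is the classical delta method: propagate a parameter covariance through the gradient of the prediction, at the cost of a single gradient computation. The sketch below is the generic delta method under that assumption; the paper's family of estimators includes further variants.

```python
import numpy as np

def delta_variance(grad: np.ndarray, sigma: np.ndarray) -> float:
    """Delta-method epistemic variance of a scalar prediction f(theta).

    grad:  gradient of the scalar output f w.r.t. the parameters, shape (p,)
    sigma: parameter covariance; either a (p, p) matrix or a (p,) diagonal.
    Returns Var[f] ~ grad^T Sigma grad. No change to the network
    architecture or training procedure is required.
    """
    grad = np.asarray(grad, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    if sigma.ndim == 1:  # diagonal covariance, the cheap common case
        return float(np.sum(grad ** 2 * sigma))
    return float(grad @ sigma @ grad)
```

With a diagonal `sigma` (e.g. from a Laplace-style curvature estimate), the cost per prediction is one backward pass plus an elementwise product.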
|
2502.14704
|
Not All Data are Good Labels: On the Self-supervised Labeling for Time
Series Forecasting
|
cs.LG cs.AI
|
Time Series Forecasting (TSF) is a crucial task in various domains, yet
existing TSF models rely heavily on high-quality data and insufficiently
exploit all available data. This paper explores a novel self-supervised
approach to re-label time series datasets by inherently constructing candidate
datasets. During the optimization of a simple reconstruction network,
intermediates are used as pseudo labels in a self-supervised paradigm,
improving generalization for any predictor. We introduce the Self-Correction
with Adaptive Mask (SCAM), which discards overfitted components and selectively
replaces them with pseudo labels generated from reconstructions. Additionally,
we incorporate Spectral Norm Regularization (SNR) to further suppress
overfitting from a loss landscape perspective. Our experiments on eleven
real-world datasets demonstrate that SCAM consistently improves the performance
of various backbone models. This work offers a new perspective on constructing
datasets and enhancing the generalization of TSF models through self-supervised
learning.
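The quantity penalized by Spectral Norm Regularization (SNR) is the largest singular value of each weight matrix, which is typically estimated by power iteration. The sketch below shows that estimator; how SCAM combines it with the rest of the training loss is not specified here and this is only an assumed generic form.

```python
import numpy as np

def spectral_norm(w: np.ndarray, n_iter: int = 50) -> float:
    """Estimate the largest singular value of a weight matrix by power
    iteration. Adding the sum of layer spectral norms to the training
    loss bounds layer-wise Lipschitz constants, suppressing overfitting
    from a loss-landscape perspective.
    """
    rng = np.random.default_rng(0)
    v = rng.normal(size=w.shape[1])
    for _ in range(n_iter):
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12
    return float(u @ w @ v)  # u^T W v -> sigma_max at convergence
```

In practice one or two power iterations per training step, warm-started from the previous step's vectors, are enough to track the spectral norm cheaply.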
|
2502.14706
|
Building reliable sim driving agents by scaling self-play
|
cs.AI cs.RO
|
Simulation agents are essential for designing and testing systems that
interact with humans, such as autonomous vehicles (AVs). These agents serve
various purposes, from benchmarking AV performance to stress-testing the
system's limits, but all use cases share a key requirement: reliability. A
simulation agent should behave as intended by the designer, minimizing
unintended actions like collisions that can compromise the signal-to-noise
ratio of analyses. As a foundation for reliable sim agents, we propose scaling
self-play to thousands of scenarios on the Waymo Open Motion Dataset under
semi-realistic limits on human perception and control. Training from scratch on
a single GPU, our agents nearly solve the full training set within a day. They
generalize effectively to unseen test scenes, achieving a 99.8% goal completion
rate with less than 0.8% combined collision and off-road incidents across
10,000 held-out scenarios. Beyond in-distribution generalization, our agents
show partial robustness to out-of-distribution scenes and can be fine-tuned in
minutes to reach near-perfect performance in those cases. We open-source both
the pre-trained agents and the complete code base. Demonstrations of agent
behaviors can be found at \url{https://sites.google.com/view/reliable-sim-agents}.
|
2502.14707
|
TRUSWorthy: Toward Clinically Applicable Deep Learning for Confident
Detection of Prostate Cancer in Micro-Ultrasound
|
eess.IV cs.LG q-bio.TO
|
While deep learning methods have shown great promise in improving the
effectiveness of prostate cancer (PCa) diagnosis by detecting suspicious
lesions from trans-rectal ultrasound (TRUS), they must overcome multiple
simultaneous challenges. There is high heterogeneity in tissue appearance,
significant class imbalance in favor of benign examples, and scarcity in the
number and quality of ground truth annotations available to train models.
Failure to address even a single one of these problems can result in
unacceptable clinical outcomes. We propose TRUSWorthy, a carefully designed,
tuned, and integrated system for reliable PCa detection. Our pipeline
integrates self-supervised learning, multiple-instance learning aggregation
using transformers, random-undersampled boosting and ensembling: these address
label scarcity, weak labels, class imbalance, and overconfidence, respectively.
We train and rigorously evaluate our method using a large, multi-center dataset
of micro-ultrasound data. Our method outperforms previous state-of-the-art deep
learning methods in terms of accuracy and uncertainty calibration, with AUROC
and balanced accuracy scores of 79.9% and 71.5%, respectively. On the top 20%
of predictions with the highest confidence, we can achieve a balanced accuracy
of up to 91%. The success of TRUSWorthy demonstrates the potential of
integrated deep learning solutions to meet clinical needs in a highly
challenging deployment setting, and is a significant step towards creating a
trustworthy system for computer-assisted PCa diagnosis.
|
2502.14708
|
Human Misperception of Generative-AI Alignment: A Laboratory Experiment
|
econ.TH cs.AI cs.GT
|
We conduct an incentivized laboratory experiment to study people's perception
of generative artificial intelligence (GenAI) alignment in the context of
economic decision-making. Using a panel of economic problems spanning the
domains of risk, time preference, social preference, and strategic
interactions, we ask human subjects to make choices for themselves and to
predict the choices made by GenAI on behalf of a human user. We find that
people overestimate the degree of alignment between GenAI's choices and human
choices. In every problem, human subjects' average prediction about GenAI's
choice is substantially closer to the average human-subject choice than it is
to the GenAI choice. At the individual level, different subjects' predictions
about GenAI's choice in a given problem are highly correlated with their own
choices in the same problem. We explore the implications of people
overestimating GenAI alignment in a simple theoretical model.
|
2502.14709
|
Data-Efficient Pretraining with Group-Level Data Influence Modeling
|
cs.CL cs.LG
|
Data-efficient pretraining has shown tremendous potential to elevate scaling
laws. This paper argues that effective pretraining data should be curated at
the group level, treating a set of data points as a whole rather than as
independent contributors. To achieve that, we propose Group-Level Data
Influence Modeling (Group-MATES), a novel data-efficient pretraining method
that captures and optimizes group-level data utility. Specifically, Group-MATES
collects oracle group-level influences by locally probing the pretraining model
with data sets. It then fine-tunes a relational data influence model to
approximate oracles as relationship-weighted aggregations of individual
influences. The fine-tuned model selects the data subset by maximizing its
group-level influence prediction, with influence-aware clustering to enable
efficient inference. Experiments on the DCLM benchmark demonstrate that
Group-MATES achieves a 10% relative core score improvement on 22 downstream
tasks over DCLM-Baseline and 5% over individual-influence-based methods,
establishing a new state-of-the-art. Further analyses highlight the
effectiveness of relational data influence models in capturing intricate
interactions between data points.
|
2502.14714
|
From Knowledge Generation to Knowledge Verification: Examining the
BioMedical Generative Capabilities of ChatGPT
|
cs.AI cs.CL cs.IR
|
The generative capabilities of LLMs present opportunities for accelerating
tasks, as well as concerns about the authenticity of the knowledge they
produce. To address these concerns, we present a computational approach that
systematically evaluates the factual accuracy of biomedical knowledge that an
LLM has been prompted to generate. Our approach encompasses two processes:
the generation of disease-centric associations and their verification using
the semantic knowledge of biomedical ontologies. Using ChatGPT as the
selected LLM, we designed a set of prompt-engineering
processes to generate linkages between diseases, drugs, symptoms, and genes to
establish grounds for assessments. Experimental results demonstrate high
accuracy in identifying disease terms (88%-97%), drug names (90%-91%), and
genetic information (88%-98%). The symptom term identification accuracy was
notably lower (49%-61%), as verified against the DOID, ChEBI, SYMPTOM, and GO
ontologies, respectively. The verification of associations reveals literature
coverage rates of 89%-91% among disease-drug and disease-gene associations.
The low identification accuracy for symptom terms likewise limited the
verification of symptom-related associations (49%-62%).
|
2502.14718
|
Entity Framing and Role Portrayal in the News
|
cs.CL
|
We introduce a novel multilingual hierarchical corpus annotated for entity
framing and role portrayal in news articles. The dataset uses a unique taxonomy
inspired by storytelling elements, comprising 22 fine-grained roles, or
archetypes, nested within three main categories: protagonist, antagonist, and
innocent. Each archetype is carefully defined, capturing nuanced portrayals of
entities such as guardian, martyr, and underdog for protagonists; tyrant,
deceiver, and bigot for antagonists; and victim, scapegoat, and exploited for
innocents. The dataset includes 1,378 recent news articles in five languages
(Bulgarian, English, Hindi, European Portuguese, and Russian) focusing on two
critical domains of global significance: the Ukraine-Russia War and Climate
Change. Over 5,800 entity mentions have been annotated with role labels. This
dataset serves as a valuable resource for research into role portrayal and has
broader implications for news analysis. We describe the characteristics of the
dataset and the annotation process, and we report evaluation results on
fine-tuned state-of-the-art multilingual transformers and hierarchical
zero-shot learning using LLMs at the level of a document, a paragraph, and a
sentence.
|
2502.14719
|
Internal Incoherency Scores for Constraint-based Causal Discovery
Algorithms
|
stat.ML cs.LG
|
Causal discovery aims to infer causal graphs from observational or
experimental data. Methods such as the popular PC algorithm are based on
conditional independence testing and utilize enabling assumptions, such as the
faithfulness assumption, for their inferences. In practice, these assumptions,
as well as the functional assumptions inherited from the chosen conditional
independence test, are typically taken as a given and not further tested for
their validity on the data. In this work, we propose internal coherency scores
that allow testing for assumption violations and finite sample errors, whenever
detectable without requiring ground truth or further statistical tests. We
provide a complete classification of erroneous results, including a distinction
between detectable and undetectable errors, and prove that the detectable
erroneous results can be measured by our scores. We illustrate our coherency
scores on the PC algorithm with simulated and real-world datasets, and envision
that testing for internal coherency can become a standard tool in applying
constraint-based methods, much like a suite of tests is used to validate the
assumptions of classical regression analysis.
|
2502.14720
|
Advancing Measurement Capabilities in Lithium-Ion Batteries: Exploring
the Potential of Fiber Optic Sensors for Thermal Monitoring of Battery Cells
|
physics.app-ph cs.SY eess.SY
|
This work demonstrates the potential of fiber optic sensors for measuring
thermal effects in lithium-ion batteries, using a fiber optic measurement
method of Optical Frequency Domain Reflectometry (OFDR). The innovative
application of fiber sensors allows for spatially resolved temperature
measurement, particularly emphasizing the importance of monitoring not just the
exterior but also the internal conditions within battery cells. Utilizing inert
glass fibers as sensors, which exhibit minimal sensitivity to electric fields,
opens up new pathways for their implementation in a wide range of applications,
such as battery monitoring. The sensors used in this work provide real-time
information along the entire length of the fiber, unlike commonly used Fiber
Bragg Grating (FBG) sensors. It is shown that the sensors presented here, used
in a temperature range of 0 to 80 degrees Celsius, exhibit a linear thermal
dependency with high sensitivity and a spatial resolution of a few
centimeters. Furthermore, this study presents preliminary findings on the
potential application of fiber optic sensors in lithium-ion battery (LIB)
cells, demonstrating that the steps required for battery integration do not
impose any restrictive effects on thermal measurements.
|
2502.14721
|
Multi-dataset synergistic in supervised learning to pre-label structural
components in point clouds from shell construction scenes
|
cs.CV
|
The significant effort required to annotate data for new training datasets
hinders computer vision research and machine learning in the construction
industry. This work explores adapting standard datasets and the latest
transformer model architectures for point cloud semantic segmentation in the
context of shell construction sites. Unlike common approaches focused on object
segmentation of building interiors and furniture, this study addresses the
challenges of segmenting complex structural components in Architecture,
Engineering, and Construction (AEC). We establish a baseline through supervised
training and a custom validation dataset, evaluate the cross-domain inference
with large-scale indoor datasets, and utilize transfer learning to maximize
segmentation performance with minimal new data. The findings indicate that with
minimal fine-tuning, pre-trained transformer architectures offer an effective
strategy for building component segmentation. Our results are promising for
automating the annotation of new, previously unseen data when creating larger
training resources and for the segmentation of frequently recurring objects.
|
2502.14724
|
Ranking Joint Policies in Dynamic Games using Evolutionary Dynamics
|
cs.MA cs.AI cs.LG
|
Game-theoretic solution concepts, such as the Nash equilibrium, have been key
to finding stable joint actions in multi-player games. However, it has been
shown that the dynamics of agents' interactions, even in simple two-player
games with few strategies, are incapable of reaching Nash equilibria,
exhibiting complex and unpredictable behavior. Instead, evolutionary approaches
can describe the long-term persistence of strategies and filter out transient
ones, accounting for the long-term dynamics of agents' interactions. Our goal
is to identify agents' joint strategies that result in stable behavior, being
resistant to changes, while also accounting for agents' payoffs, in dynamic
games. Towards this goal, and building on previous results, this paper proposes
transforming dynamic games into their empirical forms by considering agents'
strategies instead of agents' actions, and applying the evolutionary
methodology $\alpha$-Rank to evaluate and rank strategy profiles according to
their long-term dynamics. This methodology not only allows us to identify joint
strategies that are strong through agents' long-term interactions, but also
provides a descriptive, transparent framework regarding the high ranking of
these strategies. Experiments report on agents that aim to collaboratively
solve a stochastic version of the graph coloring problem. We consider different
styles of play as strategies to define the empirical game, and train policies
realizing these strategies, using the DQN algorithm. Then we run simulations to
generate the payoff matrix required by $\alpha$-Rank to rank joint strategies.
|
2502.14727
|
WavRAG: Audio-Integrated Retrieval Augmented Generation for Spoken
Dialogue Models
|
cs.SD cs.AI eess.AS
|
Retrieval Augmented Generation (RAG) has gained widespread adoption owing to
its capacity to empower large language models (LLMs) to integrate external
knowledge. However, existing RAG frameworks are primarily designed for
text-based LLMs and rely on Automatic Speech Recognition to process speech
input, which discards crucial audio information, risks transcription errors,
and increases computational overhead. Therefore, we introduce WavRAG, the first
retrieval augmented generation framework with native, end-to-end audio support.
WavRAG offers two key features: 1) Bypassing ASR, WavRAG directly processes raw
audio for both embedding and retrieval. 2) WavRAG integrates audio and text
into a unified knowledge representation. Specifically, we propose the
WavRetriever to facilitate the retrieval from a text-audio hybrid knowledge
base, and further enhance the in-context capabilities of spoken dialogue models
through the integration of chain-of-thought reasoning. In comparison to
state-of-the-art ASR-Text RAG pipelines, WavRAG achieves comparable retrieval
performance while delivering a 10x acceleration. Furthermore, WavRAG's unique
text-audio hybrid retrieval capability extends the boundaries of RAG to the
audio modality.
|
2502.14731
|
Beyond Performance Scores: Directed Functional Connectivity as a
Brain-Based Biomarker for Motor Skill Learning and Retention
|
q-bio.NC cs.LG
|
Motor skill acquisition in fields like surgery, robotics, and sports involves
learning complex task sequences through extensive training. Traditional
performance metrics, like execution time and error rates, offer limited insight
as they fail to capture the neural mechanisms underlying skill learning and
retention. This study introduces directed functional connectivity (dFC),
derived from electroencephalography (EEG), as a novel brain-based biomarker for
assessing motor skill learning and retention. For the first time, dFC is
applied as a biomarker to map the stages of the Fitts and Posner motor learning
model, offering new insights into the neural mechanisms underlying skill
acquisition and retention. Unlike traditional measures, it captures both the
strength and direction of neural information flow, providing a comprehensive
understanding of neural adaptations across different learning stages. The
analysis demonstrates that dFC can effectively identify and track the
progression through various stages of the Fitts and Posner model. Furthermore,
its stability over a six-week washout period highlights its utility in
monitoring long-term retention. No significant changes in dFC were observed in
a control group, confirming that the observed neural adaptations were specific
to training and not due to external factors. By offering a granular view of the
learning process at the group and individual levels, dFC facilitates the
development of personalized, targeted training protocols aimed at enhancing
outcomes in fields where precision and long-term retention are critical, such
as surgical education. These findings underscore the value of dFC as a robust
biomarker that complements traditional performance metrics, providing a deeper
understanding of motor skill learning and retention.
|
2502.14734
|
Sentence Smith: Formally Controllable Text Transformation and its
Application to Evaluation of Text Embedding Models
|
cs.CL
|
We propose the Sentence Smith framework that enables controlled and specified
manipulation of text meaning. It consists of three main steps: 1. Parsing a
sentence into a semantic graph, 2. Applying human-designed semantic
manipulation rules, and 3. Generating text from the manipulated graph. A final
filtering step (4.) ensures the validity of the applied transformation. To
demonstrate the utility of Sentence Smith in an application study, we use it to
generate hard negative pairs that challenge text embedding models. Since the
controllable generation makes it possible to clearly isolate different types of
semantic shifts, we can gain deeper insights into the specific strengths and
weaknesses of widely used text embedding models, also addressing an issue in
current benchmarking where linguistic phenomena remain opaque. Human validation
confirms that the generations produced by Sentence Smith are highly accurate.
|
2502.14735
|
EAGER-LLM: Enhancing Large Language Models as Recommenders through
Exogenous Behavior-Semantic Integration
|
cs.IR cs.AI
|
Large language models (LLMs) are increasingly leveraged as foundational
backbones in the development of advanced recommender systems, offering enhanced
capabilities through their extensive knowledge and reasoning. Existing
LLM-based recommender systems (RSs) often face challenges due to the
significant differences between the linguistic semantics of pre-trained LLMs
and the collaborative semantics essential for RSs. These systems use
pre-trained linguistic semantics but learn collaborative semantics from scratch
via the LLM backbone. However, LLMs are not designed for recommendations,
leading to inefficient collaborative learning, weak result correlations, and
poor integration of traditional RS features. To address these challenges, we
propose EAGER-LLM, a decoder-only llm-based generative recommendation framework
that integrates endogenous and exogenous behavioral and semantic information in
a non-intrusive manner. Specifically, we propose 1) dual-source knowledge-rich
item indices that integrate indexing sequences for exogenous signals, enabling
efficient link-wide processing; 2) non-invasive multiscale alignment
reconstruction tasks that guide the model toward a deeper understanding of both
collaborative and semantic signals; and 3) an annealing adapter designed to finely
balance the model's recommendation performance with its comprehension
capabilities. We demonstrate EAGER-LLM's effectiveness through rigorous testing
on three public benchmarks.
|
2502.14738
|
Robust Information Selection for Hypothesis Testing with
Misclassification Penalties
|
stat.ML cs.SY eess.SP eess.SY math.CO math.OC
|
We study the problem of robust information selection for a Bayesian
hypothesis testing / classification task, where the goal is to identify the
true state of the world from a finite set of hypotheses based on observations
from the selected information sources. We introduce a novel misclassification
penalty framework, which enables non-uniform treatment of different
misclassification events. Extending the classical subset selection framework,
we study the problem of selecting a subset of sources that minimize the maximum
penalty of misclassification under a limited budget, despite deletions or
failures of a subset of the selected sources. We characterize the curvature
properties of the objective function and propose an efficient greedy algorithm
with performance guarantees. Next, we highlight certain limitations of
optimizing for the maximum penalty metric and propose a submodular surrogate
metric to guide the selection of the information set. We propose a greedy
algorithm with near-optimality guarantees for optimizing the surrogate metric.
Finally, we empirically demonstrate the performance of our proposed algorithms
in several instances of the information set selection problem.
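The greedy template the abstract builds on can be sketched as follows; the toy sources and the coverage objective are illustrative assumptions, not the paper's penalty metrics.

```python
def greedy_select(ground_set, value_fn, budget):
    """Standard greedy subset selection: repeatedly add the element with the
    largest marginal gain until the budget is exhausted. For a monotone
    submodular value_fn this enjoys the classic (1 - 1/e) guarantee; it is a
    sketch of the template, not the paper's exact algorithm."""
    selected = []
    for _ in range(budget):
        base = value_fn(selected)
        best, best_gain = None, 0.0
        for e in ground_set:
            if e in selected:
                continue
            gain = value_fn(selected + [e]) - base
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element adds value
            break
        selected.append(best)
    return selected

# Toy objective: each source observes a set of hypothesis-distinguishing
# outcomes; the value of a subset is how many outcomes it covers.
sources = {"a": {1, 2}, "b": {2, 3}, "c": {1}}
coverage_value = lambda S: len(set().union(*(sources[s] for s in S)) if S else set())
picked = greedy_select(list(sources), coverage_value, budget=2)
```

Robustness to deletions and the surrogate metric of the paper would change `value_fn`, not the greedy loop itself.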
|
2502.14739
|
SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
|
cs.CL
|
Large language models (LLMs) have demonstrated remarkable proficiency in
mainstream academic disciplines such as mathematics, physics, and computer
science. However, human knowledge encompasses over 200 specialized disciplines,
far exceeding the scope of existing benchmarks. The capabilities of LLMs in
many of these specialized fields-particularly in light industry, agriculture,
and service-oriented disciplines-remain inadequately evaluated. To address this
gap, we present SuperGPQA, a comprehensive benchmark that evaluates
graduate-level knowledge and reasoning capabilities across 285 disciplines. Our
benchmark employs a novel Human-LLM collaborative filtering mechanism to
eliminate trivial or ambiguous questions through iterative refinement based on
both LLM responses and expert feedback. Our experimental results reveal
significant room for improvement in the performance of current state-of-the-art
LLMs across diverse knowledge domains (e.g., the reasoning-focused model
DeepSeek-R1 achieved the highest accuracy of 61.82% on SuperGPQA), highlighting
the considerable gap between current model capabilities and artificial general
intelligence. Additionally, we present comprehensive insights from our
management of a large-scale annotation process, involving over 80 expert
annotators and an interactive Human-LLM collaborative system, offering valuable
methodological guidance for future research initiatives of comparable scope.
|
2502.14740
|
YOLOv12: A Breakdown of the Key Architectural Features
|
cs.CV cs.AI
|
This paper presents an architectural analysis of YOLOv12, a significant
advancement in single-stage, real-time object detection building upon the
strengths of its predecessors while introducing key improvements. The model
incorporates an optimised backbone (R-ELAN), 7x7 separable convolutions, and
FlashAttention-driven area-based attention, improving feature extraction,
efficiency, and detection robustness. With multiple model variants,
similar to its predecessors, YOLOv12 offers scalable solutions for both
latency-sensitive and high-accuracy applications. Experimental results manifest
consistent gains in mean average precision (mAP) and inference speed, making
YOLOv12 a compelling choice for applications in autonomous systems, security,
and real-time analytics. By achieving an optimal balance between computational
efficiency and performance, YOLOv12 sets a new benchmark for real-time computer
vision, facilitating deployment across diverse hardware platforms, from edge
devices to high-performance clusters.
|
2502.14741
|
Reinforcement Learning with Graph Attention for Routing and Wavelength
Assignment with Lightpath Reuse
|
cs.NI cs.LG cs.SY eess.SY
|
Many works have investigated reinforcement learning (RL) for routing and
spectrum assignment on flex-grid networks but only one work to date has
examined RL for fixed-grid with flex-rate transponders, despite production
systems using this paradigm. Flex-rate transponders allow existing lightpaths
to accommodate new services, a task we term routing and wavelength assignment
with lightpath reuse (RWA-LR). We re-examine this problem and present a
thorough benchmarking of heuristic algorithms for RWA-LR, which are shown to
have 6% increased throughput when candidate paths are ordered by number of
hops, rather than total length. We train an RL agent for RWA-LR with graph
attention networks for the policy and value functions to exploit the
graph-structured data. We provide details of our methodology and open source
all of our code for reproduction. We outperform the previous state-of-the-art
RL approach by 2.5% (17.4 Tbps mean additional throughput) and the best
heuristic by 1.2% (8.5 Tbps mean additional throughput). This marginal gain
highlights the difficulty in learning effective RL policies on long horizon
resource allocation tasks.
|
2502.14743
|
Multi-Agent Coordination across Diverse Applications: A Survey
|
cs.MA cs.AI
|
Multi-agent coordination studies the underlying mechanism enabling the
trending spread of diverse multi-agent systems (MAS) and has received
increasing attention, driven by the expansion of emerging applications and
rapid AI advances. This survey outlines the current state of coordination
research across applications through a unified understanding that answers four
fundamental coordination questions: (1) what is coordination; (2) why
coordination; (3) who to coordinate with; and (4) how to coordinate. Our
purpose is to explore existing ideas and expertise in coordination and their
connections across diverse applications, while identifying and highlighting
emerging and promising research directions. First, general coordination
problems that are essential to varied applications are identified and analyzed.
Second, a number of MAS applications are surveyed, ranging from widely studied
domains, e.g., search and rescue, warehouse automation and logistics, and
transportation systems, to emerging fields including humanoid and
anthropomorphic robots, satellite systems, and large language models (LLMs).
Finally, open challenges about the scalability, heterogeneity, and learning
mechanisms of MAS are analyzed and discussed. In particular, we identify the
hybridization of hierarchical and decentralized coordination, human-MAS
coordination, and LLM-based MAS as promising future directions.
|
2502.14744
|
HiddenDetect: Detecting Jailbreak Attacks against Large Vision-Language
Models via Monitoring Hidden States
|
cs.CL
|
The integration of additional modalities increases the susceptibility of
large vision-language models (LVLMs) to safety risks, such as jailbreak
attacks, compared to their language-only counterparts. While existing research
primarily focuses on post-hoc alignment techniques, the underlying safety
mechanisms within LVLMs remain largely unexplored. In this work, we
investigate whether LVLMs inherently encode safety-relevant signals within
their internal activations during inference. Our findings reveal that LVLMs
exhibit distinct activation patterns when processing unsafe prompts, which can
be leveraged to detect and mitigate adversarial inputs without requiring
extensive fine-tuning. Building on this insight, we introduce HiddenDetect, a
novel tuning-free framework that harnesses internal model activations to
enhance safety. Experimental results show that HiddenDetect surpasses
state-of-the-art methods in detecting jailbreak attacks against LVLMs. By
utilizing intrinsic safety-aware patterns, our method provides an efficient and
scalable solution for strengthening LVLM robustness against multimodal threats.
Our code will be released publicly at
https://github.com/leigest519/HiddenDetect.
|
2502.14745
|
SQL4NN: Validation and expressive querying of models as data
|
cs.DB cs.LG
|
We consider machine learning models, learned from data, to be an important,
intensional, kind of data in themselves. As such, various analysis tasks on
models can be thought of as queries over this intensional data, often combined
with extensional data such as data for training or validation. We demonstrate
that relational database systems and SQL can actually be well suited for many
such tasks.
|
2502.14746
|
Classical and quantum Coxeter codes: Extending the Reed-Muller family
|
cs.IT math.CO math.IT quant-ph
|
We introduce a class of binary linear codes that generalizes the Reed-Muller
family by replacing the group $\mathbb{Z}_2^m$ with an arbitrary finite Coxeter
group. Similar to the Reed-Muller codes, this class is closed under duality and
has rate determined by a Gaussian distribution. We also construct quantum CSS
codes arising from the Coxeter codes, which admit transversal logical operators
outside of the Clifford group.
|
2502.14748
|
Large Language Models Struggle to Describe the Haystack without Human
Help: Human-in-the-loop Evaluation of LLMs
|
cs.CL
|
A common use of NLP is to facilitate the understanding of large document
collections, with a shift from using traditional topic models to Large Language
Models. Yet the effectiveness of using LLM for large corpus understanding in
real-world applications remains under-explored. This study measures the
knowledge users acquire with unsupervised, supervised LLM-based exploratory
approaches or traditional topic models on two datasets. While LLM-based methods
generate more human-readable topics and show higher average win probabilities
than traditional models for data exploration, they produce overly generic
topics for domain-specific datasets that do not easily allow users to learn
much about the documents. Adding human supervision to the LLM generation
process improves data exploration by mitigating hallucination and
over-genericity but requires greater human effort. In contrast, traditional
models like Latent Dirichlet Allocation (LDA) remain effective for exploration
but are less user-friendly. We show that LLMs struggle to describe the haystack
of large corpora without human help, particularly domain-specific data, and
face scaling and hallucination limitations due to context length constraints.
Dataset available at https://huggingface.co/datasets/zli12321/Bills.
|
2502.14752
|
TritonBench: Benchmarking Large Language Model Capabilities for
Generating Triton Operators
|
cs.CL cs.LG
|
Triton, a high-level Python-like language designed for building efficient GPU
kernels, is widely adopted in deep learning frameworks due to its portability,
flexibility, and accessibility. However, programming and parallel optimization
still require considerable trial and error from Triton developers. Despite
advances in large language models (LLMs) for conventional code generation,
these models struggle to generate accurate, performance-optimized Triton code,
as they lack awareness of its specifications and the complexities of GPU
programming. More critically, there is an urgent need for systematic
evaluations tailored to Triton. In this work, we introduce TritonBench, the
first comprehensive benchmark for Triton operator generation. TritonBench
features two evaluation channels: a curated set of 184 real-world operators
from GitHub and a collection of operators aligned with PyTorch interfaces.
Unlike conventional code benchmarks prioritizing functional correctness,
TritonBench also profiles efficiency performance on widely deployed GPUs
aligned with industry applications. Our study reveals that current
state-of-the-art code LLMs struggle to generate efficient Triton operators,
highlighting a significant gap in high-performance code generation. TritonBench
will be available at https://github.com/thunlp/TritonBench.
|
2502.14753
|
MedVAE: Efficient Automated Interpretation of Medical Images with
Large-Scale Generalizable Autoencoders
|
eess.IV cs.AI cs.CV
|
Medical images are acquired at high resolutions with large fields of view in
order to capture fine-grained features necessary for clinical decision-making.
Consequently, training deep learning models on medical images can incur large
computational costs. In this work, we address the challenge of downsizing
medical images in order to improve downstream computational efficiency while
preserving clinically-relevant features. We introduce MedVAE, a family of six
large-scale 2D and 3D autoencoders capable of encoding medical images as
downsized latent representations and decoding latent representations back to
high-resolution images. We train MedVAE autoencoders using a novel two-stage
training approach with 1,052,730 medical images. Across diverse tasks obtained
from 20 medical image datasets, we demonstrate that (1) utilizing MedVAE latent
representations in place of high-resolution images when training downstream
models can lead to efficiency benefits (up to 70x improvement in throughput)
while simultaneously preserving clinically-relevant features and (2) MedVAE can
decode latent representations back to high-resolution images with high
fidelity. Our work demonstrates that large-scale, generalizable autoencoders
can help address critical efficiency challenges in the medical domain. Our code
is available at https://github.com/StanfordMIMI/MedVAE.
|
2502.14755
|
Multi-Objective Causal Bayesian Optimization
|
stat.ML cs.LG
|
In decision-making problems, the outcome of an intervention often depends on
the causal relationships between system components and is highly costly to
evaluate. In such settings, causal Bayesian optimization (CBO) can exploit the
causal relationships between the system variables and sequentially perform
interventions to approach the optimum with minimal data. Extending CBO to the
multi-outcome setting, we propose Multi-Objective Causal Bayesian Optimization
(MO-CBO), a paradigm for identifying Pareto-optimal interventions within a
known multi-target causal graph. We first derive a graphical characterization
for potentially optimal sets of variables to intervene upon. Showing that any
MO-CBO problem can be decomposed into several traditional multi-objective
optimization tasks, we then introduce an algorithm that sequentially balances
exploration across these tasks using relative hypervolume improvement. The
proposed method is validated on both synthetic and real-world causal
graphs, demonstrating its superiority over traditional (non-causal)
multi-objective Bayesian optimization in settings where causal information is
available.
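For intuition, the hypervolume indicator underlying relative hypervolume improvement can be computed in the two-objective minimisation case as a sum of rectangles. This helper and its reference point are illustrative assumptions, not the authors' implementation.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a two-objective minimisation front, measured
    against a reference point (both objectives to be minimised)."""
    # Keep points that dominate the reference point, then sweep by the
    # first objective, accumulating one rectangle per non-dominated point.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # skip points dominated within the sweep
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0))
```

A Pareto improvement by an intervention shows up as an increase of this quantity relative to the current front.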
|
2502.14759
|
On the Influence of Context Size and Model Choice in Retrieval-Augmented
Generation Systems
|
cs.CL cs.AI
|
Retrieval-augmented generation (RAG) has emerged as an approach to augment
large language models (LLMs) by reducing their reliance on static knowledge and
improving answer factuality. RAG retrieves relevant context snippets and
generates an answer based on them. Despite its increasing industrial adoption,
systematic exploration of RAG components is lacking, particularly regarding the
ideal size of provided context, and the choice of base LLM and retrieval
method. To help guide development of robust RAG systems, we evaluate various
context sizes, BM25 and semantic search as retrievers, and eight base LLMs.
Moving away from the usual RAG evaluation with short answers, we explore the
more challenging long-form question answering in two domains, where a good
answer has to utilize the entire context. Our findings indicate that final QA
performance improves steadily with up to 15 snippets but stagnates or declines
beyond that. Finally, we show that different general-purpose LLMs excel in the
biomedical domain than in the encyclopedic one, and that open-domain evidence
retrieval in large corpora is challenging.
|
2502.14760
|
EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of
Optimization Formulations
|
cs.AI cs.LG math.OC
|
A fundamental problem in combinatorial optimization is identifying equivalent
formulations, which can lead to more efficient solution strategies and deeper
insights into a problem's computational complexity. The need to automatically
identify equivalence between problem formulations has grown as optimization
copilots--systems that generate problem formulations from natural language
descriptions--have proliferated. However, existing approaches to checking
formulation equivalence lack grounding, relying on simple heuristics which are
insufficient for rigorous validation. Inspired by Karp reductions, in this work
we introduce quasi-Karp equivalence, a formal criterion for determining when
two optimization formulations are equivalent based on the existence of a
mapping between their decision variables. We propose EquivaMap, a framework
that leverages large language models to automatically discover such mappings,
enabling scalable and reliable equivalence verification. To evaluate our
approach, we construct the first open-source dataset of equivalent optimization
formulations, generated by applying transformations such as adding slack
variables or valid inequalities to existing formulations. Empirically,
EquivaMap significantly outperforms existing methods, achieving substantial
improvements in correctly identifying formulation equivalence.
|
2502.14762
|
Sculpting [CLS] Features for Pre-Trained Model-Based Class-Incremental
Learning
|
cs.LG cs.CV
|
Class-incremental learning requires models to continually acquire knowledge
of new classes without forgetting old ones. Although pre-trained models have
demonstrated strong performance in class-incremental learning, they remain
susceptible to catastrophic forgetting when learning new concepts. Excessive
plasticity in the models breaks generalizability and causes forgetting, while
strong stability results in insufficient adaptation to new classes. This
necessitates effective adaptation with minimal modifications to preserve the
general knowledge of pre-trained models. To address this challenge, we first
introduce a new parameter-efficient fine-tuning module 'Learn and Calibrate',
or LuCA, designed to acquire knowledge through an adapter-calibrator couple,
enabling effective adaptation with well-refined feature representations.
Second, for each learning session, we deploy a sparse LuCA module on top of the
last token just before the classifier, which we refer to as 'Token-level Sparse
Calibration and Adaptation', or TOSCA. This strategic design improves the
orthogonality between the modules and significantly reduces both training and
inference complexity. By leaving the generalization capabilities of the
pre-trained models intact and adapting exclusively via the last token, our
approach achieves a harmonious balance between stability and plasticity.
Extensive experiments demonstrate TOSCA's state-of-the-art performance while
introducing ~8 times fewer parameters compared to prior methods.
|
2502.14764
|
The illusion of households as entities in social networks
|
cs.SI physics.soc-ph
|
Data recording connections between people in communities and villages are
collected and analyzed in various ways, most often as either networks of
individuals or as networks of households. These two networks can differ in
substantial ways. The methodological choice of which network to study,
therefore, is an important aspect in both study design and data analysis. In
this work, we consider various key differences between household and individual
social network structure, and ways in which the networks cannot be used
interchangeably. In addition to formalizing the choices for representing each
network, we explore the consequences of how the results of social network
analysis change depending on the choice between studying the individual and
household network -- from determining whether networks are assortative or
disassortative to the ranking of influence-maximizing nodes. As our main
contribution, we draw upon related work to propose a set of systematic
recommendations for determining the relevant network representation to study.
Our recommendations include assessing a series of entitativity criteria and
relating these criteria to theories and observations about patterns and norms
in social dynamics at the household level: notably, how information spreads
within households and how power structures and gender roles affect this spread.
We draw upon the definition of an illusion of entitativity to identify cases
wherein grouping people into households does not satisfy these criteria or
adequately represent given cultural or experimental contexts. Given the
widespread use of social network data for studying communities, there is broad
impact in understanding which network to study and the consequences of that
decision. We hope that this work gives guidance to practitioners and
researchers collecting and studying social network data.
|
2502.14765
|
Step-by-Step Fact Verification System for Medical Claims with
Explainable Reasoning
|
cs.CL cs.AI
|
Fact verification (FV) aims to assess the veracity of a claim based on
relevant evidence. The traditional approach for automated FV includes a
three-part pipeline relying on short evidence snippets and encoder-only
inference models. More recent approaches leverage the multi-turn nature of LLMs
to address FV as a step-by-step problem where questions inquiring additional
context are generated and answered until there is enough information to make a
decision. This iterative method makes the verification process rational and
explainable. While these methods have been tested for encyclopedic claims,
exploration on domain-specific and realistic claims is missing. In this work,
we apply an iterative FV system on three medical fact-checking datasets and
evaluate it with multiple settings, including different LLMs, external web
search, and structured reasoning using logic predicates. We demonstrate
improvements in the final performance over traditional approaches and the high
potential of step-by-step FV systems for domain-specific claims.
|
2502.14767
|
Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for
Scientific Comparative Analysis
|
cs.CL cs.AI
|
With the exponential growth of research facilitated by modern technology and
improved accessibility, scientific discoveries have become increasingly
fragmented within and across fields. This makes it challenging to assess the
significance, novelty, incremental findings, and equivalent ideas between
related works, particularly those from different research communities. Large
language models (LLMs) have recently demonstrated strong quantitative and
qualitative reasoning abilities, and multi-agent LLM debates have shown promise
in handling complex reasoning tasks by exploring diverse perspectives and
reasoning paths. Inspired by this, we introduce Tree-of-Debate (ToD), a
framework which converts scientific papers into LLM personas that debate their
respective novelties. To emphasize structured, critical reasoning rather than
focusing solely on outcomes, ToD dynamically constructs a debate tree, enabling
fine-grained analysis of independent novelty arguments within scholarly
articles. Through experiments on scientific literature across various domains,
evaluated by expert researchers, we demonstrate that ToD generates informative
arguments, effectively contrasts papers, and supports researchers in their
literature review.
|
2502.14768
|
Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement
Learning
|
cs.CL cs.AI
|
Inspired by the success of DeepSeek-R1, we explore the potential of
rule-based reinforcement learning (RL) in large reasoning models. To analyze
reasoning dynamics, we use synthetic logic puzzles as training data due to
their controllable complexity and straightforward answer verification. We make
some key technical contributions that lead to effective and stable RL training:
a system prompt that emphasizes the thinking and answering process, a stringent
format reward function that penalizes outputs for taking shortcuts, and a
straightforward training recipe that achieves stable convergence. Our 7B model
develops advanced reasoning skills-such as reflection, verification, and
summarization-that are absent from the logic corpus. Remarkably, after training
on just 5K logic problems, it demonstrates generalization abilities to the
challenging math benchmarks AIME and AMC.
|
2502.14770
|
Determining Layer-wise Sparsity for Large Language Models Through a
Theoretical Perspective
|
cs.LG
|
In this paper, we address the challenge of determining the layer-wise
sparsity rates of large language models (LLMs) through a theoretical
perspective. Specifically, we identify a critical issue of
''$\textbf{reconstruction error explosion}$'' in existing LLMs sparsification
methods. This refers to the cumulative effect of reconstruction errors
throughout the sparsification process, where errors from earlier layers
propagate and amplify in subsequent layers. As a result, the overall
reconstruction error increases significantly, leading to a substantial
degradation in model performance. Through theoretical analysis, we derive a
simple yet effective approach to layer-wise sparsity allocation that mitigates
this issue. Our method uses a monotonically increasing arithmetic progression,
reducing the process of determining sparsity rates for multiple layers to the
determination of a single common difference hyperparameter. Remarkably, this
allows for the optimal layer-wise sparsity rates to be identified with just a
few trials. Both our theoretical analysis and experimental results demonstrate
that this sparsity allocation scheme is near optimal. Extensive experiments
show that our method significantly improves the performance of sparse LLMs
across various architectures, outperforming existing layer-wise sparsity
methods. Furthermore, it enhances the performance of various compression
techniques and is applicable to vision and multimodal models. Notably, our
method achieves a reduction of 52.10 in perplexity for the 70$\%$ sparse
LLaMA2-7B model obtained via Wanda, improves average zero-shot accuracy by
10.50$\%$, and delivers speedups of 2.63$\times$ and 2.23$\times$ on CPU and
GPU, respectively.
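The allocation scheme described above reduces to a single common-difference hyperparameter; a minimal sketch, where the function name and the centering convention are assumptions of mine, not the authors' code:

```python
def arithmetic_sparsity(n_layers, target_avg, common_diff):
    """Monotonically increasing per-layer sparsity rates forming an
    arithmetic progression whose mean equals target_avg, so tuning the
    whole allocation reduces to the single hyperparameter common_diff
    (illustrative sketch of the idea in the abstract)."""
    rates = [target_avg + (i - (n_layers - 1) / 2.0) * common_diff
             for i in range(n_layers)]
    if not all(0.0 <= r <= 1.0 for r in rates):
        raise ValueError("common_diff too large for this target_avg")
    return rates

# e.g. 70% average sparsity over 8 layers, later layers pruned more
rates = arithmetic_sparsity(n_layers=8, target_avg=0.7, common_diff=0.02)
```

Sweeping only `common_diff` over a few trials matches the abstract's claim that near-optimal rates are found with little search.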
|
2502.14772
|
Efficient Multivariate Robust Mean Estimation Under Mean-Shift
Contamination
|
cs.DS cs.LG math.ST stat.ML stat.TH
|
We study the algorithmic problem of robust mean estimation of an identity
covariance Gaussian in the presence of mean-shift contamination. In this
contamination model, we are given a set of points in $\mathbb{R}^d$ generated
i.i.d. via the following process. For a parameter $\alpha<1/2$, the $i$-th
sample $x_i$ is obtained as follows: with probability $1-\alpha$, $x_i$ is
drawn from $\mathcal{N}(\mu, I)$, where $\mu \in \mathbb{R}^d$ is the target
mean; and with probability $\alpha$, $x_i$ is drawn from $\mathcal{N}(z_i, I)$,
where $z_i$ is unknown and potentially arbitrary. Prior work characterized the
information-theoretic limits of this task. Specifically, it was shown that, in
contrast to Huber contamination, in the presence of mean-shift contamination
consistent estimation is possible. On the other hand, all known robust
estimators in the mean-shift model have running times exponential in the
dimension. Here we give the first computationally efficient algorithm for
high-dimensional robust mean estimation with mean-shift contamination that can
tolerate a constant fraction of outliers. In particular, our algorithm has
near-optimal sample complexity, runs in sample-polynomial time, and
approximates the target mean to any desired accuracy. Conceptually, our result
contributes to a growing body of work that studies inference with respect to
natural noise models lying in between fully adversarial and random settings.
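The contamination model itself is easy to simulate; the sampler below follows the generative process in the abstract, while the coordinate-wise median is only a naive robust baseline for comparison, not the paper's estimator (the outlier centers are placed far away purely for illustration).

```python
import numpy as np

def mean_shift_sample(n, d, mu, alpha, rng):
    """Mean-shift contamination: each point is N(mu, I) with probability
    1 - alpha, and N(z_i, I) with probability alpha, where z_i is an
    arbitrary per-point center (chosen adversarially far here)."""
    x = rng.standard_normal((n, d)) + mu
    is_outlier = rng.random(n) < alpha
    k = int(is_outlier.sum())
    z = 50.0 + 10.0 * rng.standard_normal((k, d))  # arbitrary shift centers
    x[is_outlier] = z + rng.standard_normal((k, d))
    return x

rng = np.random.default_rng(0)
x = mean_shift_sample(n=2000, d=5, mu=np.zeros(5), alpha=0.2, rng=rng)
naive = x.mean(axis=0)        # biased by roughly alpha * shift magnitude
robust = np.median(x, axis=0)  # simple baseline, not the paper's algorithm
```

Even this crude baseline illustrates why consistent estimation is plausible here, unlike under Huber contamination, where outliers can concentrate adversarially.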
|
2502.14773
|
Sparse Activations as Conformal Predictors
|
cs.LG
|
Conformal prediction is a distribution-free framework for uncertainty
quantification that replaces point predictions with sets, offering marginal
coverage guarantees (i.e., ensuring that the prediction sets contain the true
label with a specified probability, in expectation). In this paper, we uncover
a novel connection between conformal prediction and sparse softmax-like
transformations, such as sparsemax and $\gamma$-entmax (with $\gamma > 1$),
which may assign nonzero probability only to a subset of labels. We introduce
new non-conformity scores for classification that make the calibration process
correspond to the widely used temperature scaling method. At test time,
applying these sparse transformations with the calibrated temperature leads to
a support set (i.e., the set of labels with nonzero probability) that
automatically inherits the coverage guarantees of conformal prediction. Through
experiments on computer vision and text classification benchmarks, we
demonstrate that the proposed method achieves competitive results in terms of
coverage, efficiency, and adaptiveness compared to standard non-conformity
scores based on softmax.
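The connection can be illustrated with sparsemax, whose output support directly yields a prediction set. This NumPy sketch uses the standard thresholding form of sparsemax; the placeholder temperature stands in for the calibration step and is an assumption, not the paper's code.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of z onto the probability simplex.
    Unlike softmax, it can assign exactly zero probability to labels."""
    z = np.asarray(z, dtype=float)
    zs = np.sort(z)[::-1]                 # scores in descending order
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(zs)
    support_size = k[1.0 + k * zs > cssv][-1]            # largest feasible k
    tau = (cssv[support_size - 1] - 1.0) / support_size  # threshold
    return np.maximum(z - tau, 0.0)

# Prediction set = support of the sparse distribution (labels with p > 0);
# in the paper's setting the logits are divided by a calibrated temperature.
logits = np.array([2.0, 1.0, 0.1])
temperature = 1.0  # placeholder; calibration would fit this on held-out data
p = sparsemax(logits / temperature)
prediction_set = np.flatnonzero(p > 0)
```

Lowering the temperature shrinks the support and raising it grows the support, which is what lets temperature calibration control coverage.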
|
2502.14776
|
SurveyX: Academic Survey Automation via Large Language Models
|
cs.CL
|
Large Language Models (LLMs) have demonstrated exceptional comprehension
capabilities and a vast knowledge base, suggesting that LLMs can serve as
efficient tools for automated survey generation. However, recent research
related to automated survey generation remains constrained by some critical
limitations like finite context window, lack of in-depth content discussion,
and absence of systematic evaluation frameworks. Inspired by human writing
processes, we propose SurveyX, an efficient and organized system for automated
survey generation that decomposes the survey composing process into two phases:
the Preparation and Generation phases. By innovatively introducing online
reference retrieval, a pre-processing method called AttributeTree, and a
re-polishing process, SurveyX significantly enhances the efficacy of survey
composition. Experimental evaluation results show that SurveyX outperforms
existing automated survey generation systems in content quality (0.259
improvement) and citation quality (1.76 enhancement), approaching human expert
performance across multiple evaluation dimensions. Examples of surveys
generated by SurveyX are available on www.surveyx.cn
|
2502.14777
|
Making Universal Policies Universal
|
cs.AI
|
The development of a generalist agent capable of solving a wide range of
sequential decision-making tasks remains a significant challenge. We address
this problem in a cross-agent setup where agents share the same observation
space but differ in their action spaces. Our approach builds on the universal
policy framework, which decouples policy learning into two stages: a
diffusion-based planner that generates observation sequences and an inverse
dynamics model that assigns actions to these plans. We propose a method for
training the planner on a joint dataset composed of trajectories from all
agents. This method offers the benefit of positive transfer by pooling data
from different agents, while the primary challenge lies in adapting shared
plans to each agent's unique constraints. We evaluate our approach on the
BabyAI environment, covering tasks of varying complexity, and demonstrate
positive transfer across agents. Additionally, we examine the planner's
generalisation ability to unseen agents and compare our method to traditional
imitation learning approaches. By training on a pooled dataset from multiple
agents, our universal policy achieves an improvement of up to $42.20\%$ in task
completion accuracy compared to a policy trained on a dataset from a single
agent.
|
2502.14778
|
Harnessing PDF Data for Improving Japanese Large Multimodal Models
|
cs.CL cs.AI cs.CV
|
Large Multimodal Models (LMMs) have demonstrated strong performance in
English, but their effectiveness in Japanese remains limited due to the lack of
high-quality training data. Current Japanese LMMs often rely on translated
English datasets, restricting their ability to capture Japan-specific cultural
knowledge. To address this, we explore the potential of Japanese PDF data as a
training resource, an area that remains largely underutilized. We introduce a
fully automated pipeline that leverages pretrained models to extract image-text
pairs from PDFs through layout analysis, OCR, and vision-language pairing,
removing the need for manual annotation. Additionally, we construct instruction
data from extracted image-text pairs to enrich the training data. To evaluate
the effectiveness of PDF-derived data, we train Japanese LMMs and assess their
performance on the Japanese LMM Benchmark. Our results demonstrate substantial
improvements, with performance gains ranging from 3.9% to 13.8% on Heron-Bench.
Further analysis highlights the impact of PDF-derived data on various factors,
such as model size and language models, reinforcing its value as a multimodal
resource for Japanese LMMs. We plan to make the source code and data publicly
available upon acceptance.
|
2502.14779
|
DC-ControlNet: Decoupling Inter- and Intra-Element Conditions in Image
Generation with Diffusion Models
|
cs.CV
|
In this paper, we introduce DC (Decouple)-ControlNet, a highly flexible and
precisely controllable framework for multi-condition image generation. The core
idea behind DC-ControlNet is to decouple control conditions, transforming
global control into a hierarchical system that integrates distinct elements,
contents, and layouts. This enables users to mix these individual conditions
with greater flexibility, leading to more efficient and accurate image
generation control. Previous ControlNet-based models rely solely on global
conditions, which affect the entire image and lack the ability of element- or
region-specific control. This limitation reduces flexibility and can cause
condition misunderstandings in multi-conditional image generation. To address
these challenges, we propose both intra-element and Inter-element Controllers
in DC-ControlNet. The Intra-Element Controller handles different types of
control signals within individual elements, accurately describing the content
and layout characteristics of the object. For interactions between elements, we
introduce the Inter-Element Controller, which accurately handles multi-element
interactions and occlusion based on user-defined relationships. Extensive
evaluations show that DC-ControlNet significantly outperforms existing
ControlNet models and Layout-to-Image generative models in terms of control
flexibility and precision in multi-condition control.
|
2502.14780
|
ReVision: A Dataset and Baseline VLM for Privacy-Preserving
Task-Oriented Visual Instruction Rewriting
|
cs.CL cs.AI cs.CV
|
Efficient and privacy-preserving multimodal interaction is essential as AR,
VR, and modern smartphones with powerful cameras become primary interfaces for
human-computer communication. Existing powerful large vision-language models
(VLMs) enabling multimodal interaction often rely on cloud-based processing,
raising significant concerns about (1) visual privacy by transmitting sensitive
vision data to servers, and (2) their limited real-time, on-device usability.
This paper explores Visual Instruction Rewriting, a novel approach that
transforms multimodal instructions into text-only commands, allowing seamless
integration of lightweight on-device instruction rewriter VLMs (250M
parameters) with existing conversational AI systems, enhancing vision data
privacy. To achieve this, we present a dataset of over 39,000 examples across
14 domains and develop a compact VLM, pretrained on image captioning datasets
and fine-tuned for instruction rewriting. Experimental results, evaluated
through NLG metrics such as BLEU, METEOR, and ROUGE, along with semantic
parsing analysis, demonstrate that even a quantized version of the model
(<500MB storage footprint) can achieve effective instruction rewriting, thus
enabling privacy-focused, multimodal AI applications.
|
2502.14782
|
A Neural Operator-Based Emulator for Regional Shallow Water Dynamics
|
cs.CE cs.LG physics.comp-ph physics.geo-ph
|
Coastal regions are particularly vulnerable to the impacts of rising sea
levels and extreme weather events. Accurate real-time forecasting of
hydrodynamic processes in these areas is essential for infrastructure planning
and climate adaptation. In this study, we present the Multiple-Input Temporal
Operator Network (MITONet), a novel autoregressive neural emulator that employs
dimensionality reduction to efficiently approximate high-dimensional numerical
solvers for complex, nonlinear problems that are governed by time-dependent,
parameterized partial differential equations. Although MITONet is applicable to
a wide range of problems, we showcase its capabilities by forecasting regional
tide-driven dynamics described by the two-dimensional shallow-water equations,
while incorporating initial conditions, boundary conditions, and a varying
domain parameter. We demonstrate MITONet's performance in a real-world
application, highlighting its ability to make accurate predictions by
extrapolating both in time and parametric space.
|
2502.14783
|
Tracking and Assigning Jobs to a Markov Machine
|
cs.IT cs.NI cs.SY eess.SY math.IT
|
We consider a time-slotted communication system with a machine, a cloud
server, and a sampler. Job requests from the users are queued on the server to
be completed by the machine. The machine has two states, namely, a busy state
and a free state. The server can assign a job to the machine in a first-come,
first-served manner. If the machine is free, it completes the job
request from the server; otherwise, it drops the request. Upon dropping a job
request, the server is penalized. When the machine is in the free state, the
machine can get into the busy state with an internal job. When the server does
not assign a job request to the machine, the state of the machine evolves as a
symmetric Markov chain. If the machine successfully accepts the job request
from the server, it enters the busy state and follows different dynamics than
when it becomes busy due to an internal job. The sampler samples the state of
the machine and
sends it to the server via an error-free channel. Thus, the server can estimate
the state of the machine, upon receiving an update from the source. If the
machine is in the free state but the estimated state at the server is busy, the
sampler pays a cost. We incorporate the concept of the age of incorrect
information to model the cost of the sampler. We aim to find an optimal
sampling policy such that the cost of the sampler plus the penalty on the
machine gets minimized. We formulate this problem in a Markov decision process
framework and find how an optimal policy changes with several associated
parameters. We show that a threshold policy is optimal for this problem. We
show a necessary and sufficient condition for a threshold policy to be optimal.
Finally, we find the optimal threshold without bounding the state space.
|
2502.14785
|
Real-Time Device Reach Forecasting Using HLL and MinHash Data Sketches
|
cs.DB cs.AI cs.LG
|
Predicting the right number of TVs (Device Reach) in real time based on
user-specified targeting attributes is imperative for running a
multi-million-dollar ads business. The traditional approach of SQL queries to join billions
of records across multiple targeting dimensions is extremely slow. As a
workaround, many applications will have an offline process to crunch these
numbers and present the results after many hours. In our case, the solution was
an offline process taking 24 hours to onboard a customer resulting in a
potential loss of business. To solve this problem, we have built a new
real-time prediction system using MinHash and HyperLogLog (HLL) data sketches
to compute the device reach at runtime when a user makes a request. However,
existing MinHash implementations do not solve the complex problem of multilevel
aggregation and intersection. This work shows how we solved this problem; in
addition, we improved the MinHash algorithm to run 4 times faster using Single
Instruction, Multiple Data (SIMD) vectorized operations, achieving high speed
and accuracy in constant space while processing billions of records. Finally,
through experiments, we show that the results are as accurate as those of the
traditional offline prediction system, with an acceptable error rate of 5%.
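As a rough illustration of the sketching idea (not the authors' SIMD implementation, and ignoring their multilevel-aggregation machinery), a plain MinHash signature estimates the Jaccard similarity between two device sets, from which overlap counts can be derived given a union-size estimate from, e.g., HLL:

```python
import hashlib

def minhash_signature(items, num_hashes=128):
    """MinHash signature of a set: one min over a keyed hash per slot.
    Illustrative sketch only; production versions vectorize this loop."""
    return [
        min(int(hashlib.blake2b(f"{i}:{x}".encode(), digest_size=8).hexdigest(), 16)
            for x in items)
        for i in range(num_hashes)
    ]

def jaccard_estimate(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two hypothetical device sets with a 500-device overlap (true Jaccard ~ 1/3).
devices_a = {f"dev{i}" for i in range(1000)}
devices_b = {f"dev{i}" for i in range(500, 1500)}
sa = minhash_signature(devices_a)
sb = minhash_signature(devices_b)
est = jaccard_estimate(sa, sb)   # close to 0.33, up to sketching error
```

With 128 slots the standard error of the estimate is a few percentage points, consistent with the ~5% error tolerance the abstract cites.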
|
2502.14786
|
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic
Understanding, Localization, and Dense Features
|
cs.CV cs.AI
|
We introduce SigLIP 2, a family of new multilingual vision-language encoders
that build on the success of the original SigLIP. In this second iteration, we
extend the original image-text training objective with several prior,
independently developed techniques into a unified recipe -- this includes
captioning-based pretraining, self-supervised losses (self-distillation, masked
prediction) and online data curation. With these changes, SigLIP 2 models
outperform their SigLIP counterparts at all model scales in core capabilities,
including zero-shot classification, image-text retrieval, and transfer
performance when extracting visual representations for Vision-Language Models
(VLMs). Furthermore, the new training recipe leads to significant improvements
on localization and dense prediction tasks. We also train variants which
support multiple resolutions and preserve the input's native aspect ratio.
Finally, we train on a more diverse data-mixture that includes de-biasing
techniques, leading to much better multilingual understanding and improved
fairness. To allow users to trade off inference cost with performance, we
release model checkpoints at four sizes: ViT-B (86M), L (303M), So400m (400M),
and g (1B).
|
2502.14788
|
Ray-Tracing for Conditionally Activated Neural Networks
|
cs.LG cs.AI
|
In this paper, we introduce a novel architecture for conditionally activated
neural networks combining a hierarchical construction of multiple Mixture of
Experts (MoEs) layers with a sampling mechanism that progressively converges to
an optimized configuration of expert activation. This methodology enables the
dynamic unfolding of the network's architecture, facilitating efficient
path-specific training. Experimental results demonstrate that this approach
achieves competitive accuracy compared to conventional baselines while
significantly reducing the parameter count required for inference. Notably,
this parameter reduction correlates with the complexity of the input patterns,
a property naturally emerging from the network's operational dynamics without
necessitating explicit auxiliary penalty functions.
|
2502.14789
|
Structurally Disentangled Feature Fields Distillation for 3D
Understanding and Editing
|
cs.CV
|
Recent work has demonstrated the ability to leverage or distill pre-trained
2D features obtained using large pre-trained 2D models into 3D features,
enabling impressive 3D editing and understanding capabilities using only 2D
supervision. Although impressive, these models assume that 3D features are captured
using a single feature field and often make a simplifying assumption that
features are view-independent. In this work, we propose instead to capture 3D
features using multiple disentangled feature fields that capture different
structural components of 3D features involving view-dependent and
view-independent components, which can be learned from 2D feature supervision
only. Subsequently, each element can be controlled in isolation, enabling
semantic and structural understanding and editing capabilities. For instance,
using a user click, one can segment 3D features corresponding to a given object
and then segment, edit, or remove their view-dependent (reflective) properties.
We evaluate our approach on the task of 3D segmentation and demonstrate a set
of novel understanding and editing tasks.
|
2502.14790
|
An Adversarial Analysis of Thompson Sampling for Full-information Online
Learning: from Finite to Infinite Action Spaces
|
cs.LG cs.GT math.ST stat.ML stat.TH
|
We develop an analysis of Thompson sampling for online learning under full
feedback - also known as prediction with expert advice - where the learner's
prior is defined over the space of an adversary's future actions, rather than
the space of experts. We show that regret decomposes into the regret the
learner expected a priori, plus a prior-robustness-type term we call excess
regret. In the classical finite-expert setting, this recovers optimal rates. As
an initial step towards practical online learning in settings with a
potentially uncountably infinite number of experts, we show that Thompson
sampling with a certain Gaussian process prior widely used in the Bayesian
optimization literature achieves a $\mathcal{O}(\beta\sqrt{T\log(1+\lambda)})$
regret rate against a $\beta$-bounded, $\lambda$-Lipschitz adversary.
|
2502.14791
|
Rapid Word Learning Through Meta In-Context Learning
|
cs.CL cs.AI cs.LG
|
Humans can quickly learn a new word from a few illustrative examples, and
then systematically and flexibly use it in novel contexts. Yet the abilities of
current language models for few-shot word learning, and methods for improving
these abilities, are underexplored. In this study, we introduce a novel method,
Meta-training for IN-context learNing Of Words (Minnow). This method trains
language models to generate new examples of a word's usage given a few
in-context examples, using a special placeholder token to represent the new
word. This training is repeated on many new words to develop a general
word-learning ability. We find that training models from scratch with Minnow on
human-scale child-directed language enables strong few-shot word learning,
comparable to a large language model (LLM) pre-trained on orders of magnitude
more data. Furthermore, through discriminative and generative evaluations, we
demonstrate that finetuning pre-trained LLMs with Minnow improves their ability
to discriminate between new words, identify syntactic categories of new words,
and generate reasonable new usages and definitions for new words, based on one
or a few in-context examples. These findings highlight the data efficiency of
Minnow and its potential to improve language model performance in word learning
tasks.
|
2502.14792
|
RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye
View Segmentation
|
cs.CV
|
Bird's Eye View (BEV) semantic maps have recently garnered a lot of attention
as a useful representation of the environment to tackle assisted and autonomous
driving tasks. However, most of the existing work focuses on the fully
supervised setting, training networks on large annotated datasets. In this
work, we present RendBEV, a new method for the self-supervised training of BEV
semantic segmentation networks, leveraging differentiable volumetric rendering
to receive supervision from semantic perspective views computed by a 2D
semantic segmentation model. Our method enables zero-shot BEV semantic
segmentation, and already delivers competitive results in this challenging
setting. When used as pretraining to then fine-tune on labeled BEV
ground-truth, our method significantly boosts performance in low-annotation
regimes, and sets a new state of the art when fine-tuning on all available
labels.
|
2502.14795
|
Humanoid-VLA: Towards Universal Humanoid Control with Visual Integration
|
cs.RO cs.CV
|
This paper addresses the limitations of current humanoid robot control
frameworks, which primarily rely on reactive mechanisms and lack autonomous
interaction capabilities due to data scarcity. We propose Humanoid-VLA, a novel
framework that integrates language understanding, egocentric scene perception,
and motion control, enabling universal humanoid control. Humanoid-VLA begins
with language-motion pre-alignment using non-egocentric human motion datasets
paired with textual descriptions, allowing the model to learn universal motion
patterns and action semantics. We then incorporate egocentric visual context
through a parameter efficient video-conditioned fine-tuning, enabling
context-aware motion generation. Furthermore, we introduce a self-supervised
data augmentation strategy that automatically generates pseudo-annotations
directly derived from motion data. This process converts raw motion sequences
into informative question-answer pairs, facilitating the effective use of
large-scale unlabeled video data. Built upon whole-body control architectures,
extensive experiments show that Humanoid-VLA achieves object interaction and
environment exploration tasks with enhanced contextual awareness, demonstrating
a more human-like capacity for adaptive and intelligent engagement.
|
2502.14796
|
A Multi-Agent Perspective on Modern Information Retrieval
|
cs.IR
|
The rise of large language models (LLMs) has introduced a new era in
information retrieval (IR), where queries and documents that were once assumed
to be generated exclusively by humans can now also be created by automated
agents. These agents can formulate queries, generate documents, and perform
ranking. This shift challenges some long-standing IR paradigms and calls for a
reassessment of both theoretical frameworks and practical methodologies. We
advocate for a multi-agent perspective to better capture the complex
interactions between query agents, document agents, and ranker agents. Through
empirical exploration of various multi-agent retrieval settings, we reveal the
significant impact of these interactions on system performance. Our findings
underscore the need to revisit classical IR paradigms and develop new
frameworks for more effective modeling and evaluation of modern retrieval
systems.
|
2502.14799
|
A Survey on Text-Driven 360-Degree Panorama Generation
|
cs.CV cs.AI
|
The advent of text-driven 360-degree panorama generation, enabling the
synthesis of 360-degree panoramic images directly from textual descriptions,
marks a transformative advancement in immersive visual content creation. This
innovation significantly simplifies the traditionally complex process of
producing such content. Recent progress in text-to-image diffusion models has
accelerated the rapid development in this emerging field. This survey presents
a comprehensive review of text-driven 360-degree panorama generation, offering
an in-depth analysis of state-of-the-art algorithms and their expanding
applications in 360-degree 3D scene generation. Furthermore, we critically
examine current limitations and propose promising directions for future
research. A curated project page with relevant resources and research papers is
available at https://littlewhitesea.github.io/Text-Driven-Pano-Gen/.
|
2502.14801
|
AVD2: Accident Video Diffusion for Accident Video Description
|
cs.CV
|
Traffic accidents present complex challenges for autonomous driving, often
featuring unpredictable scenarios that hinder accurate system interpretation
and responses. Nonetheless, prevailing methodologies fall short in elucidating
the causes of accidents and proposing preventive measures due to the paucity of
training data specific to accident scenarios. In this work, we introduce AVD2
(Accident Video Diffusion for Accident Video Description), a novel framework
that enhances accident scene understanding by generating accident videos
aligned with detailed natural language descriptions and reasoning, resulting in
the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding)
dataset. Empirical results reveal that the integration of the EMM-AU dataset
establishes state-of-the-art performance across both automated metrics and
human evaluations, markedly advancing the domains of accident analysis and
prevention. Project resources are available at https://an-answer-tree.github.io
|
2502.14802
|
From RAG to Memory: Non-Parametric Continual Learning for Large Language
Models
|
cs.CL cs.AI
|
Our ability to continuously acquire, organize, and leverage knowledge is a
key feature of human intelligence that AI systems must approximate to unlock
their full potential. Given the challenges in continual learning with large
language models (LLMs), retrieval-augmented generation (RAG) has become the
dominant way to introduce new information. However, its reliance on vector
retrieval hinders its ability to mimic the dynamic and interconnected nature of
human long-term memory. Recent RAG approaches augment vector embeddings with
various structures like knowledge graphs to address some of these gaps, namely
sense-making and associativity. However, their performance on more basic
factual memory tasks drops considerably below standard RAG. We address this
unintended deterioration and propose HippoRAG 2, a framework that outperforms
standard RAG comprehensively on factual, sense-making, and associative memory
tasks. HippoRAG 2 builds upon the Personalized PageRank algorithm used in
HippoRAG and enhances it with deeper passage integration and more effective
online use of an LLM. This combination pushes this RAG system closer to the
effectiveness of human long-term memory, achieving a 7% improvement in
associative memory tasks over the state-of-the-art embedding model while also
exhibiting superior factual knowledge and sense-making memory capabilities.
This work paves the way for non-parametric continual learning for LLMs. Our
code and data will be released at https://github.com/OSU-NLP-Group/HippoRAG.
|
2502.14803
|
Planning, scheduling, and execution on the Moon: the CADRE technology
demonstration mission
|
cs.RO cs.SY eess.SY
|
NASA's Cooperative Autonomous Distributed Robotic Exploration (CADRE)
mission, slated for flight to the Moon's Reiner Gamma region in 2025/2026, is
designed to demonstrate multi-agent autonomous exploration of the Lunar surface
and sub-surface. A team of three robots and a base station will autonomously
explore a region near the lander, collecting the data required for 3D
reconstruction of the surface with no human input; and then autonomously
perform distributed sensing with multi-static ground penetrating radars (GPR),
driving in formation while performing coordinated radar soundings to create a
map of the subsurface. At the core of CADRE's software architecture is a novel
autonomous, distributed planning, scheduling, and execution (PS&E) system. The
system coordinates the robots' activities, planning and executing tasks that
require multiple robots' participation while ensuring that each individual
robot's thermal and power resources stay within prescribed bounds, and
respecting ground-prescribed sleep-wake cycles. The system uses a
centralized-planning, distributed-execution paradigm, and a leader election
mechanism ensures robustness to failures of individual agents. In this paper,
we describe the architecture of CADRE's PS&E system; discuss its design
rationale; and report on verification and validation (V&V) testing of the
system on CADRE's hardware in preparation for deployment on the Moon.
|
2502.14807
|
FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image
Analysis
|
eess.IV cs.AI cs.CV
|
Foundation models are becoming increasingly effective in the medical domain,
offering pre-trained models on large datasets that can be readily adapted for
downstream tasks. Despite progress, fetal ultrasound images remain a
challenging domain for foundation models due to their inherent complexity,
often requiring substantial additional training and facing limitations due to
the scarcity of paired multimodal data. To overcome these challenges, here we
introduce FetalCLIP, a vision-language foundation model capable of generating
universal representations of fetal ultrasound images. FetalCLIP was pre-trained
using a multimodal learning approach on a diverse dataset of 210,035 fetal
ultrasound images paired with text. This represents the largest paired dataset
of its kind used for foundation model development to date. This unique training
approach allows FetalCLIP to effectively learn the intricate anatomical
features present in fetal ultrasound images, resulting in robust
representations that can be used for a variety of downstream applications. In
extensive benchmarking across a range of key fetal ultrasound applications,
including classification, gestational age estimation, congenital heart defect
(CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all
baselines while demonstrating remarkable generalizability and strong
performance even with limited labeled data. We plan to release the FetalCLIP
model publicly for the benefit of the broader scientific community.
|
2502.14809
|
PREM: Privately Answering Statistical Queries with Relative Error
|
cs.LG
|
We introduce $\mathsf{PREM}$ (Private Relative Error Multiplicative weight
update), a new framework for generating synthetic data that achieves a relative
error guarantee for statistical queries under $(\varepsilon, \delta)$
differential privacy (DP). Namely, for a domain ${\cal X}$, a family ${\cal F}$
of queries $f : {\cal X} \to \{0, 1\}$, and $\zeta > 0$, our framework yields a
mechanism that on input dataset $D \in {\cal X}^n$ outputs a synthetic dataset
$\widehat{D} \in {\cal X}^n$ such that all statistical queries in ${\cal F}$ on
$D$, namely $\sum_{x \in D} f(x)$ for $f \in {\cal F}$, are within a $1 \pm
\zeta$ multiplicative factor of the corresponding value on $\widehat{D}$ up to
an additive error that is polynomial in $\log |{\cal F}|$, $\log |{\cal X}|$,
$\log n$, $\log(1/\delta)$, $1/\varepsilon$, and $1/\zeta$. In contrast, any
$(\varepsilon, \delta)$-DP mechanism is known to require worst-case additive
error that is polynomial in at least one of $n, |{\cal F}|$, or $|{\cal X}|$.
We complement our algorithm with nearly matching lower bounds.
|
2502.14814
|
VB-Com: Learning Vision-Blind Composite Humanoid Locomotion Against
Deficient Perception
|
cs.RO
|
The performance of legged locomotion is closely tied to the accuracy and
comprehensiveness of state observations. Blind policies, which rely solely on
proprioception, are considered highly robust due to the reliability of
proprioceptive observations. However, these policies significantly limit
locomotion speed and often require collisions with the terrain to adapt. In
contrast, vision policies allow the robot to plan motions in advance and
respond proactively to unstructured terrain via an online perception module.
However, perception is often compromised by noisy real-world environments,
potential sensor failures, and the limitations of current simulations in
presenting dynamic or deformable terrains. Humanoid robots, with high degrees
of freedom and inherently unstable morphology, are particularly susceptible to
misguidance from deficient perception, which can result in falls or termination
on challenging dynamic terrains. To leverage the advantages of both vision and
blind policies, we propose VB-Com, a composite framework that enables humanoid
robots to determine when to rely on the vision policy and when to switch to the
blind policy under perceptual deficiency. We demonstrate that VB-Com
effectively enables humanoid robots to traverse challenging terrains and
obstacles despite perception deficiencies caused by dynamic terrains or
perceptual noise.
|
2502.14815
|
Optimizing Model Selection for Compound AI Systems
|
cs.AI cs.CL cs.LG cs.MA
|
Compound AI systems that combine multiple LLM calls, such as self-refine and
multi-agent-debate, achieve strong performance on many AI tasks. We address a
core question in optimizing compound systems: for each LLM call or module in
the system, how should one decide which LLM to use? We show that these LLM
choices have a large effect on quality, but the search space is exponential. We
propose LLMSelector, an efficient framework for model selection in compound
systems, which leverages two key empirical insights: (i) end-to-end performance
is often monotonic in how well each module performs, with all other modules
held fixed, and (ii) per-module performance can be estimated accurately by an
LLM. Building upon these insights, LLMSelector iteratively selects one module
and allocates to it the model with the highest module-wise performance, as
estimated by an LLM, until no further gain is possible. LLMSelector is
applicable to any compound system with a bounded number of modules, and its
number of API calls scales linearly with the number of modules, achieving
high-quality model allocation both empirically and theoretically. Experiments
with popular compound systems such as multi-agent debate and self-refine using
LLMs such as GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 show that LLMSelector
confers 5%-70% accuracy gains compared to using the same LLM for all modules.
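The iterative allocation the abstract describes can be sketched as a greedy coordinate-ascent loop. Here `estimate_module_score` is a hypothetical stand-in for the paper's LLM-based per-module performance estimator, and the toy scores are invented for illustration:

```python
def select_models(modules, candidate_models, estimate_module_score):
    """Repeatedly pick one module and assign it the model with the highest
    estimated module-wise score, holding the rest of the allocation fixed,
    until no assignment changes (i.e., no further estimated gain)."""
    allocation = {m: candidate_models[0] for m in modules}
    improved = True
    while improved:
        improved = False
        for module in modules:
            best = max(candidate_models,
                       key=lambda llm: estimate_module_score(module, llm, allocation))
            if best != allocation[module]:
                allocation[module] = best
                improved = True
    return allocation

# Toy two-module compound system where each module has a different best model.
scores = {("retrieve", "modelA"): 0.9, ("retrieve", "modelB"): 0.6,
          ("answer", "modelA"): 0.5, ("answer", "modelB"): 0.8}
alloc = select_models(["retrieve", "answer"], ["modelA", "modelB"],
                      lambda m, llm, _alloc: scores[(m, llm)])
print(alloc)  # {'retrieve': 'modelA', 'answer': 'modelB'}
```

Because each pass touches every module once, the number of estimator calls grows linearly in the number of modules, matching the scaling claim in the abstract.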
|
2502.14816
|
Dynamic Low-Rank Sparse Adaptation for Large Language Models
|
cs.LG
|
Despite the efficacy of network sparsity in alleviating the deployment strain
of Large Language Models (LLMs), it incurs significant performance degradation.
Applying Low-Rank Adaptation (LoRA) to fine-tune sparse LLMs offers an
intuitive approach to counter this predicament, but it has two shortcomings:
1) the inability to integrate LoRA weights into sparse LLMs post-training, and
2) insufficient performance recovery at high sparsity ratios. In this paper, we
introduce dynamic Low-rank Sparse Adaptation (LoSA),
a novel method that seamlessly integrates low-rank adaptation into LLM sparsity
within a unified framework, thereby enhancing the performance of sparse LLMs
without increasing the inference latency. In particular, LoSA dynamically
sparsifies the LoRA outcomes based on the corresponding sparse weights during
fine-tuning, thus guaranteeing that the LoRA module can be integrated into the
sparse LLMs post-training. Besides, LoSA leverages Representation Mutual
Information (RMI) as an indicator to determine the importance of layers,
thereby efficiently determining the layer-wise sparsity rates during
fine-tuning. Building on this, LoSA adjusts the rank of the LoRA module based
on the variability in layer-wise reconstruction errors, allocating an
appropriate fine-tuning budget to each layer to reduce the output discrepancies
between dense and sparse LLMs. Extensive experiments show that LoSA can
efficiently boost the performance of sparse LLMs within a few hours, without
introducing any additional inference overhead. For example, LoSA reduced the
perplexity of sparse LLaMA-2-7B by 68.73 and increased zero-shot accuracy by
16.32$\%$, achieving a 2.60$\times$ speedup on CPU and 2.23$\times$ speedup on
GPU, requiring only 45 minutes of fine-tuning on a single NVIDIA A100 80GB GPU.
Code is available at https://github.com/wzhuang-xmu/LoSA.
|
2502.14819
|
Learning from Reward-Free Offline Data: A Case for Planning with Latent
Dynamics Models
|
cs.LG
|
A long-standing goal in AI is to build agents that can solve a variety of
tasks across different environments, including previously unseen ones. Two
dominant approaches tackle this challenge: (i) reinforcement learning (RL),
which learns policies through trial and error, and (ii) optimal control, which
plans actions using a learned or known dynamics model. However, their relative
strengths and weaknesses remain underexplored in the setting where agents must
learn from offline trajectories without reward annotations. In this work, we
systematically analyze the performance of different RL and control-based
methods under datasets of varying quality. On the RL side, we consider
goal-conditioned and zero-shot approaches. On the control side, we train a
latent dynamics model using the Joint Embedding Predictive Architecture (JEPA)
and use it for planning. We study how dataset properties, such as data
diversity, trajectory quality, and environment variability, affect the
performance of these approaches. Our results show that model-free RL excels
when abundant, high-quality data is available, while model-based planning
excels in generalization to novel environment layouts, trajectory stitching,
and data-efficiency. Notably, planning with a latent dynamics model emerges as
a promising approach for zero-shot generalization from suboptimal data.
|
2502.14820
|
eC-Tab2Text: Aspect-Based Text Generation from e-Commerce Product Tables
|
cs.CL cs.AI cs.DB cs.HC
|
Large Language Models (LLMs) have demonstrated exceptional versatility across
diverse domains, yet their application in e-commerce remains underexplored due
to a lack of domain-specific datasets. To address this gap, we introduce
eC-Tab2Text, a novel dataset designed to capture the intricacies of e-commerce,
including detailed product attributes and user-specific queries. Leveraging
eC-Tab2Text, we focus on text generation from product tables, enabling LLMs to
produce high-quality, attribute-specific product reviews from structured
tabular data. Fine-tuned models were rigorously evaluated using standard
Table2Text metrics, alongside correctness, faithfulness, and fluency
assessments. Our results demonstrate substantial improvements in generating
contextually accurate reviews, highlighting the transformative potential of
tailored datasets and fine-tuning methodologies in optimizing e-commerce
workflows, and underscoring the essential role of domain-specific datasets in
adapting LLMs to industry-specific challenges.
|
2502.14821
|
Meshless Shape Optimization using Neural Networks and Partial
Differential Equations on Graphs
|
math.NA cs.LG cs.NA math.OC
|
Shape optimization involves the minimization of a cost function defined over
a set of shapes, often governed by a partial differential equation (PDE). In
the absence of closed-form solutions, one relies on numerical methods to
approximate the solution. The level set method -- when coupled with the finite
element method -- is one of the most versatile numerical shape optimization
approaches but still suffers from the limitations of most mesh-based methods.
In this work, we present a fully meshless level set framework that leverages
neural networks to parameterize the level set function and employs the graph
Laplacian to approximate the underlying PDE. Our approach enables precise
computations of geometric quantities such as surface normals and curvature, and
allows tackling optimization problems within the class of convex shapes.
|
2502.14822
|
A Survey of Model Architectures in Information Retrieval
|
cs.IR
|
This survey examines the evolution of model architectures in information
retrieval (IR), focusing on two key aspects: backbone models for feature
extraction and end-to-end system architectures for relevance estimation. The
review intentionally separates architectural considerations from training
methodologies to provide a focused analysis of structural innovations in IR
systems. We trace the development from traditional term-based methods to modern
neural approaches, particularly highlighting the impact of transformer-based
models and subsequent large language models (LLMs). We conclude by discussing
emerging challenges and future directions, including architectural
optimizations for performance and scalability, handling of multimodal,
multilingual data, and adaptation to novel application domains beyond
traditional search paradigms.
|
2502.14827
|
Exploring Advanced Techniques for Visual Question Answering: A
Comprehensive Comparison
|
cs.CV cs.AI cs.ET cs.LG
|
Visual Question Answering (VQA) has emerged as a pivotal task in the
intersection of computer vision and natural language processing, requiring
models to understand and reason about visual content in response to natural
language questions. Analyzing VQA datasets is essential for developing robust
models that can handle the complexities of multimodal reasoning. Several
approaches have been developed to examine these datasets, each offering
distinct perspectives on question diversity, answer distribution, and
visual-textual correlations. Despite significant progress, existing VQA models
face challenges related to dataset bias, limited model complexity, commonsense
reasoning gaps, rigid evaluation methods, and generalization to real-world
scenarios. This paper presents a comprehensive comparative study of five
advanced VQA models: ABC-CNN, KICNLE, Masked Vision and Language Modeling,
BLIP-2, and OFA, each employing distinct methodologies to address these
challenges.
|
2502.14828
|
Fundamental Limitations in Defending LLM Finetuning APIs
|
cs.LG cs.CR
|
LLM developers have imposed technical interventions to prevent fine-tuning
misuse attacks, in which adversaries evade safeguards by fine-tuning the
model using a public API. Previous work has established several successful
attacks against specific fine-tuning API defences. In this work, we show that
defences of fine-tuning APIs that seek to detect individual harmful training or
inference samples ('pointwise' detection) are fundamentally limited in their
ability to prevent fine-tuning attacks. We construct 'pointwise-undetectable'
attacks that repurpose entropy in benign model outputs (e.g. semantic or
syntactic variations) to covertly transmit dangerous knowledge. Our attacks are
composed solely of unsuspicious benign samples that can be collected from the
model before fine-tuning, meaning training and inference samples are all
individually benign and low-perplexity. We test our attacks against the OpenAI
fine-tuning API, finding they succeed in eliciting answers to harmful
multiple-choice questions, and that they evade an enhanced monitoring system we
design that successfully detects other fine-tuning attacks. We encourage the
community to develop defences that tackle the fundamental limitations we
uncover in pointwise fine-tuning API defences.
|