| id | title | categories | abstract |
|---|---|---|---|
2502.11913
|
PreAdaptFWI: Pretrained-Based Adaptive Residual Learning for
Full-Waveform Inversion Without Dataset Dependency
|
physics.geo-ph cs.LG
|
Full-waveform inversion (FWI) is a method that utilizes seismic data to
invert the physical parameters of subsurface media by minimizing the difference
between simulated and observed waveforms. Due to its ill-posed nature, FWI is
susceptible to getting trapped in local minima. Consequently, various research
efforts have attempted to combine neural networks with FWI to stabilize the
inversion process. This study presents a simple yet effective training
framework that is independent of dataset reliance and requires only moderate
pre-training on a simple initial model to stabilize network outputs. During the
transfer learning phase, the conventional FWI gradients will simultaneously
update both the neural network and the proposed adaptive residual learning
module, which learns the residual mapping of large-scale distribution features
in the network's output, rather than directly fitting the target mapping.
Through this synergistic training paradigm, the proposed algorithm effectively
integrates physics-informed prior knowledge into a global representation of
the stratigraphic distribution, while also capturing subtle variations in
inter-layer velocities within local details, thereby escaping local optima.
Evaluating the method on two benchmark models under various conditions,
including absent low-frequency data, noise interference, and differing initial
models, along with corresponding ablation experiments, consistently
demonstrates the superiority of the proposed approach.
|
2502.11915
|
On the robustness of ChatGPT in teaching Korean Mathematics
|
cs.AI math.HO
|
ChatGPT, an Artificial Intelligence model, has the potential to revolutionize
education. However, its effectiveness in solving non-English questions remains
uncertain. This study evaluates ChatGPT's robustness using 586 Korean
mathematics questions. ChatGPT achieves 66.72% accuracy, correctly answering
391 out of 586 questions. We also assess its ability to rate mathematics
questions based on eleven criteria and perform a topic analysis. Our findings
show that ChatGPT's ratings align with educational theory and test-taker
perspectives. While ChatGPT performs well in question classification, it
struggles with non-English contexts, highlighting areas for improvement. Future
research should address linguistic biases and enhance accuracy across diverse
languages. Domain-specific optimizations and multilingual training could
improve ChatGPT's role in personalized education.
|
2502.11916
|
EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay
Scoring Capabilities of Multimodal Large Language Models
|
cs.CL cs.AI
|
Automated Essay Scoring (AES) plays a crucial role in educational assessment
by providing scalable and consistent evaluations of writing tasks. However,
traditional AES systems face three major challenges: (1) reliance on
handcrafted features that limit generalizability, (2) difficulty in capturing
fine-grained traits like coherence and argumentation, and (3) inability to
handle multimodal contexts. In the era of Multimodal Large Language Models
(MLLMs), we propose EssayJudge, the first multimodal benchmark to evaluate AES
capabilities across lexical-, sentence-, and discourse-level traits. By
leveraging MLLMs' strengths in trait-specific scoring and multimodal context
understanding, EssayJudge aims to offer precise, context-rich evaluations
without manual feature engineering, addressing longstanding AES limitations.
Our experiments with 18 representative MLLMs reveal gaps in AES performance
compared to human evaluation, particularly in discourse-level traits,
highlighting the need for further advancements in MLLM-based AES research. Our
dataset and code will be available upon acceptance.
|
2502.11918
|
VLP: Vision-Language Preference Learning for Embodied Manipulation
|
cs.LG cs.RO
|
Reward engineering is one of the key challenges in Reinforcement Learning
(RL). Preference-based RL effectively addresses this issue by learning from
human feedback. However, it is both time-consuming and expensive to collect
human preference labels. In this paper, we propose a novel
\textbf{V}ision-\textbf{L}anguage \textbf{P}reference learning framework, named
\textbf{VLP}, which learns a vision-language preference model to provide
preference feedback for embodied manipulation tasks. To achieve this, we define
three types of language-conditioned preferences and construct a vision-language
preference dataset, which contains versatile implicit preference orders without
human annotations. The preference model learns to extract language-related
features, and then serves as a preference annotator in various downstream
tasks. The policy can be learned according to the annotated preferences via
reward learning or direct policy optimization. Extensive empirical results on
simulated embodied manipulation tasks demonstrate that our method provides
accurate preferences and generalizes to unseen tasks and unseen language
instructions, outperforming the baselines by a large margin.
|
2502.11919
|
From Text to Trust: Empowering AI-assisted Decision Making with Adaptive
LLM-powered Analysis
|
cs.HC cs.CL
|
As AI-assisted decision making becomes increasingly prevalent, individuals
often fail to utilize AI-based decision aids appropriately, especially when
AI explanations are absent, potentially because they do not reflect on the
AI's decision recommendations critically. Large language models (LLMs), with
their exceptional conversational and analytical capabilities, present great
opportunities to enhance AI-assisted decision making in the absence of AI
explanations by providing natural-language-based analysis of AI's decision
recommendation, e.g., how each feature of a decision making task might
contribute to the AI recommendation. In this paper, via a randomized
experiment, we first show that presenting LLM-powered analysis of each task
feature, either sequentially or concurrently, does not significantly improve
people's AI-assisted decision performance. To enable decision makers to better
leverage LLM-powered analysis, we then propose an algorithmic framework to
characterize the effects of LLM-powered analysis on human decisions and
dynamically decide which analysis to present. Our evaluation with human
subjects shows that this approach effectively improves decision makers'
appropriate reliance on AI in AI-assisted decision making.
|
2502.11921
|
Joint Evaluation of Fairness and Relevance in Recommender Systems with
Pareto Frontier
|
cs.IR
|
Fairness and relevance are two important aspects of recommender systems
(RSs). Typically, they are evaluated either (i) separately by individual
measures of fairness and relevance, or (ii) jointly using a single measure that
accounts for fairness with respect to relevance. However, approach (i) often
does not provide a reliable joint estimate of the goodness of the models, as it
has two different best models: one for fairness and another for relevance.
Approach (ii) is also problematic because these measures tend to be ad-hoc and
do not relate well to traditional relevance measures, like NDCG. Motivated by
this, we present a new approach for jointly evaluating fairness and relevance
in RSs: Distance to Pareto Frontier (DPFR). Given some user-item interaction
data, we compute their Pareto frontier for a pair of existing relevance and
fairness measures, and then use the distance from the frontier as a measure of
the jointly achievable fairness and relevance. Our approach is modular and
intuitive as it can be computed with existing measures. Experiments with 4 RS
models, 3 re-ranking strategies, and 6 datasets show that existing metrics have
inconsistent associations with our Pareto-optimal solution, making DPFR a more
robust and theoretically well-founded joint measure for assessing fairness and
relevance. Our code: https://github.com/theresiavr/DPFR-recsys-evaluation
|
2502.11925
|
GRAPHGPT-O: Synergistic Multimodal Comprehension and Generation on
Graphs
|
cs.AI cs.CV cs.LG
|
The rapid development of Multimodal Large Language Models (MLLMs) has enabled
the integration of multiple modalities, including texts and images, within the
large language model (LLM) framework. However, texts and images are usually
interconnected, forming a multimodal attributed graph (MMAG). How MLLMs can
incorporate the relational information (\textit{i.e.}, graph structure) and
semantic information (\textit{i.e.}, texts and images) on such graphs for
multimodal comprehension and generation remains underexplored. In this
paper, we propose GraphGPT-o, which supports omni-multimodal understanding and
creation on MMAGs. We first comprehensively study linearization variants to
transform semantic and structural information as input for MLLMs. Then, we
propose a hierarchical aligner that enables deep graph encoding, bridging the
gap between MMAGs and MLLMs. Finally, we explore the inference choices,
adapting MLLM to interleaved text and image generation in graph scenarios.
Extensive experiments on three datasets from different domains demonstrate the
effectiveness of our proposed method. Datasets and codes will be open-sourced
upon acceptance.
|
2502.11926
|
BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion
Recognition Datasets for 28 Languages
|
cs.CL
|
People worldwide use language in subtle and complex ways to express emotions.
While emotion recognition -- an umbrella term for several NLP tasks --
significantly impacts different applications in NLP and other fields, most work
in the area is focused on high-resource languages. This has led to major
disparities in research and proposed solutions, especially for low-resource
languages, which suffer from a lack of high-quality datasets. In
this paper, we present BRIGHTER -- a collection of multilabeled
emotion-annotated datasets in 28 different languages. BRIGHTER covers
predominantly low-resource languages from Africa, Asia, Eastern Europe, and
Latin America, with instances from various domains annotated by fluent
speakers. We describe the data collection and annotation processes and the
challenges of building these datasets. Then, we report different experimental
results for monolingual and crosslingual multi-label emotion identification, as
well as intensity-level emotion recognition. We investigate results with and
without using LLMs and analyse the large variability in performance across
languages and text domains. We show that BRIGHTER datasets are a step towards
bridging the gap in text-based emotion recognition and discuss their impact and
utility.
|
2502.11927
|
Continual Learning Should Move Beyond Incremental Classification
|
cs.LG
|
Continual learning (CL) is the sub-field of machine learning concerned with
accumulating knowledge in dynamic environments. So far, CL research has mainly
focused on incremental classification tasks, where models learn to classify new
categories while retaining knowledge of previously learned ones. Here, we argue
that maintaining such a focus limits both theoretical development and practical
applicability of CL methods. Through a detailed analysis of concrete examples -
including multi-target classification, robotics with constrained output spaces,
learning in continuous task domains, and higher-level concept memorization - we
demonstrate how current CL approaches often fail when applied beyond standard
classification. We identify three fundamental challenges: (C1) the nature of
continuity in learning problems, (C2) the choice of appropriate spaces and
metrics for measuring similarity, and (C3) the role of learning objectives
beyond classification. For each challenge, we provide specific recommendations
to help move the field forward, including formalizing temporal dynamics through
distribution processes, developing principled approaches for continuous task
spaces, and incorporating density estimation and generative objectives. In so
doing, this position paper aims to broaden the scope of CL research while
strengthening its theoretical foundations, making it more applicable to
real-world problems.
|
2502.11932
|
On Representational Dissociation of Language and Arithmetic in Large
Language Models
|
cs.CL
|
The association between language and (non-linguistic) thinking ability in
humans has long been debated, and recently, neuroscientific evidence of brain
activity patterns has been considered. Such a scientific context naturally
raises an interdisciplinary question -- what about such a language-thought
dissociation in large language models (LLMs)? In this paper, as an initial
foray, we explore this question by focusing on simple arithmetic skills (e.g.,
$1+2=$ ?) as a thinking ability and analyzing the geometry of their encoding in
LLMs' representation space. Our experiments with linear classifiers and cluster
separability tests demonstrate that simple arithmetic equations and general
language input are encoded in completely separated regions in LLMs' internal
representation space across all the layers, which is also supported by more
controlled stimuli (e.g., spelled-out equations). These findings tentatively suggest
that arithmetic reasoning is mapped into a distinct region from general
language input, which is in line with the neuroscientific observations of human
brain activations, while we also point out their somewhat cognitively
implausible geometric properties.
|
2502.11937
|
FitLight: Federated Imitation Learning for Plug-and-Play Autonomous
Traffic Signal Control
|
cs.LG cs.AI
|
Although Reinforcement Learning (RL)-based Traffic Signal Control (TSC)
methods have been extensively studied, their practical applications still raise
some serious issues such as high learning cost and poor generalizability. This
is because the ``trial-and-error'' training style makes RL agents extremely
dependent on the specific traffic environment, which also requires a long
convergence time. To address these issues, we propose a novel Federated
Imitation Learning (FIL)-based framework for multi-intersection TSC, named
FitLight, which allows RL agents to plug-and-play for any traffic environment
without additional pre-training cost. Unlike existing imitation learning
approaches that rely on pre-training RL agents with demonstrations, FitLight
allows real-time imitation learning and seamless transition to reinforcement
learning. Due to our proposed knowledge-sharing mechanism and novel hybrid
pressure-based agent design, RL agents can quickly find the best control policy
with only a few episodes. Moreover, for resource-constrained TSC scenarios,
FitLight supports model pruning and heterogeneous model aggregation, such that
RL agents can work on a micro-controller with merely 16{\it KB} RAM and 32{\it
KB} ROM. Extensive experiments demonstrate that, compared to state-of-the-art
methods, FitLight not only provides a superior starting point but also
converges to a better final solution on both real-world and synthetic datasets,
even under extreme resource limitations.
|
2502.11938
|
QoS based resource management for concurrent operation using MCTS
|
eess.SP cs.SY eess.SY
|
Modern AESA technology enables RF systems to not only perform various radar,
communication and electronic warfare tasks on a single aperture, but even to
execute multiple tasks concurrently. These capabilities increase system
complexity and require intelligent or cognitive resource management. This paper
introduces such a resource management framework based on quality-of-service-based
resource allocation and Monte Carlo tree search, allowing for optimal
system usage and profound decision-making. Furthermore, we present experimental
verification in a complex application scenario.
|
2502.11940
|
The Dynamic Model of the UR10 Robot and its ROS2 Integration
|
cs.RO
|
This paper presents the full dynamic model of the UR10 industrial robot. A
triple-stage identification approach is adopted to estimate the manipulator's
dynamic coefficients. First, linear parameters are computed using a standard
linear regression algorithm. Subsequently, nonlinear friction parameters are
estimated according to a sigmoidal model. Lastly, motor drive gains are devised
to map estimated joint currents to torques. The overall identified model can be
used for both control and planning purposes, as the accompanying ROS2 software
can be easily reconfigured to account for a generic payload. The estimated
robot model is experimentally validated against a set of exciting trajectories
and compared to the state-of-the-art model for the same manipulator, achieving
higher current prediction accuracy (up to a factor of 4.43) and more precise
motor gains. The related software is available at
https://codeocean.com/capsule/8515919/tree/v2.
|
2502.11941
|
Deep Spatio-Temporal Neural Network for Air Quality Reanalysis
|
cs.LG cs.AI
|
Air quality prediction is key to mitigating health impacts and guiding
decisions, yet existing models tend to focus on temporal trends while
overlooking spatial generalization. We propose AQ-Net, a spatiotemporal
reanalysis model for both observed and unobserved stations in the near future.
AQ-Net utilizes an LSTM and multi-head attention for temporal regression.
We also propose a cyclic encoding technique to ensure continuous time
representation. To learn fine-grained spatial air quality estimation, we
combine AQ-Net with a neural kNN to explore feature-based interpolation,
such that we can fill the spatial gaps given coarse observation stations. To
demonstrate the efficiency of our model for spatiotemporal reanalysis, we use
data from 2013-2017 collected in northern China for PM2.5 analysis. Extensive
experiments show that AQ-Net excels in air quality reanalysis, highlighting the
potential of hybrid spatio-temporal models to better capture environmental
dynamics, especially in urban areas where both spatial and temporal variability
are critical.
|
2502.11942
|
Sharp-PINNs: staggered hard-constrained physics-informed neural networks
for phase field modelling of corrosion
|
cs.LG physics.comp-ph
|
Physics-informed neural networks have shown significant potential in solving
partial differential equations (PDEs) across diverse scientific fields.
However, their performance often deteriorates when addressing PDEs with
intricate and strongly coupled solutions. In this work, we present a novel
Sharp-PINN framework to tackle complex phase field corrosion problems. Instead
of minimizing all governing PDE residuals simultaneously, the Sharp-PINNs
introduce a staggered training scheme that alternately minimizes the residuals
of Allen-Cahn and Cahn-Hilliard equations, which govern the corrosion system.
To further enhance its efficiency and accuracy, we design an advanced neural
network architecture that integrates random Fourier features as coordinate
embeddings, employs a modified multi-layer perceptron as the primary backbone,
and enforces hard constraints in the output layer. This framework is
benchmarked through simulations of corrosion problems with multiple pits, where
the staggered training scheme and network architecture significantly improve
both the efficiency and accuracy of PINNs. Moreover, in three-dimensional
cases, our approach is 5-10 times faster than traditional finite element
methods while maintaining competitive accuracy, demonstrating its potential for
real-world engineering applications in corrosion prediction.
|
2502.11946
|
Step-Audio: Unified Understanding and Generation in Intelligent Speech
Interaction
|
cs.CL cs.AI cs.HC cs.SD eess.AS
|
Real-time speech interaction, serving as a fundamental interface for
human-machine collaboration, holds immense potential. However, current
open-source models face limitations such as high costs in voice data
collection, weakness in dynamic control, and limited intelligence. To address
these challenges, this paper introduces Step-Audio, the first production-ready
open-source solution. Key contributions include: 1) a 130B-parameter unified
speech-text multi-modal model that achieves unified understanding and
generation, with the Step-Audio-Chat version open-sourced; 2) a generative
speech data engine that establishes an affordable voice cloning framework and
produces the open-sourced lightweight Step-Audio-TTS-3B model through
distillation; 3) an instruction-driven fine control system enabling dynamic
adjustments across dialects, emotions, singing, and RAP; 4) an enhanced
cognitive architecture augmented with tool calling and role-playing abilities
to manage complex tasks effectively. Based on our new StepEval-Audio-360
evaluation benchmark, Step-Audio achieves state-of-the-art performance in human
evaluations, especially in terms of instruction following. On open-source
benchmarks like LLaMA Question, it shows a 9.3% average performance improvement,
demonstrating our commitment to advancing the development of open-source
multi-modal language technologies. Our code and models are available at
https://github.com/stepfun-ai/Step-Audio.
|
2502.11948
|
Can Your Uncertainty Scores Detect Hallucinated Entity?
|
cs.CL
|
To mitigate the impact of LLMs' tendency to hallucinate, many studies propose
detecting hallucinated generation through uncertainty estimation. However,
these approaches predominantly operate at the sentence or paragraph level,
failing to pinpoint specific spans or entities responsible for hallucinated
content. This lack of granularity is especially problematic for long-form
outputs that mix accurate and fabricated information. To address this
limitation, we explore entity-level hallucination detection. We propose a new
data set, HalluEntity, which annotates hallucination at the entity level. Based
on the dataset, we comprehensively evaluate uncertainty-based hallucination
detection approaches across 17 modern LLMs. Our experimental results show that
uncertainty estimation approaches focusing on individual token probabilities
tend to over-predict hallucinations, while context-aware methods show better
but still suboptimal performance. Through an in-depth qualitative study, we
identify relationships between hallucination tendencies and linguistic
properties and highlight important directions for future research.
|
2502.11949
|
Massively Scaling Explicit Policy-conditioned Value Functions
|
cs.LG cs.AI
|
We introduce a scaling strategy for Explicit Policy-Conditioned Value
Functions (EPVFs) that significantly improves performance on challenging
continuous-control tasks. EPVFs learn a value function V({\theta}) that is
explicitly conditioned on the policy parameters, enabling direct gradient-based
updates to the parameters of any policy. However, EPVFs at scale struggle with
unrestricted parameter growth and efficient exploration in the policy parameter
space. To address these issues, we utilize massive parallelization with
GPU-based simulators, big batch sizes, weight clipping, and scaled perturbations.
Our results show that EPVFs can be scaled to solve complex tasks, such as a
custom Ant environment, and can compete with state-of-the-art Deep
Reinforcement Learning (DRL) baselines like Proximal Policy Optimization (PPO)
and Soft Actor-Critic (SAC). We further explore action-based policy parameter
representations from previous work and specialized neural network architectures
to efficiently handle weight-space features, which have not been used in the
context of DRL before.
|
2502.11951
|
Qubit-Based Framework for Quantum Machine Learning: Bridging Classical
Data and Quantum Algorithms
|
cs.CE cs.LG quant-ph
|
This paper dives into the exciting and rapidly growing field of quantum
computing, explaining its core ideas, current progress, and how it could
revolutionize the way we solve complex problems. It starts by breaking down the
basics, like qubits, quantum circuits, and how principles like superposition
and entanglement make quantum computers fundamentally different from, and for
certain tasks far more powerful than, the classical computers we use today. We also
explore how quantum computing deals with complex problems and why it is
uniquely suited for challenges classical systems struggle to handle. A big part
of this paper focuses on Quantum Machine Learning (QML), where the strengths of
quantum computing meet the world of artificial intelligence. By processing
massive datasets and optimizing intricate algorithms, quantum systems offer new
possibilities for machine learning. We highlight different approaches to
combining quantum and classical computing, showing how they can work together
to produce faster and more accurate results. Additionally, we explore the tools
and platforms available, like TensorFlow Quantum, Qiskit, and PennyLane, that are
helping researchers and developers bring these theories to life. Of course,
quantum computing has its hurdles. Challenges like scaling up hardware,
correcting errors, and keeping qubits stable are significant roadblocks. Yet,
with rapid advancements in cloud-based platforms and innovative technologies,
the potential of quantum computing feels closer than ever. This paper aims to
offer readers a clear and comprehensive introduction to quantum computing, its
role in machine learning, and the immense possibilities it holds for the future
of technology.
|
2502.11953
|
Refined PAC-Bayes Bounds for Offline Bandits
|
stat.ML cs.LG
|
In this paper, we present refined probabilistic bounds on empirical reward
estimates for off-policy learning in bandit problems. We build on the
PAC-Bayesian bounds from Seldin et al. (2010) and improve on their results
using a new parameter optimization approach introduced by Rodr\'iguez et al.
(2024). This technique is based on a discretization of the space of possible
events to optimize the "in probability" parameter. We provide two
parameter-free PAC-Bayes bounds, one based on Hoeffding-Azuma's inequality and
the other based on Bernstein's inequality. We prove that our bounds are almost
optimal as they recover the same rate as would be obtained by setting the "in
probability" parameter after the realization of the data.
|
2502.11955
|
pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM
|
cs.RO cs.CV
|
pySLAM is an open-source Python framework for Visual SLAM, supporting
monocular, stereo, and RGB-D cameras. It provides a flexible interface for
integrating both classical and modern local features, making it adaptable to
various SLAM tasks. The framework includes different loop closure methods, a
volumetric reconstruction pipeline, and support for depth prediction models.
Additionally, it offers a suite of tools for visual odometry and SLAM
applications. Designed for both beginners and experienced researchers, pySLAM
encourages community contributions, fostering collaborative development in the
field of Visual SLAM.
|
2502.11959
|
STRIVE: Structured Reasoning for Self-Improvement in Claim Verification
|
cs.AI
|
Claim verification is the task of determining whether a claim is supported or
refuted by evidence. Self-improvement methods, where reasoning chains are
generated and those leading to correct results are selected for training, have
succeeded in tasks like mathematical problem solving. However, in claim
verification, this approach struggles. Low-quality reasoning chains may falsely
match binary truth labels, introducing faulty reasoning into the
self-improvement process and ultimately degrading performance. To address this,
we propose STRIVE: Structured Reasoning for Self-Improved Verification. Our
method introduces a structured reasoning design with Claim Decomposition,
Entity Analysis, and Evidence Grounding Verification. These components improve
reasoning quality, reduce errors, and provide additional supervision signals
for self-improvement. STRIVE begins with a warm-up phase, where the base model
is fine-tuned on a small number of annotated examples to learn the structured
reasoning design. It is then applied to generate reasoning chains for all
training examples, selecting only those that are correct and structurally sound
for subsequent self-improvement training. We demonstrate that STRIVE achieves
significant improvements over baseline models, with a 31.4% performance gain
over the base model and 20.7% over Chain of Thought on the HOVER datasets,
highlighting its effectiveness.
|
2502.11962
|
Navigating the Helpfulness-Truthfulness Trade-Off with Uncertainty-Aware
Instruction Fine-Tuning
|
cs.CL cs.AI
|
Instruction Fine-tuning (IFT) can enhance the helpfulness of Large Language
Models (LLMs), but it may lower their truthfulness. This trade-off arises
because IFT steers LLMs to generate responses with long-tail knowledge that is
not well covered during pre-training, leading to more informative but less
truthful answers when generalizing to unseen tasks. In this paper, we
empirically demonstrate this helpfulness-truthfulness trade-off in IFT and
propose $\textbf{UNIT}$, a novel IFT paradigm to address it. UNIT teaches LLMs
to recognize their uncertainty and explicitly reflect it at the end of their
responses. Experimental results show that UNIT-tuned models maintain their
helpfulness while distinguishing between certain and uncertain claims, thereby
reducing hallucinations.
|
2502.11965
|
A MIMO Wireless Channel Foundation Model via CIR-CSI Consistency
|
eess.SP cs.AI
|
In the field of artificial intelligence, self-supervised learning has
demonstrated superior generalization capabilities by leveraging large-scale
unlabeled datasets for pretraining, which is especially critical for wireless
communication models to adapt to a variety of scenarios. This paper
innovatively treats Channel State Information (CSI) and Channel Impulse
Response (CIR) as naturally aligned multi-modal data and proposes the first
MIMO wireless channel foundation model, named CSI-CLIP. By effectively
capturing the joint representations of both CIR and CSI, CSI-CLIP exhibits
remarkable adaptability across scenarios and robust feature extraction
capabilities. Experimental results show that in the positioning task, CSI-CLIP
reduces the mean error distance by 22%, and in the beam management task it
increases accuracy by 1% compared to traditional supervised methods, with
similar gains in the channel identification task. These improvements not only highlight the
potential and value of CSI-CLIP in integrating sensing and communication but
also demonstrate its significant advantages over existing techniques. Moreover,
viewing CSI and CIR as multi-modal pairs and applying contrastive learning to
wireless channel foundation models open up new research directions in the
domain of MIMO wireless communications.
|
2502.11968
|
Theoretical Barriers in Bellman-Based Reinforcement Learning
|
cs.LG cs.AI
|
Reinforcement Learning algorithms designed for high-dimensional spaces often
enforce the Bellman equation on a sampled subset of states, relying on
generalization to propagate knowledge across the state space. In this paper, we
identify and formalize a fundamental limitation of this common approach.
Specifically, we construct counterexample problems with a simple structure that
this approach fails to exploit. Our findings reveal that such algorithms can
neglect critical information about the problems, leading to inefficiencies.
Furthermore, we extend this negative result to another approach from the
literature: Hindsight Experience Replay for learning state-to-state reachability.
|
2502.11969
|
Learning Generalizable Prompt for CLIP with Class Similarity Knowledge
|
cs.AI cs.CV cs.LG
|
In vision-language models (VLMs), prompt tuning has shown its effectiveness
in adapting models to downstream tasks. However, learned prompts struggle to
generalize to unseen classes, as they tend to overfit to the classes that are
targeted during prompt tuning. Examining failure cases, we observed that
learned prompts disrupt the semantics of unseen classes, generating text
embeddings with incorrect semantic relationships among classes. To address
this, we propose Similarity Alignment Regularization (SAR), which regularizes
learnable prompts to preserve the semantic relationships among classes captured
by hand-crafted prompts. Specifically, we first obtain novel classes related to
base classes using ChatGPT-4o and utilize them as potential unseen classes
during prompt tuning. Then, by targeting both base and novel classes, SAR
aligns the similarity relationships among text embeddings generated by
learnable prompts with the similarity relationships from hand-crafted prompts.
Extensive experiments applying SAR to existing prompt tuning methods
demonstrate its effectiveness in improving generalization to unseen classes.
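The regularizer described above can be sketched as aligning two class-similarity matrices: one from learnable-prompt text embeddings, one from hand-crafted-prompt embeddings. The MSE form and function name below are assumptions for illustration; the paper's exact alignment loss may differ.

```python
import numpy as np

def sar_penalty(learn_emb, hand_emb):
    """Similarity Alignment Regularization, sketched (assumed MSE form).

    learn_emb: (C, d) text embeddings from the learnable prompt.
    hand_emb:  (C, d) embeddings from hand-crafted prompts, for the same
    C classes (base classes plus generated potential unseen classes).
    Penalizes deviation between the two class-similarity matrices.
    """
    def sim(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T                       # (C, C) cosine similarities

    return np.mean((sim(learn_emb) - sim(hand_emb)) ** 2)
```

Adding such a penalty to the prompt-tuning loss discourages the learnable prompt from distorting inter-class semantic relationships that the hand-crafted prompts already capture.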
|
2502.11971
|
Robust 6DoF Pose Tracking Considering Contour and Interior
Correspondence Uncertainty for AR Assembly Guidance
|
cs.CV
|
Augmented reality assembly guidance is essential for intelligent
manufacturing and medical applications, requiring continuous measurement of the
6DoF poses of manipulated objects. Although current tracking methods have made
significant advancements in accuracy and efficiency, they still face challenges
in robustness when dealing with cluttered backgrounds, rotationally symmetric
objects, and noisy sequences. In this paper, we first propose a robust
contour-based pose tracking method that addresses error-prone contour
correspondences and improves noise tolerance. It utilizes a fan-shaped search
strategy to refine correspondences and models local contour shape and noise
uncertainty as a mixed probability distribution, resulting in a highly robust
contour energy function. Secondly, we introduce a CPU-only strategy to better
track rotationally symmetric objects and assist the contour-based method in
overcoming local minima by exploring sparse interior correspondences. This is
achieved by pre-sampling interior points from sparse viewpoint templates
offline and using the DIS optical flow algorithm to compute their
correspondences during tracking. Finally, we formulate a unified energy
function to fuse contour and interior information, which is solvable using a
re-weighted least squares algorithm. Experiments on public datasets and real
scenarios demonstrate that our method significantly outperforms
state-of-the-art monocular tracking methods and can achieve more than 100 FPS
using only a CPU.
|
2502.11973
|
Generating Text from Uniform Meaning Representation
|
cs.CL
|
Uniform Meaning Representation (UMR) is a recently developed graph-based
semantic representation, which expands on Abstract Meaning Representation (AMR)
in a number of ways, in particular through the inclusion of document-level
information and multilingual flexibility. To effectively adopt and leverage UMR
for downstream tasks, effort must be directed toward developing a UMR
technological ecosystem. Although only limited amounts of UMR annotation have
been produced to date, in this work we investigate the first approaches
to producing text from multilingual UMR graphs: (1) a pipeline conversion of
UMR to AMR, then using AMR-to-text generation models, (2) fine-tuning large
language models with UMR data, and (3) fine-tuning existing AMR-to-text
generation models with UMR data. Our best performing model achieves a
multilingual BERTscore of 0.825 for English and 0.882 for Chinese when compared
to the reference, which is a promising indication of the effectiveness of
fine-tuning approaches for UMR-to-text generation with even limited amounts of
UMR data.
|
2502.11974
|
Image Inversion: A Survey from GANs to Diffusion and Beyond
|
cs.CV
|
Image inversion is a fundamental task in generative models, aiming to map
images back to their latent representations to enable downstream applications
such as editing, restoration, and style transfer. This paper provides a
comprehensive review of the latest advancements in image inversion techniques,
focusing on two main paradigms: Generative Adversarial Network (GAN) inversion
and diffusion model inversion. We categorize these techniques based on their
optimization methods. For GAN inversion, we systematically classify existing
methods into encoder-based approaches, latent optimization approaches, and
hybrid approaches, analyzing their theoretical foundations, technical
innovations, and practical trade-offs. For diffusion model inversion, we
explore training-free strategies, fine-tuning methods, and the design of
additional trainable modules, highlighting their unique advantages and
limitations. Additionally, we discuss several popular downstream applications
and emerging applications beyond image tasks, identifying current challenges
and future research directions. By synthesizing the latest developments, this
paper aims to provide researchers and practitioners with a valuable reference
resource, promoting further advancements in the field of image inversion. We
keep track of the latest works at https://github.com/RyanChenYN/ImageInversion
|
2502.11975
|
Spatial decay of perturbations in hyperbolic equations with optimal
boundary control
|
math.OC cs.SY eess.SY
|
Recently, domain-uniform stabilizability and detectability have been the
central assumption to ensure robustness, in the sense of exponential decay of
spatially localized perturbations, in optimally controlled evolution equations.
In the present paper we analyze a
chain of transport equations with boundary and point controls with regard to
this property. Both for Dirichlet and Neumann boundary and coupling conditions,
we show a necessary and sufficient criterion on control domains which allow for
the domain-uniform stabilization of this equation. We illustrate the results by
means of a numerical example.
|
2502.11981
|
Machine Learning Should Maximize Welfare, Not (Only) Accuracy
|
cs.LG cs.AI cs.CY
|
Decades of research in machine learning have given us powerful tools for
making accurate predictions. But when used in social settings and on human
inputs, better accuracy does not immediately translate to better social
outcomes. This may not be surprising given that conventional learning
frameworks are not designed to express societal preferences -- let alone
promote them. This position paper argues that machine learning is currently
missing, and can gain much from incorporating, a proper notion of social
welfare. The field of welfare economics asks: how should we allocate limited
resources to self-interested agents in a way that maximizes social benefit? We
argue that this perspective applies to many modern applications of machine
learning in social contexts, and advocate for its adoption. Rather than
disposing of prediction, we aim to leverage this forte of machine learning for
promoting social welfare. We demonstrate this idea by proposing a conceptual
framework that gradually transitions from accuracy maximization (with awareness
of welfare) to welfare maximization (via accurate prediction). We detail
applications and use-cases for which our framework can be effective, identify
technical challenges and practical opportunities, and highlight future avenues
worth pursuing.
|
2502.11983
|
Design Considerations Based on Stability for a Class of TCP Algorithms
|
cs.NI cs.SY eess.SY
|
Transmission Control Protocol (TCP) continues to be the dominant transport
protocol on the Internet. The stability of fluid models has been a key
consideration in the design of TCP and the performance evaluation of TCP
algorithms. Based on local stability analysis, we formulate some design
considerations for a class of TCP algorithms. We begin with deriving sufficient
conditions for the local stability of a generalized TCP algorithm in the
presence of heterogeneous round-trip delays. Within this generalized model, we
consider three specific variants of TCP: TCP Reno, Compound TCP, and Scalable
TCP. The sufficient conditions we derive are scalable across network topologies
with one, two, and many bottleneck links. We are interested in networks with
intermediate and small drop-tail buffers as they offer smaller queuing delays.
The small buffer regime is more attractive as the conditions for stability are
decentralized. TCP algorithms that follow our design considerations can provide
stable operation on any network topology, irrespective of the number of
bottleneck links or delays in the network.
|
2502.11984
|
Blank Space: Adaptive Causal Coding for Streaming Communications Over
Multi-Hop Networks
|
cs.IT cs.NI math.IT
|
In this work, we introduce Blank Space AC-RLNC (BS), a novel Adaptive and
Causal Network Coding (AC-RLNC) solution designed to mitigate the triplet
trade-off between throughput-delay-efficiency in multi-hop networks. BS
leverages the network's physical limitations considering the bottleneck from
each node to the destination. In particular, BS introduces a
light-computational re-encoding algorithm, called Network AC-RLNC (NET),
implemented independently at intermediate nodes. NET adaptively adjusts the
Forward Error Correction (FEC) rates and schedules idle periods. It
incorporates two distinct suspension mechanisms: 1) Blank Space Period,
accounting for the forward-channels bottleneck, and 2) No-New No-FEC approach,
based on data availability. The experimental results achieve significant
improvements in resource efficiency, demonstrating a 20% reduction in channel
usage compared to baseline RLNC solutions. Notably, these efficiency gains are
achieved while maintaining competitive throughput and delay performance,
ensuring improved resource utilization does not compromise network performance.
|
2502.11986
|
Selective Task Group Updates for Multi-Task Optimization
|
cs.LG
|
Multi-task learning enables the acquisition of task-generic knowledge by
training multiple tasks within a unified architecture. However, training all
tasks together in a single architecture can lead to performance degradation,
known as negative transfer, which is a main concern in multi-task learning.
Previous works have addressed this issue by optimizing the multi-task network
through gradient manipulation or weighted loss adjustments. However, their
optimization strategy focuses on addressing task imbalance in shared
parameters, neglecting the learning of task-specific parameters. As a result,
they show limitations in mitigating negative transfer, since the learning of
shared space and task-specific information influences each other during
optimization. To address this, we propose a different approach to enhance
multi-task performance by selectively grouping tasks and updating them for each
batch during optimization. We introduce an algorithm that adaptively determines
how to effectively group tasks and update them during the learning process. To
track inter-task relations and optimize multi-task networks simultaneously, we
propose proximal inter-task affinity, which can be measured during the
optimization process. We provide a theoretical analysis on how dividing tasks
into multiple groups and updating them sequentially significantly affects
multi-task performance by enhancing the learning of task-specific parameters.
Our methods substantially outperform previous multi-task optimization
approaches and are scalable to different architectures and various numbers of
tasks.
|
2502.11989
|
Characterizing Photorealism and Artifacts in Diffusion Model-Generated
Images
|
cs.HC cs.AI cs.CV
|
Diffusion model-generated images can appear indistinguishable from authentic
photographs, but these images often contain artifacts and implausibilities that
reveal their AI-generated provenance. Given the challenge to public trust in
media posed by photorealistic AI-generated images, we conducted a large-scale
experiment measuring human detection accuracy on 450 diffusion-model generated
images and 149 real images. Based on collecting 749,828 observations and 34,675
comments from 50,444 participants, we find that scene complexity of an image,
artifact types within an image, display time of an image, and human curation of
AI-generated images all play significant roles in how accurately people
distinguish real from AI-generated images. Additionally, we propose a taxonomy
characterizing artifacts often appearing in images generated by diffusion
models. Our empirical observations and taxonomy offer nuanced insights into the
capabilities and limitations of diffusion models to generate photorealistic
images in 2024.
|
2502.11992
|
On the Logic Elements Associated with Round-Off Errors and Gaussian Blur
in Image Registration: A Simple Case of Commingling
|
cs.CV
|
Discrete image registration can be a strategy to reconstruct signals from
samples corrupted by blur and noise. We examine superresolution and discrete
image registration for one-dimensional spatially-limited piecewise constant
functions which are subject to blur which is Gaussian or a mixture of Gaussians
as well as to round-off errors. Previous approaches address the signal recovery
problem as an optimization problem. We focus on a regime with low blur and
suggest that the operations of blur, sampling, and quantization are not unlike
the operation of a computer program and have an abstraction that can be studied
with a type of logic. When the minimum distance between discontinuity points is
between $1.5$ and $2$ times the sampling interval, we can encounter the simplest
form of a type of interference between discontinuity points that we call
``commingling.'' We describe a way to reason about two sets of samples of the
same signal that will often result in the correct recovery of signal
amplitudes. We also discuss ways to estimate bounds on the distances between
discontinuity points.
|
2502.11993
|
MultiFlow: A unified deep learning framework for multi-vessel
classification, segmentation and clustering of phase-contrast MRI validated
on a multi-site single ventricle patient cohort
|
cs.CV
|
This study presents a unified deep learning (DL) framework, MultiFlowSeg, for
classification and segmentation of velocity-encoded phase-contrast magnetic
resonance imaging data, and MultiFlowDTC for temporal clustering of flow
phenotypes. Applied to the FORCE registry of Fontan procedure patients,
MultiFlowSeg achieved 100% classification accuracy for the aorta, SVC, and IVC,
and 94% for the LPA and RPA. It demonstrated robust segmentation with a median
Dice score of 0.91 (IQR: 0.86-0.93). The automated pipeline processed registry
data, achieving high segmentation success despite challenges like poor image
quality and dextrocardia. Temporal clustering identified five distinct patient
subgroups, with significant differences in clinical outcomes, including
ejection fraction, exercise tolerance, liver disease, and mortality. These
results demonstrate the potential of combining DL and time-varying flow data
for improved CHD prognosis and personalized care.
|
2502.11995
|
Presumed Cultural Identity: How Names Shape LLM Responses
|
cs.CL cs.AI
|
Names are deeply tied to human identity. They can serve as markers of
individuality, cultural heritage, and personal history. However, using names as
a core indicator of identity can lead to over-simplification of complex
identities. When interacting with LLMs, user names are an important point of
information for personalisation. Names can enter chatbot conversations through
direct user input (requested by chatbots), as part of task contexts such as CV
reviews, or as built-in memory features that store user information for
personalisation. We study biases associated with names by measuring cultural
presumptions in the responses generated by LLMs when presented with common
suggestion-seeking queries, which might involve making assumptions about the
user. Our analyses demonstrate strong assumptions about cultural identity
associated with names present in LLM generations across multiple cultures. Our
work has implications for designing more nuanced personalisation systems that
avoid reinforcing stereotypes while maintaining meaningful customisation.
|
2502.12001
|
Merging Language and Domain Specific Models: The Impact on Technical
Vocabulary Acquisition
|
cs.CL cs.LG
|
This paper investigates the integration of technical vocabulary in merged
language models. We explore the knowledge transfer mechanisms involved when
combining a general-purpose language-specific model with a domain-specific
model, focusing on the resulting model's comprehension of technical jargon. Our
experiments analyze the impact of this merging process on the target model's
proficiency in handling specialized terminology. We present a quantitative
evaluation of the performance of the merged model, comparing it with that of
the individual constituent models. The findings offer insights into the
effectiveness of different model merging methods for enhancing domain-specific
knowledge and highlight potential challenges and future directions in
leveraging these methods for cross-lingual knowledge transfer in Natural
Language Processing.
|
2502.12002
|
NaturalL2S: End-to-End High-quality Multispeaker Lip-to-Speech Synthesis
with Differential Digital Signal Processing
|
cs.SD cs.CV eess.AS
|
Recent advancements in visual speech recognition (VSR) have promoted progress
in lip-to-speech synthesis, where pre-trained VSR models enhance the
intelligibility of synthesized speech by providing valuable semantic
information. The success achieved by cascade frameworks, which combine
pseudo-VSR with pseudo-text-to-speech (TTS) or implicitly utilize the
transcribed text, highlights the benefits of leveraging VSR models. However,
these methods typically rely on mel-spectrograms as an intermediate
representation, which may introduce a key bottleneck: the domain gap between
synthetic mel-spectrograms, generated from inherently error-prone lip-to-speech
mappings, and real mel-spectrograms used to train vocoders. This mismatch
inevitably degrades synthesis quality. To bridge this gap, we propose Natural
Lip-to-Speech (NaturalL2S), an end-to-end framework integrating acoustic
inductive biases with differentiable speech generation components.
Specifically, we introduce a fundamental frequency (F0) predictor to capture
prosodic variations in synthesized speech. The predicted F0 then drives a
Differentiable Digital Signal Processing (DDSP) synthesizer to generate a
coarse signal which serves as prior information for subsequent speech
synthesis. Additionally, instead of relying on a reference speaker embedding as
an auxiliary input, our approach achieves satisfactory performance on speaker
similarity without explicitly modelling speaker characteristics. Both objective
and subjective evaluation results demonstrate that NaturalL2S can effectively
enhance the quality of the synthesized speech when compared to state-of-the-art
methods. Our demonstration page is accessible at
https://yifan-liang.github.io/NaturalL2S/.
|
2502.12003
|
Predicting Next-Day Wildfire Spread with Time Series and Attention
|
cs.CV
|
Recent research has demonstrated the potential of deep neural networks (DNNs)
to accurately predict next-day wildfire spread, based upon the current extent
of a fire and geospatial rasters of influential environmental covariates, e.g.,
vegetation, topography, climate, and weather. In this work, we investigate a
recent transformer-based model, termed the SwinUnet, for next-day wildfire
prediction. We benchmark Swin-based models against several current
state-of-the-art models on WildfireSpreadTS (WFTS), a large public benchmark
dataset of historical wildfire events. We consider two next-day fire prediction
scenarios: when the model is given input of (i) a single previous day of data,
or (ii) five previous days of data. We find that, with the proper
modifications, SwinUnet achieves state-of-the-art accuracy on next-day
prediction for both the single-day and multi-day scenarios. SwinUnet's success
depends heavily upon utilizing pre-trained weights from ImageNet. Consistent
with prior work, we also found that models with multi-day input always
outperformed models with single-day input.
|
2502.12005
|
Feasibility Evaluation of Quadratic Programs for Constrained Control
|
math.OC cs.SY eess.SY
|
This paper presents a computationally-efficient method for evaluating the
feasibility of Quadratic Programs (QPs) for online constrained control. Based
on the duality principle, we first show that the feasibility of a QP can be
determined by the solution of a properly-defined Linear Program (LP). Our
analysis yields an LP that can be solved more efficiently than the original QP
and, more importantly, is simpler in form and faster to solve than existing
methods that assess feasibility via LPs.
The computational efficiency of the proposed method compared to existing
methods for feasibility evaluation is demonstrated in comparative case studies
as well as a feasible-constraint selection problem, indicating its promise for
online feasibility evaluation of optimization-based controllers.
|
2502.12007
|
Demographic Attributes Prediction from Speech Using WavLM Embeddings
|
cs.CL cs.AI
|
This paper introduces a general classifier based on WavLM features, to infer
demographic characteristics, such as age, gender, native language, education,
and country, from speech. Demographic feature prediction plays a crucial role
in applications like language learning, accessibility, and digital forensics,
enabling more personalized and inclusive technologies. Leveraging pretrained
models for embedding extraction, the proposed framework identifies key acoustic
and linguistic features associated with demographic attributes, achieving a
Mean Absolute Error (MAE) of 4.94 for age prediction and over 99.81% accuracy
for gender classification across various datasets. Our system improves upon
existing models by up to 30% (relative) in MAE and up to 10% (relative) in accuracy
and F1 scores across tasks, leveraging a diverse range of datasets and large
pretrained models to ensure robustness and generalizability. This study offers
new insights into speaker diversity and provides a strong foundation for future
research in speech-based demographic profiling.
|
2502.12009
|
Beyond Sentiment: Examining the Role of Moral Foundations in User
Engagement with News on Twitter
|
cs.SI
|
This study uses sentiment analysis and the Moral Foundations Theory (MFT) to
characterise news content in social media and examine its association with user
engagement. We employ Natural Language Processing to quantify the moral and
affective linguistic markers. At the same time, we automatically define
thematic macro areas of news from major U.S. news outlets and their Twitter
followers (Jan 2020 - Mar 2021). By applying Non-Negative Matrix Factorisation
to the obtained linguistic features we extract clusters of similar moral and
affective profiles, and we identify the emotional and moral characteristics
that mostly explain user engagement via regression modelling. We observe that
Surprise, Trust, and Harm are crucial elements explaining user engagement and
discussion length and that Twitter content from news media outlets has more
explanatory power than their linked articles. We contribute with actionable
findings evidencing the potential impact of employing specific moral and
affective nuances in public and journalistic discourse in today's communication
landscape. In particular, our results emphasise the need to balance engagement
strategies with potential priming risks in our evolving media landscape.
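The clustering step described above, Non-Negative Matrix Factorisation over moral/affective linguistic features, can be sketched with the classic Lee-Seung multiplicative updates. This is a generic minimal NMF, not the authors' pipeline; matrix shapes and iteration count are illustrative.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Minimal multiplicative-update NMF: V ~ W @ H, all nonnegative.

    V: (posts, features) nonnegative matrix of moral/affective scores.
    Returns W (posts, k) cluster loadings and H (k, features) profiles,
    where each row of H is one moral/affective profile.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Lee-Seung multiplicative updates preserve nonnegativity
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Each post is then assigned to the profile with the largest loading in its row of W, yielding the clusters of similar moral and affective profiles used in the regression step.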
|
2502.12011
|
Reconfigurable Intelligent Surfaces-Assisted Integrated Access and
Backhaul
|
cs.IT cs.LG cs.NI math.IT
|
In this paper, we study the impact of reconfigurable intelligent surfaces
(RISs) on the coverage extension of integrated access and backhaul (IAB)
networks. Particularly, using a finite stochastic geometry model, with random
distributions of user equipments (UEs) in a finite region, and planned
hierarchical architecture for IAB, we study the service coverage probability
defined as the probability of the event that the UEs' minimum rate requirements
are satisfied. We present comparisons between different cases including
IAB-only, IAB assisted with RIS for backhaul as well as IAB assisted by network
controlled repeaters (NCRs). Our investigations focus on wide-area IAB assisted
with RIS through the lens of different design architectures and deployments,
revealing both conflicts and synergies for minimizing the effect of tree
foliage over seasonal changes. Our simulation results reveal both opportunities
and challenges towards the implementation of RIS in IAB.
|
2502.12012
|
Evolving Hard Maximum Cut Instances for Quantum Approximate Optimization
Algorithms
|
cs.ET cs.AI cs.NE quant-ph
|
Variational quantum algorithms, such as the Recursive Quantum Approximate
Optimization Algorithm (RQAOA), have become increasingly popular, offering
promising avenues for employing Noisy Intermediate-Scale Quantum devices to
address challenging combinatorial optimization tasks like the maximum cut
problem. In this study, we utilize an evolutionary algorithm equipped with a
unique fitness function. This approach targets hard maximum cut instances
within the latent space of a Graph Autoencoder, identifying those that pose
significant challenges or are particularly tractable for RQAOA, in contrast to
the classic Goemans and Williamson algorithm. Our findings not only delineate
the distinct capabilities and limitations of each algorithm but also expand our
understanding of RQAOA's operational limits. Furthermore, the diverse set of
graphs we have generated serves as a crucial benchmarking asset, emphasizing
the need for more advanced algorithms to tackle combinatorial optimization
challenges. Additionally, our results pave the way for new avenues in graph
generation research, offering exciting opportunities for future explorations.
|
2502.12013
|
Unsupervised Structural-Counterfactual Generation under Domain Shift
|
cs.LG stat.ML
|
Motivated by the burgeoning interest in cross-domain learning, we present a
novel generative modeling challenge: generating counterfactual samples in a
target domain based on factual observations from a source domain. Our approach
operates within an unsupervised paradigm devoid of parallel or joint datasets,
relying exclusively on distinct observational samples and causal graphs for
each domain. This setting presents challenges that surpass those of
conventional counterfactual generation. Central to our methodology is the
disambiguation of exogenous causes into effect-intrinsic and domain-intrinsic
categories. This differentiation facilitates the integration of domain-specific
causal graphs into a unified joint causal graph via shared effect-intrinsic
exogenous variables. We propose leveraging Neural Causal models within this
joint framework to enable accurate counterfactual generation under standard
identifiability assumptions. Furthermore, we introduce a novel loss function
that effectively segregates effect-intrinsic from domain-intrinsic variables
during model training. Given a factual observation, our framework combines the
posterior distribution of effect-intrinsic variables from the source domain
with the prior distribution of domain-intrinsic variables from the target
domain to synthesize the desired counterfactuals, adhering to Pearl's causal
hierarchy. Intriguingly, when domain shifts are restricted to alterations in
causal mechanisms without accompanying covariate shifts, our training regimen
parallels the resolution of a conditional optimal transport problem. Empirical
evaluations on a synthetic dataset show that our framework generates
counterfactuals in the target domain that very closely resemble the ground
truth.
|
2502.12017
|
Scalable and Cost-Efficient ML Inference: Parallel Batch Processing with
Serverless Functions
|
cs.DC cs.LG
|
As data-intensive applications grow, batch processing in limited-resource
environments faces scalability and resource management challenges. Serverless
computing offers a flexible alternative, enabling dynamic resource allocation
and automatic scaling. This paper explores how serverless architectures can
make large-scale ML inference tasks faster and cost-effective by decomposing
monolithic processes into parallel functions. Through a case study on sentiment
analysis using the DistilBERT model and the IMDb dataset, we demonstrate that
serverless parallel processing can reduce execution time by over 95% compared
to monolithic approaches, at the same cost.
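The decomposition of a monolithic batch into parallel function invocations can be sketched as follows. A thread pool stands in for the serverless platform's auto-scaling; in a real deployment each chunk would be the payload of one function invocation (e.g., one Lambda call). Function names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(items, n_parts):
    """Split a batch into n_parts roughly equal slices."""
    k, r = divmod(len(items), n_parts)
    out, i = [], 0
    for p in range(n_parts):
        size = k + (1 if p < r else 0)
        out.append(items[i:i + size])
        i += size
    return out

def fan_out(batch, infer_fn, n_workers=4):
    """Run infer_fn on each chunk in parallel; merge results in order.

    infer_fn takes a list of inputs and returns a list of predictions,
    mimicking one stateless ML-inference function per chunk.
    """
    parts = chunk(batch, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(infer_fn, parts))  # map preserves order
    return [y for part in results for y in part]
```

Because each chunk is processed by an independent, stateless worker, wall-clock time shrinks roughly with the number of workers while total compute (and thus cost on a pay-per-use platform) stays comparable.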
|
2502.12018
|
Atom of Thoughts for Markov LLM Test-Time Scaling
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) achieve superior performance through
training-time scaling, and test-time scaling further enhances their
capabilities by conducting effective reasoning during inference. However, as
the scale of reasoning increases, existing test-time scaling methods suffer
from accumulated historical information, which not only wastes computational
resources but also interferes with effective reasoning. To address this issue,
we observe that complex reasoning progress is often achieved by solving a
sequence of independent subquestions, each being self-contained and verifiable.
These subquestions are essentially atomic questions, relying primarily on their
current state rather than accumulated history, similar to the memoryless
transitions in a Markov process. Based on this observation, we propose Atom of
Thoughts (AoT), where each state transition in the reasoning process consists
of decomposing the current question into a dependency-based directed acyclic
graph and contracting its subquestions, forming a new atomic question state.
This iterative decomposition-contraction process continues until reaching
directly solvable atomic questions, naturally realizing Markov transitions
between question states. Furthermore, these atomic questions can be seamlessly
integrated into existing test-time scaling methods, enabling AoT to serve as a
plug-in enhancement for improving reasoning capabilities. Experiments across
six benchmarks demonstrate the effectiveness of AoT both as a standalone
framework and a plug-in enhancement. Notably, on HotpotQA, when applied to
gpt-4o-mini, AoT achieves an 80.6% F1 score, surpassing o3-mini by 3.4% and
DeepSeek-R1 by 10.6%. The code will be available at
https://github.com/qixucen/atom.
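One decomposition-contraction transition of the kind described above can be sketched on an explicit dependency DAG. This is an illustrative skeleton, not the AoT implementation: in AoT the decomposition and the solving of atomic questions are performed by an LLM, which here is abstracted as a `solve` callable.

```python
def aot_step(questions, deps, solve):
    """One decomposition-contraction step over a question DAG (sketch).

    questions: {name: text}; deps: {name: set of prerequisite names};
    solve: callable applied to questions with no unresolved prerequisites.
    Returns the contracted state (remaining questions, remaining deps)
    plus the new answers, mimicking a memoryless Markov transition.
    """
    answers = {}
    # Atomic questions: all dependencies already resolved
    atomic = [q for q, d in deps.items() if not d]
    for q in atomic:
        answers[q] = solve(questions[q])
    # Contract: drop solved questions and their now-satisfied edges
    remaining = {q: t for q, t in questions.items() if q not in answers}
    new_deps = {q: d - set(answers) for q, d in deps.items()
                if q not in answers}
    return remaining, new_deps, answers
```

Iterating this step until `remaining` is empty reproduces the paper's idea that each state depends only on the current (contracted) question set, not on the accumulated reasoning history.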
|
2502.12019
|
Robotic CBCT Meets Robotic Ultrasound
|
cs.RO eess.IV
|
The multi-modality imaging system offers optimal fused images for safe and
precise interventions in modern clinical practices, such as computed tomography
- ultrasound (CT-US) guidance for needle insertion. However, the limited
dexterity and mobility of current imaging devices hinder their integration into
standardized workflows and the advancement toward fully autonomous intervention
systems. In this paper, we present a novel clinical setup where robotic cone
beam computed tomography (CBCT) and robotic US are pre-calibrated and
dynamically co-registered, enabling new clinical applications. This setup
allows registration-free rigid registration, facilitating multi-modal guided
procedures in the absence of tissue deformation. First, a one-time
pre-calibration is performed between the systems. To ensure a safe insertion
path by highlighting critical vasculature on the 3D CBCT, SAM2 segments vessels
from B-mode images, using the Doppler signal as an autonomously generated
prompt. Based on the registration, the Doppler image or segmented vessel masks
are then mapped onto the CBCT, creating an optimally fused image with
comprehensive detail. To validate the system, we used a specially designed
phantom, featuring lesions covered by ribs and multiple vessels with simulated
moving flow. The mapping error between US and CBCT resulted in an average
deviation of 1.72 ± 0.62 mm. A user study demonstrated the effectiveness of
CBCT-US fusion for needle insertion guidance, showing significant improvements
in time efficiency, accuracy, and success rate. Needle intervention performance
improved by approximately 50% compared to the conventional US-guided workflow.
We present the first robotic dual-modality imaging system designed to guide
clinical applications. The results show significant performance improvements
compared to traditional manual interventions.
|
2502.12020
|
Learning in a Multifield Coherent Ising Machine
|
cond-mat.mes-hall cond-mat.dis-nn cs.ET cs.NE nlin.AO
|
Physical information processors can learn from examples if they are modified
according to an abstract parameter update equation, termed a learning rule. We
introduce a physical model for self-learning that encodes the learning rule in
the Hamiltonian of the system. The model consists of a network of multi-modal
resonators. One of the modes is driven parametrically into a bi-stable regime,
forming a coherent Ising machine (CIM), which provides the long-term memory
that stores learned responses (weights). The CIM is augmented with an
additional spinor field that acts as short-term (activation) memory. We
numerically demonstrate that, in the presence of suitable nonlinear
interactions between the long-term memory Ising machine and the short-term
memory auxiliary field, the system autonomously learns from examples.
|
2502.12022
|
Teaching LLMs According to Their Aptitude: Adaptive Reasoning for
Mathematical Problem Solving
|
cs.CL cs.AI
|
Existing approaches to mathematical reasoning with large language models
(LLMs) rely on Chain-of-Thought (CoT) for generalizability or Tool-Integrated
Reasoning (TIR) for precise computation. While efforts have been made to
combine these methods, they primarily rely on post-selection or predefined
strategies, leaving an open question: whether LLMs can autonomously adapt their
reasoning strategy based on their inherent capabilities. In this work, we
propose TATA (Teaching LLMs According to Their Aptitude), an adaptive framework
that enables LLMs to personalize their reasoning strategy spontaneously,
aligning it with their intrinsic aptitude. TATA incorporates base-LLM-aware
data selection during supervised fine-tuning (SFT) to tailor training data to
the model's unique abilities. This approach equips LLMs to autonomously
determine and apply the appropriate reasoning strategy at test time. We
evaluate TATA through extensive experiments on six mathematical reasoning
benchmarks, using both general-purpose and math-specialized LLMs. Empirical
results demonstrate that TATA effectively combines the complementary strengths
of CoT and TIR, achieving superior or comparable performance with improved
inference efficiency compared to TIR alone. Further analysis underscores the
critical role of aptitude-aware data selection in enabling LLMs to make
effective and adaptive reasoning decisions and align reasoning strategies with
model capabilities.
|
2502.12025
|
SafeChain: Safety of Language Models with Long Chain-of-Thought
Reasoning Capabilities
|
cs.AI cs.CL
|
Emerging large reasoning models (LRMs), such as DeepSeek-R1 models, leverage
long chain-of-thought (CoT) reasoning to generate structured intermediate
steps, enhancing their reasoning capabilities. However, long CoT does not
inherently guarantee safe outputs, potentially leading to harmful consequences
such as the introduction of security vulnerabilities in code or the spread of
misinformation. Current research on large language model (LLM) safety usually
focuses on short-answer responses, overlooking the long CoT style outputs of
LRMs. To bridge this gap, we conduct a systematic study of LRM safety. First,
we investigate safety evaluators calibrated against human annotations. Using
our newly developed metrics, we thoroughly assess the safety of 12
state-of-the-art LRMs on StrongReject and WildJailbreak datasets. Our results
show that the safety of LRMs has not kept pace with their reasoning advances.
Further, we
perform a fine-grained analysis of the reasoning trace and final answer. We
find that three decoding strategies (ZeroThink, LessThink, and MoreThink) can
improve model safety without additional training. However, these strategies
either use constrained reasoning traces or incur high inference costs. To
better strengthen LRM safety, we introduce SafeChain, the first-of-its-kind
safety training dataset in CoT style. We fine-tune two LRMs with SafeChain,
showing that it not only enhances model safety but also preserves performance
across 6 reasoning benchmarks.
|
2502.12027
|
Enhancing Transparent Object Pose Estimation: A Fusion of GDR-Net and
Edge Detection
|
cs.CV
|
Object pose estimation of transparent objects remains a challenging task in
the field of robot vision due to the immense influence of lighting, background,
and reflections. However, the edges of clear objects have the highest contrast,
which leads to stable and prominent features. We propose a novel approach by
incorporating edge detection in a pre-processing step for the tasks of object
detection and object pose estimation. We conducted experiments to investigate
the effect of edge detectors on transparent objects. We examine the performance
of the state-of-the-art 6D object pose estimation pipeline GDR-Net and the
object detector YOLOX when applying different edge detectors as pre-processing
steps (i.e., Canny edge detection with and without color information, and
holistically-nested edges (HED)). We evaluate on Trans6D-32K, a
physically-based rendered dataset of transparent objects, with parameters
proposed by the BOP Challenge. Our results indicate that applying edge
detection as a pre-processing step enhances performance for certain objects.
|
2502.12029
|
KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths
over Knowledge Graphs
|
cs.AI
|
Large language models (LLMs) have demonstrated remarkable capabilities in
various complex tasks, yet they still suffer from hallucinations. Introducing
external knowledge, such as knowledge graphs, can enhance the LLMs' ability to
provide factual answers. LLMs have the ability to interactively explore
knowledge graphs. However, most approaches suffer from insufficient
exploitation of LLMs' internal knowledge, limited generation of trustworthy
knowledge reasoning paths, and a vague integration between internal and
external knowledge. Therefore, we propose KnowPath, a knowledge-enhanced large
model framework driven by the collaboration of internal and external knowledge.
It relies on the internal knowledge of the LLM to guide the exploration of
interpretable directed subgraphs in external knowledge graphs, better
integrating the two knowledge sources for more accurate reasoning. Extensive
experiments on multiple real-world datasets confirm the superiority of
KnowPath.
|
2502.12031
|
Masked Latent Prediction and Classification for Self-Supervised Audio
Representation Learning
|
cs.SD cs.AI
|
Recently, self-supervised learning methods based on masked latent prediction
have proven to encode input data into powerful representations. However, during
training, the learned latent space can be further transformed to extract
higher-level information that could be more suited for downstream
classification tasks. Therefore, we propose a new method: MAsked latenT
Prediction And Classification (MATPAC), which is trained with two pretext tasks
solved jointly. As in previous work, the first pretext task is a masked latent
prediction task, ensuring a robust input representation in the latent space.
The second one is unsupervised classification, which utilises the latent
representations of the first pretext task to match probability distributions
between a teacher and a student. We validate the MATPAC method by comparing it
to other state-of-the-art proposals and conducting ablation studies. MATPAC
reaches state-of-the-art self-supervised learning results on reference audio
classification datasets such as OpenMIC, GTZAN, ESC-50 and US8K and outperforms
comparable supervised methods for musical auto-tagging on
Magna-tag-a-tune.
|
2502.12033
|
The geometry of BERT
|
cs.LG
|
Transformer neural networks, particularly Bidirectional Encoder
Representations from Transformers (BERT), have shown remarkable performance
across various tasks such as classification, text summarization, and question
answering. However, their internal mechanisms remain mathematically obscure,
highlighting the need for greater explainability and interpretability. In this
direction, this paper investigates the internal mechanisms of BERT, proposing
a novel theoretical perspective on its attention mechanism. The analysis
encompasses both local and global network behavior.
At the local level, the concept of directionality of subspace selection as well
as a comprehensive study of the patterns emerging from the self-attention
matrix are presented. Additionally, this work explores the semantic content of
the information stream through data distribution analysis and global
statistical measures, including the novel concept of a cone index. A case
study on the classification of SARS-CoV-2 variants using RNA, which achieved
very high accuracy, has been selected to illustrate these concepts in an
application. The insights gained from this analysis contribute to a deeper
understanding of BERT's classification process, offering potential avenues for
future architectural improvements in Transformer models and further analysis in
the training process.
|
2502.12037
|
Information geometry of tempered stable processes
|
math.DG cs.IT math.IT math.PR
|
We develop the information geometry of tempered stable processes. Starting with the
derivation of $\alpha$-divergence between two tempered stable processes, Fisher
information matrices of tempered stable processes and $\alpha$-connections of
their statistical manifolds are obtained. Additionally, we provide
statistical applications for the information geometry of tempered stable
processes. Various tempered stable processes such as generalized tempered
stable processes, classical tempered stable processes, and rapidly decreasing
tempered stable processes are given as examples.
|
2502.12047
|
Quantum Byzantine Multiple Access Channels
|
cs.IT math.IT math.QA
|
In communication theory, attacks like eavesdropping or jamming are typically
assumed to occur at the channel level, while communication parties are expected
to follow established protocols. But what happens if one of the parties turns
malicious? In this work, we investigate a compelling scenario: a
multiple-access channel with two transmitters and one receiver, where one
transmitter deviates from the protocol and acts dishonestly. To address this
challenge, we introduce the Byzantine multiple-access classical-quantum channel
and derive an achievable communication rate for this adversarial setting.
|
2502.12048
|
A Survey on Bridging EEG Signals and Generative AI: From Image and Text
to Beyond
|
cs.AI cs.HC cs.LG
|
Integration of Brain-Computer Interfaces (BCIs) and Generative Artificial
Intelligence (GenAI) has opened new frontiers in brain signal decoding,
enabling assistive communication, neural representation learning, and
multimodal integration. BCIs, particularly those leveraging
Electroencephalography (EEG), provide a non-invasive means of translating
neural activity into meaningful outputs. Recent advances in deep learning,
including Generative Adversarial Networks (GANs) and Transformer-based Large
Language Models (LLMs), have significantly improved EEG-based generation of
images, text, and speech. This paper provides a literature review of the
state-of-the-art in EEG-based multimodal generation, focusing on (i)
EEG-to-image generation through GANs, Variational Autoencoders (VAEs), and
Diffusion Models, and (ii) EEG-to-text generation leveraging Transformer-based
language models and contrastive learning methods. Additionally, we discuss the
emerging domain of EEG-to-speech synthesis, an evolving multimodal frontier. We
highlight key datasets, use cases, challenges, and EEG feature encoding methods
that underpin generative approaches. By providing a structured overview of
EEG-based generative AI, this survey aims to equip researchers and
practitioners with insights to advance neural decoding, enhance assistive
technologies, and expand the frontiers of brain-computer interaction.
|
2502.12049
|
Classifying the Stoichiometry of Virus-like Particles with Interpretable
Machine Learning
|
cs.LG q-bio.BM q-bio.QM
|
Virus-like particles (VLPs) are valuable for vaccine development due to their
immune-triggering properties. Understanding their stoichiometry, the number of
protein subunits required to form a VLP, is critical for vaccine optimisation.
However,
current experimental methods to determine stoichiometry are time-consuming and
require highly purified proteins. To efficiently classify stoichiometry classes
in proteins, we curate a new dataset and propose an interpretable, data-driven
pipeline leveraging linear machine learning models. We also explore the impact
of feature encoding on model performance and interpretability, as well as
methods to identify key protein sequence features influencing classification.
The evaluation of our pipeline demonstrates that it can classify stoichiometry
while revealing protein features that possibly influence VLP assembly. The data
and code used in this work are publicly available at
https://github.com/Shef-AIRE/StoicIML.
|
2502.12050
|
SpeechT: Findings of the First Mentorship in Speech Translation
|
cs.CL cs.SD
|
This work presents the details and findings of the first mentorship in speech
translation (SpeechT), which took place in December 2024 and January 2025. To
fulfil the requirements of the mentorship, the participants engaged in key
activities, including data preparation, modelling, and advanced research.
|
2502.12051
|
How to Upscale Neural Networks with Scaling Law? A Survey and Practical
Guidelines
|
cs.CL cs.LG
|
Neural scaling laws have revolutionized the design and optimization of
large-scale AI models by revealing predictable relationships between model
size, dataset volume, and computational resources. Early research established
power-law relationships in model performance, leading to compute-optimal
scaling strategies. However, recent studies highlighted their limitations
across architectures, modalities, and deployment contexts. Sparse models,
mixture-of-experts, retrieval-augmented learning, and multimodal models often
deviate from traditional scaling patterns. Moreover, scaling behaviors vary
across domains such as vision, reinforcement learning, and fine-tuning,
underscoring the need for more nuanced approaches. In this survey, we
synthesize insights from over 50 studies, examining the theoretical
foundations, empirical findings, and practical implications of scaling laws. We
also explore key challenges, including data efficiency, inference scaling, and
architecture-specific constraints, advocating for adaptive scaling strategies
tailored to real-world applications. We suggest that while scaling laws provide
a useful guide, they do not always generalize across all architectures and
training strategies.
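The power-law relationship at the heart of these scaling laws can be illustrated with a toy fit; the data, constant, and exponent below are synthetic, and the irreducible-loss term c of the full saturating form L(N) = a * N^(-b) + c is assumed negligible over the fitted range:

```python
import math

def fit_power_law(sizes, losses):
    """Fit L(N) ~ a * N**(-b) by least squares in log-log space, where
    the power law becomes the line log L = log a - b * log N."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (a, b)

# Synthetic losses drawn exactly from L(N) = 5 * N**(-0.3)
sizes = [1e6, 1e7, 1e8, 1e9]
losses = [5 * n ** -0.3 for n in sizes]
a, b = fit_power_law(sizes, losses)
print(round(a, 3), round(b, 3))  # → 5.0 0.3
```

On clean synthetic data the fit recovers the generating constants exactly; on real training curves, the survey's point is precisely that such extrapolations can break down across architectures and regimes.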
|
2502.12052
|
A Dual-Perspective NLG Meta-Evaluation Framework with Automatic
Benchmark and Better Interpretability
|
cs.CL
|
In NLG meta-evaluation, evaluation metrics are typically assessed based on
their consistency with humans. However, we identify some limitations in
traditional NLG meta-evaluation approaches, such as issues in handling human
ratings and ambiguous selections of correlation measures, which undermine the
effectiveness of meta-evaluation. In this work, we propose a dual-perspective
NLG meta-evaluation framework that focuses on different evaluation
capabilities, thereby providing better interpretability. In addition, we
introduce a method of automatically constructing the corresponding benchmarks
without requiring new human annotations. Furthermore, we conduct experiments
with 16 representative LLMs as the evaluators based on our proposed framework,
comprehensively analyzing their evaluation performance from different
perspectives.
|
2502.12054
|
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
|
cs.AI
|
Large language models demonstrate remarkable capabilities across various
domains, especially mathematics and logic reasoning. However, current
evaluations overlook physics-based reasoning, a complex task requiring physics
theorems and constraints. We present PhysReason, a 1,200-problem benchmark
comprising knowledge-based (25%) and reasoning-based (75%) problems, where the
latter are divided into three difficulty levels (easy, medium, hard). Notably,
problems require an average of 8.1 solution steps, with hard problems
requiring 15.6,
reflecting the complexity of physics-based reasoning. We propose the Physics
Solution Auto Scoring Framework, incorporating efficient answer-level and
comprehensive step-level evaluations. Top-performing models like Deepseek-R1,
Gemini-2.0-Flash-Thinking, and o3-mini-high achieve less than 60% on
answer-level evaluation, with performance dropping from knowledge questions
(75.11%) to hard problems (31.95%). Through step-level evaluation, we
identified four key bottlenecks: Physics Theorem Application, Physics Process
Understanding, Calculation, and Physics Condition Analysis. These findings
position PhysReason as a novel and comprehensive benchmark for evaluating
physics-based reasoning capabilities in large language models. Our code and
data will be published at https://dxzxy12138.github.io/PhysReason.
|
2502.12055
|
Designing Role Vectors to Improve LLM Inference Behaviour
|
cs.CL
|
The influence of personas on Large Language Models (LLMs) has been widely
studied, yet their direct impact on performance remains uncertain. This work
explores a novel approach to guiding LLM behaviour through role vectors, an
alternative to persona-based prompting. We construct 29 role vectors derived
from model activations and evaluate their impact on benchmark performance
across multiple domains. Our analysis investigates whether these vectors can
effectively steer models toward domain-specific expertise. We measure two key
interventions: (i) activation addition, which reinforces role-specific
directions, and (ii) directional ablation, which removes them. Results on
well-established benchmarks indicate that role vectors do, in fact, influence
model behaviour, improving task performance in relevant domains while
marginally affecting unrelated tasks. This, in turn, suggests that manipulating
internal model representations has a greater impact on outcomes than
persona-based prompting.
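The two interventions named above can be sketched in plain vector form; `h` stands in for a hidden activation and `role` for an extracted role vector, both toy values rather than anything taken from the paper:

```python
import math

def activation_addition(h, role, alpha=1.0):
    """Reinforce a role-specific direction: h + alpha * role."""
    return [hi + alpha * ri for hi, ri in zip(h, role)]

def directional_ablation(h, role):
    """Remove the component of h along the role direction:
    h - (h . r_hat) * r_hat, leaving a vector orthogonal to role."""
    norm = math.sqrt(sum(ri * ri for ri in role))
    r_hat = [ri / norm for ri in role]
    proj = sum(hi * ri for hi, ri in zip(h, r_hat))
    return [hi - proj * ri for hi, ri in zip(h, r_hat)]

h = [1.0, 2.0, 3.0]
role = [0.0, 1.0, 0.0]
steered = activation_addition(h, role, alpha=2.0)  # [1.0, 4.0, 3.0]
ablated = directional_ablation(h, role)            # [1.0, 0.0, 3.0]
```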
|
2502.12057
|
Culture is Not Trivia: Sociocultural Theory for Cultural NLP
|
cs.CL cs.CY
|
The field of cultural NLP has recently experienced rapid growth, driven by a
pressing need to ensure that language technologies are effective and safe
across a pluralistic user base. This work has largely progressed without a
shared conception of culture, instead choosing to rely on a wide array of
cultural proxies. However, this leads to a number of recurring limitations:
coarse national boundaries fail to capture nuanced differences that lie within
them, limited coverage restricts datasets to only a subset of usually
highly-represented cultures, and a lack of dynamicity results in static
cultural benchmarks that do not change as culture evolves. In this position
paper, we argue that these methodological limitations are symptomatic of a
theoretical gap. We draw on a well-developed theory of culture from
sociocultural linguistics to fill this gap by 1) demonstrating in a case study
how it can clarify methodological constraints and affordances, 2) offering
theoretically-motivated paths forward to achieving cultural competence, and 3)
arguing that localization is a more useful framing for the goals of much
current work in cultural NLP.
|
2502.12058
|
A survey about perceptions of mobility to inform an agent-based
simulator of subjective modal choice
|
cs.MA cs.CY
|
In order to adapt to the issues of climate change and public health, urban
policies are trying to encourage soft mobility, but the share of the car
remains significant. Beyond known constraints, we study here the impact of
perception biases on individual choices. We designed a multi-criteria decision
model, integrating the influence of habits and biases. We then conducted an
online survey, which received 650 responses. We used these to calculate
realistic mobility perception values, in order to initialise the environment
and the population of a modal choice simulator, implemented in NetLogo. This
allows us to visualize the adaptation of the modal distribution in reaction to
the evolution of urban planning, depending on whether or not we activate biases
and habits in individual reasoning.
This is an extended and translated version of a demo paper published in
French at JFSMA-JFMS 2024 "Un simulateur multi-agent de choix modal subjectif"
|
2502.12063
|
Low-Rank Thinning
|
stat.ML cs.LG math.OC math.ST stat.ME stat.TH
|
The goal in thinning is to summarize a dataset using a small set of
representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel
Halving and Compress can match the quality of uniform subsampling while
substantially reducing the number of summary points. However, existing
guarantees cover only a restricted range of distributions and kernel-based
quality measures and suffer from pessimistic dimension dependence. To address
these deficiencies, we introduce a new low-rank analysis of sub-Gaussian
thinning that applies to any distribution and any kernel, guaranteeing
high-quality compression whenever the kernel or data matrix is approximately
low-rank. To demonstrate the broad applicability of the techniques, we design
practical sub-Gaussian thinning approaches that improve upon the best known
guarantees for approximating attention in transformers, accelerating stochastic
gradient training through reordering, and distinguishing distributions in
near-linear time.
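To make the thinning setting concrete, here is a deterministic toy variant of the halving idea, not the paper's randomized Kernel Halving: each point joins the half it is currently less similar to under a Gaussian kernel, so near-duplicates get spread across the two halves and either half can serve as a summary.

```python
import math

def gauss_kernel(x, y, bw=1.0):
    return math.exp(-((x - y) ** 2) / (2 * bw * bw))

def greedy_halving(points, bw=1.0):
    """Split points into two halves, sending each point to the half
    whose current members it resembles least (ties broken by size)."""
    half_a, half_b = [], []
    for x in points:
        sim_a = sum(gauss_kernel(x, y, bw) for y in half_a)
        sim_b = sum(gauss_kernel(x, y, bw) for y in half_b)
        if (sim_a, len(half_a)) <= (sim_b, len(half_b)):
            half_a.append(x)
        else:
            half_b.append(x)
    return half_a, half_b

# Four near-duplicate pairs: each pair is split across the halves, so
# each half covers all four clusters with half the points.
points = [0.0, 0.1, 1.0, 1.1, 2.0, 2.1, 3.0, 3.1]
half_a, half_b = greedy_halving(points)
print(half_a)  # → [0.0, 1.0, 2.0, 3.0]
```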
|
2502.12064
|
AI-generated Text Detection with a GLTR-based Approach
|
cs.CL cs.AI
|
The rise of LLMs (Large Language Models) has contributed to the improved
performance and development of cutting-edge NLP applications. However, these
can also pose risks when used maliciously, such as spreading fake news, harmful
content, impersonating individuals, or facilitating school plagiarism, among
others. This is because LLMs can generate high-quality texts, which are
challenging to differentiate from those written by humans. GLTR, which stands
for Giant Language Model Test Room and was developed jointly by the MIT-IBM
Watson AI Lab and HarvardNLP, is a visual tool based on GPT-2 designed to help
detect machine-generated texts; it highlights the words in a text according to
the probability that they were machine-generated. One limitation
of GLTR is that the results it returns can sometimes be ambiguous and lead to
confusion. This study aims to explore various ways to improve GLTR's
effectiveness for detecting AI-generated texts within the context of the
IberLef-AuTexTification 2023 shared task, in both English and Spanish
languages. Experimental results show that our GLTR-based GPT-2 model, with a
macro F1-score of 80.19%, outperforms all state-of-the-art models on the
English dataset except the top-ranked one (80.91%). For the Spanish dataset,
however, we obtained a macro F1-score of 66.20%, 4.57% below the
top-performing model.
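GLTR's published colour scheme buckets each token by the rank the language model assigns it: top 10 green, top 100 yellow, top 1000 red, the rest purple. The sketch below takes precomputed ranks as input, standing in for the GPT-2 lookups the real tool performs; the sample rank lists are invented for illustration:

```python
BUCKETS = [(10, "green"), (100, "yellow"), (1000, "red")]

def bucket(rank):
    """Map a token's rank under the LM to a GLTR colour bucket."""
    for limit, colour in BUCKETS:
        if rank <= limit:
            return colour
    return "purple"

def green_fraction(ranks):
    """Fraction of tokens ranked in the model's top 10: a crude signal,
    since sampled LM text tends to stay inside the model's own
    high-probability region while human text wanders out of it."""
    return sum(1 for r in ranks if bucket(r) == "green") / len(ranks)

machine_like = [1, 2, 1, 4, 3, 9, 2, 5]            # mostly top-10 ranks
human_like = [3, 180, 25, 2600, 40, 7, 900, 15000]
print(green_fraction(machine_like))  # → 1.0
print(green_fraction(human_like))    # → 0.25
```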
|
2502.12065
|
Formalizing Complex Mathematical Statements with LLMs: A Study on
Mathematical Definitions
|
cs.CL cs.FL
|
Thanks to their linguistic capabilities, LLMs offer an opportunity to bridge
the gap between informal mathematics and formal languages through
autoformalization. However, it is still unclear how well LLMs generalize to
sophisticated and naturally occurring mathematical statements. To address this
gap, we investigate the task of autoformalizing real-world mathematical
definitions -- a critical component of mathematical discourse. Specifically, we
introduce two novel resources for autoformalization, collecting definitions
from Wikipedia (Def_Wiki) and arXiv papers (Def_ArXiv). We then systematically
evaluate a range of LLMs, analyzing their ability to formalize definitions into
Isabelle/HOL. Furthermore, we investigate strategies to enhance LLMs'
performance including refinement through external feedback from Proof
Assistants, and formal definition grounding, where we guide LLMs through
relevant contextual elements from formal mathematical libraries. Our findings
reveal that definitions present a greater challenge compared to existing
benchmarks, such as miniF2F. In particular, we found that LLMs still struggle
with self-correction, and aligning with relevant mathematical libraries. At the
same time, structured refinement methods and definition grounding strategies
yield notable improvements of up to 16% on self-correction capabilities and 43%
on the reduction of undefined errors, highlighting promising directions for
enhancing LLM-based autoformalization in real-world scenarios.
|
2502.12066
|
CONSTRUCTA: Automating Commercial Construction Schedules in Fabrication
Facilities with Large Language Models
|
cs.AI cs.LG cs.SE
|
Automating planning with LLMs presents transformative opportunities for
traditional industries, yet remains underexplored. In commercial construction,
the complexity of automated scheduling often requires manual intervention to
ensure precision. We propose CONSTRUCTA, a novel framework leveraging LLMs to
optimize construction schedules in complex projects like semiconductor
fabrication. CONSTRUCTA addresses key challenges by: (1) integrating
construction-specific knowledge through static RAG; (2) employing
context-sampling techniques inspired by architectural expertise to provide
relevant input; and (3) deploying Construction DPO to align schedules with
expert preferences using RLHF. Experiments on proprietary data demonstrate
performance improvements of +42.3% in missing value prediction, +79.1% in
dependency analysis, and +28.9% in automated planning compared to baseline
methods, showcasing its potential to revolutionize construction workflows and
inspire domain-specific LLM advancements.
|
2502.12067
|
TokenSkip: Controllable Chain-of-Thought Compression in LLMs
|
cs.CL cs.AI
|
Chain-of-Thought (CoT) has been proven effective in enhancing the reasoning
capabilities of large language models (LLMs). Recent advancements, such as
OpenAI's o1 and DeepSeek-R1, suggest that scaling up the length of CoT
sequences during inference could further boost LLM reasoning performance.
However, due to the autoregressive nature of LLM decoding, longer CoT outputs
lead to a linear increase in inference latency, adversely affecting user
experience, particularly when the CoT exceeds 10,000 tokens. To address this
limitation, we analyze the semantic importance of tokens within CoT outputs and
reveal that their contributions to reasoning vary. Building on this insight, we
propose TokenSkip, a simple yet effective approach that enables LLMs to
selectively skip less important tokens, allowing for controllable CoT
compression. Extensive experiments across various models and tasks demonstrate
the effectiveness of TokenSkip in reducing CoT token usage while preserving
strong reasoning performance. Notably, when applied to Qwen2.5-14B-Instruct,
TokenSkip reduces reasoning tokens by 40% (from 313 to 181) on GSM8K, with less
than a 0.4% performance drop.
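The controllable compression described above can be sketched as pruning to a target ratio by per-token importance; the scores here are supplied by hand purely for illustration, whereas the paper derives token importance from the LLM itself:

```python
def token_skip(tokens, importances, ratio):
    """Keep the `ratio` fraction of tokens with the highest importance
    scores, preserving their original order."""
    k = max(1, round(len(tokens) * ratio))
    top = sorted(range(len(tokens)), key=lambda i: importances[i],
                 reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

cot = ["First", ",", "compute", "3", "*", "4", "=", "12", ".",
       "Then", "add", "5", ":", "17", "."]
scores = [2, 0, 5, 9, 8, 9, 7, 9, 0, 1, 5, 9, 1, 9, 0]
compressed = token_skip(cot, scores, ratio=0.6)
print(" ".join(compressed))  # → compute 3 * 4 = 12 add 5 17
```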
|
2502.12073
|
Can LLMs Simulate Social Media Engagement? A Study on Action-Guided
Response Generation
|
cs.CL
|
Social media enables dynamic user engagement with trending topics, and recent
research has explored the potential of large language models (LLMs) for
response generation. While some studies investigate LLMs as agents for
simulating user behavior on social media, their focus remains on practical
viability and scalability rather than a deeper understanding of how well LLMs
align with human behavior. This paper analyzes LLMs' ability to simulate
social media engagement through action-guided response generation, where a
model first predicts a user's most likely engagement action (retweet, quote,
or rewrite) towards a trending post before generating a personalized response
conditioned on the predicted action. We benchmark GPT-4o-mini, O1-mini, and
DeepSeek-R1 in social media engagement simulation regarding a major societal
event discussed on X. Our findings reveal that zero-shot LLMs underperform BERT
in action prediction, while few-shot prompting initially degrades the
prediction accuracy of LLMs with limited examples. However, in response
generation, few-shot LLMs achieve stronger semantic alignment with ground truth
posts.
|
2502.12080
|
HumanGif: Single-View Human Diffusion with Generative Prior
|
cs.CV
|
While previous single-view-based 3D human reconstruction methods made
significant progress in novel view synthesis, it remains a challenge to
synthesize both view-consistent and pose-consistent results for animatable
human avatars from a single image input. Motivated by the success of 2D
character animation, we propose HumanGif, a single-view human
diffusion model with generative prior. Specifically, we formulate the
single-view-based 3D human novel view and pose synthesis as a
single-view-conditioned human diffusion process, utilizing generative priors
from foundational diffusion models. To ensure fine-grained and consistent novel
view and pose synthesis, we introduce a Human NeRF module in HumanGif to learn
spatially aligned features from the input image, implicitly capturing the
relative camera and human pose transformation. Furthermore, we introduce an
image-level loss during optimization to bridge the gap between latent and image
spaces in diffusion models. Extensive experiments on RenderPeople and
DNA-Rendering datasets demonstrate that HumanGif achieves the best perceptual
performance, with better generalizability for novel view and pose synthesis.
|
2502.12081
|
Unhackable Temporal Rewarding for Scalable Video MLLMs
|
cs.CV cs.CL
|
In the pursuit of superior video-processing MLLMs, we have encountered a
perplexing paradox: the "anti-scaling law", where more data and larger models
lead to worse performance. This study unmasks the culprit: "temporal hacking",
a phenomenon where models shortcut by fixating on select frames, missing the
full video narrative. In this work, we systematically establish a comprehensive
theory of temporal hacking, defining it from a reinforcement learning
perspective, introducing the Temporal Perplexity (TPL) score to assess this
misalignment, and proposing the Unhackable Temporal Rewarding (UTR) framework
to mitigate temporal hacking. Both theoretically and empirically, TPL
proves to be a reliable indicator of temporal modeling quality, correlating
strongly with frame activation patterns. Extensive experiments reveal that UTR
not only counters temporal hacking but significantly elevates video
comprehension capabilities. This work not only advances video-AI systems but
also illuminates the critical importance of aligning proxy rewards with true
objectives in MLLM development.
|
2502.12082
|
AdaSplash: Adaptive Sparse Flash Attention
|
cs.CL cs.LG
|
The computational cost of softmax-based attention in transformers limits
their applicability to long-context tasks. Adaptive sparsity, of which
$\alpha$-entmax attention is an example, offers a flexible data-dependent
alternative, but existing implementations are inefficient and do not leverage
the sparsity to obtain runtime and memory gains. In this work, we propose
AdaSplash, which combines the efficiency of GPU-optimized algorithms with the
sparsity benefits of $\alpha$-entmax. We first introduce a hybrid
Halley-bisection algorithm, resulting in a 7-fold reduction in the number of
iterations needed to compute the $\alpha$-entmax transformation. Then, we
implement custom Triton kernels to efficiently handle adaptive sparsity.
Experiments with RoBERTa and ModernBERT for text classification and
single-vector retrieval, along with GPT-2 for language modeling, show that our
method achieves substantial improvements in runtime and memory efficiency
compared to existing $\alpha$-entmax implementations. It approaches -- and in
some cases surpasses -- the efficiency of highly optimized softmax
implementations like FlashAttention-2, enabling long-context training while
maintaining strong task performance.
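The $\alpha$-entmax transformation amounts to finding a threshold so that truncated, rescaled scores sum to one. A minimal pure-Python sketch using plain bisection follows; the paper's contribution is a much faster hybrid Halley-bisection scheme with custom Triton kernels, which is not reproduced here:

```python
def entmax_bisect(z, alpha=1.5, iters=50):
    # alpha-entmax: p_i = max(0, (alpha-1)*z_i - tau) ** (1/(alpha-1)),
    # with tau chosen by bisection so that the probabilities sum to one.
    assert alpha > 1
    inv = 1.0 / (alpha - 1.0)
    zs = [(alpha - 1.0) * v for v in z]

    def prob(tau):
        return [max(0.0, v - tau) ** inv for v in zs]

    # tau lies in [max(zs) - 1, max(zs)): the sum is >= 1 at lo and 0 at hi.
    hi = max(zs)
    lo = hi - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(prob(mid)) >= 1.0:
            lo = mid
        else:
            hi = mid
    return prob(lo)

p = entmax_bisect([2.0, 1.0, -1.0])  # sparse: the lowest score gets exactly 0
```

Unlike softmax, low-scoring entries receive exactly zero probability, which is the sparsity AdaSplash exploits for runtime and memory gains.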
|
2502.12084
|
VLM$^2$-Bench: A Closer Look at How Well VLMs Implicitly Link Explicit
Matching Visual Cues
|
cs.CL
|
Visually linking matching cues is a crucial ability in daily life, such as
identifying the same person in multiple photos based on their cues, even
without knowing who they are. Despite the extensive knowledge that
vision-language models (VLMs) possess, it remains largely unexplored whether
they are capable of performing this fundamental task. To address this, we
introduce VLM$^2$-Bench, a benchmark designed to assess whether VLMs can
Visually Link Matching cues, with 9 subtasks and over 3,000 test cases.
Comprehensive evaluation across eight open-source VLMs and GPT-4o, along with
further analysis of various language-side and vision-side prompting methods,
leads to a total of eight key findings. We identify critical challenges in
models' ability to link visual cues, highlighting a significant performance gap
where even GPT-4o lags 34.80% behind humans. Based on these insights, we
advocate for (i) enhancing core visual capabilities to improve adaptability and
reduce reliance on prior knowledge, (ii) establishing clearer principles for
integrating language-based reasoning in vision-centric tasks to prevent
unnecessary biases, and (iii) shifting vision-text training paradigms toward
fostering models' ability to independently structure and infer relationships
among visual cues.
|
2502.12085
|
APB: Accelerating Distributed Long-Context Inference by Passing
Compressed Context Blocks across GPUs
|
cs.LG cs.CL
|
While long-context inference is crucial for advancing large language model
(LLM) applications, its prefill speed remains a significant bottleneck. Current
approaches, including sequence parallelism strategies and compute reduction
through approximate attention mechanisms, still fall short of delivering
optimal inference efficiency. This hinders scaling the inputs to longer
sequences and processing long-context queries in a timely manner. To address
this, we introduce APB, an efficient long-context inference framework that
leverages multi-host approximate attention to enhance prefill speed by reducing
compute and enhancing parallelism simultaneously. APB introduces a
communication mechanism for essential key-value pairs within a sequence
parallelism framework, enabling a faster inference speed while maintaining task
performance. We implement APB by incorporating a tailored FlashAttn kernel
alongside optimized distribution strategies, supporting diverse models and
parallelism configurations. APB achieves speedups of up to 9.2x, 4.2x, and 1.6x
compared with FlashAttn, RingAttn, and StarAttn, respectively, without any
observable task performance degradation. We provide the implementation and
experiment code of APB in https://github.com/thunlp/APB.
|
2502.12086
|
Unifying Explainable Anomaly Detection and Root Cause Analysis in
Dynamical Systems
|
cs.LG stat.ML
|
Dynamical systems, prevalent in various scientific and engineering domains,
are susceptible to anomalies that can significantly impact their performance
and reliability. This paper addresses the critical challenges of anomaly
detection, root cause localization, and anomaly type classification in
dynamical systems governed by ordinary differential equations (ODEs). We define
two categories of anomalies: cyber anomalies, which propagate through
interconnected variables, and measurement anomalies, which remain localized to
individual variables. To address these challenges, we propose the Interpretable
Causality Ordinary Differential Equation (ICODE) Networks, a model-intrinsic
explainable learning framework. ICODE leverages Neural ODEs for anomaly
detection while employing causality inference through an explanation channel to
perform root cause analysis (RCA), elucidating why specific time periods are
flagged as anomalous. ICODE is designed to simultaneously perform anomaly
detection, RCA, and anomaly type classification within a single, interpretable
framework. Our approach is grounded in the hypothesis that anomalies alter the
underlying ODEs of the system, manifesting as changes in causal relationships
between variables. We provide a theoretical analysis of how perturbations in
learned model parameters can be utilized to identify anomalies and their root
causes in time series data. Comprehensive experimental evaluations demonstrate
the efficacy of ICODE across various dynamical systems, showcasing its ability
to accurately detect anomalies, classify their types, and pinpoint their
origins.
|
2502.12088
|
Meta-Statistical Learning: Supervised Learning of Statistical Inference
|
cs.LG cs.AI
|
This work demonstrates that the tools and principles driving the success of
large language models (LLMs) can be repurposed to tackle distribution-level
tasks, where the goal is to predict properties of the data-generating
distribution rather than labels for individual datapoints. These tasks
encompass statistical inference problems such as parameter estimation,
hypothesis testing, or mutual information estimation. Framing these tasks
within traditional machine learning pipelines is challenging, as supervision is
typically tied to individual datapoints. We propose meta-statistical learning, a
framework inspired by multi-instance learning that reformulates statistical
inference tasks as supervised learning problems. In this approach, entire
datasets are treated as single inputs to neural networks, which predict
distribution-level parameters. Transformer-based architectures, without
positional encoding, provide a natural fit due to their permutation-invariance
properties. By training on large-scale synthetic datasets, meta-statistical
models can leverage the scalability and optimization infrastructure of
Transformer-based LLMs. We demonstrate the framework's versatility with
applications in hypothesis testing and mutual information estimation, showing
strong performance, particularly for small datasets where traditional neural
methods struggle.
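The dataset-as-input idea can be sketched with a deep-sets-style predictor: any model of the form rho(mean(phi(x_i))) is permutation invariant, which is the property the abstract attributes to transformers without positional encoding. The functions below are illustrative stand-ins, not the paper's architecture:

```python
def pooled_estimate(dataset, phi, rho):
    # Embed each datapoint with phi, mean-pool over the set (order-invariant),
    # then map the pooled summary to a distribution-level prediction with rho.
    pooled = sum(phi(x) for x in dataset) / len(dataset)
    return rho(pooled)

# Toy instantiation: with identity embedding and head, the "meta-statistic"
# reduces to the sample mean, an estimate of the distribution's mean.
identity = lambda v: v
estimate = pooled_estimate([1.0, 2.0, 3.0, 4.0], identity, identity)
```

Because the pooling is symmetric, shuffling the dataset cannot change the prediction, mirroring the permutation-invariance argument in the abstract.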
|
2502.12089
|
How compositional generalization and creativity improve as diffusion
models are trained
|
stat.ML cs.LG
|
Natural data is often organized as a hierarchical composition of features.
How many samples do generative models need to learn the composition rules, so
as to produce a combinatorial number of novel data? What signal in the data is
exploited to learn? We investigate these questions both theoretically and
empirically. Theoretically, we consider diffusion models trained on simple
probabilistic context-free grammars - tree-like graphical models used to
represent the structure of data such as language and images. We demonstrate
that diffusion models learn compositional rules with the sample complexity
required for clustering features with statistically similar context, a process
similar to the word2vec algorithm. However, this clustering emerges
hierarchically: higher-level, more abstract features associated with longer
contexts require more data to be identified. This mechanism leads to a sample
complexity that scales polynomially with this context size. As a result,
diffusion models trained on intermediate dataset sizes generate data coherent up
to a certain scale, but that lacks global coherence. We test these predictions
in different domains, and find remarkable agreement: both generated texts and
images achieve progressively larger coherence lengths as the training time or
dataset size grows. We discuss connections between the hierarchical clustering
mechanism we introduce here and the renormalization group in physics.
|
2502.12093
|
WeVibe: Weight Change Estimation Through Audio-Induced Shelf Vibrations
In Autonomous Stores
|
eess.SP cs.SY eess.SY
|
Weight change estimation is crucial in various applications, particularly for
detecting pick-up and put-back actions when people interact with the shelf
while shopping in autonomous stores. Moreover, accurate weight change
estimation allows autonomous stores to automatically identify items being
picked up or put back, ensuring precise cost estimation. However, the
conventional approach of estimating weight changes requires specialized
weight-sensing shelves, which are densely deployed weight scales, incurring
intensive sensor consumption and high costs. Prior works explored the
vibration-based weight sensing method, but they failed when the location of
weight change varies.
In response to these limitations, we made the following contributions: (1) We
propose WeVibe, the first item weight change estimation system based on active
shelf vibration sensing. The main intuition of the system is that the weight
placed on the shelf influences the dynamic vibration response of the shelf,
thus altering the shelf vibration patterns. (2) We model a physics-informed
relationship between the shelf vibration response and item weight across
multiple locations on the shelf based on structural dynamics theory. This
relationship is linear and allows easy training of a weight estimation model at
a new location without heavy data collection. (3) We evaluate our system on a
gondola shelf arranged to match real-store settings. WeVibe achieves a mean
absolute error as low as 38.07g and a standard deviation of 31.2g with one
sensor and 10% of the samples from three weight classes when estimating weight
changes from 0g to 450g, which is sufficient to differentiate items that differ
by more than 100g.
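Since the reported vibration-to-weight relationship is linear, calibrating a new shelf location reduces to a small least-squares fit. The numbers below are synthetic placeholders, not WeVibe data:

```python
def fit_linear(x, y):
    # Ordinary least squares for one feature: slope and intercept that
    # minimize squared error between predictions and observed weights.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Synthetic calibration samples: vibration-response feature -> grams.
feat = [0.10, 0.20, 0.30, 0.40]
grams = [50.0, 100.0, 150.0, 200.0]
slope, intercept = fit_linear(feat, grams)
predict = lambda f: slope * f + intercept
```

A linear model like this is why the abstract claims a new location can be calibrated "without heavy data collection": only a handful of points pin down two parameters.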
|
2502.12094
|
A Study on Leveraging Search and Self-Feedback for Agent Reasoning
|
cs.AI cs.CL
|
Recent works have demonstrated that incorporating search during inference can
significantly improve reasoning capabilities of language agents. Some
approaches make use of the ground truth, while others rely on the model's own
generated feedback. The search algorithm then uses this feedback to produce
values that
will update its criterion for exploring and exploiting various reasoning paths.
In this study, we investigate how search and the model's self-feedback can be
leveraged for reasoning tasks. First, we explore differences in ground-truth
feedback and self-feedback during search for math reasoning. Second, we observe
limitations in applying search techniques to more complex tasks like
tool-calling and design domain-specific approaches to address these gaps. Our
experiments reveal challenges related to generalization when solely relying on
self-feedback during search. For search to work effectively, either access to
the ground-truth is needed or feedback mechanisms need to be carefully designed
for the specific task.
|
2502.12095
|
Descriminative-Generative Custom Tokens for Vision-Language Models
|
cs.CV
|
This paper explores the possibility of learning custom tokens for
representing new concepts in Vision-Language Models (VLMs). Our aim is to learn
tokens that can be effective for both discriminative and generative tasks while
composing well with words to form new input queries. The targeted concept is
specified in terms of a small set of images and a parent concept described
using text. We operate on CLIP text features and propose to use a combination
of a textual inversion loss and a classification loss to ensure that text
features of the learned token are aligned with image features of the concept in
the CLIP embedding space. We restrict the learned token to a low-dimensional
subspace spanned by tokens for attributes that are appropriate for the given
super-class. These modifications improve the quality of compositions of the
learned token with natural language for generating new scenes. Further, we show
that learned custom tokens can be used to form queries for text-to-image
retrieval task, and also have the important benefit that composite queries can
be visualized to ensure that the desired concept is faithfully encoded. Based
on this, we introduce the method of Generation Aided Image Retrieval, where the
query is modified at inference time to better suit the search intent. On the
DeepFashion2 dataset, our method improves Mean Reciprocal Retrieval (MRR) over
relevant baselines by 7%.
|
2502.12096
|
Token Communications: A Unified Framework for Cross-modal Context-aware
Semantic Communications
|
cs.IT cs.CV cs.MM eess.SP math.IT
|
In this paper, we introduce token communications (TokCom), a unified
framework to leverage cross-modal context information in generative semantic
communications (GenSC). TokCom is a new paradigm, motivated by the recent
success of generative foundation models and multimodal large language models
(GFM/MLLMs), where the communication units are tokens, enabling efficient
transformer-based token processing at the transmitter and receiver. In this
paper, we introduce the potential opportunities and challenges of leveraging
context in GenSC, explore how to integrate GFM/MLLMs-based token processing
into semantic communication systems to leverage cross-modal context
effectively, and present the key principles for efficient TokCom at various
layers in future wireless networks. We demonstrate the corresponding TokCom
benefits in a GenSC setup for image transmission, leveraging cross-modal
context information, which
increases the bandwidth efficiency by 70.8% with negligible loss of
semantic/perceptual quality. Finally, the potential research directions are
identified to facilitate adoption of TokCom in future wireless networks.
|
2502.12098
|
Bandwidth-Adaptive Spatiotemporal Correspondence Identification for
Collaborative Perception
|
cs.RO
|
Correspondence identification (CoID) is an essential capability in
multi-robot collaborative perception, which enables a group of robots to
consistently refer to the same objects within their respective fields of view.
In real-world applications, such as connected autonomous driving, vehicles face
challenges in directly sharing raw observations due to limited communication
bandwidth. In order to address this challenge, we propose a novel approach for
bandwidth-adaptive spatiotemporal CoID in collaborative perception. This
approach allows robots to progressively select partial spatiotemporal
observations and share with others, while adapting to communication constraints
that dynamically change over time. We evaluate our approach across various
scenarios in connected autonomous driving simulations. Experimental results
validate that our approach enables CoID and adapts to dynamic communication
bandwidth changes. In addition, our approach achieves 8%-56% overall
improvements in covisible object retrieval for CoID and data-sharing
efficiency, outperforming previous techniques and achieving state-of-the-art
performance. More information is available at:
https://gaopeng5.github.io/acoid.
|
2502.12102
|
Relational Norms for Human-AI Cooperation
|
cs.AI cs.ET
|
How we should design and interact with social artificial intelligence depends
on the socio-relational role the AI is meant to emulate or occupy. In human
society, relationships such as teacher-student, parent-child, neighbors,
siblings, or employer-employee are governed by specific norms that prescribe or
proscribe cooperative functions including hierarchy, care, transaction, and
mating. These norms shape our judgments of what is appropriate for each
partner. For example, workplace norms may allow a boss to give orders to an
employee, but not vice versa, reflecting hierarchical and transactional
expectations. As AI agents and chatbots powered by large language models are
increasingly designed to serve roles analogous to human positions - such as
assistant, mental health provider, tutor, or romantic partner - it is
imperative to examine whether and how human relational norms should extend to
human-AI interactions. Our analysis explores how differences between AI systems
and humans, such as the absence of conscious experience and immunity to
fatigue, may affect an AI's capacity to fulfill relationship-specific functions
and adhere to corresponding norms. This analysis, which is a collaborative
effort by philosophers, psychologists, relationship scientists, ethicists,
legal experts, and AI researchers, carries important implications for AI
systems design, user behavior, and regulation. While we accept that AI systems
can offer significant benefits such as increased availability and consistency
in certain socio-relational roles, they also risk fostering unhealthy
dependencies or unrealistic expectations that could spill over into human-human
relationships. We propose that understanding and thoughtfully shaping (or
implementing) suitable human-AI relational norms will be crucial for ensuring
that human-AI interactions are ethical, trustworthy, and favorable to human
well-being.
|
2502.12108
|
Using the Path of Least Resistance to Explain Deep Networks
|
cs.LG cs.AI stat.ML
|
Integrated Gradients (IG), a widely used axiomatic path-based attribution
method, assigns importance scores to input features by integrating model
gradients along a straight path from a baseline to the input. While effective
in some cases, we show that straight paths can lead to flawed attributions. In
this paper, we identify the cause of these misattributions and propose an
alternative approach that treats the input space as a Riemannian manifold,
computing attributions by integrating gradients along geodesics. We call this
method Geodesic Integrated Gradients (GIG). To approximate geodesic paths, we
introduce two techniques: a k-Nearest Neighbours-based approach for smaller
models and a Stochastic Variational Inference-based method for larger ones.
Additionally, we propose a new axiom, Strong Completeness, extending the axioms
satisfied by IG. We show that this property is desirable for attribution
methods and that GIG is the only method that satisfies it. Through experiments
on both synthetic and real-world data, we demonstrate that GIG outperforms
existing explainability methods, including IG.
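For reference, the straight-path IG that the paper critiques integrates gradients along the segment from baseline to input. A finite-difference sketch is below; GIG itself replaces the straight path with approximate geodesics, which is not reproduced here:

```python
def integrated_gradients(f, x, baseline, steps=100, eps=1e-5):
    # Straight-path IG via a midpoint Riemann sum with finite-difference
    # gradients: attribution_i = (x_i - b_i) * average gradient on the path.
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        t = (k + 0.5) / steps
        p = [b + t * (a - b) for a, b in zip(x, baseline)]
        for i in range(n):
            hi, lo = p[:], p[:]
            hi[i] += eps
            lo[i] -= eps
            avg_grad[i] += (f(hi) - f(lo)) / (2 * eps) / steps
    return [(a - b) * g for a, b, g in zip(x, baseline, avg_grad)]

# Completeness check on f(x1, x2) = x1 * x2: attributions sum to f(x) - f(b).
f = lambda v: v[0] * v[1]
attr = integrated_gradients(f, [2.0, 3.0], [0.0, 0.0])
```

The completeness axiom (attributions summing to the output difference) is what the paper's proposed Strong Completeness axiom extends.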
|
2502.12109
|
Personality Structured Interview for Large Language Model Simulation in
Personality Research
|
cs.CL cs.AI
|
Although psychometrics researchers have recently explored the use of large
language models (LLMs) as proxies for human participants, LLMs often fail to
generate heterogeneous data with human-like diversity, which diminishes their
value in advancing social science research. To address these challenges, we
explored the potential of the theory-informed Personality Structured Interview
(PSI) as a tool for simulating human responses in personality research. In this
approach, the simulation is grounded in nuanced real-human interview
transcripts that target the personality construct of interest. We have provided
a growing set of 357 structured interview transcripts from a representative
sample, each containing an individual's responses to 32 open-ended questions
carefully designed to gather theory-based personality evidence. Additionally,
grounded in psychometric research, we have summarized an evaluation framework
to systematically validate LLM-generated psychometric data. Results from three
experiments demonstrate that well-designed structured interviews could improve
human-like heterogeneity in LLM-simulated personality data and predict
personality-related behavioral outcomes (i.e., organizational citizenship
behaviors and counterproductive work behavior). We further discuss the role of
theory-informed structured interviews in LLM-based simulation and outline a
general framework for designing structured interviews to simulate human-like
data for psychometric research.
|
2502.12110
|
A-MEM: Agentic Memory for LLM Agents
|
cs.CL cs.HC
|
While large language model (LLM) agents can effectively use external tools
for complex real-world tasks, they require memory systems to leverage
historical experiences. Current memory systems enable basic storage and
retrieval but lack sophisticated memory organization, despite recent attempts
to incorporate graph databases. Moreover, these systems' fixed operations and
structures limit their adaptability across diverse tasks. To address this
limitation, this paper proposes a novel agentic memory system for LLM agents
that can dynamically organize memories in an agentic way. Following the basic
principles of the Zettelkasten method, we designed our memory system to create
interconnected knowledge networks through dynamic indexing and linking. When a
new memory is added, we generate a comprehensive note containing multiple
structured attributes, including contextual descriptions, keywords, and tags.
The system then analyzes historical memories to identify relevant connections,
establishing links where meaningful similarities exist. Additionally, this
process enables memory evolution - as new memories are integrated, they can
trigger updates to the contextual representations and attributes of existing
historical memories, allowing the memory network to continuously refine its
understanding. Our approach combines the structured organization principles of
Zettelkasten with the flexibility of agent-driven decision making, allowing for
more adaptive and context-aware memory management. Empirical experiments on six
foundation models show consistent improvements over existing SOTA baselines.
The source code is available at https://github.com/WujiangXu/AgenticMemory.
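The note-and-link mechanism can be sketched in a few lines. The attribute names and the Jaccard keyword-overlap linking rule below are illustrative assumptions, not the schema of the released code:

```python
def make_note(text, keywords, tags):
    # A structured memory note: content plus attributes, links added later.
    return {"text": text, "keywords": set(keywords),
            "tags": set(tags), "links": []}

def link_note(new, memory, threshold=0.3):
    # Connect the new note to stored notes whose keyword overlap (Jaccard)
    # clears a threshold, then append it to memory. Links are bidirectional,
    # so integrating a new memory also updates existing notes.
    for i, old in enumerate(memory):
        inter = new["keywords"] & old["keywords"]
        union = new["keywords"] | old["keywords"]
        if union and len(inter) / len(union) >= threshold:
            new["links"].append(i)
            old["links"].append(len(memory))
    memory.append(new)

memory = []
link_note(make_note("LLM agents need memory", ["llm", "agent", "memory"], ["ai"]), memory)
link_note(make_note("Agents use memory and tools", ["agent", "memory", "tools"], ["ai"]), memory)
```

Updating old notes when a related new one arrives is a minimal analogue of the "memory evolution" the abstract describes.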
|
2502.12113
|
A Monocular Event-Camera Motion Capture System
|
cs.RO cs.CV
|
Motion capture systems are a widespread tool in research to record
ground-truth poses of objects. Commercial systems use reflective markers
attached to the object and then triangulate pose of the object from multiple
camera views. Consequently, the object must be visible to multiple cameras
which makes such multi-view motion capture systems unsuited for deployments in
narrow, confined spaces (e.g. ballast tanks of ships). In this technical report
we describe a monocular event-camera motion capture system which overcomes this
limitation and is ideally suited for narrow spaces. Instead of passive markers
it relies on active, blinking LED markers such that each marker can be uniquely
identified from the blinking frequency. The markers are placed at known
locations on the tracking object. We then solve the PnP (perspective-n-points)
problem to obtain the position and orientation of the object. The developed
system has millimeter accuracy, millisecond latency and we demonstrate that its
state estimate can be used to fly a small, agile quadrotor.
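Marker identification from blink frequency can be sketched as follows; the event model (one timestamp per blink) and the tolerance are simplifying assumptions, and the real pipeline feeds the identified 2D marker positions into a PnP solver, which is not shown here:

```python
def identify_marker(event_times, marker_freqs, tol=1.0):
    # Estimate the blinking frequency (events per second) from the marker's
    # event timestamps and match it to the closest known marker frequency.
    span = event_times[-1] - event_times[0]
    freq = (len(event_times) - 1) / span
    best = min(marker_freqs, key=lambda f: abs(f - freq))
    return best if abs(best - freq) <= tol else None

# Markers blink at known, distinct frequencies (Hz), so each one can be
# uniquely identified even from a single camera.
known = [50.0, 100.0, 200.0]
times = [i * 0.01 for i in range(11)]  # 11 events over 0.1 s
marker = identify_marker(times, known)
```

With each detected blob assigned a unique marker identity this way, the 2D-3D correspondences needed for perspective-n-points become unambiguous.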
|
2502.12115
|
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance
Software Engineering?
|
cs.LG cs.SE
|
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software
engineering tasks from Upwork, valued at $1 million USD total in real-world
payouts. SWE-Lancer encompasses both independent engineering tasks--ranging
from $50 bug fixes to $32,000 feature implementations--and managerial tasks,
where models choose between technical implementation proposals. Independent
tasks are graded with end-to-end tests triple-verified by experienced software
engineers, while managerial decisions are assessed against the choices of the
original hired engineering managers. We evaluate model performance and find
that frontier models are still unable to solve the majority of tasks. To
facilitate future research, we open-source a unified Docker image and a public
evaluation split, SWE-Lancer Diamond
(https://github.com/openai/SWELancer-Benchmark). By mapping model performance
to monetary value, we hope SWE-Lancer enables greater research into the
economic impact of AI model development.
|
2502.12118
|
Scaling Test-Time Compute Without Verification or RL is Suboptimal
|
cs.LG cs.CL
|
Despite substantial advances in scaling test-time compute, an ongoing debate
in the community is how it should be scaled up to enable continued and
efficient improvements with scaling. There are largely two approaches: first,
distilling successful search or thinking traces; and second, using verification
(e.g., 0/1 outcome rewards, reward models, or verifiers) to guide reinforcement
learning (RL) and search algorithms. In this paper, we prove that finetuning
LLMs with verifier-based (VB) methods based on RL or search is far superior to
verifier-free (VF) approaches based on distilling or cloning search traces,
given a fixed amount of compute/data budget. Further, we show that as we scale
test-time compute (measured as the output token length) and training data,
suboptimality of VF methods scales poorly compared to VB when the base
pre-trained LLM presents a heterogeneous distribution over correct solution
traces (e.g., different lengths, styles, etc.) and admits a non-sharp
distribution over rewards on traces sampled from it. We formalize this
condition using anti-concentration [Erdős, 1945]. This implies a stronger
result that VB methods scale better asymptotically, with the performance gap
between VB and VF methods widening as test-time budget grows. We corroborate
our theory empirically on both didactic and math reasoning problems with
3/8/32B-sized pre-trained LLMs, where we find verification is crucial for
scaling test-time compute.
|
2502.12119
|
PRISM: Self-Pruning Intrinsic Selection Method for Training-Free
Multimodal Data Selection
|
cs.CV cs.AI cs.CL
|
Visual instruction tuning refines pre-trained Multimodal Large Language
Models (MLLMs) to enhance their real-world task performance. However, the rapid
expansion of visual instruction datasets introduces significant data
redundancy, leading to excessive computational costs. Existing data selection
methods predominantly rely on proxy models or loss-based metrics, both of which
impose substantial computational overheads due to the necessity of model
inference and backpropagation. To address this challenge, we propose PRISM, a
novel training-free approach for efficient multimodal data selection. Unlike
existing methods, PRISM eliminates the reliance on proxy models, warm-up
pretraining, and gradient-based optimization. Instead, it leverages Pearson
correlation analysis to quantify the intrinsic visual encoding properties of
MLLMs, computing a task-specific correlation score to identify high-value
instances. This not only enables data-efficient selection, but also maintains the
original performance. Empirical evaluations across multiple MLLMs demonstrate
that PRISM reduces the overall time required for visual instruction tuning and
data selection to just 30% of conventional methods, while surpassing fully
fine-tuned models across eight multimodal and three language understanding
benchmarks, achieving a 101.7% relative improvement in final performance.
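The correlation-based scoring can be illustrated as below; the anchor vector and top-k rule are simplified stand-ins for the paper's analysis of MLLM visual-encoding properties:

```python
def pearson(x, y):
    # Pearson correlation coefficient between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_top(features, task_anchor, k):
    # Training-free selection: score each instance's feature vector by its
    # correlation with a task-specific anchor and keep the top-k indices.
    # No proxy model, no backpropagation -- just statistics over features.
    ranked = sorted(range(len(features)),
                    key=lambda i: pearson(features[i], task_anchor),
                    reverse=True)
    return ranked[:k]

anchor = [1.0, 2.0, 3.0]
feats = [[1.0, 2.1, 2.9],   # highly correlated with the anchor
         [3.0, 2.0, 1.0],   # anti-correlated
         [2.0, 2.0, 2.1]]   # weakly correlated
picked = select_top(feats, anchor, 2)
```

The absence of any model inference or gradient step in this loop is the point: selection cost is dominated by cheap vector statistics.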
|
2502.12120
|
LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws
|
cs.LG cs.AI cs.CL
|
Scaling laws guide the development of large language models (LLMs) by
offering estimates for the optimal balance of model size, tokens, and compute.
More recently, loss-to-loss scaling laws that relate losses across pretraining
datasets and downstream tasks have emerged as a powerful tool for understanding
and improving LLM performance. In this work, we investigate which factors most
strongly influence loss-to-loss scaling. Our experiments reveal that the
pretraining data and tokenizer determine the scaling trend. In contrast, model
size, optimization hyperparameters, and even significant architectural
differences, such as between transformer-based models like Llama and
state-space models like Mamba, have limited impact. Consequently, practitioners
should carefully curate suitable pretraining datasets for optimal downstream
performance, while architectures and other settings can be freely optimized for
training efficiency.
|
2502.12122
|
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty
Quantification for LoRA
|
cs.LG
|
Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning of large
language models by decomposing weight updates into low-rank matrices,
significantly reducing storage and computational overhead. While effective,
standard LoRA lacks mechanisms for uncertainty quantification, leading to
overconfident and poorly calibrated models. Bayesian variants of LoRA address
this limitation, but at the cost of a significantly increased number of
trainable parameters, partially offsetting the original efficiency gains.
Additionally, these models are harder to train and may suffer from unstable
convergence.
In this work, we propose a novel parameter-efficient Bayesian LoRA,
demonstrating that effective uncertainty quantification can be achieved in very
low-dimensional parameter spaces. The proposed method achieves strong
performance with improved calibration and generalization while maintaining
computational efficiency. Our empirical findings show that, with the
appropriate projection of the weight space: (1) uncertainty can be effectively
modeled in a low-dimensional space, and (2) weight covariances exhibit low
ranks.
|
2502.12123
|
On the Query Complexity of Verifier-Assisted Language Generation
|
cs.CL cs.LG
|
Recently, a plethora of works have proposed inference-time algorithms (e.g.
best-of-n), which incorporate verifiers to assist the generation process. Their
quality-efficiency trade-offs have been empirically benchmarked on a variety of
constrained generation tasks, but the algorithmic design landscape is still
largely poorly understood. In this paper, we develop a mathematical framework
for reasoning about constrained generation using a pre-trained language model
generator oracle and a process verifier--which can decide whether a prefix can
be extended to a string which satisfies the constraints of choice. We show that
even in very simple settings, access to a verifier can turn an intractable
problem (information-theoretically or computationally) into a tractable one. In
fact, we show even simple algorithms, like tokenwise rejection sampling, can
enjoy significant benefits from access to a verifier. Empirically, we show that
a natural modification of tokenwise rejection sampling, in which the sampler is
allowed to "backtrack" (i.e., erase the final few generated tokens) has robust
and substantive benefits over natural baselines (e.g. (blockwise) rejection
sampling, nucleus sampling)--in terms of computational efficiency,
accuracy, and diversity.
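The backtracking variant of tokenwise rejection sampling can be sketched with toy oracles; `sample_token` and `prefix_ok` below stand in for the LM generator oracle and the process verifier:

```python
import random

def generate_with_backtrack(sample_token, prefix_ok, target_len,
                            backtrack=2, budget=1000, seed=0):
    # Sample a token, ask the process verifier whether the extended prefix is
    # still completable, and on rejection erase the last few accepted tokens
    # rather than only the offending one.
    rng = random.Random(seed)
    prefix = []
    for _ in range(budget):
        if len(prefix) == target_len:
            return prefix
        tok = sample_token(prefix, rng)
        if prefix_ok(prefix + [tok]):
            prefix.append(tok)
        else:
            del prefix[max(0, len(prefix) - backtrack):]
    return None  # budget exhausted

# Toy constraint: bit strings with no two consecutive 1s.
no_11 = lambda s: all(not (a == 1 and b == 1) for a, b in zip(s, s[1:]))
uniform = lambda prefix, rng: rng.choice([0, 1])
out = generate_with_backtrack(uniform, no_11, target_len=8)
```

Erasing a short suffix on rejection lets the sampler escape prefixes that are valid but expensive to extend, which is the mechanism the empirical results attribute the robustness gains to.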
|
2502.12124
|
RA-MTR: A Retrieval Augmented Multi-Task Reader based Approach for
Inspirational Quote Extraction from Long Documents
|
cs.CL
|
Inspirational quotes from famous individuals are often used to convey
thoughts in news articles, essays, and everyday conversations. In this paper,
we propose a novel context-based quote extraction system that aims to extract
the most relevant quote from a long text. We formulate this quote extraction as
an open domain question answering problem first by employing a vector-store
based retriever and then applying a multi-task reader. We curate three
context-based quote extraction datasets and introduce a novel multi-task
framework RA-MTR that improves the state-of-the-art performance, achieving a
maximum improvement of 5.08% in BoW F1-score.
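BoW F1 here is the standard bag-of-words token-overlap F1 between a predicted and a gold quote; a minimal implementation (assumed formulation, whitespace tokenization):

```python
from collections import Counter

def bow_f1(pred, gold):
    # Harmonic mean of token-level precision and recall, counting each
    # token with multiplicity (bag-of-words, not set overlap).
    p, g = Counter(pred.lower().split()), Counter(gold.lower().split())
    overlap = sum((p & g).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(g.values())
    return 2 * precision * recall / (precision + recall)

score = bow_f1("be yourself", "to be or not to be")
```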
|
2502.12125
|
Hypernym Bias: Unraveling Deep Classifier Training Dynamics through the
Lens of Class Hierarchy
|
cs.AI cs.LG
|
We investigate the training dynamics of deep classifiers by examining how
hierarchical relationships between classes evolve during training. Through
extensive experiments, we argue that the learning process in classification
problems can be understood through the lens of label clustering. Specifically,
we observe that networks tend to distinguish higher-level (hypernym) categories
in the early stages of training, and learn more specific (hyponym) categories
later. We introduce a novel framework to track the evolution of the feature
manifold during training, revealing how the hierarchy of class relations
emerges and refines across the network layers. Our analysis demonstrates that
the learned representations closely align with the semantic structure of the
dataset, providing a quantitative description of the clustering process.
Notably, we show that in the hypernym label space, certain properties of neural
collapse appear earlier than in the hyponym label space, helping to bridge the
gap between the initial and terminal phases of learning. We believe our
findings offer new insights into the mechanisms driving hierarchical learning
in deep networks, paving the way for future advancements in understanding deep
learning dynamics.
|