| id | title | categories | abstract |
|---|---|---|---|
2502.12320
|
Towards Fusing Point Cloud and Visual Representations for Imitation
Learning
|
cs.RO cs.CV
|
Learning for manipulation requires using policies that have access to rich
sensory information such as point clouds or RGB images. Point clouds
efficiently capture geometric structures, making them essential for
manipulation tasks in imitation learning. In contrast, RGB images provide rich
texture and semantic information that can be crucial for certain tasks.
Existing approaches for fusing both modalities assign 2D image features to
point clouds. However, such approaches often lose global contextual information
from the original images. In this work, we propose FPV-Net, a novel imitation
learning method that effectively combines the strengths of both point cloud and
RGB modalities. Our method conditions the point-cloud encoder on global and
local image tokens using adaptive layer norm conditioning, leveraging the
beneficial properties of both modalities. Through extensive experiments on the
challenging RoboCasa benchmark, we demonstrate the limitations of relying on
either modality alone and show that our method achieves state-of-the-art
performance across all tasks.
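The adaptive layer-norm conditioning described above can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation; the function names, the projection matrices `W_scale`/`W_shift`, and the use of a single pooled image token as the conditioning vector are all assumptions:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token's feature vector to zero mean, unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def ada_ln(point_tokens, cond, W_scale, W_shift):
    # Scale and shift are produced from the image-conditioning vector,
    # so image information modulates the point-cloud features.
    scale = cond @ W_scale          # (d,)
    shift = cond @ W_shift          # (d,)
    return layer_norm(point_tokens) * (1.0 + scale) + shift

rng = np.random.default_rng(0)
d = 8
point_tokens = rng.normal(size=(16, d))   # 16 point-cloud tokens
cond = rng.normal(size=(d,))              # pooled global image token
W_scale = rng.normal(size=(d, d)) * 0.01
W_shift = rng.normal(size=(d, d)) * 0.01
out = ada_ln(point_tokens, cond, W_scale, W_shift)
print(out.shape)
```

In a full model, per-layer `W_scale`/`W_shift` would be learned and both global and local image tokens would condition each encoder block.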
|
2502.12323
|
Adversarial Debiasing for Unbiased Parameter Recovery
|
cs.LG stat.ML
|
Advances in machine learning and the increasing availability of
high-dimensional data have led to the proliferation of social science research
that uses the predictions of machine learning models as proxies for measures of
human activity or environmental outcomes. However, prediction errors from
machine learning models can lead to bias in the estimates of regression
coefficients. In this paper, we show how this bias can arise, propose a test
for detecting bias, and demonstrate the use of an adversarial machine learning
algorithm to debias predictions. These methods are applicable to any
setting where machine-learned predictions are the dependent variable in a
regression. We conduct simulations and empirical exercises using ground truth
and satellite data on forest cover in Africa. Using the predictions from a
naive machine learning model leads to biased parameter estimates, while the
predictions from the adversarial model recover the true coefficients.
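The bias mechanism the abstract describes can be reproduced in a short simulation: when an ML proxy's prediction error is correlated with the regressor, OLS on the proxy recovers a biased coefficient. The data-generating process and coefficients below are invented for illustration, and the adversarial debiasing step itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)            # true coefficient: 2.0

# A "naive" ML proxy for y whose prediction error is correlated with x,
# mimicking a model that systematically under-predicts at high x.
y_hat = y - 0.5 * x + rng.normal(scale=0.1, size=n)

def ols_slope(dep, reg):
    # One-regressor OLS slope: cov(dep, reg) / var(reg).
    return np.cov(dep, reg)[0, 1] / np.var(reg, ddof=1)

slope_true = ols_slope(y, x)       # close to 2.0 with ground-truth outcomes
slope_naive = ols_slope(y_hat, x)  # close to 1.5: biased by correlated errors
print(slope_true, slope_naive)
```

Classical (mean-zero, regressor-independent) prediction error in the dependent variable would not bias the slope; it is the correlation between error and regressor that the adversarial approach targets.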
|
2502.12325
|
From Dense to Dynamic: Token-Difficulty Driven MoEfication of
Pre-Trained LLMs
|
cs.CL
|
Training large language models (LLMs) for different inference constraints is
computationally expensive, limiting control over efficiency-accuracy
trade-offs. Moreover, once trained, these models typically process tokens
uniformly, regardless of their complexity, leading to static and inflexible
behavior. In this paper, we introduce a post-training optimization framework,
DynaMoE, that adapts a pre-trained dense LLM to a token-difficulty-driven
Mixture-of-Experts model with minimal fine-tuning cost. This adaptation makes
the model dynamic, with sensitivity control to customize the balance between
efficiency and accuracy. DynaMoE features a token-difficulty-aware router that
predicts the difficulty of tokens and directs them to the appropriate
sub-networks or experts, enabling larger experts to handle more complex tokens
and smaller experts to process simpler ones. Our experiments demonstrate that
DynaMoE can generate a range of adaptive model variants of the existing trained
LLM with a single fine-tuning step, utilizing only $10B$ tokens, a minimal cost
compared to the base model's training. Each variant offers distinct trade-offs
between accuracy and performance. Compared to the baseline post-training
optimization framework, Flextron, our method achieves similar aggregated
accuracy across downstream tasks, despite using only $\frac{1}{9}\text{th}$ of
their fine-tuning cost.
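The token-difficulty routing idea can be sketched as follows. This is a hypothetical minimal version: the linear difficulty score, the median threshold, and the two weight matrices standing in for small/large experts are our illustrative choices, not DynaMoE's router:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(10, d))            # 10 token embeddings

# Hypothetical difficulty router: a linear score plus a threshold.
w_router = rng.normal(size=(d,))
difficulty = tokens @ w_router

# Two "experts" of different capacity, here just two weight matrices.
W_small = rng.normal(size=(d, d)) * 0.1
W_large = rng.normal(size=(d, d))

threshold = np.median(difficulty)
hard = difficulty > threshold                # harder half of the tokens
out = np.where(hard[:, None],
               tokens @ W_large,             # complex tokens -> large expert
               tokens @ W_small)             # simple tokens -> small expert
print(out.shape, int(hard.sum()))
```

Moving the threshold is the "sensitivity control": routing more tokens to the small expert trades accuracy for efficiency without retraining.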
|
2502.12326
|
Stability Bounds for Smooth Optimal Transport Maps and their Statistical
Implications
|
math.ST cs.LG stat.ME stat.ML stat.TH
|
We study estimators of the optimal transport (OT) map between two probability
distributions. We focus on plugin estimators derived from the OT map between
estimates of the underlying distributions. We develop novel stability bounds
for OT maps which generalize those in past work, and allow us to reduce the
problem of optimally estimating the transport map to that of optimally
estimating densities in the Wasserstein distance. In contrast, past work
provided a partial connection between these problems and relied on regularity
theory for the Monge-Ampère equation to bridge the gap, a step which required
unnatural assumptions to obtain sharp guarantees. We also provide some new
insights into the connections between stability bounds which arise in the
analysis of plugin estimators and growth bounds for the semi-dual functional
which arise in the analysis of Brenier potential-based estimators of the
transport map. We illustrate the applicability of our new stability bounds by
revisiting the smooth setting studied by Manole et al., analyzing two of their
estimators under more general conditions. Critically, our bounds do not require
smoothness or boundedness assumptions on the underlying measures. As an
illustrative application, we develop and analyze a novel tuning parameter-free
estimator for the OT map between two strongly log-concave distributions.
|
2502.12327
|
Learning Plasma Dynamics and Robust Rampdown Trajectories with
Predict-First Experiments at TCV
|
physics.plasm-ph cs.AI cs.LG cs.SY eess.SY
|
The rampdown in tokamak operations is a difficult-to-simulate phase during
which the plasma is often pushed towards multiple instability limits. To
address this challenge, and reduce the risk of disrupting operations, we
leverage recent advances in Scientific Machine Learning (SciML) to develop a
neural state-space model (NSSM) that predicts plasma dynamics during Tokamak
à Configuration Variable (TCV) rampdowns. By integrating simple physics
structure and data-driven models, the NSSM efficiently learns plasma dynamics
during the rampdown from a modest dataset of 311 pulses with only five pulses
in the reactor-relevant high-performance regime. The NSSM is parallelized
across uncertainties, and reinforcement learning (RL) is applied to design
trajectories that avoid multiple instability limits with high probability.
Experiments at TCV ramping down high-performance plasmas show statistically
significant improvements in current and energy at plasma termination, with
improvements in speed through continuous re-training. A predict-first
experiment, increasing plasma current by 20\% from baseline, demonstrates the
NSSM's ability to make small extrapolations with sufficient accuracy to design
trajectories that successfully terminate the pulse. The developed approach
paves the way for designing tokamak controls with robustness to considerable
uncertainty, and demonstrates the relevance of the SciML approach to learning
plasma dynamics for rapidly developing robust trajectories and controls during
the incremental campaigns of upcoming burning plasma tokamaks.
|
2502.12328
|
LM Agents for Coordinating Multi-User Information Gathering
|
cs.CL cs.AI
|
This paper introduces PeopleJoin, a benchmark for evaluating LM-mediated
collaborative problem solving. Given a user request, PeopleJoin agents must
identify teammates who might be able to assist, converse with these teammates
to gather information, and finally compile a useful answer or summary for the
original user. PeopleJoin comprises two evaluation domains: PeopleJoin-QA,
focused on questions about tabular data, and PeopleJoin-DocCreation, focused on
document creation tasks. The two domains are adapted from existing NLP
benchmarks for database question answering and multi-document summarization;
here, however, the information needed to complete these tasks is distributed
across synthetic ``organizations'' of 2--20 users, simulating natural
multi-user collaboration scenarios. We implement several popular LM agent
architectures, evaluate their accuracy and efficiency at completing tasks,
and highlight new research questions that can be studied using PeopleJoin.
|
2502.12329
|
A Novel Unified Parametric Assumption for Nonconvex Optimization
|
cs.LG cs.AI math.OC stat.ML
|
Nonconvex optimization is central to modern machine learning, but the general
framework of nonconvex optimization yields weak convergence guarantees that are
too pessimistic compared to practice. On the other hand, while convexity
enables efficient optimization, it is of limited applicability to many
practical problems. To bridge this gap and better understand the practical
success of optimization algorithms in nonconvex settings, we introduce a novel
unified parametric assumption. Our assumption is general enough to encompass a
broad class of nonconvex functions while also being specific enough to enable
the derivation of a unified convergence theorem for gradient-based methods.
Notably, by tuning the parameters of our assumption, we demonstrate its
versatility in recovering several existing function classes as special cases
and in identifying functions amenable to efficient optimization. We derive our
convergence theorem for both deterministic and stochastic optimization, and
conduct experiments to verify that our assumption can hold practically over
optimization trajectories.
|
2502.12330
|
X-IL: Exploring the Design Space of Imitation Learning Policies
|
cs.RO cs.LG
|
Designing modern imitation learning (IL) policies requires making numerous
decisions, including the selection of feature encoding, architecture, policy
representation, and more. As the field rapidly advances, the range of available
options continues to grow, creating a vast and largely unexplored design space
for IL policies. In this work, we present X-IL, an accessible open-source
framework designed to systematically explore this design space. The framework's
modular design enables seamless swapping of policy components, such as
backbones (e.g., Transformer, Mamba, xLSTM) and policy optimization techniques
(e.g., Score-matching, Flow-matching). This flexibility facilitates
comprehensive experimentation and has led to the discovery of novel policy
configurations that outperform existing methods on recent robot learning
benchmarks. Our experiments demonstrate not only significant performance gains
but also provide valuable insights into the strengths and weaknesses of various
design choices. This study serves as both a practical reference for
practitioners and a foundation for guiding future research in imitation
learning.
|
2502.12337
|
Stochastic Real-Time Deception in Nash Equilibrium Seeking for Games
with Quadratic Payoffs
|
eess.SY cs.SY
|
In multi-agent autonomous systems, deception is a fundamental concept which
characterizes the exploitation of unbalanced information to mislead victims
into choosing oblivious actions. This effectively alters the system's long term
behavior, leading to outcomes that may be beneficial to the deceiver but
detrimental to the victim. We study this phenomenon for a class of model-free
Nash equilibrium seeking (NES) dynamics in which players implement independent
stochastic
exploration signals to learn the pseudogradient flow. In particular, we show
that deceptive players who obtain real-time measurements of other players'
stochastic perturbation can incorporate this information into their own NES
action update, consequentially steering the overall dynamics to a new operating
point that could potentially improve the payoffs of the deceptive players. We
consider games with quadratic payoff functions, as this restriction allows us
to derive a more explicit formulation of the capabilities of the deceptive
players. By leveraging results on multi-input stochastic averaging for
dynamical systems, we establish local exponential (in probability) convergence
for the proposed deceptive NES dynamics. To illustrate our results, we apply
them to a two-player quadratic game.
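The baseline dynamics (without deception) can be illustrated with exact pseudogradient play in an invented two-player quadratic game. The abstract's NES scheme estimates these gradients from stochastic exploration signals, and the deceptive modification is omitted here; this sketch only shows convergence to the Nash equilibrium:

```python
import numpy as np

# Two-player quadratic game: each player ascends its own payoff gradient.
# Payoffs (illustrative): J1 = -(x1 - 1 - 0.5*x2)^2, J2 = -(x2 - 1 - 0.5*x1)^2,
# whose best responses intersect at the Nash equilibrium (2, 2).
x = np.zeros(2)
lr = 0.1
for _ in range(500):
    g1 = -2.0 * (x[0] - 1.0 - 0.5 * x[1])   # dJ1/dx1
    g2 = -2.0 * (x[1] - 1.0 - 0.5 * x[0])   # dJ2/dx2
    x = x + lr * np.array([g1, g2])
print(np.round(x, 3))
```

A deceptive player with real-time access to the other player's perturbation could inject a term into its own update to shift this operating point, which is the effect the paper analyzes.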
|
2502.12340
|
Understanding Silent Data Corruption in LLM Training
|
cs.LG cs.DC
|
As the scale of training large language models (LLMs) increases, one emergent
failure is silent data corruption (SDC), where hardware produces incorrect
computations without explicit failure signals. In this work, we are the first
to investigate the impact of real-world SDCs on LLM training by comparing model
training between healthy production nodes and unhealthy nodes exhibiting SDCs.
With help from a cloud computing platform, we access the unhealthy nodes
that were swept out from production by automated fleet management. Using
deterministic execution via XLA compiler and our proposed synchronization
mechanisms, we isolate and analyze the impact of SDC errors on these nodes at
three levels: at each submodule computation, at a single optimizer step, and at
a training period. Our results reveal that the impact of SDCs on computation
varies across different unhealthy nodes. Although in most cases the perturbations
from SDCs on submodule computation and gradients are relatively small, SDCs can
lead models to converge to different optima with different weights and even
cause spikes in the training loss. Our analysis sheds light on further
understanding and mitigating the impact of SDCs.
|
2502.12342
|
REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark
|
cs.IR cs.CV
|
Accurate multi-modal document retrieval is crucial for Retrieval-Augmented
Generation (RAG), yet existing benchmarks do not fully capture real-world
challenges with their current design. We introduce REAL-MM-RAG, an
automatically generated benchmark designed to address four key properties
essential for real-world retrieval: (i) multi-modal documents, (ii) enhanced
difficulty, (iii) Realistic-RAG queries and (iv) accurate labeling.
Additionally, we propose a multi-difficulty-level scheme based on query
rephrasing to evaluate models' semantic understanding beyond keyword matching.
Our benchmark reveals significant model weaknesses, particularly in handling
table-heavy documents and robustness to query rephrasing. To mitigate these
shortcomings, we curate a rephrased training set and introduce a new
finance-focused, table-heavy dataset. Fine-tuning on these datasets enables
models to achieve state-of-the-art retrieval performance on the REAL-MM-RAG
benchmark. Our work offers a better way to evaluate and improve retrieval in
multi-modal RAG systems while also providing training data and models that
address current limitations.
|
2502.12343
|
Energy-Efficient Flat Precoding for MIMO Systems
|
cs.IT math.IT
|
This paper addresses the suboptimal energy efficiency of conventional digital
precoding schemes in multiple-input multiple-output (MIMO) systems. Through an
analysis of the power amplifier (PA) output power distribution associated with
conventional precoders, it is observed that these power distributions can be
quite uneven, resulting in large PA backoff (thus low efficiency) and high
power consumption. To tackle this issue, we propose a novel approach called
flat precoding, which aims to control the flatness of the power distribution
within a desired interval. In addition to reducing PA power consumption, flat
precoding offers the advantage of requiring smaller saturation levels for PAs,
which reduces the size of PAs and lowers the cost. To incorporate the concept
of flat power distribution into precoding design, we introduce a new
lower-bound per-antenna power constraint alongside the conventional sum power
constraint and the upper-bound per-antenna power constraint. By adjusting the
lower-bound and upper-bound values, we can effectively control the level of
flatness in the power distribution. We then seek to find a flat precoder that
satisfies these three sets of constraints while maximizing the weighted sum
rate (WSR). In particular, we develop efficient algorithms to design weighted
minimum mean squared error (WMMSE) and zero-forcing (ZF)-type precoders with
controllable flatness features that maximize WSR. Numerical results demonstrate
that complete flat precoding approaches, where the power distribution is a
straight line, achieve the best trade-off between spectral efficiency and
energy efficiency for existing PA technologies. We also show that the proposed
ZF and WMMSE precoding methods can approach the performance of their
conventional counterparts with only the sum power constraint, while
significantly reducing PA size and power consumption.
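The three constraint sets described above can be written schematically; the notation below is ours, not the paper's. With precoder $\mathbf{W}$, total power budget $P$, and per-antenna lower/upper bounds $p_{\mathrm{lb}}, p_{\mathrm{ub}}$:

```latex
\max_{\mathbf{W}} \ \mathrm{WSR}(\mathbf{W})
\quad \text{s.t.} \quad
\operatorname{tr}\!\big(\mathbf{W}\mathbf{W}^{H}\big) \le P,
\qquad
p_{\mathrm{lb}} \le \big[\mathbf{W}\mathbf{W}^{H}\big]_{nn} \le p_{\mathrm{ub}}
\quad \forall n .
```

Tightening $p_{\mathrm{lb}}$ toward $p_{\mathrm{ub}}$ flattens the per-antenna power distribution; $p_{\mathrm{lb}} = p_{\mathrm{ub}}$ corresponds to the completely flat case where the distribution is a straight line.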
|
2502.12346
|
QuZO: Quantized Zeroth-Order Fine-Tuning for Large Language Models
|
cs.LG cs.AI
|
Large Language Models (LLMs) are often quantized to lower precision to reduce
memory cost and latency in inference. However, quantization often degrades
model performance, so fine-tuning is required for various downstream tasks.
Traditional fine-tuning methods such as stochastic gradient descent and Adam
optimization require backpropagation, which is error-prone in
low-precision settings. To overcome these limitations, we propose the Quantized
Zeroth-Order (QuZO) framework, specifically designed for fine-tuning LLMs
through low-precision (e.g., 4- or 8-bit) forward passes. Our method can avoid
the error-prone low-precision straight-through estimator, and utilizes
optimized stochastic rounding to mitigate the increased bias. QuZO simplifies
the training process, while achieving results comparable to first-order methods
in ${\rm FP}8$ and superior accuracy in ${\rm INT}8$ and ${\rm INT}4$ training.
Experiments demonstrate that low-bit QuZO training achieves performance
comparable to MeZO optimization on GLUE, Multi-Choice, and Generation tasks,
while reducing memory cost by $2.94 \times$ in LLaMA2-7B fine-tuning compared
to quantized first-order methods.
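The core of a zeroth-order method is estimating gradients from forward passes only. Below is a MeZO-style two-point estimator on a toy quadratic loss; QuZO's quantized forward passes and optimized stochastic rounding are omitted, and the loss and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy quadratic loss standing in for a (possibly low-precision) forward pass.
    return float(np.sum((theta - 1.0) ** 2))

theta = np.zeros(4)
eps, lr = 1e-3, 0.1
for _ in range(200):
    z = rng.normal(size=theta.shape)
    # Two forward passes at theta +/- eps*z; no backpropagation anywhere.
    g = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)
    theta = theta - lr * g * z
print(np.round(theta, 2))   # converges toward the minimizer at all-ones
```

Because only forward evaluations are needed, the whole update can run in low precision (e.g., INT8/INT4), which is what makes the zeroth-order route attractive for quantized fine-tuning.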
|
2502.12347
|
Improving Grip Stability Using Passive Compliant Microspine Arrays for
Soft Robots in Unstructured Terrain
|
cs.RO
|
Microspine grippers are small spines commonly found on insect legs that
reinforce surface interaction by engaging with asperities to increase shear
force and traction. An array of such microspines, when integrated into the
limbs or undercarriage of a robot, can provide the ability to maneuver uneven
terrains, traverse inclines, and even climb walls. The conformability and
adaptability of soft robots make them ideal candidates for these applications
involving traversal of complex, unstructured terrains. However, a real-world
realization gap remains for soft locomotors transitioning from the controlled
lab environment to the field, which can be narrowed by improving grip stability
through effective integration of microspines. We propose a passive, compliant
microspine stacked array design to enhance the locomotion capabilities of
mobile soft robots, in our case, ones that are motor tendon actuated. We offer
a standardized microspine array integration method with effective
soft-compliant stiffness integration, and reduced complexity resulting from a
single actuator passively controlling them. The presented design utilizes a
two-row, stacked microspine array configuration that offers additional gripping
capabilities on extremely steep/irregular surfaces from the top row while not
hindering the effectiveness of the more frequently active bottom row. We
explore different configurations of the microspine array to account for
changing surface topologies and enable independent, adaptable gripping of
asperities per microspine. Field test experiments are conducted on various
rough surfaces including concrete, brick, compact sand, and tree roots with
three robots consisting of a baseline without microspines compared against two
robots with different combinations of microspine arrays. Tracking results
indicate that the inclusion of microspine arrays increases planar displacement
on average by factors of 15 and 8, respectively.
|
2502.12350
|
Mamute: high-performance computing for geophysical methods
|
cs.CE
|
Due to their high computational cost, geophysical applications are typically
designed to run in large computing systems. Because of that, such applications
must implement several high-performance techniques to use the computational
resources better. In this paper, we present Mamute, a software that delivers
wave equation-based geophysical methods. Mamute implements two geophysical
methods: seismic modeling and full waveform inversion (FWI). It also supports
high-performance strategies such as fault tolerance, automatic parallel looping
scheduling, and distributed systems workload balancing. We demonstrate Mamute's
operation using both seismic modeling and FWI. Mamute is a C++ software readily
available under the MIT license.
|
2502.12352
|
Towards Mechanistic Interpretability of Graph Transformers via Attention
Graphs
|
cs.LG cs.AI
|
We introduce Attention Graphs, a new tool for mechanistic interpretability of
Graph Neural Networks (GNNs) and Graph Transformers based on the mathematical
equivalence between message passing in GNNs and the self-attention mechanism in
Transformers. Attention Graphs aggregate attention matrices across Transformer
layers and heads to describe how information flows among input nodes. Through
experiments on homophilous and heterophilous node classification tasks, we
analyze Attention Graphs from a network science perspective and find that: (1)
When Graph Transformers are allowed to learn the optimal graph structure using
all-to-all attention among input nodes, the Attention Graphs learned by the
model do not tend to correlate with the input/original graph structure; and (2)
For heterophilous graphs, different Graph Transformer variants can achieve
similar performance while utilising distinct information flow patterns. Open
source code: https://github.com/batu-el/understanding-inductive-biases-of-gnns
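One plausible way to aggregate attention matrices across layers and heads into a single information-flow graph is an attention-rollout-style product; the paper's exact aggregation may differ, and the sizes and softmax attention below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_heads, n_nodes = 2, 4, 5

# Per-layer, per-head attention matrices (softmax makes each row sum to 1).
logits = rng.normal(size=(n_layers, n_heads, n_nodes, n_nodes))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Average over heads within a layer, then multiply across layers so the
# result describes flow from input nodes to output nodes.
per_layer = attn.mean(axis=1)                  # (n_layers, n, n)
flow = per_layer[0]
for A in per_layer[1:]:
    flow = A @ flow
print(flow.shape)
```

Products of row-stochastic matrices stay row-stochastic, so each row of `flow` is a distribution over input nodes, which is what lets it be compared against the original graph structure.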
|
2502.12353
|
Stability-based Generalization Bounds for Variational Inference
|
cs.LG
|
Variational inference (VI) is widely used for approximate inference in
Bayesian machine learning. In addition to this practical success,
generalization bounds for variational inference and related algorithms have
been developed, mostly through the connection to PAC-Bayes analysis. A second
line of work has provided algorithm-specific generalization bounds through
stability arguments or using mutual information bounds, and has shown that the
bounds are tight in practice, but unfortunately these bounds do not directly
apply to approximate Bayesian algorithms. This paper fills this gap by
developing algorithm-specific stability based generalization bounds for a class
of approximate Bayesian algorithms that includes VI, specifically when using
stochastic gradient descent to optimize their objective. As in the non-Bayesian
case, the generalization error is bounded by expected parameter differences
on a perturbed dataset. The new approach complements PAC-Bayes analysis and can
provide tighter bounds in some cases. An experimental illustration shows that
the new approach yields non-vacuous bounds on modern neural network
architectures and datasets and that it can shed light on performance
differences between variants of approximate Bayesian algorithms.
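Schematically, a stability-based bound of the kind described above takes the following form; the notation is ours, assuming an $L$-Lipschitz loss, and the paper's precise statement and constants will differ. For datasets $S$ and $S^{(i)}$ differing in the $i$-th example, with learned parameters $\theta(\cdot)$:

```latex
\mathbb{E}\big[\mathrm{gen}(\theta(S))\big]
\;\le\;
L \cdot \frac{1}{n} \sum_{i=1}^{n}
\mathbb{E}\big\lVert \theta(S) - \theta(S^{(i)}) \big\rVert .
```

The right-hand side can be tracked empirically along the SGD trajectory, which is what makes such bounds computable and non-vacuous in practice.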
|
2502.12354
|
Human-centered explanation does not fit all: The interplay of
sociotechnical, cognitive, and individual factors in the effect of AI
explanations in algorithmic decision-making
|
cs.CY cs.AI cs.HC
|
Recent XAI studies have investigated what constitutes a \textit{good}
explanation in AI-assisted decision-making. Despite the widely accepted
human-friendly properties of explanations, such as contrastive and selective,
existing studies have yielded inconsistent findings. To address these gaps, our
study focuses on the cognitive dimensions of explanation evaluation, by
evaluating six explanations with different contrastive strategies and
information selectivity, and scrutinizing the factors behind their evaluation
process. Our analysis finds that contrastive explanations are not the most
preferred or understandable in general; rather, different contrastive and
selective explanations were appreciated to different extents depending on who
the explainees are and on when, how, and what is explained, with differing
levels of cognitive load, engagement, and sociotechnical context. Given these
findings, we call for a
nuanced view of explanation strategies, with implications for designing AI
interfaces to accommodate individual and contextual differences in AI-assisted
decision-making.
|
2502.12355
|
Hovering Flight of Soft-Actuated Insect-Scale Micro Aerial Vehicles
using Deep Reinforcement Learning
|
cs.RO cs.LG cs.SY eess.SY
|
Soft-actuated insect-scale micro aerial vehicles (IMAVs) pose unique
challenges for designing robust and computationally efficient controllers. At
the millimeter scale, fast robot dynamics ($\sim$ms), together with system
delay, model uncertainty, and external disturbances significantly affect flight
performance. Here, we design a deep reinforcement learning (RL) controller
that addresses system delay and uncertainties. To initialize this neural
network (NN) controller, we propose a modified behavior cloning (BC) approach
with state-action re-matching to account for delay and domain-randomized expert
demonstrations to tackle uncertainty. Then we apply proximal policy optimization
(PPO) to fine-tune the policy during RL, enhancing performance and smoothing
commands. In simulations, our modified BC substantially increases the mean
reward compared to baseline BC; and RL with PPO improves flight quality and
reduces command fluctuations. We deploy this controller on two different
insect-scale aerial robots that weigh 720 mg and 850 mg, respectively. The
robots demonstrate multiple successful zero-shot hovering flights, with the
longest lasting 50 seconds and root-mean-square errors of 1.34 cm in lateral
direction and 0.05 cm in altitude, marking the first end-to-end deep RL-based
flight on soft-driven IMAVs.
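One plausible reading of the state-action re-matching idea is to pair the state at time $t$ with the expert action that takes effect $d$ steps later, so the cloned policy compensates for actuation delay. The sketch below is our illustrative version, not the paper's implementation, and the dummy data and `rematch` helper are invented:

```python
import numpy as np

def rematch(states, actions, delay):
    # Pair states[t] with actions[t + delay]; the unmatched tail is dropped.
    return states[: len(states) - delay], actions[delay:]

states = np.arange(10)[:, None]      # dummy 1-D state trajectory
actions = np.arange(10)[:, None]     # dummy 1-D expert action trajectory
s, a = rematch(states, actions, delay=2)
print(s.shape, a.shape, int(a[0, 0]))  # first state now paired with action 2
```

Combined with domain-randomized expert demonstrations, the re-matched pairs give the behavior-cloned network an initialization that already accounts for delay before PPO fine-tuning.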
|
2502.12359
|
LanP: Rethinking the Impact of Language Priors in Large Vision-Language
Models
|
cs.CV
|
Large Vision-Language Models (LVLMs) have shown impressive performance in
various tasks. However, LVLMs suffer from hallucination, which hinders their
adoption in the real world. Existing studies emphasized that the strong
language priors of LVLMs can overpower visual information, causing
hallucinations. However, the positive role of language priors is the key to a
powerful LVLM. If the language priors are too weak, LVLMs will struggle to
leverage rich parameter knowledge and instruction understanding abilities to
complete tasks in challenging visual scenarios where visual information alone
is insufficient. Therefore, we propose a benchmark called LanP to rethink the
impact of Language Priors in LVLMs. It is designed to investigate how strong
language priors are in current LVLMs. LanP consists of 170 images and 340
corresponding well-designed questions. Extensive experiments on 25 popular
LVLMs reveal that many LVLMs' language priors are not strong enough to
effectively aid question answering when objects are partially hidden. Many
models, including GPT-4 Turbo, exhibit an accuracy below 0.5 in such a
scenario.
|
2502.12360
|
Detecting Systematic Weaknesses in Vision Models along Predefined
Human-Understandable Dimensions
|
cs.CV cs.AI cs.LG
|
Studying systematic weaknesses of DNNs has gained prominence in the last few
years with the rising focus on building safe AI systems. Slice discovery
methods (SDMs) are prominent algorithmic approaches for finding such systematic
weaknesses. They identify top-k semantically coherent slices/subsets of data
where a DNN-under-test has low performance. To be directly useful, e.g., as
evidence in a safety argumentation, slices should be aligned with
human-understandable (safety-relevant) dimensions, which, for example, are
defined by safety and domain experts as parts of the operational design domain
(ODD). While straightforward for structured data, the lack of semantic metadata
makes these investigations challenging for unstructured data. Therefore, we
propose a complete workflow which combines contemporary foundation models with
algorithms for combinatorial search that consider structured data and DNN
errors for finding systematic weaknesses in images. In contrast to existing
approaches, ours identifies weak slices that are in line with predefined
human-understandable dimensions. As the workflow includes foundation models,
its intermediate and final results may not always be exact. Therefore, we build
into our workflow an approach to address the impact of noisy metadata. We
evaluate our approach w.r.t. its quality on four popular computer vision
datasets, including autonomous driving datasets like Cityscapes, BDD100k, and
RailSem19, while using multiple state-of-the-art models as DNNs-under-test.
|
2502.12361
|
ConFit v2: Improving Resume-Job Matching using Hypothetical Resume
Embedding and Runner-Up Hard-Negative Mining
|
cs.CL
|
A reliable resume-job matching system helps a company recommend suitable
candidates from a pool of resumes and helps a job seeker find relevant jobs
from a list of job posts. However, since job seekers apply only to a few jobs,
interaction labels in resume-job datasets are sparse. We introduce ConFit v2,
an improvement over ConFit to tackle this sparsity problem. We propose two
techniques to enhance the encoder's contrastive training process: augmenting
job data with hypothetical reference resumes generated by a large language
model; and creating high-quality hard negatives from unlabeled resume/job pairs
using a novel hard-negative mining strategy. We evaluate ConFit v2 on two
real-world datasets and demonstrate that it outperforms ConFit and prior
methods (including BM25 and OpenAI text-embedding-003), achieving an average
absolute improvement of 13.8% in recall and 17.5% in nDCG across job-ranking
and resume-ranking tasks.
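A generic form of hard-negative mining from unlabeled pairs can be sketched as follows: score every resume-job pair with the current encoder and, for each resume, take the highest-scoring job that is not a known positive as a hard negative. The embeddings, the labeled set, and the per-resume top-1 rule are illustrative assumptions, not ConFit v2's exact mining strategy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_resumes, n_jobs, d = 6, 8, 16
R = rng.normal(size=(n_resumes, d))      # resume embeddings
J = rng.normal(size=(n_jobs, d))         # job embeddings

# Cosine similarity between every resume and every job.
Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
Jn = J / np.linalg.norm(J, axis=1, keepdims=True)
scores = Rn @ Jn.T                       # (n_resumes, n_jobs)

labeled = {(0, 0), (1, 3)}               # known positive pairs
hard_negs = []
for i in range(n_resumes):
    order = np.argsort(-scores[i])       # jobs ranked by similarity
    j = next(j for j in order if (i, int(j)) not in labeled)
    hard_negs.append((i, int(j)))        # most confusable non-positive job
print(len(hard_negs))
```

Such near-miss negatives give a stronger contrastive training signal than random negatives, which is the motivation for mining them from unlabeled pairs.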
|
2502.12362
|
Classifiers of Data Sharing Statements in Clinical Trial Records
|
cs.CL cs.AI
|
Digital individual participant data (IPD) from clinical trials are
increasingly distributed for potential scientific reuse. The identification of
available IPD, however, requires interpretations of textual data-sharing
statements (DSS) in large databases. Recent advancements in computational
linguistics include pre-trained language models that promise to simplify the
implementation of effective classifiers based on textual inputs. In a subset of
5,000 textual DSS from ClinicalTrials.gov, we evaluate how well classifiers
based on domain-specific pre-trained language models reproduce original
availability categories as well as manually annotated labels. Typical metrics
indicate that classifiers that predicted manual annotations outperformed those
that learned to output the original availability categories. This suggests that
the textual DSS descriptions contain applicable information that the
availability categories do not, and that such classifiers could thus aid the
automatic identification of available IPD in large trial databases.
|
2502.12365
|
On the Performance of Uplink Pinching Antenna Systems (PASS)
|
cs.IT math.IT
|
A pinching antenna (PA) is a flexible antenna composed of a waveguide and
multiple dielectric particles, which is capable of reconfiguring wireless
channels intelligently in line-of-sight links. By leveraging the unique
features of PAs, we exploit the uplink (UL) transmission in pinching antenna
systems (PASS). To comprehensively evaluate the performance gains of PASS in UL
transmissions, three scenarios are considered: multiple PAs for a single user
(MPSU), a single PA for a single user (SPSU), and a single PA for multiple
users (SPMU). The positions of the PAs are optimized to obtain the maximal
channel gains in each scenario. For the MPSU and SPSU scenarios, applying the
optimized PA positions, closed-form expressions for the analytical, asymptotic,
and approximated ergodic rates are derived. As a further advance, a closed-form
expression for the approximated ergodic rate is derived when a single PA is
fixed in the SPMU scenario. Our results demonstrate the following key
insights: i) The proposed PASS significantly outperforms conventional
Multiple-input Single-output networks by exploiting the flexibility of PAs; ii)
The PA distribution follows an asymmetric non-uniform distribution in the MPSU
scenario; iii) Optimizing PA positions significantly enhances the ergodic sum
rate performance.
|
2502.12366
|
ScriptoriumWS: A Code Generation Assistant for Weak Supervision
|
cs.LG
|
Weak supervision is a popular framework for overcoming the labeled data
bottleneck: the need to obtain labels for training data. In weak supervision,
multiple noisy-but-cheap sources are used to provide guesses of the label and
are aggregated to produce high-quality pseudolabels. These sources are often
expressed as small programs written by domain experts -- and so are expensive
to obtain. Instead, we argue for using code-generation models to act as coding
assistants for crafting weak supervision sources. We study prompting strategies
to maximize the quality of the generated sources, settling on a multi-tier
strategy that incorporates multiple types of information. We explore how to
best combine hand-written and generated sources. Using these insights, we
introduce ScriptoriumWS, a weak supervision system that, when compared to
hand-crafted sources, maintains accuracy and greatly improves coverage.
|
2502.12370
|
Positional Encoding in Transformer-Based Time Series Models: A Survey
|
cs.LG
|
Recent advancements in transformer-based models have greatly improved time
series analysis, providing robust solutions for tasks such as forecasting,
anomaly detection, and classification. A crucial element of these models is
positional encoding, which allows transformers to capture the intrinsic
sequential nature of time series data. This survey systematically examines
existing techniques for positional encoding in transformer-based time series
models. We investigate a variety of methods, including fixed, learnable,
relative, and hybrid approaches, and evaluate their effectiveness in different
time series classification tasks. Furthermore, we outline key challenges and
suggest potential research directions to enhance positional encoding
strategies. By delivering a comprehensive overview and quantitative
benchmarking, this survey intends to assist researchers and practitioners in
selecting and designing effective positional encoding methods for
transformer-based time series models.
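As a concrete instance of the "fixed" family of methods such a survey examines, here is a minimal NumPy sketch of the standard sinusoidal positional encoding; the example is illustrative and not drawn from the survey itself:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encoding: even dims get sin, odd dims get cos."""
    positions = np.arange(seq_len)[:, None]           # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]          # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# A 96-step series embedded in 64 dimensions.
pe = sinusoidal_positional_encoding(seq_len=96, d_model=64)
print(pe.shape)  # (96, 64)
```

Because the encoding is deterministic, it adds no parameters; learnable and relative variants trade this simplicity for task-specific flexibility.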
|
2502.12371
|
IMLE Policy: Fast and Sample Efficient Visuomotor Policy Learning via
Implicit Maximum Likelihood Estimation
|
cs.RO cs.AI cs.LG
|
Recent advances in imitation learning, particularly using generative
modelling techniques like diffusion, have enabled policies to capture complex
multi-modal action distributions. However, these methods often require large
datasets and multiple inference steps for action generation, posing challenges
in robotics where the cost of data collection is high and computation
resources are limited. To address this, we introduce IMLE Policy, a novel
behaviour cloning approach based on Implicit Maximum Likelihood Estimation
(IMLE). IMLE Policy excels in low-data regimes, effectively learning from
minimal demonstrations and requiring 38% less data on average to match the
performance of baseline methods in learning complex multi-modal behaviours. Its
simple generator-based architecture enables single-step action generation,
improving inference speed by 97.3% compared to Diffusion Policy, while
outperforming single-step Flow Matching. We validate our approach across
diverse manipulation tasks in simulated and real-world environments, showcasing
its ability to capture complex behaviours under data constraints. Videos and
code are provided on our project page: https://imle-policy.github.io/.
|
2502.12372
|
Factual Inconsistency in Data-to-Text Generation Scales Exponentially
with LLM Size: A Statistical Validation
|
cs.CL cs.AI cs.LG
|
Monitoring factual inconsistency is essential for ensuring trustworthiness in
data-to-text generation (D2T). While large language models (LLMs) have
demonstrated exceptional performance across various D2T tasks, previous studies
on scaling laws have primarily focused on generalization error through power
law scaling to LLM size (i.e., the number of model parameters). However, no
research has examined the impact of LLM size on factual inconsistency in D2T.
In this paper, we investigate how factual inconsistency in D2T scales with LLM
size by exploring two scaling laws: power law and exponential scaling. To
rigorously evaluate and compare these scaling laws, we employ a statistical
validation framework consisting of three key stages: predictive performance
estimation, goodness-of-fit assessment, and comparative analysis. For a
comprehensive empirical study, we analyze three popular LLM families across
five D2T datasets, measuring factual inconsistency inversely using four
state-of-the-art consistency metrics. Our findings, based on exhaustive
empirical results and validated through our framework, reveal that, contrary to
the widely assumed power law scaling, factual inconsistency in D2T follows an
exponential scaling with LLM size.
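The power-law versus exponential comparison can be illustrated with a toy goodness-of-fit check: both laws become linear under a log transform, so least-squares residuals can discriminate them. The data below are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical model sizes (billions of parameters) and a synthetic
# inconsistency score that decays exponentially with size.
sizes = np.array([0.5, 1.0, 3.0, 7.0, 13.0, 30.0, 70.0])
score = 0.8 * np.exp(-0.05 * sizes)

def residual(x, y):
    """Sum of squared residuals of a least-squares line fit y = m*x + c."""
    m, c = np.polyfit(x, y, 1)
    return float(np.sum((y - (m * x + c)) ** 2))

# Power law y = a * N^(-b) is linear in log-log space;
# exponential y = a * exp(-b*N) is linear in log-linear space.
power_res = residual(np.log(sizes), np.log(score))
exp_res = residual(sizes, np.log(score))

print(exp_res < power_res)  # True: the exponential law fits this data better
```

The paper's validation framework is more elaborate (predictive performance estimation, goodness-of-fit assessment, comparative analysis), but the same log-transform logic underlies the comparison.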
|
2502.12373
|
Soft Robotics for Search and Rescue: Advancements, Challenges, and
Future Directions
|
cs.RO cs.AI
|
Soft robotics has emerged as a transformative technology in Search and Rescue
(SAR) operations, addressing challenges in navigating complex, hazardous
environments that often limit traditional rigid robots. This paper critically
examines advancements in soft robotic technologies tailored for SAR
applications, focusing on their unique capabilities in adaptability, safety,
and efficiency. By leveraging bio-inspired designs, flexible materials, and
advanced locomotion mechanisms, such as crawling, rolling, and shape morphing,
soft robots demonstrate exceptional potential in disaster scenarios. However,
significant barriers persist, including material durability, power
inefficiency, sensor integration, and control complexity. This comprehensive
review highlights the current state of soft robotics in SAR, discusses
simulation methodologies and hardware validations, and introduces performance
metrics essential for their evaluation. By bridging the gap between theoretical
advancements and practical deployment, this study underscores the potential of
soft robotic systems to revolutionize SAR missions and advocates for continued
interdisciplinary innovation to overcome existing limitations.
|
2502.12375
|
UltraGen: Extremely Fine-grained Controllable Generation via Attribute
Reconstruction and Global Preference Optimization
|
cs.CL
|
Fine granularity is an essential requirement for controllable text
generation, which has grown rapidly with the capabilities of LLMs. However,
existing methods focus mainly on a small set of attributes (roughly 3 to 5),
and their performance degrades significantly when the number of attributes
increases to the next order of magnitude. To address this challenge, we propose
a novel zero-shot approach for extremely fine-grained controllable generation
(EFCG), comprising auto-reconstruction (AR) and global preference optimization
(GPO). In the AR phase, we leverage LLMs to extract soft attributes (e.g.,
Emphasis on simplicity and minimalism in design) from raw texts, and combine
them with programmatically derived hard attributes (e.g., The text should be
between 300 and 400 words) to construct massive (around 45) multi-attribute
requirements, which guide the fine-grained text reconstruction process under
weak supervision. In the GPO phase, we apply direct preference optimization
(DPO) to refine text generation under diverse attribute combinations, enabling
efficient exploration of the global combination space. Additionally, we
introduce an efficient attribute sampling strategy to identify and correct
potentially erroneous attributes, further improving global optimization. Our
framework significantly improves the constraint satisfaction rate (CSR) and
text quality for EFCG by mitigating position bias and alleviating attention
dilution.
|
2502.12377
|
Alignment and Adversarial Robustness: Are More Human-Like Models More
Secure?
|
cs.CV
|
Representational alignment refers to the extent to which a model's internal
representations mirror biological vision, offering insights into both neural
similarity and functional correspondence. Recently, some more aligned models
have demonstrated higher resiliency to adversarial examples, raising the
question of whether more human-aligned models are inherently more secure. In
this work, we conduct a large-scale empirical analysis to systematically
investigate the relationship between representational alignment and adversarial
robustness. We evaluate 118 models spanning diverse architectures and training
paradigms, measuring their neural and behavioral alignment and engineering task
performance across 106 benchmarks as well as their adversarial robustness via
AutoAttack. Our findings reveal that while average alignment and robustness
exhibit a weak overall correlation, specific alignment benchmarks serve as
strong predictors of adversarial robustness, particularly those that measure
selectivity towards texture or shape. These results suggest that different
forms of alignment play distinct roles in model robustness, motivating further
investigation into how alignment-driven approaches can be leveraged to build
more secure and perceptually-grounded vision models.
|
2502.12378
|
Pragmatics in the Era of Large Language Models: A Survey on Datasets,
Evaluation, Opportunities and Challenges
|
cs.CL
|
Understanding pragmatics, the use of language in context, is crucial for
developing NLP systems capable of interpreting nuanced language use. Despite
recent advances in language technologies, including large language models,
evaluating their ability to handle pragmatic phenomena such as implicatures and
references remains challenging. To advance pragmatic abilities in models, it is
essential to understand current evaluation trends and identify existing
limitations. In this survey, we provide a comprehensive review of resources
designed for evaluating pragmatic capabilities in NLP, categorizing datasets by
the pragmatics phenomena they address. We analyze task designs, data collection
methods, evaluation approaches, and their relevance to real-world applications.
By examining these resources in the context of modern language models, we
highlight emerging trends, challenges, and gaps in existing benchmarks. Our
survey aims to clarify the landscape of pragmatic evaluation and guide the
development of more comprehensive and targeted benchmarks, ultimately
contributing to more nuanced and context-aware NLP models.
|
2502.12379
|
OCT Data is All You Need: How Vision Transformers with and without
Pre-training Benefit Imaging
|
cs.CV cs.LG
|
Optical Coherence Tomography (OCT) provides high-resolution cross-sectional
images useful for diagnosing various diseases, but their distinct
characteristics from natural images raise questions about whether large-scale
pre-training on datasets like ImageNet is always beneficial. In this paper, we
investigate the impact of ImageNet-based pre-training on Vision Transformer
(ViT) performance for OCT image classification across different dataset sizes.
Our experiments cover four-category retinal pathologies (CNV, DME, Drusen,
Normal). Results suggest that while pre-training can accelerate convergence and
potentially offer better performance in smaller datasets, training from scratch
may achieve comparable or even superior accuracy when sufficient OCT data is
available. Our findings highlight the importance of matching domain
characteristics in pre-training and call for further study on large-scale
OCT-specific pre-training.
|
2502.12381
|
Linear Diffusion Networks: Harnessing Diffusion Processes for Global
Interactions
|
cs.LG
|
Diffusion kernels capture global dependencies. We present Linear Diffusion
Networks (LDNs), a novel architecture that reinterprets sequential data
processing as a unified diffusion process. Our model integrates adaptive
diffusion modules with localized nonlinear updates and a diffusion-inspired
attention mechanism. This design enables efficient global information
propagation while preserving fine-grained temporal details. LDN overcomes the
limitations of conventional recurrent and transformer models by allowing full
parallelization across time steps and supporting robust multi-scale temporal
representations. Experiments on benchmark sequence modeling tasks demonstrate
that LDN delivers superior performance and scalability, setting a new standard
for global interaction in sequential data.
|
2502.12382
|
Hybrid Machine Learning Models for Intrusion Detection in IoT:
Leveraging a Real-World IoT Dataset
|
cs.CR cs.AI
|
The rapid growth of the Internet of Things (IoT) has revolutionized
industries, enabling unprecedented connectivity and functionality. However,
this expansion also increases vulnerabilities, exposing IoT networks to
increasingly sophisticated cyberattacks. Intrusion Detection Systems (IDS) are
crucial for mitigating these threats, and recent advancements in Machine
Learning (ML) offer promising avenues for improvement. This research explores a
hybrid approach, combining several standalone ML models such as Random Forest
(RF), XGBoost, K-Nearest Neighbors (KNN), and AdaBoost, in a voting-based
hybrid classifier for effective IoT intrusion detection. This ensemble method
leverages the strengths of individual algorithms to enhance accuracy and
address challenges related to data complexity and scalability. Using the
widely-cited IoT-23 dataset, a prominent benchmark in IoT cybersecurity
research, we evaluate our hybrid classifiers for both binary and multi-class
intrusion detection problems, ensuring a fair comparison with existing
literature. Results demonstrate that our proposed hybrid models, designed for
robustness and scalability, outperform standalone approaches in IoT
environments. This work contributes to the development of advanced, intelligent
IDS frameworks capable of addressing evolving cyber threats.
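A voting-based hybrid of the named learners can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's pipeline: the data are synthetic rather than IoT-23, and GradientBoosting substitutes for XGBoost to avoid an extra dependency:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary "intrusion vs. benign" stand-in for IoT traffic features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("ada", AdaBoostClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

Soft voting lets confident models outweigh uncertain ones, which is one mechanism by which such ensembles can beat their standalone members.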
|
2502.12383
|
Locally-Deployed Chain-of-Thought (CoT) Reasoning Model in Chemical
Engineering: Starting from 30 Experimental Data
|
cs.LG stat.AP
|
In the field of chemical engineering, traditional data-processing and
prediction methods face significant challenges. Machine-learning and
large-language models (LLMs) also have their respective limitations. This paper
explores the application of the Chain-of-Thought (CoT) reasoning model in
chemical engineering, starting from 30 experimental data points. By integrating
traditional surrogate models like Gaussian processes and random forests with
powerful LLMs such as DeepSeek-R1, a hierarchical architecture is proposed. Two
CoT-building methods, Large Language Model-Chain of Thought (LLM-CoT) and
Machine Learning-Large Language Model-Chain of Thought (ML-LLM-CoT), are
studied. The LLM-CoT combines local models DeepSeek-r1:14b and Qwen2:7b with
Ollama. The ML-LLM-CoT integrates a pre-trained Gaussian ML model with the
LLM-based CoT framework. Our results show that during construction, ML-LLM-CoT
is more efficient: it has only 2 points that require rethinking, with 4
rethinking iterations in total, whereas LLM-CoT has 5 such points and 34
rethinking iterations. In predicting the solubility of 20 molecules with
dissimilar structures, the number of molecules with a prediction deviation
higher than 100% for the Gaussian model, LLM-CoT, and ML-LLM-CoT is 7, 6, and
4 respectively. These results indicate that ML-LLM-CoT performs better in
controlling the number of high-deviation molecules, optimizing the average
deviation, and achieving a higher success rate in solubility judgment,
providing a more reliable method for chemical engineering and molecular
property prediction. This study breaks through the limitations of traditional
methods and offers new solutions for rapid property prediction and process
optimization in chemical engineering.
|
2502.12384
|
Scalable Back-Propagation-Free Training of Optical Physics-Informed
Neural Networks
|
cs.LG
|
Physics-informed neural networks (PINNs) have shown promise in solving
partial differential equations (PDEs), with growing interest in their
energy-efficient, real-time training on edge devices. Photonic computing offers
a potential solution to achieve this goal because of its ultra-high operation
speed. However, the lack of photonic memory and the large device sizes prevent
training real-size PINNs on photonic chips. This paper proposes a completely
back-propagation-free (BP-free) and highly scalable framework for training
real-size PINNs on silicon photonic platforms. Our approach involves three key
innovations: (1) a sparse-grid Stein derivative estimator to avoid the BP in
the loss evaluation of a PINN, (2) a dimension-reduced zeroth-order
optimization via tensor-train decomposition to achieve better scalability and
convergence in BP-free training, and (3) a scalable on-chip photonic PINN
training accelerator design using photonic tensor cores. We validate our
numerical methods on both low- and high-dimensional PDE benchmarks. Through
circuit simulation based on real device parameters, we further demonstrate the
significant performance benefit (e.g., real-time training, huge chip area
reduction) of our photonic accelerator.
|
2502.12386
|
Bridging the Data Gap in AI Reliability Research and Establishing
DR-AIR, a Comprehensive Data Repository for AI Reliability
|
stat.AP cs.AI
|
Artificial intelligence (AI) technology and systems have been advancing
rapidly. However, ensuring the reliability of these systems is crucial for
fostering public confidence in their use. This necessitates the modeling and
analysis of reliability data specific to AI systems. A major challenge in AI
reliability research, particularly for those in academia, is the lack of
readily available AI reliability data. To address this gap, this paper focuses
on conducting a comprehensive review of available AI reliability data and
establishing DR-AIR: a data repository for AI reliability. Specifically, we
introduce key measurements and data types for assessing AI reliability, along
with the methodologies used to collect these data. We also provide a detailed
description of the currently available datasets with illustrative examples.
Furthermore, we outline the setup of the DR-AIR repository and demonstrate its
practical applications. This repository provides easy access to datasets
specifically curated for AI reliability research. We believe these efforts will
significantly benefit the AI research community by facilitating access to
valuable reliability data and promoting collaboration across various academic
domains within AI. We conclude our paper with a call to action, encouraging the
research community to contribute and share AI reliability data to further
advance this critical field of study.
|
2502.12388
|
Achieving Upper Bound Accuracy of Joint Training in Continual Learning
|
cs.LG
|
Continual learning has been an active research area in machine learning,
focusing on incrementally learning a sequence of tasks. A key challenge is
catastrophic forgetting (CF), and most research efforts have been directed
toward mitigating this issue. However, a significant gap remains between the
accuracy achieved by state-of-the-art continual learning algorithms and the
ideal or upper-bound accuracy achieved by training all tasks together jointly.
This gap has hindered or even prevented the adoption of continual learning in
applications, as accuracy is often of paramount importance. Recently, another
challenge, termed inter-task class separation (ICS), was also identified, which
spurred a theoretical study into principled approaches for solving continual
learning. Further research has shown that by leveraging the theory and the
power of large foundation models, it is now possible to achieve upper-bound
accuracy, which has been empirically validated using both text and image
classification datasets. Continual learning is now ready for real-life
applications. This paper surveys the main research leading to this achievement,
justifies the approach both intuitively and from neuroscience research, and
discusses insights gained.
|
2502.12391
|
Reward-Safety Balance in Offline Safe RL via Diffusion Regularization
|
cs.LG
|
Constrained reinforcement learning (RL) seeks high-performance policies under
safety constraints. We focus on an offline setting where the agent has only a
fixed dataset -- common in realistic tasks to prevent unsafe exploration. To
address this, we propose Diffusion-Regularized Constrained Offline
Reinforcement Learning (DRCORL), which first uses a diffusion model to capture
the behavioral policy from offline data and then extracts a simplified policy
to enable efficient inference. We further apply gradient manipulation for
safety adaptation, balancing the reward objective and constraint satisfaction.
This approach leverages high-quality offline data while incorporating safety
requirements. Empirical results show that DRCORL achieves reliable safety
performance, fast inference, and strong reward outcomes across robot learning
tasks. Compared to existing safe offline RL methods, it consistently meets cost
limits and performs well with the same hyperparameters, indicating practical
applicability in real-world scenarios.
|
2502.12393
|
Time Series Treatment Effects Analysis with Always-Missing Controls
|
stat.ME cs.AI cs.LG stat.ML
|
Estimating treatment effects in time series data presents a significant
challenge, especially when the control group is always unobservable. For
example, in analyzing the effects of Christmas on retail sales, we lack direct
observation of what would have occurred in late December without the Christmas
impact. To address this, we try to recover the control group in the event
period while accounting for confounders and temporal dependencies. Experimental
results on the M5 Walmart retail sales data demonstrate robust estimation of
the potential outcome of the control group as well as an accurate prediction of
the holiday effect. Furthermore, we provide theoretical guarantees for the
estimated treatment effect, proving its consistency and asymptotic normality.
The proposed methodology is applicable not only to this always-missing control
scenario but also in other conventional time series causal inference settings.
|
2502.12395
|
Efficient Neural SDE Training using Wiener-Space Cubature
|
cs.LG
|
A neural stochastic differential equation (SDE) is an SDE with drift and
diffusion terms parametrized by neural networks. The training procedure for
neural SDEs consists of optimizing the SDE vector field (neural network)
parameters to minimize the expected value of an objective functional on
infinite-dimensional path-space. Existing training techniques focus on methods
to efficiently compute path-wise gradients of the objective functional with
respect to these parameters, then pair this with Monte-Carlo simulation to
estimate the expectation, and stochastic gradient descent to optimize. In this
work we introduce a novel training technique which bypasses and improves upon
Monte-Carlo simulation; we extend results in the theory of Wiener-space
cubature to approximate the expected objective functional by a weighted sum of
deterministic ODE solutions. This allows us to compute gradients by efficient
ODE adjoint methods. Furthermore, we exploit a high-order recombination scheme
to drastically reduce the number of ODE solutions necessary to achieve a
reasonable approximation. We show that this Wiener-space cubature approach can
surpass the O(1/sqrt(n)) rate of Monte-Carlo simulation and the O(log(n)/n)
rate of quasi-Monte-Carlo, achieving an O(1/n) rate under reasonable
assumptions.
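The benefit of replacing Monte-Carlo sampling with a deterministic weighted sum can be illustrated in one dimension, where Gauss-Hermite quadrature plays the role of cubature; the toy setup below is ours, not the paper's:

```python
import numpy as np

# Estimate E[f(Z)] for Z ~ N(0,1) with f(x) = cos(x); exact value is exp(-1/2).
f = np.cos
exact = np.exp(-0.5)

# Monte-Carlo with n samples: error decays like O(1/sqrt(n)).
rng = np.random.default_rng(0)
mc = f(rng.standard_normal(10_000)).mean()

# Deterministic weighted sum (probabilists' Gauss-Hermite nodes):
# a handful of nodes already beats 10,000 random samples here.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
quad = (weights / weights.sum()) @ f(nodes)

print(abs(quad - exact) < abs(mc - exact))  # True
```

Wiener-space cubature generalizes this idea from a Gaussian integral to expectations over path space, with recombination keeping the number of deterministic ODE solves manageable.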
|
2502.12396
|
Scientific Machine Learning of Flow Resistance Using Universal Shallow
Water Equations with Differentiable Programming
|
physics.flu-dyn cs.CE cs.LG
|
Shallow water equations (SWEs) are the backbone of most hydrodynamics models
for flood prediction, river engineering, and many other water resources
applications. The estimation of flow resistance, i.e., the Manning's roughness
coefficient $n$, is crucial for ensuring model accuracy, and has been
previously determined using empirical formulas or tables. To better account for
temporal and spatial variability in channel roughness, inverse modeling of $n$
using observed flow data is more reliable and adaptable; however, it is
challenging when using traditional SWE solvers. Based on the concept of
universal differential equation (UDE), which combines physics-based
differential equations with neural networks (NNs), we developed a universal
SWEs (USWEs) solver, Hydrograd, for hybrid hydrodynamics modeling. It can perform
accurate forward simulations, support automatic differentiation (AD) for
gradient-based sensitivity analysis and parameter inversion, and perform
scientific machine learning for physics discovery. In this work, we first
validated the accuracy of its forward modeling, then applied a real-world case
to demonstrate the ability of USWEs to capture model sensitivity (gradients)
and perform inverse modeling of Manning's $n$. Furthermore, we used a NN to
learn a universal relationship between $n$, hydraulic parameters, and flow in a
real river channel. Unlike inverse modeling using surrogate models, Hydrograd
uses a two-dimensional SWEs solver as its physics backbone, which eliminates
the need for data-intensive pretraining and resolves the generalization problem
when applied to out-of-sample scenarios. This differentiable modeling approach,
with seamless integration with NNs, provides a new pathway for solving complex
inverse problems and discovering new physics in hydrodynamics.
|
2502.12397
|
Could AI Leapfrog the Web? Evidence from Teachers in Sierra Leone
|
cs.CY cs.AI cs.HC econ.GN q-fin.EC
|
Access to digital information is a driver of economic development. But
although 85% of sub-Saharan Africa's population is covered by mobile broadband
signal, only 37% use the internet, and those who do seldom use the web. We
investigate whether AI can bridge this gap by analyzing how 469 teachers use an
AI chatbot in Sierra Leone. The chatbot, accessible via a common messaging app,
is compared against traditional web search. Teachers use AI more frequently
than web search for teaching assistance. Data cost is the most frequently cited
reason for low internet usage across Africa. The average web search result
consumes 3,107 times more data than an AI response, making AI 87% less
expensive than web search. Additionally, only 2% of results for corresponding
web searches contain content from Sierra Leone. In blinded evaluations, an
independent sample of teachers rate AI responses as more relevant, helpful, and
correct than web search results. These findings suggest that AI-driven
solutions can cost-effectively bridge information gaps in low-connectivity
regions.
|
2502.12398
|
Solving the Cold Start Problem on One's Own as an End User via
Preference Transfer
|
cs.IR cs.AI cs.LG
|
We propose a new approach that enables end users to directly solve the cold
start problem by themselves. The cold start problem is a common issue in
recommender systems, and many methods have been proposed to address the problem
on the service provider's side. However, when the service provider does not
take action, users are left with poor recommendations and no means to improve
their experience. We propose an algorithm, Pretender, that allows end users to
proactively solve the cold start problem on their own. Pretender does not
require any special support from the service provider and can be deployed
independently by users. We formulate the problem as minimizing the distance
between the source and target distributions and optimize item selection from
the target service accordingly. Furthermore, we establish theoretical
guarantees for Pretender based on a discrete quadrature problem. We conduct
experiments on real-world datasets to demonstrate the effectiveness of
Pretender.
|
2502.12401
|
Risk Assessment of Transmission Lines Against Grid-ignited Wildfires
|
cs.CE
|
Wildfires ignited by power lines have become increasingly common over the
past decade. Enhancing the operational and financial resilience of power grids
against wildfires involves a multifaceted approach. Key proactive measures
include meticulous vegetation management, strategic grid hardening such as
infrastructure undergrounding, preemptive de-energization, and disaster risk
financing, among others. Each measure should be tailored to prioritize efforts
in mitigating the consequences of wildfires. This paper proposes a transmission
line risk assessment method for grid-ignited wildfires, identifying the
transmission lines that could potentially lead to damage to the natural and
built environment and to other transmission lines if they ignite a wildfire. Grid,
meteorological, and topological datasets are combined to enable a comprehensive
analysis. Numerical analysis on the standard IEEE 30-bus system demonstrates
the effectiveness of the proposed method.
|
2502.12403
|
Sensing-based Robustness Challenges in Agricultural Robotic Harvesting
|
cs.RO cs.SY eess.SY
|
This paper presents the challenges agricultural robotic harvesters face in
detecting and localising fruits under various environmental disturbances. In
controlled laboratory settings, both the traditional HSV (Hue Saturation Value)
transformation and the YOLOv8 (You Only Look Once) deep learning model were
employed. However, only YOLOv8 was utilised in outdoor experiments, as the HSV
transformation was not capable of accurately drawing fruit contours.
Experiments include ten distinct fruit patterns with six apples and six
oranges. A grid structure for homography (perspective) transformation was
employed to convert detected midpoints into 3D world coordinates. The
experiments evaluated detection and localisation under varying lighting and
background disturbances, revealing accurate performance indoors, but
significant challenges outdoors. Our results show that indoor experiments using
YOLOv8 achieved 100% detection accuracy, while outdoor conditions decreased
performance, with an average accuracy of 69.15% for YOLOv8 under direct
sunlight. The study demonstrates that real-world applications reveal
significant limitations due to changing lighting, background disturbances, and
colour and shape variability. These findings underscore the need for further
refinement of algorithms and sensors to enhance the robustness of robotic
harvesters for agricultural use.
|
2502.12404
|
WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages &
Dialects
|
cs.CL
|
As large language models (LLMs) become increasingly capable in languages
other than English, it is important to collect benchmark datasets in order to
evaluate their multilingual performance, including on tasks like machine
translation (MT). In this work, we extend the WMT24 dataset to cover 55
languages by collecting new human-written references and post-edits for 46 new
languages and dialects in addition to post-edits of the references in 8 out of
9 languages in the original WMT24 dataset. The dataset covers four domains:
literary, news, social, and speech. We benchmark a variety of MT providers and
LLMs on the collected dataset using automatic metrics and find that LLMs are
the best-performing MT systems in all 55 languages. These results should be
confirmed using a human-based evaluation, which we leave for future work.
|
2502.12405
|
An Investment Prioritization Model for Wildfire Risk Mitigation Through
Power Line Undergrounding
|
cs.CE
|
Grid-ignited wildfires are one of the most destructive catastrophic events,
profoundly affecting the built and natural environments. Burying power lines is
an effective solution for mitigating the risk of wildfire ignition. However, it
is a costly capital expenditure (CapEx) requiring meticulous planning and
investment prioritization. This paper proposes a systematic approach to
estimate the potential wildfire ignition damage associated with each
transmission line and accordingly offers a priority list for undergrounding.
The proposed approach allows electric utilities to make risk-informed decisions
for grid modernization and resiliency improvement against wildfires. As a case
study, we examine the likelihood of wildfire ignition for each line segment,
i.e., between two high-voltage towers, under diverse weather conditions
throughout the year. The studies on the standard IEEE 30-bus test system,
simulated on 43,712 scenarios, demonstrate the effectiveness of the proposed
approach.
|
2502.12406
|
Multi-vision-based Picking Point Localisation of Target Fruit for
Harvesting Robots
|
cs.RO cs.CV
|
This paper presents multi-vision-based localisation strategies for harvesting
robots. Identifying picking points accurately is essential for robotic
harvesting because insecure grasping can lead to economic loss through fruit
damage and dropping. In this study, two multi-vision-based localisation
methods, namely the analytical approach and model-based algorithms, were
employed. The actual geometric centre points of fruits were collected using a
motion capture system (mocap), and two different surface points Cfix and Ceih
were extracted using two Red-Green-Blue-Depth (RGB-D) cameras. First, the
picking points of the target fruit were detected using analytical methods.
Second, various primary and ensemble learning methods were employed to predict
the geometric centre of target fruits by taking surface points as input.
Adaboost regression, the most successful model-based localisation algorithm,
achieved 88.8% harvesting accuracy with a Mean Euclidean Distance (MED) of 4.40
mm, while the analytical approach reached 81.4% picking success with a MED of
14.25 mm, both demonstrating better performance than the single-camera setup, which
had a picking success rate of 77.7% with a MED of 24.02 mm. To evaluate the
effect of picking point accuracy in collecting fruits, a series of robotic
harvesting experiments were performed utilising a collaborative robot (cobot).
It is shown that multi-vision systems can improve picking point localisation,
resulting in higher success rates of picking in robotic harvesting.
|
2502.12408
|
On the Robust Approximation of ASR Metrics
|
cs.CL
|
Recent advances in speech foundation models are largely driven by scaling
both model size and data, enabling them to perform a wide range of tasks,
including speech recognition. Traditionally, ASR models are evaluated using
metrics like Word Error Rate (WER) and Character Error Rate (CER), which depend
on ground truth labels. As a result of limited labeled data from diverse
domains and testing conditions, the true generalization capabilities of these
models beyond standard benchmarks remain unclear. Moreover, labeling data is
both costly and time-consuming. To address this, we propose a novel label-free
approach for approximating ASR performance metrics, eliminating the need for
ground truth labels. Our method utilizes multimodal embeddings in a unified
space for speech and transcription representations, combined with a
high-quality proxy model to compute proxy metrics. These features are used to
train a regression model to predict key ASR metrics like Word Error Rate (WER)
and Character Error Rate (CER). We experiment with over 40 models across 14
datasets representing both standard and in-the-wild testing conditions. Our
results show that we approximate the metrics within a single-digit absolute
difference across all experimental configurations, outperforming the most
recent baseline by more than 50\%.
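For reference, WER is the word-level edit distance normalized by reference length (CER is the same at character level); a minimal sketch of the ground-truth metric, not the paper's label-free approximation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

The proposed method regresses this quantity from embeddings and a proxy model instead of computing it against labels.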
|
2502.12411
|
Gradient Co-occurrence Analysis for Detecting Unsafe Prompts in Large
Language Models
|
cs.CL cs.AI
|
Unsafe prompts pose significant safety risks to large language models (LLMs).
Existing methods for detecting unsafe prompts rely on data-driven fine-tuning
to train guardrail models, necessitating significant data and computational
resources. In contrast, recent few-shot gradient-based methods have emerged,
requiring only a few safe and unsafe reference prompts. A gradient-based approach
identifies unsafe prompts by analyzing consistent patterns of the gradients of
safety-critical parameters in LLMs. Although effective, its restriction to
directional similarity (cosine similarity) introduces ``directional bias'',
limiting its capability to identify unsafe prompts. To overcome this
limitation, we introduce GradCoo, a novel gradient co-occurrence analysis
method that expands the scope of safety-critical parameter identification to
include unsigned gradient similarity, thereby reducing the impact of
``directional bias'' and enhancing the accuracy of unsafe prompt detection.
Comprehensive experiments on the widely-used benchmark datasets ToxicChat and
XStest demonstrate that our proposed method can achieve state-of-the-art (SOTA)
performance compared to existing methods. Moreover, we confirm the
generalizability of GradCoo in detecting unsafe prompts across a range of LLM
base models with various sizes and origins.
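The ``directional bias'' the abstract refers to can be seen in a toy comparison of signed versus unsigned gradient similarity (an illustrative sketch, not the GradCoo implementation):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Directional (signed) similarity between two gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unsigned_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Compare gradient magnitudes only, discarding sign information."""
    return cosine_sim(np.abs(a), np.abs(b))

# A sign-flipped gradient is maximally dissimilar under cosine similarity
# but identical once signs are discarded.
g = np.array([1.0, -2.0, 3.0])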
|
2502.12412
|
Incomplete Graph Learning: A Comprehensive Survey
|
cs.LG eess.IV
|
Graph learning is a prevalent field that operates on ubiquitous graph data.
Effective graph learning methods can extract valuable information from graphs.
However, these methods are not robust to missing attributes in
graphs, which results in sub-optimal outcomes. This has led to the emergence of
incomplete graph learning, which aims to process and learn from incomplete
graphs to achieve more accurate and representative results. In this paper, we
conducted a comprehensive review of the literature on incomplete graph
learning. Initially, we categorize incomplete graphs and provide precise
definitions of relevant concepts, terminologies, and techniques, thereby
establishing a solid understanding for readers. Subsequently, we classify
incomplete graph learning methods according to the types of incompleteness: (1)
attribute-incomplete graph learning methods, (2) attribute-missing graph
learning methods, and (3) hybrid-absent graph learning methods. By
systematically classifying and summarizing incomplete graph learning methods,
we highlight the commonalities and differences among existing approaches,
aiding readers in selecting methods and laying the groundwork for further
advancements. In addition, we summarize the datasets, incomplete processing
modes, evaluation metrics, and application domains used by the current methods.
Lastly, we discuss the current challenges and propose future directions for
incomplete graph learning, with the aim of stimulating further innovations in
this crucial field. To our knowledge, this is the first review dedicated to
incomplete graph learning, aiming to offer valuable insights for researchers in
related fields. We developed an online resource to follow relevant research
based on this review, available at
https://github.com/cherry-a11y/Incomplete-graph-learning.git
|
2502.12413
|
DivIL: Unveiling and Addressing Over-Invariance for Out-of-Distribution
Generalization
|
cs.LG
|
Out-of-distribution generalization is a common problem that requires a model
to perform well on distributions that differ, sometimes substantially, from the
training data. A popular approach to addressing this issue is invariant
learning (IL), in which the model is compelled to focus on invariant features
instead of spurious features by adding strong constraints during training.
However, there are some
potential pitfalls of strong invariant constraints. Due to the limited number
of diverse environments and over-regularization in the feature space, it may
lead to a loss of important details in the invariant features while alleviating
the spurious correlations, namely the over-invariance, which can also degrade
the generalization performance. We theoretically define the over-invariance and
observe that this issue occurs in various classic IL methods. To alleviate this
issue, we propose a simple approach Diverse Invariant Learning (DivIL) by
adding the unsupervised contrastive learning and the random masking mechanism
compensatory for the invariant constraints, which can be applied to various IL
methods. Furthermore, we conduct experiments on multiple modalities across
12 datasets and 6 classic models, verifying our over-invariance insight and the
effectiveness of our DivIL framework. Our code is available at
https://github.com/kokolerk/DivIL.
|
2502.12414
|
Lost in Transcription, Found in Distribution Shift: Demystifying
Hallucination in Speech Foundation Models
|
cs.CL
|
Speech foundation models trained at a massive scale, both in terms of model
and data size, result in robust systems capable of performing multiple speech
tasks, including automatic speech recognition (ASR). These models transcend
language and domain barriers, yet effectively measuring their performance
remains a challenge. Traditional metrics like word error rate (WER) and
character error rate (CER) are commonly used to evaluate ASR performance but
often fail to reflect transcription quality in critical contexts, particularly
when detecting fabricated outputs. This phenomenon, known as hallucination, is
especially concerning in high-stakes domains such as healthcare, legal, and
aviation, where errors can have severe consequences. In our work, we address
this gap by investigating hallucination in ASR models. We examine how factors
such as distribution shifts, model size, and model architecture influence the
hallucination error rate (HER), a metric we introduce to quantify
hallucinations. Our analysis of 20 ASR models reveals three key
insights: (1) High WERs can mask low hallucination rates, while low WERs may
conceal dangerous hallucinations. (2) Synthetic noise, both adversarial and
common perturbations like white noise, pitch shift, and time stretching,
increases HER. (3) Distribution shift correlates strongly with HER ($\alpha =
0.91$). Our findings highlight the importance of incorporating HER alongside
traditional metrics like WER to better assess ASR model performance,
particularly in high-stakes domains.
|
2502.12415
|
Gaseous Object Detection
|
cs.CV
|
Object detection, a fundamental and challenging problem in computer vision,
has experienced rapid development due to the effectiveness of deep learning.
The current objects to be detected are mostly rigid solid substances with
apparent and distinct visual characteristics. In this paper, we take on a
scarcely explored task named Gaseous Object Detection (GOD), which is
undertaken to explore whether the object detection techniques can be extended
from solid substances to gaseous substances. Nevertheless, the gas exhibits
significantly different visual characteristics: 1) saliency deficiency, 2)
arbitrary and ever-changing shapes, 3) lack of distinct boundaries. To
facilitate the study on this challenging task, we construct a GOD-Video dataset
comprising 600 videos (141,017 frames) that cover various attributes with
multiple types of gases. A comprehensive benchmark is established based on this
dataset, allowing for a rigorous evaluation of frame-level and video-level
detectors. Deduced from the Gaussian dispersion model, the physics-inspired
Voxel Shift Field (VSF) is designed to model geometric irregularities and
ever-changing shapes in potential 3D space. By integrating VSF into Faster
RCNN, the VSF RCNN serves as a simple but strong baseline for gaseous object
detection. Our work aims to attract further research into this valuable albeit
challenging area.
|
2502.12418
|
Boosting Illuminant Estimation in Deep Color Constancy through Enhancing
Brightness Robustness
|
cs.CV cs.AI
|
Color constancy estimates illuminant chromaticity to correct color-biased
images. Recently, Deep Neural Network-driven Color Constancy (DNNCC) models
have made substantial advancements. Nevertheless, the potential risks in DNNCC
due to the vulnerability of deep neural networks have not yet been explored. In
this paper, we conduct the first investigation into the impact of brightness, a
key factor in color constancy, on DNNCC models from a robustness perspective. Our
evaluation reveals that several mainstream DNNCC models exhibit high
sensitivity to brightness despite their focus on chromaticity estimation. This
sheds light on a potential limitation of existing DNNCC models: their
sensitivity to brightness may hinder performance given the widespread
brightness variations in real-world datasets. From the insights of our
analysis, we propose a simple yet effective brightness robustness enhancement
strategy for DNNCC models, termed BRE. The core of BRE is built upon the
adaptive step-size adversarial brightness augmentation technique, which
identifies high-risk brightness variation and generates augmented images via
explicit brightness adjustment. Subsequently, BRE develops a
brightness-robustness-aware model optimization strategy that integrates
adversarial brightness training and brightness contrastive loss, significantly
bolstering the brightness robustness of DNNCC models. BRE is
hyperparameter-free and can be integrated into existing DNNCC models, without
incurring additional overhead during the testing phase. Experiments on two
public color constancy datasets, ColorChecker and Cube+, demonstrate that the
proposed BRE consistently enhances the illuminant estimation performance of
existing DNNCC models, reducing the estimation error by an average of 5.04%
across six mainstream DNNCC models, underscoring the critical role of enhancing
brightness robustness in these models.
|
2502.12420
|
Sens-Merging: Sensitivity-Guided Parameter Balancing for Merging Large
Language Models
|
cs.CL cs.AI
|
Recent advances in large language models have led to numerous
task-specialized fine-tuned variants, creating a need for efficient model
merging techniques that preserve specialized capabilities while avoiding costly
retraining. While existing task vector-based merging methods show promise, they
typically apply uniform coefficients across all parameters, overlooking varying
parameter importance both within and across tasks. We present Sens-Merging, a
sensitivity-guided coefficient adjustment method that enhances existing model
merging techniques by operating at both task-specific and cross-task levels.
Our method analyzes parameter sensitivity within individual tasks and evaluates
cross-task transferability to determine optimal merging coefficients. Extensive
experiments on Mistral 7B and LLaMA2-7B/13B models demonstrate that
Sens-Merging significantly improves performance across general knowledge,
mathematical reasoning, and code generation tasks. Notably, when combined with
existing merging techniques, our method enables merged models to outperform
specialized fine-tuned models, particularly in code generation tasks. Our
findings reveal important trade-offs between task-specific and cross-task
scalings, providing insights for future model merging strategies.
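Task-vector merging, which Sens-Merging builds on, can be sketched as follows (illustrative only; uniform per-task coefficients stand in for the sensitivity-guided, per-parameter coefficients the paper proposes, and floats stand in for weight tensors):

```python
def merge_task_vectors(base, finetuned_models, coeffs):
    """Merge fine-tuned variants as base + sum_t c_t * (theta_t - base).

    base / finetuned_models[t]: dicts mapping parameter name -> float;
    coeffs: one scalar merging coefficient per task.
    """
    merged = dict(base)
    for theta, c in zip(finetuned_models, coeffs):
        for name in merged:
            merged[name] += c * (theta[name] - base[name])
    return merged

# Two hypothetical fine-tuned variants of a one-parameter "model".
result = merge_task_vectors({"w": 1.0}, [{"w": 3.0}, {"w": 0.0}], [0.5, 0.5])
```

Sens-Merging replaces the fixed `coeffs` with values derived from per-parameter sensitivity within each task and cross-task transferability.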
|
2502.12421
|
Wi-Chat: Large Language Model Powered Wi-Fi Sensing
|
cs.CL
|
Recent advancements in Large Language Models (LLMs) have demonstrated
remarkable capabilities across diverse tasks. However, their potential to
integrate physical model knowledge for real-world signal interpretation remains
largely unexplored. In this work, we introduce Wi-Chat, the first LLM-powered
Wi-Fi-based human activity recognition system. We demonstrate that LLMs can
process raw Wi-Fi signals and infer human activities by incorporating Wi-Fi
sensing principles into prompts. Our approach leverages physical model insights
to guide LLMs in interpreting Channel State Information (CSI) data without
traditional signal processing techniques. Through experiments on real-world
Wi-Fi datasets, we show that LLMs exhibit strong reasoning capabilities,
achieving zero-shot activity recognition. These findings highlight a new
paradigm for Wi-Fi sensing, expanding LLM applications beyond conventional
language tasks and enhancing the accessibility of wireless sensing for
real-world deployments.
|
2502.12425
|
Robust Disentangled Counterfactual Learning for Physical Audiovisual
Commonsense Reasoning
|
cs.CV
|
In this paper, we propose a new Robust Disentangled Counterfactual Learning
(RDCL) approach for physical audiovisual commonsense reasoning. The task aims
to infer objects' physics commonsense based on both video and audio input, with
the main challenge being how to imitate the reasoning ability of humans, even
under the scenario of missing modalities. Most current methods fail to take
full advantage of the different characteristics of multi-modal data, and their
lack of causal reasoning ability impedes the inference of implicit physical
knowledge. To address these issues, our proposed RDCL method
decouples videos into static (time-invariant) and dynamic (time-varying)
factors in the latent space by the disentangled sequential encoder, which
adopts a variational autoencoder (VAE) to maximize the mutual information with
a contrastive loss function. Furthermore, we introduce a counterfactual
learning module to augment the model's reasoning ability by modeling physical
knowledge relationships among different objects under counterfactual
intervention. To alleviate the incomplete modality data issue, we introduce a
robust multimodal learning method to recover the missing data by decomposing
the shared features and model-specific features. Our proposed method is a
plug-and-play module that can be incorporated into any baseline including VLMs.
In experiments, we show that our proposed method improves the reasoning
accuracy and robustness of baseline methods and achieves the state-of-the-art
performance.
|
2502.12427
|
Multi Image Super Resolution Modeling for Earth System Models
|
cs.CV
|
Super-resolution (SR) techniques are essential for improving Earth System
Model (ESM) data's spatial resolution, which helps better understand complex
environmental processes. This paper presents a new algorithm, ViFOR, which
combines Vision Transformers (ViT) and Implicit Neural Representation Networks
(INRs) to generate High-Resolution (HR) images from Low-Resolution (LR) inputs.
ViFOR introduces a novel integration of Fourier-based activation functions
within the Vision Transformer architecture, enabling it to effectively capture
global context and high-frequency details critical for accurate SR
reconstruction. The results show that ViFOR outperforms state-of-the-art
methods such as ViT, Sinusoidal Representation Networks (SIREN), and SR
Generative Adversarial Networks (SRGANs) based on metrics like Peak
Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) for both global and
local imagery. ViFOR improves PSNR by up to 4.18 dB, 1.56 dB, and 1.73 dB over
ViT for full images in the Source Temperature, Shortwave, and Longwave Flux
fields, respectively.
|
2502.12430
|
Bridge the Gaps between Machine Unlearning and AI Regulation
|
cs.LG cs.AI
|
The "right to be forgotten" and the data privacy laws that encode it have
motivated machine unlearning since its earliest days. Now, an inbound wave of
artificial intelligence regulations - like the European Union's Artificial
Intelligence Act (AIA) - potentially offer important new use cases for machine
unlearning. However, this position paper argues, this opportunity will only be
realized if researchers, aided by policymakers, proactively bridge the
(sometimes sizable) gaps between machine unlearning's state of the art and its
potential applications to AI regulation. To demonstrate this point, we use the
AIA as an example. Specifically, we deliver a "state of the union" as regards
machine unlearning's current potential for aiding compliance with the AIA. This
starts with a precise cataloging of the potential applications of machine
unlearning to AIA compliance. For each, we flag any legal ambiguities clouding
the potential application and, moreover, flag the technical gaps that exist
between the potential application and the state of the art of machine
unlearning. Finally, we end with a call to action: for both machine learning
researchers and policymakers, to, respectively, solve the open technical and
legal questions that will unlock machine unlearning's potential to assist
compliance with the AIA - and other AI regulation like it.
|
2502.12435
|
A Survey on Large Language Models for Automated Planning
|
cs.AI cs.CL
|
The planning ability of Large Language Models (LLMs) has garnered increasing
attention in recent years due to their remarkable capacity for multi-step
reasoning and their ability to generalize across a wide range of domains. While
some researchers emphasize the potential of LLMs to perform complex planning
tasks, others highlight significant limitations in their performance,
particularly when these models are tasked with handling the intricacies of
long-horizon reasoning. In this survey, we critically investigate existing
research on the use of LLMs in automated planning, examining both their
successes and shortcomings in detail. We illustrate that although LLMs are not
well-suited to serve as standalone planners because of these limitations, they
nonetheless present an enormous opportunity to enhance planning applications
when combined with other approaches. Thus, we advocate for a balanced
methodology that leverages the inherent flexibility and generalized knowledge
of LLMs alongside the rigor and cost-effectiveness of traditional planning
methods.
|
2502.12436
|
Should I Trust You? Detecting Deception in Negotiations using
Counterfactual RL
|
cs.CL
|
An increasingly prevalent socio-technical problem is people being taken in by
offers that sound ``too good to be true'', where persuasion and trust shape
decision-making. This paper investigates how \abr{ai} can help detect these
deceptive scenarios. We analyze how humans strategically deceive each other in
\textit{Diplomacy}, a board game that requires both natural language
communication and strategic reasoning. This requires extracting logical forms
of proposed agreements in player communications and computing the relative
rewards of the proposal using agents' value functions. Combined with text-based
features, this can improve our deception detection. Our method detects human
deception with high precision compared to a Large Language Model
approach that flags many true messages as deceptive. Future human-\abr{ai}
interaction tools can build on our methods for deception detection by
triggering \textit{friction} to give users a chance to interrogate suspicious
proposals.
|
2502.12442
|
HopRAG: Multi-Hop Reasoning for Logic-Aware Retrieval-Augmented
Generation
|
cs.IR cs.CL
|
Retrieval-Augmented Generation (RAG) systems often struggle with imperfect
retrieval, as traditional retrievers focus on lexical or semantic similarity
rather than logical relevance. To address this, we propose HopRAG, a novel RAG
framework that augments retrieval with logical reasoning through
graph-structured knowledge exploration. During indexing, HopRAG constructs a
passage graph, with text chunks as vertices and logical connections established
via LLM-generated pseudo-queries as edges. During retrieval, it employs a
retrieve-reason-prune mechanism: starting with lexically or semantically
similar passages, the system explores multi-hop neighbors guided by
pseudo-queries and LLM reasoning to identify truly relevant ones. Extensive
experiments demonstrate HopRAG's superiority, achieving 76.78\% higher answer
accuracy and 65.07\% improved retrieval F1 score compared to conventional
methods. The repository is available at https://github.com/LIU-Hao-2002/HopRAG.
|
2502.12444
|
SparAMX: Accelerating Compressed LLMs Token Generation on AMX-powered
CPUs
|
cs.LG cs.AI cs.AR cs.PF
|
Large language models have high compute, latency, and memory requirements.
While specialized accelerators such as GPUs and TPUs typically run these
workloads, CPUs are more widely available and consume less energy. Accelerating
LLMs with CPUs enables broader AI access at a lower cost and power consumption.
This acceleration potential for CPUs is especially relevant during the
memory-bound decoding stage of LLM inference, which processes one token at a
time and is becoming increasingly utilized with reasoning models. We utilize
Advanced Matrix Extensions (AMX) support on the latest Intel CPUs together with
unstructured sparsity to achieve a $1.42 \times$ reduction in end-to-end
latency compared to the current PyTorch implementation by applying our
technique in linear layers. We provide a set of open-source customized sparse
kernels that can speed up any PyTorch model by automatically replacing all
linear layers with our custom sparse implementation. Furthermore, we
demonstrate for the first time the use of unstructured sparsity in the
attention computation achieving a $1.14 \times$ speedup over the current
systems without compromising accuracy. Code:
https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SparAMX
|
2502.12445
|
Computational Safety for Generative AI: A Signal Processing Perspective
|
cs.AI cs.LG stat.ML
|
AI safety is a rapidly growing area of research that seeks to prevent the
harm and misuse of frontier AI technology, particularly with respect to
generative AI (GenAI) tools that are capable of creating realistic and
high-quality content through text prompts. Examples of such tools include large
language models (LLMs) and text-to-image (T2I) diffusion models. As the
performance of various leading GenAI models approaches saturation due to
similar training data sources and neural network architecture designs, the
development of reliable safety guardrails has become a key differentiator for
responsibility and sustainability. This paper presents a formalization of the
concept of computational safety, which is a mathematical framework that enables
the quantitative assessment, formulation, and study of safety challenges in
GenAI through the lens of signal processing theory and methods. In particular,
we explore two exemplary categories of computational safety challenges in GenAI
that can be formulated as hypothesis testing problems. For the safety of model
input, we show how sensitivity analysis and loss landscape analysis can be used
to detect malicious prompts with jailbreak attempts. For the safety of model
output, we elucidate how statistical signal processing and adversarial learning
can be used to detect AI-generated content. Finally, we discuss key open
research challenges, opportunities, and the essential role of signal processing
in computational AI safety.
|
2502.12446
|
Multi-Attribute Steering of Language Models via Targeted Intervention
|
cs.CL cs.AI cs.LG
|
Inference-time intervention (ITI) has emerged as a promising method for
steering large language model (LLM) behavior in a particular direction (e.g.,
improving helpfulness) by intervening on token representations without costly
updates to the LLM's parameters. However, existing ITI approaches fail to scale
to multi-attribute settings with conflicts, such as enhancing helpfulness while
also reducing toxicity. To address this, we introduce Multi-Attribute Targeted
Steering (MAT-Steer), a novel steering framework designed for selective
token-level intervention across multiple attributes. MAT-Steer learns steering
vectors using an alignment objective that shifts the model's internal
representations of undesirable outputs closer to those of desirable ones while
enforcing sparsity and orthogonality among vectors for different attributes,
thereby reducing inter-attribute conflicts. We evaluate MAT-Steer in two
distinct settings: (i) on question answering (QA) tasks where we balance
attributes like truthfulness, bias, and toxicity; (ii) on generative tasks
where we simultaneously improve attributes like helpfulness, correctness, and
coherence. MAT-Steer outperforms existing ITI and parameter-efficient
finetuning approaches across both task types (e.g., 3% average accuracy gain
across QA tasks and 55.82% win rate against the best ITI baseline).
|
2502.12448
|
From Principles to Applications: A Comprehensive Survey of Discrete
Tokenizers in Generation, Comprehension, Recommendation, and Information
Retrieval
|
cs.IR
|
Discrete tokenizers have emerged as indispensable components in modern
machine learning systems, particularly within the context of autoregressive
modeling and large language models (LLMs). These tokenizers serve as the
critical interface that transforms raw, unstructured data from diverse
modalities into discrete tokens, enabling LLMs to operate effectively across a
wide range of tasks. Despite their central role in generation, comprehension,
and recommendation systems, a comprehensive survey dedicated to discrete
tokenizers remains conspicuously absent in the literature. This paper addresses
this gap by providing a systematic review of the design principles,
applications, and challenges of discrete tokenizers. We begin by dissecting the
sub-modules of tokenizers and systematically demonstrate their internal
mechanisms to provide a comprehensive understanding of their functionality and
design. Building on this foundation, we synthesize state-of-the-art methods,
categorizing them into multimodal generation and comprehension tasks, and
semantic tokens for personalized recommendations. Furthermore, we critically
analyze the limitations of existing tokenizers and outline promising directions
for future research. By presenting a unified framework for understanding
discrete tokenizers, this survey aims to guide researchers and practitioners in
addressing open challenges and advancing the field, ultimately contributing to
the development of more robust and versatile AI systems.
|
2502.12449
|
YUNet: Improved YOLOv11 Network for Skyline Detection
|
cs.CV
|
Skyline detection plays an important role in geolocalization, flight control,
visual navigation, port security, etc. The appearance of sky and non-sky areas
varies with weather and illumination, which makes skyline detection
challenging. In this research, we propose the YUNet algorithm, which improves
the YOLOv11 architecture to segment the sky region and extract the skyline
under complicated and variable circumstances. To improve multi-scale and
long-range contextual feature fusion, the YOLOv11 architecture is extended into
a UNet-like architecture consisting of encoder, neck, and decoder submodules.
The encoder extracts multi-scale features from the given images, the neck fuses
these features, and the decoder uses the fused features to produce the final
prediction. To validate the proposed approach, YUNet was tested on the
Skyfinder and CH1 datasets for segmentation and skyline detection,
respectively. Our tests show that the IoU of YUNet segmentation reaches 0.9858,
and the average error of YUNet skyline detection is just 1.36 pixels. The
implementation is published at
https://github.com/kuazhangxiaoai/SkylineDet-YOLOv11Seg.git.
|
2502.12450
|
Investigating and Extending Homans' Social Exchange Theory with Large
Language Model based Agents
|
cs.AI
|
Homans' Social Exchange Theory (SET) is widely recognized as a basic
framework for understanding the formation and emergence of human civilizations
and social structures. In social science, this theory is typically studied
based on simple simulation experiments or real-world human studies, both of
which either lack realism or are too expensive to control. In artificial
intelligence, recent advances in large language models (LLMs) have shown
promising capabilities in simulating human behaviors. Inspired by these
insights, we adopt an interdisciplinary research perspective and propose using
LLM-based agents to study Homans' SET. Specifically, we construct a virtual
society composed of three LLM agents and have them engage in a social exchange
game to observe their behaviors. Through extensive experiments, we found that
Homans' SET is well validated in our agent society, demonstrating the
consistency between the agent and human behaviors. Building on this foundation,
we intentionally alter the settings of the agent society to extend the
traditional Homans' SET, making it more comprehensive and detailed. To the best
of our knowledge, this paper marks the first step in studying Homans' SET with
LLM-based agents. More importantly, it introduces a novel and feasible research
paradigm that bridges the fields of social science and computer science through
LLM-based agents. Code is available at https://github.com/Paitesanshi/SET.
|
2502.12453
|
UniMatch: Universal Matching from Atom to Task for Few-Shot Drug
Discovery
|
cs.LG cs.AI q-bio.BM
|
Drug discovery is crucial for identifying candidate drugs for various
diseases. However, its low success rate often results in a scarcity of
annotations, posing a few-shot learning problem. Existing methods primarily
focus on single-scale features, overlooking the hierarchical molecular
structures that determine different molecular properties. To address these
issues, we introduce Universal Matching Networks (UniMatch), a dual matching
framework that integrates explicit hierarchical molecular matching with
implicit task-level matching via meta-learning, bridging multi-level molecular
representations and task-level generalization. Specifically, our approach
explicitly captures structural features across multiple levels, such as atoms,
substructures, and molecules, via hierarchical pooling and matching,
facilitating precise molecular representation and comparison. Additionally, we
employ a meta-learning strategy for implicit task-level matching, allowing the
model to capture shared patterns across tasks and quickly adapt to new ones.
This unified matching framework ensures effective molecular alignment while
leveraging shared meta-knowledge for fast adaptation. Our experimental results
demonstrate that UniMatch outperforms state-of-the-art methods on the
MoleculeNet and FS-Mol benchmarks, achieving improvements of 2.87% in AUROC and
6.52% in delta AUPRC. UniMatch also shows excellent generalization ability on
the Meta-MolNet benchmark.
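The hierarchical matching above pools atom-level features up to substructure and molecule levels before comparison. A minimal sketch of that multi-level pooling, where the grouping and mean-pooling choices are illustrative assumptions rather than the authors' exact operators:

```python
import numpy as np

def hierarchical_pool(atom_feats, substructure_index):
    """Mean-pool atom features into substructure vectors, then into a
    molecule vector, mirroring the atom/substructure/molecule levels
    matched across tasks (a sketch, not the paper's exact operators)."""
    n_sub = max(substructure_index) + 1
    sub = np.zeros((n_sub, atom_feats.shape[1]))
    counts = np.zeros(n_sub)
    for feat, s in zip(atom_feats, substructure_index):
        sub[s] += feat
        counts[s] += 1
    sub /= counts[:, None]   # mean over atoms in each substructure
    mol = sub.mean(axis=0)   # mean over substructures
    return sub, mol

# Four atoms grouped into two substructures.
atoms = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 4.0]])
sub, mol = hierarchical_pool(atoms, [0, 0, 1, 1])
```

Each level's representations can then be matched against a support set independently, giving the explicit multi-scale comparison the method relies on.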
|
2502.12454
|
Benchmarking Zero-Shot Facial Emotion Annotation with Large Language
Models: A Multi-Class and Multi-Frame Approach in DailyLife
|
cs.CV cs.AI cs.LG
|
This study investigates the feasibility and performance of using large
language models (LLMs) to automatically annotate human emotions in everyday
scenarios. We conducted experiments on the DailyLife subset of the publicly
available FERV39k dataset, employing the GPT-4o-mini model for rapid, zero-shot
labeling of key frames extracted from video segments. Under a seven-class
emotion taxonomy ("Angry," "Disgust," "Fear," "Happy," "Neutral," "Sad,"
"Surprise"), the LLM achieved an average precision of approximately 50%. In
contrast, when limited to ternary emotion classification
(negative/neutral/positive), the average precision increased to approximately
64%. Additionally, we explored a strategy that integrates multiple frames
within 1-2 second video clips to enhance labeling performance and reduce costs.
The results indicate that this approach can slightly improve annotation
accuracy. Overall, our preliminary findings highlight the potential application
of zero-shot LLMs in human facial emotion annotation tasks, offering new
avenues for reducing labeling costs and broadening the applicability of LLMs in
complex multimodal environments.
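The multi-frame strategy above combines per-frame labels from a short clip into one clip-level label. A minimal sketch of one such integration rule, majority voting (an assumption for illustration; the paper's exact rule may differ):

```python
from collections import Counter

def aggregate_frame_labels(frame_labels):
    """Fuse per-frame emotion labels from a 1-2 second clip into a single
    clip-level label by majority vote (an illustrative integration rule)."""
    return Counter(frame_labels).most_common(1)[0][0]

clip_label = aggregate_frame_labels(["Happy", "Happy", "Neutral"])
```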
|
2502.12455
|
DSMoE: Matrix-Partitioned Experts with Dynamic Routing for
Computation-Efficient Dense LLMs
|
cs.CL
|
As large language models continue to scale, computational costs and resource
consumption have emerged as significant challenges. While existing
sparsification methods like pruning reduce computational overhead, they risk
losing model knowledge through parameter removal. This paper proposes DSMoE
(Dynamic Sparse Mixture-of-Experts), a novel approach that achieves
sparsification by partitioning pre-trained FFN layers into computational
blocks. We implement adaptive expert routing using sigmoid activation and
straight-through estimators, enabling tokens to flexibly access different
aspects of model knowledge based on input complexity. Additionally, we
introduce a sparsity loss term to balance performance and computational
efficiency. Extensive experiments on LLaMA models demonstrate that under
equivalent computational constraints, DSMoE achieves superior performance
compared to existing pruning and MoE approaches across language modeling and
downstream tasks, particularly excelling in generation tasks. Analysis reveals
that DSMoE learns distinctive layerwise activation patterns, providing new
insights for future MoE architecture design.
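The adaptive expert routing described above can be sketched in a few lines. The block partitioning, ReLU activations, and 0.5 threshold below are illustrative assumptions, and the straight-through estimator only matters at training time (it passes gradients through the hard threshold, which a forward-pass sketch does not show):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsmoe_forward(x, blocks, router_w, threshold=0.5):
    """DSMoE-style forward pass (sketch).

    blocks: list of (W_in, W_out) pairs, each a partition of the original
    pre-trained FFN. A per-block sigmoid gate decides which blocks fire;
    inactive blocks are skipped entirely, which is where the compute
    savings come from."""
    gates = sigmoid(x @ router_w)              # one gate per block
    hard = (gates > threshold).astype(float)   # hard routing decision
    out = np.zeros_like(x)
    for g, (w_in, w_out) in zip(hard, blocks):
        if g:  # only active blocks are computed
            out += np.maximum(x @ w_in, 0.0) @ w_out
    return out, hard

rng = np.random.default_rng(0)
d, h, n_blocks = 8, 4, 3
blocks = [(rng.normal(size=(d, h)), rng.normal(size=(h, d)))
          for _ in range(n_blocks)]
router_w = rng.normal(size=(d, n_blocks))
x = rng.normal(size=d)
y, active = dsmoe_forward(x, blocks, router_w)
```

Because the gates are sigmoids rather than a softmax, any number of blocks can fire for a given token, letting easy inputs use few blocks and hard inputs use many.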
|
2502.12456
|
Not-So-Optimal Transport Flows for 3D Point Cloud Generation
|
cs.CV cs.AI
|
Learning generative models of 3D point clouds is one of the fundamental
problems in 3D generative learning. One of the key properties of point clouds
is their permutation invariance, i.e., changing the order of points in a point
cloud does not change the shape they represent. In this paper, we analyze the
recently proposed equivariant OT flows that learn permutation invariant
generative models for point-based molecular data and we show that these models
scale poorly on large point clouds. We also observe that learning (equivariant)
OT flows is generally challenging, since straightening flow trajectories makes
the learned flow model complex at the beginning of the trajectory. To remedy
these issues,
we propose not-so-optimal transport flow models that obtain an approximate OT
by an offline OT precomputation, enabling an efficient construction of OT pairs
for training. During training, we can additionally construct a hybrid coupling
by combining our approximate OT and independent coupling to make the target
flow models easier to learn. In an extensive empirical study, we show that our
proposed model outperforms prior diffusion- and flow-based approaches on a wide
range of unconditional generation and shape completion on the ShapeNet
benchmark.
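The offline OT precomputation above amounts to solving an assignment problem between noise and data points before training. A toy sketch on a tiny point cloud, using brute-force assignment over permutations as a stand-in for a scalable solver (the cost function and sizes are illustrative assumptions):

```python
import itertools
import numpy as np

def approximate_ot_pairs(noise, data):
    """Pair noise points with data points by minimizing total squared
    Euclidean cost (brute-force stand-in for an offline OT solve; a real
    implementation would use a scalable assignment algorithm)."""
    n = len(noise)
    cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return noise, data[list(best)]

rng = np.random.default_rng(1)
noise = rng.normal(size=(6, 3))   # a tiny 6-point, 3-D "point cloud"
data = rng.normal(size=(6, 3))
src, tgt = approximate_ot_pairs(noise, data)
```

Cached (src, tgt) pairs like these give the flow model straighter target trajectories than independent random coupling would.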
|
2502.12458
|
An Empirical Evaluation of Encoder Architectures for Fast Real-Time Long
Conversational Understanding
|
cs.CL
|
Analyzing long text data such as customer call transcripts is a
cost-intensive and tedious task. Machine learning methods, namely Transformers,
are leveraged to model agent-customer interactions. Unfortunately, Transformers
adhere to fixed-length architectures and their self-attention mechanism scales
quadratically with input length. Such limitations make it challenging to
leverage traditional Transformers for long sequence tasks, such as
conversational understanding, especially in real-time use cases. In this paper
we explore and evaluate recently proposed efficient Transformer variants (e.g.
Performer, Reformer) and a CNN-based architecture for real-time and near
real-time long conversational understanding tasks. We show that CNN-based
models are dynamic, train ~2.6x faster, run inference ~80% faster, and are ~72%
more memory efficient than Transformers on average. Additionally, we evaluate
the CNN model using the Long Range Arena benchmark to demonstrate
competitiveness in general long document analysis.
|
2502.12459
|
Stress Testing Generalization: How Minor Modifications Undermine Large
Language Model Performance
|
cs.CL cs.AI cs.LG
|
This paper investigates the fragility of Large Language Models (LLMs) in
generalizing to novel inputs, specifically focusing on minor perturbations in
well-established benchmarks (e.g., slight changes in question format or
distractor length). Despite high benchmark scores, LLMs exhibit significant
accuracy drops and unexpected biases (e.g., preference for longer distractors)
when faced with these minor but content-preserving modifications. For example,
Qwen 2.5 1.5B's MMLU score rises from 60 to 89 and drops from 89 to 36 when
option lengths are changed without altering the question. Even GPT-4
experiences a 25-point accuracy loss when question types are changed, with a
6-point drop across all three modification categories. These analyses suggest
that LLMs rely heavily on superficial cues rather than forming robust, abstract
representations that generalize across formats, lexical variations, and
irrelevant content shifts. This work aligns with the ACL 2025 theme track on
the Generalization of NLP models, proposing a "Generalization Stress Test" to
assess performance shifts under controlled perturbations. The study calls for
reevaluating benchmarks and developing more reliable evaluation methodologies
to capture LLM generalization abilities better.
|
2502.12460
|
LMN: A Tool for Generating Machine Enforceable Policies from Natural
Language Access Control Rules using LLMs
|
cs.CR cs.LG
|
Organizations often lay down rules or guidelines called Natural Language
Access Control Policies (NLACPs) for specifying who gets access to which
information and when. However, these cannot be directly used in a target access
control model like Attribute-based Access Control (ABAC). Manually translating
the NLACP rules into Machine Enforceable Security Policies (MESPs) is both time
consuming and resource intensive, rendering it infeasible especially for large
organizations. Automated machine translation workflows, on the other hand,
require information security officers to be adept at using such processes. To
effectively address this problem, we have developed a free web-based publicly
accessible tool called LMN (LLMs for generating MESPs from NLACPs) that takes
an NLACP as input and converts it into a corresponding MESP. Internally, LMN
uses GPT-3.5 API calls and an appropriately chosen prompt. Extensive
experiments with different prompts and performance metrics firmly establish the
usefulness of LMN.
|
2502.12462
|
Emulating Retrieval Augmented Generation via Prompt Engineering for
Enhanced Long Context Comprehension in LLMs
|
cs.CL
|
This paper addresses the challenge of comprehending very long contexts in
Large Language Models (LLMs) by proposing a method that emulates Retrieval
Augmented Generation (RAG) through specialized prompt engineering and
chain-of-thought (CoT) reasoning. While recent LLMs support over 100,000 tokens
in a single prompt, simply enlarging context windows has not guaranteed robust
multi-hop reasoning when key details are scattered across massive input. Our
approach treats the model as both the retriever and the reasoner: it first tags
relevant segments within a long passage, then employs a stepwise CoT workflow
to integrate these pieces of evidence. This single-pass method thereby reduces
reliance on an external retriever, yet maintains focus on crucial segments. We
evaluate our approach on selected tasks from BABILong, which interleaves
standard bAbI QA problems with large amounts of distractor text. Compared to
baseline (no retrieval) and naive RAG pipelines, our approach more accurately
handles multi-fact questions such as object location tracking, counting, and
indefinite knowledge. Furthermore, we analyze how prompt structure, including
the order of question, relevant-text tags, and overall instructions,
significantly affects performance. These findings underscore that optimized
prompt engineering, combined with guided reasoning, can enhance LLMs'
long-context comprehension and serve as a lightweight alternative to
traditional retrieval pipelines.
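The tag-then-reason scheme above can be packed into a single prompt: the model is first asked to list the relevant segment tags, then to reason only over those segments. A sketch of such a prompt builder, where the exact wording is an illustrative assumption rather than the paper's template:

```python
def build_single_pass_prompt(question, passages):
    """Assemble one prompt that emulates retrieve-then-reason: tag the
    relevant segments first, then apply stepwise reasoning to them
    (illustrative wording, not the paper's exact template)."""
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    return (
        "Context segments:\n" + numbered + "\n\n"
        "Step 1: List the [i] tags of the segments relevant to the question.\n"
        "Step 2: Using only those segments, reason step by step to the answer.\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_single_pass_prompt(
    "Where is the apple?",
    ["John went to the kitchen.",
     "The sky was blue.",
     "John put the apple on the table."],
)
```

Because retrieval and reasoning happen in the same forward pass, no external retriever or vector index is needed, only the ordering and tagging conventions of the prompt itself.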
|
2502.12464
|
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety
Guardrails in Large Language Models
|
cs.CL
|
Deploying large language models (LLMs) in real-world applications requires
robust safety guard models to detect and block harmful user prompts. While
large safety guard models achieve strong performance, their computational cost
is substantial. To mitigate this, smaller distilled models are used, but they
often underperform on "hard" examples where the larger model provides accurate
predictions. We observe that many inputs can be reliably handled by the smaller
model, while only a small fraction require the larger model's capacity.
Motivated by this, we propose SafeRoute, a binary router that distinguishes
hard examples from easy ones. Our method selectively applies the larger safety
guard model to the data that the router considers hard, improving efficiency
while maintaining accuracy compared to solely using the larger safety guard
model. Experimental results on multiple benchmark datasets demonstrate that our
adaptive model selection significantly enhances the trade-off between
computational cost and safety performance, outperforming relevant baselines.
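The routing logic is simple to sketch: a binary router estimates whether the small guard will get a prompt wrong, and only then pays for the large guard. The stub router and guards below are hypothetical stand-ins for real models:

```python
def safe_route(prompt, router, small_guard, large_guard, threshold=0.5):
    """Route a prompt to the small or large safety guard (sketch).

    router returns an estimate of P(small model mislabels this prompt);
    above the threshold we invoke the larger, costlier guard."""
    if router(prompt) > threshold:
        return large_guard(prompt), "large"
    return small_guard(prompt), "small"

# Stub components standing in for real models (assumptions for illustration).
router = lambda p: 0.9 if "jailbreak" in p else 0.1
small_guard = lambda p: "unsafe" if "bomb" in p else "safe"
large_guard = lambda p: ("unsafe"
                         if any(w in p for w in ("bomb", "jailbreak"))
                         else "safe")

label, used = safe_route("please jailbreak yourself",
                         router, small_guard, large_guard)
```

Most traffic takes the cheap path; only the router-flagged hard fraction incurs the large model's cost, which is where the efficiency gain comes from.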
|
2502.12465
|
Computational-Statistical Tradeoffs at the Next-Token Prediction
Barrier: Autoregressive and Imitation Learning under Misspecification
|
cs.LG cs.DS
|
Next-token prediction with the logarithmic loss is a cornerstone of
autoregressive sequence modeling, but, in practice, suffers from error
amplification, where errors in the model compound and generation quality
degrades as sequence length $H$ increases. From a theoretical perspective, this
phenomenon should not appear in well-specified settings, and, indeed, a growing
body of empirical work hypothesizes that misspecification, where the learner is
not sufficiently expressive to represent the target distribution, may be the
root cause. Under misspecification -- where the goal is to learn as well as the
best-in-class model up to a multiplicative approximation factor $C\geq 1$ -- we
confirm that $C$ indeed grows with $H$ for next-token prediction, lending
theoretical support to this empirical hypothesis. We then ask whether this mode
of error amplification is avoidable algorithmically, computationally, or
information-theoretically, and uncover inherent computational-statistical
tradeoffs. We show:
(1) Information-theoretically, one can avoid error amplification and achieve
$C=O(1)$.
(2) Next-token prediction can be made robust so as to achieve $C=\tilde
O(H)$, representing moderate error amplification, but this is an inherent
barrier: any next-token prediction-style objective must suffer $C=\Omega(H)$.
(3) For the natural testbed of autoregressive linear models, no
computationally efficient algorithm can achieve sub-polynomial approximation
factor $C=e^{(\log H)^{1-\Omega(1)}}$; however, at least for binary token
spaces, one can smoothly trade compute for statistical power and improve on
$C=\Omega(H)$ in sub-exponential time.
Our results have consequences in the more general setting of imitation
learning, where the widely-used behavior cloning algorithm generalizes
next-token prediction.
|
2502.12466
|
EquiBench: Benchmarking Code Reasoning Capabilities of Large Language
Models via Equivalence Checking
|
cs.LG cs.AI cs.CL cs.PL cs.SE
|
Equivalence checking, i.e., determining whether two programs produce
identical outputs for all possible inputs, underpins a broad range of
applications, including software refactoring, testing, and optimization. We
present the task of equivalence checking as a new way to evaluate the code
reasoning abilities of large language models (LLMs). We introduce EquiBench, a
dataset of 2400 program pairs spanning four programming languages and six
equivalence categories. These pairs are systematically generated through
program analysis, compiler scheduling, and superoptimization, covering
nontrivial structural transformations that demand deep semantic reasoning
beyond simple syntactic variations. Our evaluation of 17 state-of-the-art LLMs
shows that OpenAI o3-mini achieves the highest overall accuracy of 78.0%. In
the most challenging categories, the best accuracies are 62.3% and 68.8%, only
modestly above the 50% random baseline for binary classification, indicating
significant room for improvement in current models' code reasoning
capabilities.
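Full equivalence checking quantifies over all inputs, but the task is easy to illustrate on a finite input set with two structurally different, semantically identical programs (the examples below are our own, not EquiBench pairs):

```python
def equivalent_on(inputs, f, g):
    """Check that two programs agree on a finite input set (a sketch:
    true equivalence must hold for *all* inputs, which is what makes the
    task demand semantic rather than syntactic reasoning)."""
    return all(f(x) == g(x) for x in inputs)

# Structurally different but semantically equivalent programs.
loop_sum = lambda n: sum(range(n + 1))
closed_form = lambda n: n * (n + 1) // 2
```

Deciding that `loop_sum` and `closed_form` agree everywhere requires reasoning about what each computes, not matching their syntax, which is exactly the ability the benchmark probes.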
|
2502.12468
|
MCTS-Judge: Test-Time Scaling in LLM-as-a-Judge for Code Correctness
Evaluation
|
cs.LG cs.AI
|
The LLM-as-a-Judge paradigm shows promise for evaluating generative content
but lacks reliability in reasoning-intensive scenarios, such as programming.
Inspired by recent advances in reasoning models and shifts in scaling laws, we
pioneer bringing test-time computation into LLM-as-a-Judge, proposing
MCTS-Judge, a resource-efficient, System-2 thinking framework for code
correctness evaluation. MCTS-Judge leverages Monte Carlo Tree Search (MCTS) to
decompose problems into simpler, multi-perspective evaluations. Through a
node-selection strategy that combines self-assessment based on historical
actions in the current trajectory and the Upper Confidence Bound for Trees
based on prior rollouts, MCTS-Judge balances global optimization and refinement
of the current trajectory. We further designed a high-precision,
unit-test-level reward mechanism to encourage the Large Language Model (LLM) to
perform line-by-line analysis. Extensive experiments on three benchmarks and
five LLMs demonstrate the effectiveness of MCTS-Judge, which improves the base
model's accuracy from 41% to 80%, surpassing the o1-series models with 3x fewer
tokens. Further evaluations validate the superiority of its reasoning
trajectory in logic, analytics, thoroughness, and overall quality, while
revealing the test-time scaling law of the LLM-as-a-Judge paradigm.
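The node-selection strategy above mixes a trajectory-level self-assessment with the Upper Confidence Bound for Trees (UCT). A sketch of such a score, where the mixing weight `beta` and exploration constant `c` are illustrative assumptions:

```python
import math

def selection_score(value_sum, visits, parent_visits, self_assessment,
                    c=1.4, beta=0.5):
    """Node-selection score combining self-assessment of the current
    trajectory with UCT over prior rollouts (sketch; the paper's exact
    combination may differ)."""
    if visits == 0:
        return float("inf")  # expand unvisited children first
    exploit = value_sum / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return beta * self_assessment + (1 - beta) * exploit + explore
```

The exploration term decays as a child accumulates visits, so the search balances refining the current trajectory against trying under-explored alternatives.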
|
2502.12470
|
Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking
|
cs.CL
|
Large Language Models (LLMs) exhibit impressive reasoning abilities, yet
their reliance on structured step-by-step processing reveals a critical
limitation. While human cognition fluidly adapts between intuitive, heuristic
(System 1) and analytical, deliberative (System 2) reasoning depending on the
context, LLMs lack this dynamic flexibility. This rigidity can lead to brittle
and unreliable performance when faced with tasks that deviate from their
trained patterns. To address this, we create a dataset of 2,000 samples with
valid System 1 and System 2 answers, explicitly align LLMs with these reasoning
styles, and evaluate their performance across reasoning benchmarks. Our results
reveal an accuracy-efficiency trade-off: System 2-aligned models excel in
arithmetic and symbolic reasoning, while System 1-aligned models perform better
in commonsense tasks. A mechanistic analysis of model responses shows that
System 1 models employ more definitive answers, whereas System 2 models
demonstrate greater uncertainty. Interpolating between these extremes produces
a monotonic transition in reasoning accuracy, preserving coherence. This work
challenges the assumption that step-by-step reasoning is always optimal and
highlights the need for adapting reasoning strategies based on task demands.
|
2502.12476
|
CoCo-CoLa: Evaluating Language Adherence in Multilingual LLMs
|
cs.CL
|
Multilingual Large Language Models (LLMs) develop cross-lingual abilities
despite being trained on limited parallel data. However, they often struggle to
generate responses in the intended language, favoring high-resource languages
such as English. In this work, we introduce CoCo-CoLa (Correct Concept -
Correct Language), a novel metric to evaluate language adherence in
multilingual LLMs. Using fine-tuning experiments on a closed-book QA task
across seven languages, we analyze how training in one language affects others'
performance. Our findings reveal that multilingual models share task knowledge
across languages but exhibit biases in the selection of output language. We
identify language-specific layers, showing that final layers play a crucial
role in determining output language. Accordingly, we propose a partial training
strategy that selectively fine-tunes key layers, improving language adherence
while significantly reducing computational cost. Our method achieves comparable
or superior performance to full fine-tuning, particularly for low-resource
languages, offering a more efficient multilingual adaptation.
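The partial training strategy above fine-tunes only the layers that determine output language. A sketch of selecting a trainable parameter subset by layer name; picking a fixed suffix of layers is an illustrative simplification, whereas the paper identifies language-specific layers analytically:

```python
def select_trainable(param_names, n_layers, k_final):
    """Keep only parameters in the last k_final transformer layers
    trainable, reflecting the finding that final layers largely determine
    output language (sketch; not the authors' selection procedure)."""
    keep = {f"layers.{i}." for i in range(n_layers - k_final, n_layers)}
    return {name for name in param_names if any(p in name for p in keep)}

params = [f"layers.{i}.mlp.weight" for i in range(12)] + ["embed.weight"]
chosen = select_trainable(params, n_layers=12, k_final=2)
```

Freezing everything outside the chosen set shrinks the optimizer state and gradient computation to a small fraction of the model, which is where the computational saving comes from.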
|
2502.12477
|
Savaal: Scalable Concept-Driven Question Generation to Enhance Human
Learning
|
cs.CL
|
Assessing and enhancing human learning through question-answering is vital,
yet automating this process remains challenging. While large language models
(LLMs) excel at summarization and query responses, their ability to generate
meaningful questions for learners is underexplored.
We propose Savaal, a scalable question-generation system with three
objectives: (i) scalability, enabling question generation from hundreds of
pages of text, (ii) depth of understanding, producing questions beyond factual
recall to test conceptual reasoning, and (iii) domain-independence,
automatically generating questions across diverse knowledge areas. Instead of
providing an LLM with large documents as context, Savaal improves results with
a three-stage processing pipeline. Our evaluation with 76 human experts on 71
papers and PhD dissertations shows that Savaal generates questions that better
test depth of understanding by 6.5X for dissertations and 1.5X for papers
compared to a direct-prompting LLM baseline. Notably, as document length
increases, Savaal's advantages in higher question quality and lower cost become
more pronounced.
|
2502.12478
|
MSE-Adapter: A Lightweight Plugin Endowing LLMs with the Capability to
Perform Multimodal Sentiment Analysis and Emotion Recognition
|
cs.CL
|
Current Multimodal Sentiment Analysis (MSA) and Emotion Recognition in
Conversations (ERC) methods based on pre-trained language models exhibit two
primary limitations:
1) Once trained for MSA and ERC tasks, these pre-trained language models lose
their original generalized capabilities. 2) They demand considerable
computational resources. As the size of pre-trained language models continues
to grow, training larger multimodal sentiment analysis models using previous
approaches could result in unnecessary computational cost. In response to this
challenge, we propose \textbf{M}ultimodal \textbf{S}entiment Analysis and
\textbf{E}motion Recognition \textbf{Adapter} (MSE-Adapter), a lightweight and
adaptable plugin. This plugin enables a large language model (LLM) to carry out
MSA or ERC tasks with minimal computational overhead (introducing only
approximately 2.6M to 2.8M trainable parameters on top of 6/7B models), while
preserving the intrinsic capabilities of the LLM. In the MSE-Adapter, the
Text-Guide-Mixer (TGM) module is introduced to establish explicit connections
between non-textual and textual modalities through the Hadamard product. This
allows non-textual modalities to better align with textual modalities at the
feature level, promoting the generation of higher-quality pseudo tokens.
Extensive experiments were conducted on four public English and Chinese
datasets using consumer-grade GPUs and open-source LLMs (Qwen-1.8B,
ChatGLM3-6B-base, and LLaMA2-7B) as the backbone. The results demonstrate the
effectiveness of the proposed plugin. The code will be released on GitHub after
a blind review.
|
2502.12479
|
MotifBench: A standardized protein design benchmark for
motif-scaffolding problems
|
cs.LG q-bio.BM
|
The motif-scaffolding problem is a central task in computational protein
design: Given the coordinates of atoms in a geometry chosen to confer a desired
biochemical function (a motif), the task is to identify diverse protein
structures (scaffolds) that include the motif and maintain its geometry.
Significant recent progress on motif-scaffolding has been made due to
computational evaluation with reliable protein structure prediction and
fixed-backbone sequence design methods. However, significant variability in
evaluation strategies across publications has hindered comparability of
results, challenged reproducibility, and impeded robust progress. In response
we introduce MotifBench, comprising (1) a precisely specified pipeline and
evaluation metrics, (2) a collection of 30 benchmark problems, and (3) an
implementation of this benchmark and leaderboard at
github.com/blt2114/MotifBench. The MotifBench test cases are more difficult
than earlier benchmarks, and include protein design problems for which
solutions are known but on which, to the best of our knowledge,
state-of-the-art methods fail to identify any solution.
|
2502.12481
|
Predicate Hierarchies Improve Few-Shot State Classification
|
cs.CV cs.AI cs.LG cs.RO
|
State classification of objects and their relations is core to many
long-horizon tasks, particularly in robot planning and manipulation. However,
the combinatorial explosion of possible object-predicate combinations, coupled
with the need to adapt to novel real-world environments, makes it a desideratum
for state classification models to generalize to novel queries with few
examples. To this end, we propose PHIER, which leverages predicate hierarchies
to generalize effectively in few-shot scenarios. PHIER uses an object-centric
scene encoder, self-supervised losses that infer semantic relations between
predicates, and a hyperbolic distance metric that captures hierarchical
structure; it learns a structured latent space of image-predicate pairs that
guides reasoning over state classification queries. We evaluate PHIER in the
CALVIN and BEHAVIOR robotic environments and show that PHIER significantly
outperforms existing methods in few-shot, out-of-distribution state
classification, and demonstrates strong zero- and few-shot generalization from
simulated to real-world tasks. Our results demonstrate that leveraging
predicate hierarchies improves performance on state classification tasks with
limited data.
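The hyperbolic distance metric mentioned above is what lets the latent space encode hierarchy: distances in the Poincaré ball grow rapidly toward the boundary, giving trees room to embed with low distortion. A sketch using the standard Poincaré-ball distance formula (not the authors' implementation):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball, the kind of hyperbolic
    metric used to capture hierarchical structure (standard formula):
    d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * duv / max((1.0 - uu) * (1.0 - vv), eps)
    return float(np.arccosh(arg))

# Points near the boundary end up much farther apart than their
# Euclidean separation suggests.
d_hyp = poincare_distance(np.array([0.9, 0.0]), np.array([-0.9, 0.0]))
```

Embedding general predicates near the origin and specific ones near the boundary then makes hierarchical relations recoverable directly from distances.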
|
2502.12483
|
The Knowledge Microscope: Features as Better Analytical Lenses than
Neurons
|
cs.CL
|
Previous studies primarily utilize MLP neurons as units of analysis for
understanding the mechanisms of factual knowledge in Language Models (LMs);
however, neurons suffer from polysemanticity, leading to limited knowledge
expression and poor interpretability. In this paper, we first conduct
preliminary experiments to validate that Sparse Autoencoders (SAE) can
effectively decompose neurons into features, which serve as alternative
analytical units. With this established, our core findings reveal three key
advantages of features over neurons: (1) Features exhibit stronger influence on
knowledge expression and superior interpretability. (2) Features demonstrate
enhanced monosemanticity, showing distinct activation patterns between related
and unrelated facts. (3) Features achieve better privacy protection than
neurons, demonstrated through our proposed FeatureEdit method, which
significantly outperforms existing neuron-based approaches in erasing
privacy-sensitive information from LMs. Code and dataset will be made available.
|
2502.12484
|
LocalEscaper: A Weakly-supervised Framework with Regional Reconstruction
for Scalable Neural TSP Solvers
|
cs.LG cs.AI
|
Neural solvers have shown significant potential in solving the Traveling
Salesman Problem (TSP), yet current approaches face significant challenges.
Supervised learning (SL)-based solvers require large amounts of high-quality
labeled data, while reinforcement learning (RL)-based solvers, though less
dependent on such data, often suffer from inefficiencies. To address these
limitations, we propose LocalEscaper, a novel weakly-supervised learning
framework for large-scale TSP. LocalEscaper effectively combines the advantages
of both SL and RL, enabling effective training on datasets with low-quality
labels. To further enhance solution quality, we introduce a regional
reconstruction strategy, which mitigates the problem of local optima, a common
issue in existing local reconstruction methods. Additionally, we propose a
linear-complexity attention mechanism that reduces computational overhead,
enabling the efficient solution of large-scale TSPs without sacrificing
performance. Experimental results on both synthetic and real-world datasets
demonstrate that LocalEscaper outperforms existing neural solvers, achieving
state-of-the-art results. Notably, it sets a new benchmark for scalability and
efficiency, solving TSP instances with up to 50,000 cities.
|
2502.12485
|
Safe at the Margins: A General Approach to Safety Alignment in
Low-Resource English Languages -- A Singlish Case Study
|
cs.CL cs.AI
|
To ensure safe usage, Large Language Models (LLMs) typically undergo
alignment with human-defined values. However, this alignment often relies on
primarily English data and is biased towards Western-centric values, limiting
its effectiveness in low-resource language settings. In this paper, we describe
our approach for aligning SEA-Lion-v2.1-Instruct (a Llama3-8B variant) to
minimize toxicity in Singlish, an English creole specific to Singapore. We find
that supervised fine-tuning and Kahneman-Tversky Optimization (KTO) on paired
and unpaired preferences is more sample efficient and yields significantly
better results than Direct Preference Optimization (DPO). Our analysis reveals
that DPO implicitly enforces a weaker safety objective than KTO, and that SFT
complements KTO by improving training stability. Finally, we introduce a simple
but novel modification to KTO, KTO-S, which improves training stability through
better gradient exploitation. Overall, we present a general approach for safety
alignment conducive to low-resource English languages, successfully reducing
toxicity by 99\% on our Singlish benchmark, with gains generalizing to the
broader TOXIGEN dataset while maintaining strong performance across standard
LLM benchmarks.
|
2502.12486
|
EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via
Reinforcement Learning
|
cs.CL
|
Large Language Models (LLMs) have shown impressive reasoning capabilities in
well-defined problems with clear solutions, such as mathematics and coding.
However, they still struggle with complex real-world scenarios like business
negotiations, which require strategic reasoning-an ability to navigate dynamic
environments and align long-term goals amidst uncertainty. Existing methods for
strategic reasoning face challenges in adaptability, scalability, and
transferring strategies to new contexts. To address these issues, we propose
explicit policy optimization (EPO) for strategic reasoning, featuring an LLM
that provides strategies in open-ended action space and can be plugged into
arbitrary LLM agents to motivate goal-directed behavior. To improve
adaptability and policy transferability, we train the strategic reasoning model
via multi-turn reinforcement learning (RL) using process rewards and iterative
self-play, without supervised fine-tuning (SFT) as a preliminary step.
Experiments across social and physical domains demonstrate EPO's ability of
long-term goal alignment through enhanced strategic reasoning, achieving
state-of-the-art performance on social dialogue and web navigation tasks. Our
findings reveal various collaborative reasoning mechanisms emergent in EPO and
its effectiveness in generating novel strategies, underscoring its potential
for strategic reasoning in real-world applications.
|
2502.12488
|
Enhancing Audio-Visual Spiking Neural Networks through
Semantic-Alignment and Cross-Modal Residual Learning
|
cs.CV
|
Humans interpret and perceive the world by integrating sensory information
from multiple modalities, such as vision and hearing. Spiking Neural Networks
(SNNs), as brain-inspired computational models, exhibit unique advantages in
emulating the brain's information processing mechanisms. However, existing SNN
models primarily focus on unimodal processing and lack efficient cross-modal
information fusion, thereby limiting their effectiveness in real-world
multimodal scenarios. To address this challenge, we propose a
semantic-alignment cross-modal residual learning (S-CMRL) framework, a
Transformer-based multimodal SNN architecture designed for effective
audio-visual integration. S-CMRL leverages a spatiotemporal spiking attention
mechanism to extract complementary features across modalities, and incorporates
a cross-modal residual learning strategy to enhance feature integration.
Additionally, a semantic alignment optimization mechanism is introduced to
align cross-modal features within a shared semantic space, improving their
consistency and complementarity. Extensive experiments on three benchmark
datasets (CREMA-D, UrbanSound8K-AV, and MNISTDVS-NTIDIGITS) demonstrate that
S-CMRL significantly outperforms existing multimodal SNN methods, achieving
state-of-the-art performance. The code is publicly available at
https://github.com/Brain-Cog-Lab/S-CMRL.
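The cross-modal residual idea described above can be sketched as follows: audio features are injected into the visual stream through a similarity-weighted combination, then added back as a residual. The dot-product attention, dimensions, and names here are illustrative assumptions; the paper's spatiotemporal spiking attention is more involved.

```python
# Minimal pure-Python sketch of cross-modal residual fusion: each visual token
# attends over audio tokens, and the resulting audio context is added back to
# the visual feature (residual connection). Illustrative only.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_residual(visual, audio):
    """visual, audio: lists of equal-dimension feature vectors."""
    fused = []
    for v in visual:
        # attention weights of this visual token over all audio tokens
        scores = softmax([sum(a * b for a, b in zip(v, au)) for au in audio])
        ctx = [sum(w * au[i] for w, au in zip(scores, audio))
               for i in range(len(v))]
        # residual connection: keep the visual feature, add audio context
        fused.append([vi + ci for vi, ci in zip(v, ctx)])
    return fused

visual = [[1.0, 0.0], [0.0, 1.0]]
audio = [[0.5, 0.5]]
print(cross_modal_residual(visual, audio))
```

The residual form guarantees the visual pathway is preserved even when the audio context is uninformative, which is the usual motivation for residual fusion.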
|
2502.12489
|
A Comprehensive Survey on Generative AI for Video-to-Music Generation
|
eess.AS cs.AI cs.MM
|
The burgeoning growth of video-to-music generation can be attributed to the
ascendancy of multimodal generative models. However, no existing literature
comprehensively surveys the work in this field. To fill this gap, this paper
presents a comprehensive review of video-to-music
generation using deep generative AI techniques, focusing on three key
components: visual feature extraction, music generation frameworks, and
conditioning mechanisms. We categorize existing approaches based on their
designs for each component, clarifying the roles of different strategies.
Prior to this categorization, we provide a fine-grained classification of video and music
modalities, illustrating how different categories influence the design of
components within the generation pipelines. Furthermore, we summarize available
multimodal datasets and evaluation metrics while highlighting ongoing
challenges in the field.
|
2502.12490
|
UniGenCoder: Merging Seq2Seq and Seq2Tree Paradigms for Unified Code
Generation
|
cs.CL
|
Deep learning-based code generation has completely transformed the way
developers write programs today. Existing approaches to code generation have
focused either on the Sequence-to-Sequence paradigm, which generates target
code as a sequence of tokens, or the Sequence-to-Tree paradigm, which outputs
code as a sequence of actions. While these two paradigms are intuitively
complementary, their combination has not been previously explored. By comparing
the code generated under these two paradigms, we find that integrating them
holds significant potential. In this paper, we propose UniGenCoder for
code-related generation tasks, which consists of a shared encoder, a shared
decoder with a minimal set of additional parameters to unify two paradigms, and
a selector that dynamically chooses the optimal paradigm for each instance.
During model training, we first apply multi-task learning and distillation
strategies to facilitate knowledge transfer between the two paradigms,
and then leverage contrastive learning to train the selector. Experimental
results on the text-to-code and code-to-code generation tasks demonstrate the
effectiveness of our proposed model. We release our code at
https://github.com/DeepLearnXMU/UniGenCoder.
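The per-instance paradigm selection described above can be sketched with a trivial routing function. The token-count heuristic below is a stand-in assumption purely for illustration; UniGenCoder trains its selector with contrastive learning rather than a hand-written rule.

```python
# Hypothetical sketch of per-instance paradigm routing: score each input and
# send structurally heavy instances to the Seq2Tree head, the rest to Seq2Seq.
# The scoring rule is invented for illustration, not from the paper.

def select_paradigm(instance, threshold=0.5):
    """Route structurally heavy inputs to Seq2Tree, the rest to Seq2Seq."""
    structural = sum(instance.count(c) for c in "{}()[]")
    score = structural / max(len(instance.split()), 1)
    return "seq2tree" if score > threshold else "seq2seq"

print(select_paradigm("add two numbers"))            # plain natural language
print(select_paradigm("def f(x): return {x: [x]}"))  # bracket-heavy input
```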
|
2502.12492
|
Boost, Disentangle, and Customize: A Robust System2-to-System1 Pipeline
for Code Generation
|
cs.AI
|
Large language models (LLMs) have demonstrated remarkable capabilities in
various domains, particularly in system 1 tasks, yet the intricacies of their
problem-solving mechanisms in system 2 tasks are not sufficiently explored.
Recent research on System2-to-System1 methods has surged, exploring System 2
reasoning knowledge via inference-time computation and compressing the explored
knowledge into the System 1 process. In this paper, we focus on code generation,
which is a representative System 2 task, and identify two primary challenges:
(1) the complex hidden reasoning processes and (2) the heterogeneous data
distributions that complicate the exploration and training of robust LLM
solvers. To tackle these issues, we propose a novel BDC framework that explores
insightful System 2 knowledge of LLMs using an MC-Tree-Of-Agents algorithm with
mutual \textbf{B}oosting, \textbf{D}isentangles the heterogeneous training data
into composable LoRA-experts, and obtains a \textbf{C}ustomized problem solver for
each data instance via an input-aware hypernetwork that weights the
LoRA-experts, offering effectiveness, flexibility, and robustness. This
framework leverages multiple LLMs through mutual verification and boosting,
integrated into a Monte-Carlo Tree Search process enhanced by reflection-based
pruning and refinement. Additionally, we introduce the DisenLora algorithm,
which clusters heterogeneous data to fine-tune LLMs into composable LoRA
experts, enabling the adaptive generation of customized problem solvers through
an input-aware hypernetwork. This work lays the groundwork for advancing LLM
capabilities in complex reasoning tasks, offering a novel System2-to-System1
solution.
|
2502.12493
|
Optimal and Almost Optimal Locally Repairable Codes from Hyperelliptic
Curves
|
cs.IT math.IT
|
Locally repairable codes are widely applicable in contemporary large-scale
distributed cloud storage systems and various other areas. By making use of
some algebraic structures of elliptic curves, Li et al. developed a series of
$q$-ary optimal locally repairable codes with lengths that can extend to
$q+2\sqrt{q}$. In this paper, we generalize their methods to hyperelliptic
curves of genus $2$, resulting in the construction of several new families of
$q$-ary optimal or almost optimal locally repairable codes. Our codes feature
lengths that can approach $q+4\sqrt{q}$, and the locality can reach up to
$239$.
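For context, the standard optimality criterion for locally repairable codes (not specific to this paper) is the Singleton-type bound: an $[n, k, d]$ code with locality $r$ satisfies

```latex
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2,
```

and a code is called optimal when it meets this bound with equality, or almost optimal when its minimum distance falls short of the bound by one.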
|
2502.12494
|
EDGE: Efficient Data Selection for LLM Agents via Guideline
Effectiveness
|
cs.LG cs.AI
|
Large Language Models (LLMs) have shown remarkable capabilities as AI agents.
However, existing methods for enhancing LLM-agent abilities often lack a focus
on data quality, leading to inefficiencies and suboptimal results in both
fine-tuning and prompt engineering. To address this issue, we introduce EDGE, a
novel approach for identifying informative samples without needing golden
answers. We propose the Guideline Effectiveness (GE) metric, which selects
challenging samples by measuring the impact of human-provided guidelines in
multi-turn interaction tasks. A low GE score indicates that the human expertise
required for a sample is missing from the guideline, making the sample more
informative. By selecting samples with low GE scores, we can improve the
efficiency and outcomes of both prompt engineering and fine-tuning processes
for LLMs. Extensive experiments validate the performance of our method. Our
method achieves competitive results on the HotpotQA and WebShop datasets,
requiring 75\% and 50\% less data, respectively, while outperforming existing
methods. We also provide a fresh perspective on the data quality of LLM-agent
fine-tuning.
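The low-GE selection idea can be sketched as follows. GE is taken here as the performance gain a guideline yields on a sample; a low gain suggests the guideline does not cover the expertise the sample needs, making the sample informative. The scores and the `budget` parameter below are invented for illustration and are not from the EDGE paper.

```python
# Illustrative sketch of Guideline Effectiveness (GE) based data selection:
# compute GE as (score with guideline - score without), then keep the samples
# with the lowest GE scores. Sample scores here are hypothetical.

def select_low_ge(samples, budget):
    """samples: list of (sample_id, score_with_guideline, score_without).
    Returns the `budget` sample ids with the lowest GE scores."""
    ge = [(with_g - without_g, sid) for sid, with_g, without_g in samples]
    ge.sort()  # ascending: lowest guideline impact first
    return [sid for _, sid in ge[:budget]]

samples = [("a", 0.9, 0.3), ("b", 0.5, 0.45), ("c", 0.7, 0.2)]
print(select_low_ge(samples, 2))
```

Selecting by low GE concentrates the fine-tuning or prompt-engineering budget on exactly the samples the existing guidelines fail to help.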
|
2502.12498
|
USPilot: An Embodied Robotic Assistant Ultrasound System with Large
Language Model Enhanced Graph Planner
|
cs.RO
|
In the era of Large Language Models (LLMs), embodied artificial intelligence
presents transformative opportunities for robotic manipulation tasks.
Ultrasound imaging, a widely used and cost-effective medical diagnostic
procedure, faces challenges due to the global shortage of professional
sonographers. To address this issue, we propose USPilot, an embodied robotic
assistant ultrasound system powered by an LLM-based framework to enable
autonomous ultrasound acquisition. USPilot is designed to function as a virtual
sonographer, capable of responding to patients' ultrasound-related queries and
performing ultrasound scans based on user intent. By fine-tuning the LLM,
USPilot demonstrates a deep understanding of ultrasound-specific questions and
tasks. Furthermore, USPilot incorporates an LLM-enhanced Graph Neural Network
(GNN) to manage ultrasound robotic APIs and serve as a task planner.
Experimental results show that the LLM-enhanced GNN achieves unprecedented
accuracy in task planning on public datasets. Additionally, the system
demonstrates significant potential in autonomously understanding and executing
ultrasound procedures. These advancements bring us closer to achieving
autonomous and potentially unmanned robotic ultrasound systems, addressing
critical resource gaps in medical imaging.
|